# Using virtualenv in production

2014-02-10

One of my favorite things about Python is being able to use virtualenv to create isolated environments. It’s extremely simple to use and allows you to have different versions of Python libraries used by different projects.
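As a quick sketch of what that looks like (shown with the stdlib `venv` module for illustration; the `virtualenv` command behaves the same way for creation and activation):

```shell
# Create an isolated environment in ./venv
# (`virtualenv venv` is the equivalent with the virtualenv tool)
python3 -m venv --without-pip venv

# Activate it: `python` now resolves inside ./venv instead of the system install
. venv/bin/activate
command -v python
```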

The tricky part is getting virtualenv set up in a production environment under different services, since each one requires a slightly different configuration. I’ve gone through my projects and collected the various ways I’ve gotten it running for different services. I’m sure it could be done differently, but the following worked for me and will hopefully come in handy for others. If you have any questions, or I’m not being clear enough, let me know and I’ll update the post with more information.

• Nginx and Gunicorn under Supervisor.

Nginx - The configuration isn't anything different than normal, except that you may need to point some specific paths at directories inside your virtualenv.

```nginx
# Static files need to point at the virtualenv directory
# (the location and alias paths below are illustrative)
location /static/admin/ {
    alias /home/ubuntu/app/venv/lib/python2.7/site-packages/django/contrib/admin/static/admin/;
    autoindex on;
}
```

Gunicorn - I have a shell script that sets the various paths and options used to configure Gunicorn:

```bash
#!/bin/bash
set -e
DJANGODIR=/home/ubuntu/app
DJANGO_SETTINGS_MODULE=app.settings.prod

LOGFILE=/var/log/gunicorn/guni-app.log
LOGDIR=$(dirname "$LOGFILE")
NUM_WORKERS=2
# user/group to run as
USER=ubuntu
GROUP=ubuntu

cd $DJANGODIR
source $DJANGODIR/venv/bin/activate

export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

test -d $LOGDIR || mkdir -p $LOGDIR

exec $DJANGODIR/venv/bin/gunicorn_django -w $NUM_WORKERS \
  --user=$USER --group=$GROUP --log-level=debug \
  --log-file=$LOGFILE -b 0.0.0.0:8000 2>>$LOGFILE
```
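Before handing the script to Supervisor, it's worth making it executable and giving it a quick syntax check (the path below matches the Supervisor config further down; adjust it to your layout):

```shell
# Parse-only check: catches quoting and expansion typos without starting Gunicorn
bash -n /home/ubuntu/myapp/scripts/start.sh

# Supervisor needs the script to be executable
chmod +x /home/ubuntu/myapp/scripts/start.sh
```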

Supervisor - Here we just point our configuration file at the shell script for Gunicorn:

```ini
[program:gunicorn-myapp]
directory = /home/ubuntu/myapp
user = ubuntu
command = /home/ubuntu/myapp/scripts/start.sh
stdout_logfile = /var/log/gunicorn/myapp-std.log
stderr_logfile = /var/log/gunicorn/myapp-err.log
```
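After dropping the config into Supervisor's include directory (commonly `/etc/supervisor/conf.d/`, though this varies by distro), tell Supervisor to pick it up - a sketch of the usual commands:

```shell
# Re-read config files, apply any changes, then check the process
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status gunicorn-myapp
```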
• Celery under Supervisor.

In this case we just point Supervisor at the celery executable inside the virtualenv. A cool feature is being able to specify environment variables - in my case, to pass in the Django settings module.

```ini
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/ubuntu/myapp/venv/bin/celery worker -A myapp --loglevel=INFO
directory=/home/ubuntu/myapp
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10
environment=DJANGO_SETTINGS_MODULE=myapp.settings.prod
```
• Fabric.

The idea here is to make sure all our remote install commands run after activating the virtualenv.

```python
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager

env.activate = 'source /home/ubuntu/myapp/venv/bin/activate'
env.directory = '/home/ubuntu/myapp'

@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

@hosts(env.roledefs['db'])
def rebuild_index():
    with virtualenv():
        run("python manage.py rebuild_index")
```
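With that in place, the task runs from your local machine with the `fab` command; every remote command it issues is prefixed by the virtualenv activation (this assumes `env.roledefs['db']` is defined elsewhere in your fabfile):

```shell
# Run the task on the hosts listed under the 'db' role
fab rebuild_index
```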