Andrew is one of the http://ep.io founders. “ep.io, smart python hosting”.
Last year he spoke too fast. He quoted from my blog from last year, “Andrew speaks English like a machine gun speaks bullets”. He behaved himself admirably this year. Bit fast towards the end, but perfectly clear :-)
The basic architecture is pretty simple. Two balancers up front, a whole lot of runners behind them, and databases and so on behind those. Everything is redundant. But distributed programming is hard.
The hardware is a mix of real colo'd machines (pretty reliable), Linode (pretty reliable) and EC2 (pretty unreliable). They're slowly moving everything to real machines, as it is handy to be able to drive over to the physical machine and whack it with a stick.
They used to use Redis; now everything runs on ZeroMQ, as that eliminates a single point of failure. ZeroMQ is used for control messages (REQ/XREP), stats and logs (PUSH/PULL), heartbeats and locking (PUB/SUB).
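To illustrate the control-message pattern: it is a plain request/reply exchange. A minimal sketch, assuming pyzmq is installed, using a REQ/REP pair over the in-process transport (the socket address and the messages are made up):

```python
import zmq

ctx = zmq.Context.instance()

# REP side: would live in a runner, answering control requests.
rep = ctx.socket(zmq.REP)
rep.bind("inproc://control")   # for inproc, bind must happen before connect

# REQ side: a management daemon sending a command.
req = ctx.socket(zmq.REQ)
req.connect("inproc://control")

req.send(b"ping")
assert rep.recv() == b"ping"   # the runner receives the command...
rep.send(b"pong")              # ...and answers
reply = req.recv()

req.close()
rep.close()
```

The talk mentions REQ/XREP rather than REQ/REP; XREP (nowadays called ROUTER) is the variant you'd use when one daemon talks to many runners.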
The big part is of course the runners that run the actual websites. Every app runs in its own virtualenv. All logging is done asynchronously using ZeroMQ; logs are also written to the filesystem.
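The asynchronous-logging idea can be sketched with the stdlib's queue-based handlers (the talk used ZeroMQ as the transport, which isn't shown here; the logger name and the in-memory sink are made up for the example):

```python
import logging
import logging.handlers
import queue

# In-memory handler standing in for the on-disk log file.
class ListHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

log_queue = queue.SimpleQueue()
logger = logging.getLogger("hosted-app")
logger.setLevel(logging.INFO)
# The app only pushes onto a queue, so logging never blocks a request.
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# A background thread drains the queue and does the actual writing.
sink = ListHandler()
listener = logging.handlers.QueueListener(log_queue, sink)
listener.start()
logger.info("request served")
listener.stop()   # flushes the queue before returning
```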
The load balancers intercept all incoming http requests. They're fully http 1.1 compliant, which means that web sockets actually work.
They have various databases. Cheap shared Postgres databases. Dedicated databases, needed for instance for Redis, which doesn't support multiple users. And the top-of-the-line Postgres offerings.
So, that’s the basics for the infrastructure. What about the actual python code? Well, wsgi. Wsgi is standard, but wsgi alone is not enough. Python code means dependencies. Virtualenv/pip/buildout. And you need to host static files outside of the regular wsgi process.
And... how do you deal with settings? Per-host + local changes + basic settings. And how do you deal with python paths? Project level imports, app imports, reusable apps. It all has to work. For hundreds of sites.
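The settings layering (basic settings, overridden per host, overridden by local changes) can be sketched like this; the hostnames and keys are made up:

```python
# Lowest priority: settings shared by every site.
BASE = {"DEBUG": False, "DB_HOST": "localhost"}

# Middle priority: overrides for specific hosts (hypothetical hostname).
PER_HOST = {"runner-3.example.com": {"DB_HOST": "db1.internal"}}

# Highest priority: local changes for this deployment.
LOCAL = {"DEBUG": True}

def build_settings(hostname):
    """Merge the three layers, later layers winning over earlier ones."""
    settings = dict(BASE)
    settings.update(PER_HOST.get(hostname, {}))
    settings.update(LOCAL)
    return settings
```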
Databases are a bit simpler. SQL just works. If it is SQL, it is Postgres. Redis for key/value storage; MongoDB will be supported soon.
High availability is not terribly easy with shared DBs. Postgres 9’s warm standby works pretty well. Redis has SLAVEOF.
Oh, and “high availability” doesn't mean “backups”. You still need backups. They use btrfs for consistent snapshotting: rsync alone isn't enough. These snapshots are rsynced to a remote machine. And there's no access to the backups from the server itself, as it would be too easy to destroy the backups that way after doing something wrong on the server.
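The snapshot-then-rsync idea, sketched as a command builder in Python (the paths and the remote are made up, and nothing is executed here):

```python
def backup_commands(volume, snapshot_dir, remote):
    """Build the btrfs-snapshot-then-rsync command lines (not executed)."""
    snap = f"{snapshot_dir}/backup-snapshot"
    return [
        # Read-only snapshot: a consistent view even while the DB writes.
        ["btrfs", "subvolume", "snapshot", "-r", volume, snap],
        # Ship the frozen snapshot off-machine.
        ["rsync", "-a", snap + "/", remote],
        # Clean up the local snapshot afterwards.
        ["btrfs", "subvolume", "delete", snap],
    ]
```

The point of the snapshot is that rsync then copies a frozen, consistent state instead of files that change mid-copy.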
For WSGI, they use gunicorn. Very stable, and it supports long-running requests. As http server they use nginx: extremely fast, low memory footprint, extremely high quality and extremely stable.
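A gunicorn config for this kind of nginx-in-front setup could look like the following; the values are illustrative, not ep.io's actual settings:

```python
# gunicorn.conf.py -- illustrative values only.
bind = "127.0.0.1:8000"   # nginx proxies to this local port
workers = 4               # a handful of worker processes per app
timeout = 300             # generous timeout to allow long-running requests
```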
For the load balancer, they used to use HAProxy, but they've rewritten it as a custom Python daemon using eventlet. Note that they couldn't use nginx here, as it doesn't speak http 1.1 to its backends. Their implementation is not terribly fast, but fast enough at the moment. They're looking into improvements, though.
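One small piece of such a balancer is picking a backend for an incoming Host header. A sketch of hash-based routing (the function and backend names are my own, not theirs):

```python
import hashlib

def pick_backend(host, backends):
    """Route a request to a backend by hashing the Host header.

    Deterministic: the same host always lands on the same backend
    (as long as the backend list doesn't change).
    """
    digest = hashlib.md5(host.lower().encode("utf-8")).digest()
    return backends[digest[0] % len(backends)]
```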
As a task queue they use celery (see yesterday’s talk).
Management commands were first run via plain subprocesses. That moved to a custom PTY module; now it is a pty-wrapping subprocess.
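The pty-wrapping-subprocess trick, roughly: attach a pseudo-terminal to the child so commands behave as if run interactively. A Unix-only sketch using the stdlib pty module (the helper name is made up):

```python
import os
import pty
import subprocess

def run_in_pty(argv):
    """Run a command with a pseudo-terminal attached and collect its output."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdin=slave, stdout=slave,
                            stderr=slave, close_fds=True)
    os.close(slave)   # only the child keeps the slave end open now
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:   # Linux raises EIO once the child closes its end
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks)

output = run_in_pty(["echo", "hello from a pty"])
```

Because the child sees a terminal, tools that check `isatty()` (colored output, progress bars, interactive prompts) behave normally.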
Some general advice if you’re crazy enough to do this (this=serving loads of sites).
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.