Infrastructure with Python

Like Python? Into DevOps?

Here are some tools I use to make my life a little easier.


awscli

THE command-line interface if you’re using AWS. It has really nice documentation on how to talk to the different AWS services. I use this a lot as a glue library. For example, once Jenkins runs tests on a project, we tar.gz an artifact and use awscli to upload it to S3.
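That upload step can be scripted from Python by shelling out to awscli. A minimal sketch, assuming `aws` is on the PATH and credentials are configured; the bucket and key names are hypothetical:

```python
import subprocess

def build_s3_upload_cmd(artifact_path, bucket, key):
    """Pure helper: the awscli command that copies a build artifact to S3."""
    return ["aws", "s3", "cp", artifact_path, "s3://{0}/{1}".format(bucket, key)]

def upload_artifact(artifact_path, bucket, key):
    # Shells out to awscli; raises CalledProcessError if the upload fails,
    # which lets the Jenkins job fail loudly.
    subprocess.check_call(build_s3_upload_cmd(artifact_path, bucket, key))
```

A Jenkins job might then call `upload_artifact("app-abc123.tar.gz", "my-builds", "app/app-abc123.tar.gz")` (names invented for illustration).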


boto

If you’re using AWS and need to get state or resources at runtime, boto is what you want. The API is large, but well documented and composed.

botocore is a smaller, lower-level alternative, but I don’t care much about size when boto’s docs are this good. botocore is the foundation for awscli.


Fabric

Fabric is my go-to tool for remote execution. PyChef fetches the hosts I need and passes them to Fabric, which takes care of running any commands. This is a popular library that has been around for a while, with a lot of well-tested features.
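With Fabric 1.x, "take a host list, run commands on each" is a few lines. A sketch under assumed paths and commands (the `/srv/...` layout and `supervisorctl` program name are invented):

```python
def release_dir(app, version):
    """Pure helper: where a given release lands on the remote host (hypothetical layout)."""
    return "/srv/{0}/releases/{1}".format(app, version)

def deploy(app, version, hosts):
    # Lazy import: Fabric is only required at deploy time.
    from fabric.api import execute, run

    def _deploy():
        run("mkdir -p {0}".format(release_dir(app, version)))
        run("sudo supervisorctl restart {0}".format(app))

    # execute() runs the task once per host over SSH.
    execute(_deploy, hosts=hosts)
```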


Flask

A web framework? Yes. A lot of the headache of managing infrastructure can be alleviated by good visibility into what your system is doing. Flask is great for building small, simple dashboards that present information on various subsystems.
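A dashboard endpoint can stay this small. A sketch: the summarizing logic is plain Python, and Flask just serves it (the route name and record shape are assumptions):

```python
def summarize_deploys(deploys):
    """Pure helper: roll deploy records up into per-app counts for the dashboard."""
    summary = {}
    for d in deploys:
        app = d["app"]
        summary[app] = summary.get(app, 0) + 1
    return summary

def create_app(get_deploys):
    """get_deploys is any callable returning a list of deploy records."""
    # Lazy import: Flask is only needed when serving the dashboard.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/deploys")
    def deploys():
        return jsonify(summarize_deploys(get_deploys()))

    return app
```

Keeping the summarizing logic separate from the route makes it trivially testable without a web server.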


PyChef

A Python API for interacting with your Chef server. I use this in our deployment and monitoring tooling. When I want to deploy an app, I use PyChef to search for the nodes the app is currently running on, then use Fabric to deploy to those hosts.
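The node lookup might look like this sketch. The search attributes (`app`, `chef_environment`) are an assumed tagging convention, and `chef.autoconfigure()` reads your knife config:

```python
def node_query(app, env):
    """Pure helper: Chef search query for an app's nodes (attribute names assumed)."""
    return "app:{0} AND chef_environment:{1}".format(app, env)

def app_hostnames(app, env):
    # Lazy import: only needed when talking to the Chef server.
    import chef

    api = chef.autoconfigure()  # picks up knife.rb / credentials
    return [row.object["fqdn"]
            for row in chef.Search("node", node_query(app, env), api=api)]
```

The returned hostnames can be handed straight to Fabric as its host list.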


python-simple-hipchat

My team uses Hipchat. We have rooms set up that alert us to problems across our infrastructure. This library is simple and just works.
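Posting an alert is one call. A sketch; the message format, sender name, and room are all hypothetical:

```python
def format_alert(app, env, error):
    """Pure helper: one-line alert message for the room (format invented)."""
    return "[{0}] deploy of {1} failed: {2}".format(env, app, error)

def alert_room(token, room, app, env, error):
    # Lazy import: only needed when actually posting to Hipchat.
    import hipchat

    hipster = hipchat.HipChat(token=token)
    hipster.message_room(room, "deploybot", format_alert(app, env, error))
```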


requests

Lots of my infrastructure work is tying different services together. If the data I need is behind an HTTP API, requests is usually the easiest way to get it. The context here is important.

If an API wrapper library already exists, I’ll use it when I’ll be hitting several different endpoints. If I’m only hitting a few endpoints or they’re simple, I find it’s usually easier to just use requests directly.
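For example, asking Jenkins for a build result is simple enough that a wrapper library buys little. A sketch against Jenkins’ JSON API (the base URL and job name are placeholders):

```python
def job_url(base, job):
    """Pure helper: URL for a Jenkins job's last-build JSON endpoint."""
    return "{0}/job/{1}/lastBuild/api/json".format(base.rstrip("/"), job)

def last_build_status(base, job):
    # requests handles the HTTP legwork; raise on any non-2xx response.
    import requests

    resp = requests.get(job_url(base, job), timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]  # e.g. "SUCCESS" or "FAILURE"
```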


supervisor

I’m not a fan of writing init scripts in the archaic init.d syntax, and Upstart limits you to Ubuntu. supervisor makes it easy to run Python code inside virtual environments. Sold.
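A supervisor program entry for an app running out of a virtualenv is short. A sketch with invented app names and paths:

```ini
[program:myapp]
; paths and app name are hypothetical
command=/srv/myapp/venv/bin/uwsgi --ini /srv/myapp/uwsgi.ini
directory=/srv/myapp
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp/supervisor.log
```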


troposphere

troposphere allows you to describe AWS CloudFormation stacks in Python, then generate the JSON from those definitions. The main advantage for me was keeping my stack definitions DRY: instead of repeating the same thing over and over in JSON, I define it once in Python and import it wherever I need it.
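The DRY win looks like this sketch: define the instance shape once in Python and stamp it out per index, rather than copy-pasting JSON blocks (the naming scheme and instance type are assumptions):

```python
def logical_name(app, idx):
    """Pure helper: CloudFormation logical IDs must be alphanumeric (scheme invented)."""
    return "{0}Instance{1}".format(app.capitalize(), idx)

def build_stack(app, ami, count=2):
    # Lazy import: troposphere is only needed when generating the template.
    from troposphere import Template
    from troposphere.ec2 import Instance

    t = Template()
    for i in range(count):
        # One Python definition, repeated programmatically: the DRY advantage.
        t.add_resource(Instance(logical_name(app, i),
                                ImageId=ami,
                                InstanceType="t2.micro"))
    return t.to_json()
```

The returned JSON string is what you hand to CloudFormation.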


uwsgi

When I need an application server, I reach for uwsgi. It’s simple, highly configurable, and has built-in support for virtualenvs. It also works well with supervisor and nginx (my proxy of choice). There are a lot of alternatives to uwsgi, but I don’t need async, so they aren’t very compelling.
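The virtualenv support is a single config line. A sketch of a uwsgi ini file with invented paths and module names:

```ini
[uwsgi]
; paths and module name are hypothetical
chdir = /srv/myapp
module = myapp:app
virtualenv = /srv/myapp/venv
master = true
processes = 4
socket = /tmp/myapp.sock
vacuum = true
```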

Okay great, more tools. How do they work together?

Here’s an example of how I set up a continuous deployment workflow using these tools.

A few web server instances are up on AWS, each running a Flask app in a uwsgi process managed by supervisor. This stack has been defined in troposphere, which generates the JSON file CloudFormation needs.

A teammate pushes a new feature for the Flask app to GitHub. A Jenkins job is triggered and runs the tests and linting for the app.

If the build is successful, another job compresses that commit into a build artifact and uploads it to S3 using awscli.

Once the artifact has been pushed to S3, another Jenkins job runs to deploy that artifact to instances in our dev environment. This job uses PyChef to query for nodes with attributes corresponding to that app. Now we can use Fabric to connect to the nodes via hostname. Fabric pulls the artifact from S3 onto the node using awscli, extracts it to the app directory, installs updated requirements into the app’s virtualenv, and then restarts the uwsgi and nginx processes running under supervisor. Some info on the deploy is recorded in Mongo. If the deploy fails, an alert is sent to Hipchat with python-simple-hipchat.
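The Mongo record at the end of that flow can be a plain dict. A sketch; the field names, database, and collection are invented for illustration:

```python
import datetime

def deploy_record(app, env, version, status):
    """Pure helper: the metadata we keep about each deploy (fields hypothetical)."""
    return {
        "app": app,
        "env": env,
        "version": version,
        "status": status,
        "timestamp": datetime.datetime.utcnow(),
    }

def record_deploy(record, mongo_url="mongodb://localhost:27017"):
    # Lazy import: pymongo is only needed when persisting the record.
    import pymongo

    client = pymongo.MongoClient(mongo_url)
    client.deploys.history.insert_one(record)
```

These records are exactly what the deploy dashboard reads back out.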

We have a deploy dashboard, a Flask app running in a uwsgi process via supervisor, that presents the deploy metadata stored in MongoDB in a web frontend for easy analysis.

This flow took a while to figure out and we’re constantly improving it, but it works.

It’s easy to succumb to choice paralysis when tooling your infrastructure; if you’re using Python, you already have the tools to compose many infrastructure workflows. In my experience, having the whole team work on infrastructure tooling can pay off in unexpected ways. For example, creating a Fabric workflow will help you understand decorators better. You can then use that new knowledge in other projects you work on.

What Python tools have you found helpful for infrastructure tooling?

