Chef integration testing with serverspec

Most resources discussing testing with Chef deal with unit testing. Unit testing your Chef recipes is a very good idea, but it will only get you so far: you can have hundreds of passing unit tests and your Chef converges can still fail. Integration testing can greatly increase the confidence you have in your Chef code. In this post, I’ll walk through how we’re currently doing it at Essess.

First, a clarification on Chef testing terms

I didn’t really understand how chefspec unit tests fit into the Chef testing process until I watched Seth Vargo’s talk on it at a June meetup. If you use Chef and haven’t watched it yet, check it out. He broke it down like this:

Unit testing in Chef is intention testing. When you run your chefspec tests, you are only testing whether you are telling Chef to do what you expected. When is this useful? When you have some complicated logic in your recipe or if you want to make sure you don’t introduce regressions. This kind of testing doesn’t tell you if your Chef converge will succeed or not. For that, we need to go beyond unit tests.
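To make "intention testing" concrete, here is roughly what a chefspec test looks like. This is just a sketch: the cookbook and package are made up, and the runner class name (ChefSpec::Runner, the 3.x style) depends on which chefspec version you’re on.

require 'chefspec'

describe 'ntp::default' do
  # Compiles the recipe in memory; nothing is installed on a real machine
  let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

  it 'tells Chef to install the ntp package' do
    expect(chef_run).to install_package('ntp')
  end
end

A test like this passes as long as the resource is declared with the right action, even if the package would fail to install on an actual box.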

Enter integration testing

test-kitchen is what you need to do integration testing with Chef. It is still in beta, but that doesn’t mean it isn’t ready to use. It does mean that the project’s documentation is not great, and you might need to read through its GitHub repo to figure out what’s going on.

Basically, this is what test-kitchen does:

  • Launches a test machine (vagrant box, ec2 instance, LXC container, etc)
  • Converges a given Chef run list on the machine
  • Runs tests afterwards to make sure the box is in the state you expect

The tests themselves are run by so-called bussers, plugins that handle test setup and execution on the converged box.

test-kitchen ships with the bats busser, which allows you to write bash tests for your converged box. I got very frustrated with bats though; unless you’re a unix wizard it is very slow and unintuitive to write these tests in bash. There is a better way.

serverspec, it’s better

serverspec lets you write your tests in Ruby in a more intuitive way. Its syntax is similar to chefspec.

There are several resource types available to use; a few examples are shown below.
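Beyond the service and port resources used later in this post, you can assert against packages, files, users and more. The names here are purely illustrative:

require 'spec_helper'

describe package('openssl') do
  it { should be_installed }
end

describe file('/etc/mongodb.conf') do
  it { should be_file }
end

describe user('rabbitmq') do
  it { should exist }
end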

The busser that test-kitchen uses depends on the folder structure you’re using. After test-kitchen converges, it will install the appropriate busser and use it to run your tests.

Enough theory, an example

Let’s say we want to launch an Ubuntu 12.04 server that is running MongoDB and RabbitMQ. This setup is arbitrary; you can use test-kitchen and serverspec to test any box you bring up.

This code is up at https://github.com/dustinmm80/serverspec-example.

You’ll need the vagrant-berkshelf plugin for test-kitchen to integrate with berkshelf. The setup.sh script in the repo will take care of this for you.
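Berkshelf resolves the cookbooks in the run list from a Berksfile. I won’t reproduce the repo’s exact file here, but a minimal one for this example would look something like this (assuming Berkshelf 2.x syntax, where site :opscode points at the community cookbook site):

site :opscode

cookbook 'mongodb'
cookbook 'rabbitmq'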

So, our .kitchen.yml file looks like this.

---
driver_plugin: vagrant
driver_config:
  require_chef_omnibus: true
  use_vagrant_berkshelf_plugin: true
  customize:
    memory: 1512
    cpus: 4

platforms:
- name: ubuntu12.04
  driver_config:
    box: ubuntu12.04
    box_url: https://opscode-vm-bento.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_provisionerless.box

suites:
- name: mongorabbit
  run_list: 
    - recipe[mongodb::10gen_repo]
    - recipe[mongodb]
    - recipe[rabbitmq]
  attributes:

We’re using the vagrant driver to launch a local VirtualBox VM here.

Our serverspec tests will be located here: test/integration/mongorabbit/serverspec/. The mongorabbit dir maps to the suite we’re running tests against, and the serverspec dir lets test-kitchen know what busser it needs to run these tests.

We have two serverspec files here, one to test mongo and the other for rabbitmq. It’s a good idea to split the functional parts of your server into separate test files. I test roles this way, so sometimes there are several different spec files.

mongo_spec.rb

require 'spec_helper'

# Mongo service
describe service('mongodb') do
  it { should be_enabled }
  it { should be_running }
end

ports = [27017, 28017]

ports.each do |port|
  describe port(port) do
    it { should be_listening }
  end
end

rabbitmq_spec.rb

require 'spec_helper'

describe service('rabbitmq-server') do
  it { should be_enabled }
  it { should be_running }
end

describe port(5672) do
  it { should be_listening }
end

For each service, we are testing that it is enabled at boot, running, and listening on the ports it should be.

We also have spec_helper.rb, which configures how serverspec will run. The different options can be found in the serverspec docs; I just copied the one Opscode uses in some of their cookbooks.
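For reference, that spec_helper.rb is roughly the following (a sketch assuming serverspec 1.x; the PATH value is just a sensible default):

require 'serverspec'
require 'pathname'

# Run commands directly on the converged box and detect its OS
include Serverspec::Helper::Exec
include Serverspec::Helper::DetectOS

RSpec.configure do |c|
  c.before :all do
    c.path = '/sbin:/usr/local/sbin:/usr/sbin:/usr/bin:/bin'
  end
end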

test-kitchen workflow

Since the docs around test-kitchen are not great yet, I’ll share my workflow when I’m writing serverspec tests.

I launch my test-kitchen box with this command:

kitchen test -d never

The -d flag tells test-kitchen when to destroy the box. By default, it destroys the box after a successful test run, which makes sense from a CI perspective but not while you’re developing tests. Passing -d never keeps the box up; otherwise I’d have to wait for a full converge every time I wanted to run my tests again.

Once the box is up and converged, I write my tests and run this command to run them:

kitchen verify

If I want to log into the box to poke around, I need to specify which instance I mean. The naming scheme is suite name + platform name, so in my example mongorabbit-ubuntu12.04. Use this command:

kitchen login mongorabbit-ubuntu12.04

Finally, when I am satisfied my tests are passing and I’m done working, I destroy all the boxes.

kitchen destroy

Continuous integration with test-kitchen+serverspec

Launching Vagrant boxes on your CI server might not be the best or safest way to run your test-kitchen suites. I am using the kitchen-ec2 driver to launch EC2 instances and run the tests on them from Jenkins. test-kitchen runs after a cookbook has been pushed and its unit tests pass.

I’m sure there are many ways to do this and my process will surely look quite different a couple months from now.

Conclusion

test-kitchen and serverspec are awesome tools and let you really test your Chef code before releasing it out into the wild. Since adopting this approach, my team is much more confident that our servers are in a state to run our code when we push it.

Are you using a similar approach? What busser have you found to be the most useful? I would love to see more discussion in this area as these tools become better and more widely used.

 