Postmodern Sysadmin

A blog about servers and junk

A Configuration Management Rosetta Stone: Configuring Sensu With Puppet, Chef, Ansible and Salt

I recently finished my Intermediate Sensu Training on Udemy. It was a ton of work but I’m glad I got it all together. Part of that training includes how to deploy and configure Sensu with four of the most popular open-source configuration management tools: Puppet, Chef, Ansible, and Salt.

The Sensu Decree

In order to do the training I had to learn each of these tools enough so I could install a baseline Sensu installation. Here is what I reproduced with each iteration:

  • A Sensu client, server, and API set up and running
  • RabbitMQ Server, User, and Sensu Vhost ready for use. (no SSL)
  • Redis installed and running for state
  • A Sensu check (check_disk and/or check_apache)
  • The Sensu Mail handler to send emails for alerts
  • The Uchiwa Dashboard
  • All on one host (localhost)

This was no small feat, and required using a non-trivial number of features of each configuration management system to get the job done.

Here were some other guidelines that I followed in this exercise:

  • Always use 3rd party modules/cookbooks/etc. Use official ones if possible.
  • Use the local-execution mode provided by the configuration management tool (no client/server setup)
  • Follow official docs when available for general guidelines for things like installation.
  • Differences in things like config file names or versions of Redis are inconsequential. As long as Sensu behaved the same I considered it complete.
  • No considerations for security (out of scope for this exercise)

Review of Each Tool


Puppet In General

Puppet is my “native language” when it comes to configuration management, so it is a little hard for me to imagine what it is like not to know how it works.

Puppet has a custom DSL to describe configuration in terms of “types”. These are the primitives that you can build infrastructure upon, things like “file”, “package”, and “service”. Third-party modules can extend that language with custom types, allowing you to abstract over the “raw” types. For example, the RabbitMQ module has a rabbitmq_user type; these users do not correspond to a particular config file, and can only be added by special invocations of the rabbitmqctl command.
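
As a rough sketch of what that looks like (the rabbitmq_user parameters here are from memory and may not match the module exactly), built-in types and a module-provided custom type read the same way:

package { 'rabbitmq-server':
  ensure => installed,
}

service { 'rabbitmq-server':
  ensure  => running,
  require => Package['rabbitmq-server'],
}

# Provided by the puppetlabs-rabbitmq module; applied via rabbitmqctl, not a config file
rabbitmq_user { 'sensu':
  admin    => false,
  password => 'secret',
}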

Puppet strongly emphasizes code-reuse. The Puppet Forge is the registry where you can upload and share modules. The Forge has a number of methods to help indicate code quality. It also exposes “officially supported” and “officially approved” modules, for extra approval stamps. While the Forge has a very “long tail” of modules for very common tasks, the officially-supported and officially-approved sets leave you with a good selection of high-quality modules ready for re-use.

A common criticism of Puppet is that it does not apply resources in the order that they are declared in the manifest. Instead, Puppet internally calculates a directed graph of resources and their dependencies, and executes them in a dependent order, which is not necessarily in the order in which they are parsed. This is similar to how Linux package managers install packages. If you run apt-get install apache libc libssl, the packages will not necessarily get installed in the order that they were specified on the command line.

Puppet also comes with Hiera, a convenient hierarchical key/value store. This store allows users to override and set site-specific settings for Puppet modules without having to fork or modify them. Hiera encourages custom hierarchies that meet your business needs, allowing users to specify settings in a way that makes the most sense for their environments. An example hierarchy might look something like:

├── common.yaml
├── environment
│   ├── dev.yaml
│   └── prod.yaml
├── datacenter
│   ├── dc1.yaml
│   └── dc2.yaml
└── hostname
    ├── web1.yaml
    └── web2.yaml

Then Hiera looks up parameters from most-specific (hostname) to least-specific (common), and returns the first value that is available.
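
The hierarchy itself lives in hiera.yaml. A minimal sketch for the layout above (the datadir and the datacenter fact are my own assumptions; this is the Hiera 1 syntax that shipped with Puppet 3) might be:

---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "hostname/%{::hostname}"
  - "datacenter/%{::datacenter}"
  - "environment/%{::environment}"
  - common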

Review of the Sensu Puppet Module

The sensu-puppet module is a first-class citizen in the Sensu world. It has native types for the Sensu JSON files that it manages, as well as a sensu_gem package provider for easily installing rubygems with the embedded Sensu ruby.
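
For example, a check and a plugin gem might be declared roughly like this (parameter and gem names are from memory and illustrative; the module’s README documents the exact interface):

sensu::check { 'check_disk':
  command     => '/etc/sensu/plugins/check-disk.rb',
  handlers    => ['mailer'],
  subscribers => ['base'],
  interval    => 60,
}

package { 'sensu-plugins-disk-checks':
  ensure   => installed,
  provider => sensu_gem,
}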

The Sensu Puppet module only manages Sensu, and has no integration with RabbitMQ, Redis, or any other module. To me this is expected; in the Puppet world it would be the job of a profile to combine the Sensu module with RabbitMQ and other things, as sketched below. For the most part this integration is left as an exercise to the reader.
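
Such a profile might look roughly like this (class parameters are simplified and from memory, not copied from any module’s docs):

class profile::sensu_server {
  class { '::rabbitmq': }
  class { '::redis': }
  class { '::sensu':
    server            => true,
    api               => true,
    rabbitmq_password => 'secret',
  }
}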

The Sensu Puppet module also doesn’t manage Uchiwa; that requires a different Puppet module. Again, to me this is a good thing: I hate it when tools try to do too much.

The actual codebase is actively maintained and reasonably active, with a few releases per year. The Puppet Forge rates it almost perfectly for module quality. The code has excellent unit test and acceptance test coverage. As far as Puppet modules go, the Sensu Puppet module is a great example of a well-maintained piece of code.

One downside to the “completeness” of the module is that sometimes new features of Sensu are released, and the Puppet module will lag. The configuration inputs to the Puppet module are well-typed, and not just free-form hashes. This gives a lot of guardrails and helps ensure config files are correct before they hit the disk, but it means that some features are not usable until the Puppet module can account for them.

Although the code worked, there was a significant bug that prevented the module from ever converging. This was annoying, but I was still able to test the code. This bug looks to be fixed in master.


Chef in General

Chef is not as old as Puppet, but it is certainly a mature product. Chef is “just ruby” when it comes to its configuration language. The upside is that Ruby developers can theoretically dive in and hack on stuff. The downside is that being “just ruby” leaves a lot of rope to hang yourself with.

One nice feature provided by the Chef company is their hosted Chef solution, which allows people to get started without hosting a Chef server.

The Chef toolset also comes with the knife command, which is a great command line tool for interacting with the Chef server. It is also a parallel-ssh tool, manipulates Chef cookbooks, and can launch EC2 (and other) instances. (Did they take the kitchen-sink metaphor too far?)

The Chef Supermarket serves as the public registry for Chef cookbooks. There are not many quality indicators to help you figure out which cookbooks are any good; the best metric I could find was sorting by “followers”. This is made up for by the fact that there are over a hundred officially supported cookbooks.

Probably the most difficult aspect of Chef for me to understand was how attributes interact. This confusion is probably most obvious when you look at Chef’s 15 levels of attribute precedence. It seems to me that there should be a more obvious way for intent to flow, but I could be just spoiled by Puppet’s Hiera.
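
As a tiny illustration (the attribute name is illustrative), here is the same attribute set at two different precedence levels; at converge time the higher-precedence override silently wins:

# attributes/default.rb in a community cookbook
default['sensu']['use_embedded_ruby'] = true

# attributes/default.rb in your wrapper cookbook
override['sensu']['use_embedded_ruby'] = false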

Review of the Chef-Sensu Cookbook

The Sensu Chef cookbook is also a first-class citizen in the Sensu world. Chef is the “native config language” of Sean Porter, the main author of Sensu. This gives the cookbook a lot of credibility, and it shows in the contributor page.

The Cookbook itself is feature complete, with recipes for installing and configuring all aspects of Sensu.

The scope of the cookbook includes all Sensu related technologies, including RabbitMQ, Redis, and Uchiwa. It is certainly “batteries included” and on by default. It even downloads and compiles Redis from source for you.

Another example of this “batteries included” design is the RabbitMQ cookbook setting Apt attributes. Like the Redis example above, this behavior surprised me, but technically it is not the Sensu Chef cookbook’s doing.

At the same time, wrapper cookbooks are recommended as a method to combine multiple cookbooks together in a coherent way. I think in general I just expected the wrapper cookbooks to do more and the main Sensu cookbook to do less.

The cookbook does have an integration test suite, but it is not run via Travis. The code is under active development, with multiple releases a year. It has native support for Chef data bags for transporting the RabbitMQ SSL support, which is a nice touch (not tested in this review).


Ansible in General

Ansible is a relative newcomer to the configuration management space. Ansible uses yaml files to define desired state. The yaml files are a nice way to represent things, but it would be misleading to think that Ansible is just yaml files: Ansible has its own DSL and uses Jinja2 templating, which is rendered over the contents of the yaml.
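
A minimal sketch of what that mix looks like in practice (the play, variable, and package pin are illustrative):

- hosts: monitoring
  vars:
    sensu_version: 0.16.0
  tasks:
    - name: Install the Sensu package
      apt:
        name: "sensu={{ sensu_version }}-1"
        state: present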

The Ansible Galaxy is the community registry for uploading shared roles. You can sort by rating to try to get a better idea about which roles are potentially higher quality than others.

There don’t seem to be any official roles or playbooks. The closest thing to official roles is the ansible-examples repository. But click the link and look at the lamp_simple example: there is no code-reuse at all! Every example re-invents how to install apache, install ntp, configure iptables, etc. What’s up with that?

While the yaml files may make it very easy for beginners to write playbooks that get things done quickly, I don’t think they will work out great as infrastructure expands. The abstractions just are not there.

Another sign, to me, that Ansible has the wrong abstractions is that so many roles are distro specific. Not many have the necessary code to work on both CentOS and Debian. There is a generic package type, but very few roles use it. Check out the original author’s opinion on the subject. Look at the examples! They all only work on yum-based distributions.

I’ve read lots of posts from people migrating to Ansible and loving it. Personally, I don’t get it. The abstractions are too low-level. If you are lucky, Ansible core has a module to manipulate the resources on the host, like the RabbitMQ stuff. If you are unlucky, the only primitives you have are yaml files, running commands, and parsing stdout. Or you can write your own module. The lucky case looks like the sketch below.
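
For example, core Ansible ships a rabbitmq_user module, so the “lucky” case can be declared directly (the values here are illustrative):

- name: Create the sensu RabbitMQ user
  rabbitmq_user:
    user: sensu
    password: secret
    vhost: /sensu
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    state: present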

Ansible Sensu Playbook Review

There is no official Sensu Ansible playbook. I was not able to find any playbooks that support RedHat-based distributions.

Luckily, I was able to use Mayeu’s ansible playbook, in conjunction with this RabbitMQ playbook on my Ubuntu server.

The sensu_check module is part of the “Extras”, but it is only a very small part of deploying Sensu, and it has no cohesion with the playbook that actually deploys Sensu itself. There is no way to extend sensu_check without forking ansible-modules-extras. It can’t consume arbitrary check metadata.

In the end, to meet my needs I had to construct hashes myself and deploy them to disk as JSON (see the sketch below). The playbook-provided way to deploy sensu checks is to have them all contained in the single sensu_checks variable.
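
Roughly, the workaround looked like this (check contents and paths are illustrative; copy plus to_nice_json is just one way to get a hash onto disk as JSON):

- name: Define the check_disk check
  set_fact:
    check_disk_definition:
      checks:
        check_disk:
          command: /etc/sensu/plugins/check-disk.rb
          subscribers:
            - base
          interval: 60

- name: Write the check out as JSON
  copy:
    content: "{{ check_disk_definition | to_nice_json }}"
    dest: /etc/sensu/conf.d/check_disk.json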


Salt in General

Salt is also a relative newcomer to the configuration management world. As a user, Salt feels very similar to Ansible. Both use yaml files to represent the desired state of the system. Both use Jinja templates. Both require the “advanced” system interactions to be handled by modules in the core codebase, so Salt formulas can be just yaml with no real code.
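
A minimal Salt state (sls) sketch, mixing yaml and Jinja in much the same way (package and version are illustrative):

{% set sensu_version = '0.16.0' %}

sensu:
  pkg.installed:
    - version: {{ sensu_version }}-1
  service.running:
    - name: sensu-client
    - require:
      - pkg: sensu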

Salt takes a different approach to sharing community code compared to the other configuration management systems. Salt keeps all the official formulas in one GitHub project. The docs recommend forking the formula for your own use. On the plus side, having “canonical” formulas for common tasks reduces duplication and encourages code re-use. The downside is that… it encourages forking? These formulas in general are not that extensive. They don’t have releases or any kind of testing in place.

Salt’s Pillar is a powerful tool for separating configuration from code, similar to Puppet’s Hiera. Pro: it separates config from code, keeping the site-specific variables in a separate folder from the formulas. Con: formulas have to be “pillar-aware”; there is no equivalent to Puppet’s automatic parameter lookup. A sketch of both halves follows.
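
For example (keys and defaults are illustrative), site data lives in a pillar file, and a pillar-aware formula has to fetch it explicitly:

# pillar/sensu.sls
sensu:
  rabbitmq:
    host: localhost
    password: secret

# ...and somewhere inside the formula's states or templates:
# {{ salt['pillar.get']('sensu:rabbitmq:password', 'changeme') }}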

Sensu Salt Formula Review

For my testing, I used the official Salt-formula. There is a sensu-salt repo on the official Sensu project, but it is not really suitable for production use in my opinion.

For the most part, the formula did what it said on the tin. Of course, like Ansible, the only way I was able to deploy checks in a flexible way was to construct my own hashes and deploy them as JSON directly (see below); there is no such thing as a sensu_check type in Salt.
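
One way to do that (a sketch; the check contents are illustrative) is Salt’s file.serialize state, which writes a data structure straight to disk as JSON:

/etc/sensu/conf.d/check_disk.json:
  file.serialize:
    - formatter: json
    - dataset:
        checks:
          check_disk:
            command: /etc/sensu/plugins/check-disk.rb
            subscribers:
              - base
            interval: 60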

I was not able to get rid of the hard-coded cron check. I guess it goes with the idea that they expect you to fork the repo and make your own local changes to meet your needs. I thought about opening an issue for this, but the file has been there for a year and nobody else has complained. I figured it was just me, and maybe I should get over myself and accept the fact that I got a free cron check!

In my own testing, I used the native gem provider with a special path to Sensu’s gem binary to install Sensu gems. I then discovered that the formula does this too, but in two different ways, neither of which uses the native gem method. I didn’t really like this, but at the same time, this is the first time I’ve ever used Salt.

As far as I can tell, to do more advanced Sensu config things, like filters or mutators, you are expected to fork the formula and drop in the json file into the right directory.


A rough, opinionated comparison between the tools, with regard to both the tool itself and the tool in conjunction with Sensu. “High” doesn’t necessarily mean “good” here:

                                                Puppet    Chef                   Ansible            Salt
Review of the Config Management Tool in General
  Version used                                  3.4.3     12.4.1                 1.5.4              2015.5.3
  Third-party module ease of use                High      High                   Medium             Low
  Official Sensu support for the tool           High      High                   Low                Low
  Reproducibility                               High      High                   High               High
  Ease of use getting started                   Medium    Medium                 High               Medium
  Language extensibility                        High      High                   Low                Low
  Separation between config data and code       Hiera     Databags/Attributes    just variables?    Pillar
  Module re-usability                           High      High                   Low                Low
Review of the Sensu Module/Cookbook/Etc
  Version of the module used                    1.5.5     2.10.0                 0.1.0              c6324b3
  Sensu module feature completeness             High      High                   Medium             Medium
  Sensu module integration with other modules   Low       Extreme?               None               None
  Sensu module flexibility                      High      High                   Medium             Low
  Sensu module re-usability                     High      High                   High               Low
  How opinionated was it?                       Low       High                   Low                Medium
  Usability with Sensu’s embedded Ruby          Yes       Yes                    Not natively       Sorta


The way I see it, there are two camps. Chef and Puppet both provide a rich language to build modules with. For example, the PuppetLabs RabbitMQ module contains all the code to interact with RabbitMQ. The main Puppet codebase doesn’t know anything about RabbitMQ. The same goes for Chef. Both Chef and Puppet also have their own DSL. Puppet uses yaml files for Hiera, but they are for config only, unlike Ansible/Salt.

In the other camp are Ansible and Salt. They have a simplified config language, and require help from the core software to do the “heavy lifting” of the raw types. For example, the Salt RabbitMQ formula requires the help of the core Salt RabbitMQ module to provide the primitives.

Final Thoughts

  • Puppet
    • Directed graph dependency ordering, not parse-order driven
    • Type/Provider system and defined types provide the right abstraction layers to build upon.
    • Hiera provides a good separation of config/code, making it easier to reuse modules without modification.
    • Strong culture of testing
    • Lots of good supported modules
    • High deployment overhead and language learning curve
  • Chef
    • LWRP system provides the right abstraction layers to build upon.
    • Knife tool does do a lot of cool stuff
    • Lots of good supported cookbooks
    • Strong culture of testing
    • “Just ruby”
    • 15 levels of attribute precedence is insane
  • Ansible
    • Low deployment overhead and low learning curve
    • “Just yaml files”
    • Lack of type/providers means that playbooks use “apt” and “yum” directly, which kinda sucks
  • Salt
    • Pillar provides a nice separation of config/code, which is good for formula-reuse, if the formula is pillar-aware
    • Centralized formulas emphasize consolidated development effort
    • No strong state testing emphasis or framework

Going Further

If you want to know more about Sensu, of course you can take my training course.

Or you can tell me I’m wrong. You can raise an issue or make a pull request for the blog post, or investigate my actual training material and code on GitHub.

A Comparison of Image to ASCII Conversion Tools

Inspired by ponysay, I think wicked ascii/ansi artwork on the terminal is great.

I decided to survey all the tools I could find that aid in this conversion to see if there were any dramatic differences in results.


For these tests I used an image with a 160px width, twice that of a standard terminal. Then I cat’d the converted output in a plain xterm and took a screenshot of the results.

The original has been scaled up (6X) to be the same relative size as the resulting screenshots.

My entire methodology is on GitHub if you wish to see exactly how I made these images. In theory it is 100% reproducible from make (assuming a Linux desktop).

Tools Compared



[Screenshots: bender.png in the original, then converted with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a]


[Screenshots: lenna.png in the original, then converted with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a]


[Screenshots: nyan.png in the original, then converted with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a]


img2xterm stands out to me as the most accurate and true to the original, with util-say as a close second. Both of these tools understand “half-block” characters with two colors, effectively doubling the vertical resolution of the resulting characters. (two colors per character cell)
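
The trick, roughly (the colors here are arbitrary), is to print U+2580 UPPER HALF BLOCK with the foreground set to the top pixel’s color and the background set to the bottom pixel’s color, packing two image rows into one row of characters:

printf '\e[38;5;196m\e[48;5;21m▀\e[0m\n'   # a red "pixel" stacked on a blue one in a single cell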

catimg and img-cat both have good color representation, but lack the additional resolution of the half-block tools, giving them a more “pixelated” look.

img2txt and jp2a are “true ascii” tools; they are really not in the same league as the others. I included them here for completeness.

Playing With IPv6 Over Bluetooth Low Energy (6LoWPAN)

I like Bluetooth Low Energy (BTLE). I also like IPv6. Did you know you can put both together?

Technically 6LoWPAN


First, load the 6LoWPAN kernel module and make sure it loads on boot:

modprobe bluetooth_6lowpan
echo 'bluetooth_6lowpan' >> /etc/modules

Establishing the Connection

Set the Bluetooth L2CAP PSM

First you need to set the Protocol/Service Multiplexer value to “62” (0x3E) on both sides:

echo 62 > /sys/kernel/debug/bluetooth/6lowpan_psm

This PSM value lets the driver know that you are going to multiplex this special new protocol on top of whatever your bluetooth device might also be doing.

0x25 is the magic value for the “Internet Protocol Support Profile”, which I think is supposed to be the correct value.

0x3E is some sort of temporary value I had to use to get this working, as 0x25 ended up being unsupported per the messages in my wireshark dump.

I’m not aware of any other way to set it other than this kernel debug setting.

Making the slave advertise

The slave must be doing Low-Energy advertisements in order for the master to connect to it.

hciconfig hci0 leadv


On the master you should be able to watch the slave advertise:

hcitool lescan
LE Scan ...
C4:85:08:31:XX:XX (unknown)
C4:85:08:31:XX:XX ubuntu-0

Establish a connection from the master to the slave:

echo "connect C4:85:08:31:XX:XX 1" >/sys/kernel/debug/bluetooth/6lowpan_control

Afterwards, a bt0 device should show up in ifconfig. Run hcitool conn to verify a connection is actually established. Use wireshark in Bluetooth monitor mode on the hci device to confirm commands are being sent.

The proof is in the ping:

# ping6 fe80::1610:9fff:fee0:1432%bt0
PING fe80::1610:9fff:fee0:1432%bt0(fe80::1610:9fff:fee0:1432) 56 data bytes
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=1 ttl=64 time=158 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=2 ttl=64 time=236 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=3 ttl=64 time=113 ms


After a small number of packets, the connection seems to drop, and on the master side I get:

[  368.947193] Bluetooth: hci0 link tx timeout
[  368.947202] Bluetooth: hci0 killing stalled connection c4:85:08:31:XX:XX

No matter what rmmod-ing or service stopping I tried, a reboot was the only thing I could do to rebuild the connection. Obviously this is pretty new stuff; hopefully it will stabilize in later versions of the kernel.

At this time though, on 3.19.0-21-generic (Ubuntu Vivid), this feature is not yet usable.

Etherhouse Part 2 - Software

The software that powers the Etherhouse project is open source. This blog post describes that software and how it interacts with all the pieces.


You can see the client software that runs on the Arduino. It uses one external library and is written in native Arduino C++.

The Arduino runs a limited TCP/IP stack and interacts with the http api.

The code has plenty of defensive measures in place to ensure the client continues to run without interruption or interaction. No one should need to “turn it off and on again.”


The Server software is also open source.

In designing the software, I aimed for longevity. I want the software to continue to run for many years without maintenance. I decided to use golang.

  • Go binaries are statically compiled, which means the same binary I compile now will continue to run on new platforms for years to come.
  • With godeps I can include all compatible libraries together with no external dependencies, regardless of their long term state.
  • I use Heroku to deploy the code. Heroku is free for small installs and a stable platform. They can probably keep this server up better than I can.
  • I use a DNS name I can control for service discovery. This gives me the flexibility to change platforms over time if necessary.

Etherhouse Part 1 - Hardware

Etherhouse is a project of mine involving eight Christmas gifts. Each gift was a display of model houses made from folded paper, each representing the home of a friend or family member.

The houses light up depending on whether that family member is home or not. Their presence is detected based on whether their smartphone is on the same network as the etherhouse.

See the GitHub page for more details.

Getting Started Puppet Acceptance Tests With Beaker

Beaker is a test framework created by Puppetlabs to run tests against puppet modules on real servers (VMs, containers, whatever) and verify that they do what they say they should do.

This is a quick tutorial on how to use this framework. At the time of this writing, Beaker is under heavy development, so this could all change.

The Gem

The first thing you need to do is install beaker. Usually this is as simple as adding it to your Gemfile and running bundle install.

gem 'beaker'
gem 'beaker-rspec'

I recommend using garethr’s puppet module skeleton Gemfile, which includes Beaker already.

Now install it:

bundle install

Acceptance Boilerplate

Rspec and the Puppetlabs Helper

This tutorial assumes you already have the puppetlabs_spec_helper installed, rake, rspec, etc.

Folder For Tests

You need a place to put acceptance tests. They must go in spec/acceptance.


See puppetlabs-mysql for an example of what it looks like.


You must have at least a default.yml in the nodesets folder inside your acceptance folder. Here is an example:

# consul/spec/acceptance/nodesets/default.yml
HOSTS:
  ubuntu-12-04:
    platform: ubuntu-12.04-x64
    image: solarkennedy/ubuntu-12.04-puppet
    hypervisor: docker
CONFIG:
  type: foss

You can have different yaml files for different platforms you wish to test against. The format is described in the Beaker wiki.

Note: I use my own docker files for speed, as they come preinstalled with the Beaker Host Requirements.

Warning: If you use docker, you cannot test service-related things because there is no init running inside the container. For comprehensive testing of things like services, firewalls, etc., you must use a true hypervisor, such as Vagrant.

Acceptance Spec Helper

# consul/spec/spec_helper_acceptance.rb
require 'beaker-rspec'

# Not needed for this example as our docker files have puppet installed already
# hosts.each do |host|
#   # Install Puppet
#   install_puppet
# end

RSpec.configure do |c|
  # Project root
  proj_root = File.expand_path(File.join(File.dirname(__FILE__), '..'))

  # Readable test descriptions
  c.formatter = :documentation

  # Configure all nodes in nodeset
  c.before :suite do
    # Install module and dependencies
    puppet_module_install(:source => proj_root, :module_name => 'consul')
    hosts.each do |host|
      # Needed for the consul module to download the binary per the modulefile
      on host, puppet('module', 'install', 'puppetlabs-stdlib'), { :acceptable_exit_codes => [0,1] }
      on host, puppet('module', 'install', 'nanliu/staging'), { :acceptable_exit_codes => [0,1] }
    end
  end
end

The spec helper does the tasks needed in order to prepare your SUT (system under test). This might include installing puppet, installing your puppet module dependencies, etc.

Example Acceptance Test

# module_root/spec/acceptance/standard_spec.rb
require 'spec_helper_acceptance'

describe 'consul class' do

  context 'default parameters' do
    # Using puppet_apply as a helper
    it 'should work with no errors based on the example' do
      pp = <<-EOS
        file { '/opt/consul/':
          ensure => 'directory',
          owner  => 'consul',
          group  => 'root',
        } ->
        class { 'consul':
          config_hash => {
            'datacenter' => 'east-aws',
            'data_dir'   => '/opt/consul',
            'log_level'  => 'INFO',
            'node_name'  => 'foobar',
            'server'     => true,
          },
        }
      EOS

      # Run it twice and test for idempotency
      expect(apply_manifest(pp).exit_code).to_not eq(1)
      expect(apply_manifest(pp).exit_code).to eq(0)
    end

    describe service('consul') do
      it { should be_enabled }
    end

    describe command('consul version') do
      it { should return_stdout /Consul v0\.2\.0/ }
    end
  end
end


The filename is important: it must end in _spec.rb in order for the test harness to detect it. You can see that there are many matchers you can use to run pretty much any kind of test you can think of.

See the puppetlabs-mysql collection again for some great examples.

Running Them

bundle exec rake acceptance

This command will spin up your described servers in nodesets, install your puppet modules and dependencies, and test your assertions.


Acceptance tests should be used sparingly; they are the top of the testing pyramid.

They are slow, touch the disks and network, and depend on external resources. The example mysql acceptance tests literally install mysql, install and configure databases, and assert that they exist.

They may be slow, but they can be very helpful, and are potentially the only way to really test the functionality of a puppet module in an end-to-end way.

Puppet is a system configuration management tool. Unit tests can only go so far to make sure the compiled catalog is “correct”. Puppet acceptance tests can help you go 100% and ensure that your module literally does what it says it does by running tests against actual systems, files, packages, and services.

Managing Ssh Known Hosts With Serf

Serf is a very interesting service discovery mechanism. Its dynamic membership and tags capability make it very flexible. Can we use it to generate a centralized ssh_known_hosts file?

Installing and Configuring Serf

I like to use configuration management to manage servers. Here I use a Puppet module to install and configure Serf:

class { 'serf':
  config_hash => {
    'node_name' => $::fqdn,
    'tags'      => {
      'sshrsakey' => $::sshrsakey,
    },
    'discover'  => 'cluster',
  },
}
This particular module uses a hash that translates directly into the config.json file on disk. Notice how I’m using the new tags feature, and adding an sshrsakey tag populated by Puppet’s facts.

Querying The Cluster

Once the servers have Serf installed and configured, the cluster can be queried using the serf command line tool:

$ serf members
host1.example.com    alive    sshrsakey=AAAA...
host2.example.com    alive    sshrsakey=AAAA...

Using the Data

Let’s use this data to write out our /etc/ssh/ssh_known_hosts file, emulating the functionality of ssh-keyscan:

$ serf members -format=json | jq -r '.members | .[] | "\(.name) ssh-rsa \(.tags[])"' | tee /etc/ssh/ssh_known_hosts
host1.example.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTfPpmHhc+LoD05puxC...
host2.example.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmzk+Chzrq73c5ytU9I...

So… you can see I’m using jq to manipulate the JSON output of the serf command. I’m not super proud of it, but it works.

Let’s see if we can use a script instead. Serf provides an RPC protocol to interact with it programmatically:

#!/usr/bin/env ruby
require 'serf/client'
client = Serf::Client.connect address: '127.0.0.1', port: 7373
members = client.members.value.body['Members']
puts members.collect { |x| x['Name'] + ' ssh-rsa ' +  x['Tags']['sshrsakey'] }

Of course, no error handling or anything. This script achieves the same result using the serf-client ruby gem.

There are libraries to connect to the Serf RPC directly for many languages, or you can do it yourself using the msgpack RPC library to communicate directly on the tcp socket.


This is just the beginning. Serf allows retrieving the status of members, but also can spawn programs (handlers) whenever members join or leave.

Additionally you can invoke custom events for your own uses, like code deploys.

If you can deal with an AP (eventually consistent) discovery and orchestration system, then Serf could be a foundation for building great things!

What Happens When You Run Puppet Tests

Breaking down bundle exec rake spec

What is happening when you run:

bundle exec rake spec


The first command you are running is bundle. Bundler is kinda like virtualenv for Ruby: it makes sure that you use the same ruby libraries that everyone else, including your puppetmasters, uses.

Bundler uses a Gemfile, and searches upward through parent directories to find one. As long as you have the Gemfile in the puppet repo, it will work.


The second part is exec. Exec is a subcommand of bundle; it simply runs the next command in a “bundled” environment, with the ruby libraries specified in your Gemfile.


The third part is rake. Rake is like Make for Ruby. It requires a Rakefile. Each puppet module needs a Rakefile.

You don’t need to re-invent the Rakefile, simply have this in it:

require 'puppetlabs_spec_helper/rake_tasks'

This ensures that we are all running tests in the same way.


Spec is a “rake task” that runs Rspec. Rspec is a ruby testing framework. Rspec + rspec-puppet is a whole other thing, described in the next section.

How does Rspec Test Puppet Code?

If you are running bundle exec rake spec, rspec takes over in the environment provided by bundler. It gives you all the gems necessary to do the job, but how does Rspec know about Puppet code?

If you are including the puppetlabs_spec_helper/rake_tasks, your exact task includes the prep/test/clean stuff.

You need some boilerplate files in place for rspec-puppet tests to run. You can either run rspec-puppet-init to generate them.


Or you can manually set up the files and folders. Here I will describe the minimal set of files you need:


.fixtures.yml is a puppetlabs_spec_helper construct that allows you to symlink in other modules that might be required to test your code. For example, you might require functions from stdlib. How does Rspec know where stdlib is?

    stdlib: "git://"
    your_module: "#{source_dir}"

When rspec runs the preparation parts, the spec_helper will create symlinks, or clone repos, or whatever.


spec/spec_helper.rb is a file you need in place for your rspec tests to reference. If you are using the puppetlabs_spec_helper gem, it is only one line:

require 'puppetlabs_spec_helper/module_spec_helper'

This spec_helper.rb file can now be referenced, and by doing so will allow Ruby to import all of the puppet-specific Rspec matchers it needs to function.

For example, at the top of every Rspec ruby file you should see something like this:

require 'spec_helper'

describe 'my_module' do

  it { should compile }
end


Directory structure

Putting files in the right places allows Rspec to autodetect them. Giving them a conventional name allows rspec to glob them.

As the scope of your testing increases, a well-organized directory structure is essential:

├── spec
│   ├── classes
│   │   └── example_spec.rb
│   ├── defines
│   ├── functions
│   ├── hosts
│   ├── spec_helper.rb
│   ├── spec_helper_system.rb
│   └── system
│       └── basic_spec.rb

The Tests

How to write puppet tests is outside the scope of this particular blog post.

I recommend looking at solid examples from puppetlabs’ GitHub, or right from the official documentation.

But essentially, Rspec runs puppet in a noop mode, only generating a catalog of what it would do. Then the rspec tests use matchers to describe assertions against the catalog.

Writing Purgable Puppet Code

Whenever possible, I try to write Puppet code that is purgable and “comment safe”. That is not a very good description. What I mean is Puppet code that removes resources from a system when the corresponding Puppet code is commented out of a manifest. Let’s look at a few examples.

Example: Managed Sudo

Let’s say you used this popular sudo module with the following params:

class { 'sudo':
  purge => true,
}

Great start. All future sudo::conf blocks you write will automatically disappear from the host:

sudo::conf { 'web':
  source => 'puppet:///files/etc/sudoers.d/web',
}

# Commenting out for now. Automatically is purged from the server
# sudo::conf { 'admins':
#   priority => 10,
#   content  => "%admins ALL=(ALL) NOPASSWD: ALL",
# }

Good stuff. Do this.

Example: Managed Firewall

How about another example with the Puppetlabs Firewall module?

# Automatically remove rules that are not declared
resources { 'firewall':
  purge => true,
}

# Production needs 111 open
firewall { '111 open port 111':
  dport => 111,
}

# Tried this but didn't work. Commenting out for now.
# Automatically removed from the server when I commented it out.
# firewall { '112 open port 112':
#   dport => 112,
# }

The Point?

The point here is that we should encourage a culture of purging. Having resources get automatically purged when you comment them out from puppet is great.

Of course, this is obsoleted in the short-lived world of docker or possibly Amazon EC2. But for those engineers who work on long lived servers, this prevents cruft.

Going Further: Purging Packages

I want to purge packages. If someone installs a package not controlled by Puppet, I want puppet to purge it. Crazy I know.

package { 'apache': ensure => installed }

# No longer using php
# But puppet leaves this behind!
# package { 'php5': ensure => installed }

Of course puppet will leave the package behind. I should be doing ensure => purged right?

But what if the package is deep within nested classes or simply manually installed?

Some day I would like to get to the point where I at least get notified when puppet detects packages that don’t need to be there. I’m open to suggestions on how to do this; one idea is sketched below.
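
One hedged idea (untested here): declare the package meta-resource with purge plus noop, so a puppet run reports unmanaged packages without actually removing anything:

resources { 'package':
  purge => true,
  noop  => true,
}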

Going Further: Purging /etc/

Most of the time stale configuration leftover in /etc/ causes no harm.

But what about cron jobs in /etc/cron.d? I would love to purge them, but there are non-puppet controlled things installed by system packages. If everything was a puppet module this could eventually be achieved, but it would be too hard to keep in sync with upstream package changes.

Purging on a per-app basis with things like sensu, apache, and sudo is a great start.

Crossing the Line: Purging /var/lib/mysql

Seems like if you asked puppet to install mysql databases, and then commented them out, you would not want puppet to purge them.

The subtle difference here might be the difference between configuration and data.


Whenever possible I try to set purge => true on whatever I can. I would like to see this as the default in new puppet modules.

Someday I would like us to purge more than just files and iptables rules.

Introducing Sensu-shell-helper!

The Problem

The barrier to writing Nagios checks is high. I dare say very high. You have to think about check intervals, host groups, service groups, config files, etc.

But I know my servers are misbehaving; if only there were a way to check them! They run commands for me all the time. In the worst case they fail and no one knows. The best case is that they end up in my cron spam folder…

A Solution!

Sensu-shell-helper. It is a small script I wrote to make it easier to monitor arbitrary commands with Sensu. Here is how you use it:

sensu-shell-helper apt-get update

Yes. That is it. No mandatory config options. Good defaults. Minimal overhead. What does this check look like in the dashboard when it fails?

Exactly what I wanted. And of course, when apt-get update begins to work again, the check will resolve itself.

Under The Hood

sensu-shell-helper really just takes the output of the command you ask for, tails it, then sends the result to localhost:3030, which the sensu-client listens on.

By default it does not specify any handlers. (But they can be specified on the command line with -H) For the check-name it takes the full command and munges it to pass the sensu validator. Duplicate instances of the exact same command on a particular host will be seen as a single “check”.

Most commands do not return 0,1,2,3 according to the Sensu / Nagios plugin API, so the sensu-shell-helper will emit 2 (critical) in the event that the shell command returns anything non-zero. This behavior can be overridden with -N in the case that your command does conform to the 0,1,2,3 spec.