Postmodern Sysadmin

A blog about servers and junk

Home Made Lichtenberg Figures

For winter 2016 I made Lichtenberg figures. I used a 5kV 10mA (50W) neon-sign transformer. I also experimented with a 2kW microwave oven transformer, but found that the lower-powered neon transformer produced finer, better, and safer results.

To produce the figures, I would first apply the electricity to the wood, often at the corners. Initially the wood's resistance is too high for any current to flow, so no burning occurs. I would then use a spray bottle of water and baking soda to moisten the surface of the wood until the electricity could find a path of least resistance and start the burning reaction. With the low-power neon transformer the burning is slow and takes hours.

To guide the reaction in an “aesthetically pleasing” way, I used a heat gun to temporarily dry out parts of the wood, leaving behind channels of low-resistance surface water for the current to follow. This technique is most evident on piece #10. It was also used on #15 to cover the entire (large) piece evenly.

After the electrical treatment, each piece was finished with varnish, matted, framed, and shipped. Below is a gallery of final results. Each was given to a friend or family member as a winter gift:

Piece 01: 14”x7” Mahogany

Piece 02: 12”x7” Birch

Piece 03: 12”x7” Birch

Piece 04: 12”x7” Mahogany

Piece 05: 14”x7” Mahogany

Piece 06: 7.5”x7.5” Birch

Piece 07: 7.5”x7.5” Mahogany

Piece 08: 7.5”x7.5” Mahogany

Piece 09: 7.5”x7.5” Mahogany
(Image not available)

Piece 10: 18”x7.5” Mahogany

Piece 11: 18”x7.5” Mahogany

Piece 12: 24”x7.5” Mahogany

Piece 13: 24”x7.5” Mahogany

Piece 14: 24”x5.5” Oak

Piece 15: 24”x24” MDF

Piece 16: 24”x5.5” Poplar

Piece 17: 24”x7.5” Birch

Piece 18: 37”x5.5” Mahogany

Piece 20: 18”x5.5” Mahogany

A Comparison of Text-Based Web Browsers


Who browses on the terminal nowadays? Whoever you are, you are crazy, but you might appreciate this comparison of text-based web browsers, with screenshots of a few different popular sites.

I wanted to test these browsers with more than just simple pages, so where possible I actually logged into places and took screenshots of the actual webpage in a realistic state.


All browsers were set to use xterm with TERM=xterm-256color. The following browsers were used with these settings:

  • retawq (0.2.6c)
    • Enable SSL support
  • elinks (0.12~pre6-11build2)
    • underline
    • linux frames
    • 256 color
    • utf8
  • links2 (2.12-1)
    • Linux frames
    • Color
  • w3m (0.5.3-26build1)
    • Render frames
  • lynx (2.8.9dev8-4ubuntu1)
    • underline links
    • Always allow cookies

Check out the code for the exact commands used to generate everything.


Wikipedia Rule_110

Wikipedia has great text-based browsing support in general. I did not try editing anything. All browsers had no trouble rendering the data in a readable way. (Screenshots: Rule_110 as rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Hacker News

Hacker News is mostly text-based, so these browsers had no trouble with it in general. I appreciate elinks’s support for colors that are true to the original. (Screenshots: Hacker News as rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)


Facebook

I could not actually log into Facebook with any text-based browser. (Screenshots: Facebook as rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)


Twitter

Twitter looks “ok” on text-based browsers, although for that particular application you might want to consider a dedicated client built for the terminal.

retawq was unable to log in for some reason. (Screenshots: Twitter as rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)


Gmail

Gmail is a tall order for a text-based browser. Only elinks, w3m, and lynx could pull it off.

elinks shines again with great CSS support, with w3m in second place. These were all rendered using the basic HTML version. Luckily I didn’t get a CAPTCHA. (Screenshots: Gmail as rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)


elinks is my favorite of the bunch because of its color support.

This blog post is about 10 years too late, and mostly serves to remind myself which version of “links” I like and why.

Kyle’s (Fashion) Style Guide

I recently read “Why Are SO Many Millennials SO Uncool?”. Let’s start with a quote:

For the purpose of this writing, I’m defining “cool” as those who don’t conform, who don’t always fit in nor do they try to, and who follow their own path; and “uncool” as those who dress, act, and have the same tastes as the masses and are vulnerable to corporate influences.

Now, I’m by no means any sort of authority on coolness. There is certainly some degree of subjectivity in this definition, but it also has a hint of personal values embedded in it.

In other words, this is more than “I don’t like black socks and sandals”; it is more like “I value non-corporate-sellouts.” At least this value extends beyond just personal taste.

Individuality Versus Popularity

Anyone can choose to adopt this value. I can appreciate it.

If fully adopted, it seems like this would encompass normal corporate branding stuff, as well as things that are simply “popular”. By this definition, wearing a popular brand name or adopting a trendy style is “uncool”. This is at odds with the definition of “cool” that I learned in middle-school. In fact, in middle-school the definition of cool was the exact opposite of the author’s definition.

This is fine. As we mature into adults, some people outgrow this definition of coolness. Others do not.

Corporate Gucci bag: Uncool. Handmade Etsy bag: Cool.

I can get behind this. I also value individuality over popularity, and I dislike corporate influences (or heck, external influences in general).

Examining My (Tech) Wardrobe

One of my other personal values is consistency. If I’m going to adopt this value and be consistent, then perhaps I should examine my wardrobe…

What external corporate ends am I promoting with my wardrobe? Well let’s start with all these technology tshirts:

Docker shirt: Uncool. OpenSSL shirt: Cool.

Both Docker and OpenSSL are open source, but wearing a Docker shirt implicitly promotes Docker the company. OpenSSL, on the other hand, is governed by the OpenSSL Software Foundation. Is wearing a Docker shirt on par with showing off your Gucci bag?

Ubuntu shirt: Uncool. Debian shirt: Cool.

Ubuntu is a product of Canonical. Debian doesn’t have any corporate counterpart. Is wearing an Ubuntu shirt uncool because you are providing free advertising for a corporate entity?

AWS shirt: Uncool. OpenStack shirt: Uncool too.

I don’t know man, I don’t think OpenStack shirts are cool either…


The examples above are given mostly because they represent a large portion of my wardrobe. In general the same principle of rejecting corporate sponsorship carries over to non-tech shirts.

I dare say that even wearing a shirt with the logo of your current or previous employer is not cool.


In general, I guess wearing something that promotes another company’s products is uncool, even if you like the product or even contribute to it. The root cause is that you are allowing yourself to be used as a means of their promotion.

Of course the act of trying to be cool is uncool in itself, so I’m pretty sure I’m forever destined to remain… uncool.

A Configuration Management Rosetta Stone: Configuring Sensu With Puppet, Chef, Ansible and Salt

I recently finished my Intermediate Sensu Training on Udemy. It was a ton of work but I’m glad I got it all together. Part of that training includes how to deploy and configure Sensu with four of the most popular open-source configuration management tools: Puppet, Chef, Ansible, and Salt.

The Sensu Decree

In order to do the training I had to learn each of these tools enough so I could install a baseline Sensu installation. Here is what I reproduced with each iteration:

  • A Sensu client, server, and API set up and running
  • RabbitMQ Server, User, and Sensu Vhost ready for use. (no SSL)
  • Redis installed and running for state
  • A Sensu check (check_disk and/or check_apache)
  • The Sensu Mail handler to send emails for alerts
  • The Uchiwa Dashboard
  • All on one host (localhost)
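As a concrete reference point, each tool ultimately had to produce Sensu JSON configuration like the following minimal check definition (the command, thresholds, and file name here are illustrative, not taken from the training material):

```shell
# Write a minimal Sensu check definition; command and thresholds
# are illustrative examples only.
cat > check_disk.json <<'EOF'
{
  "checks": {
    "check_disk": {
      "command": "check-disk.rb -w 80 -c 90",
      "subscribers": ["default"],
      "interval": 60
    }
  }
}
EOF

# Sanity-check that the file is valid JSON
python3 -m json.tool check_disk.json > /dev/null
```

Every tool below generates files of this shape one way or another; the differences lie in how much abstraction sits between you and the raw JSON.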

This was no small feat, and required using a non-trivial number of features of each configuration management system to get the job done.

Here were some other guidelines that I followed in this exercise:

  • Always use 3rd party modules/cookbooks/etc. Use official ones if possible.
  • Use the local-execution mode provided by the configuration management tool (no client/server setup)
  • Follow official docs when available for general guidelines for things like installation.
  • Differences in things like config file names or versions of Redis are inconsequential. As long as Sensu behaved the same I considered it complete.
  • No considerations for security (out of scope for this exercise)

Review of Each Tool


Puppet In General

Puppet is my “native language” when it comes to configuration management, so it is a little hard for me to imagine what it is like to not know how it works.

Puppet has a custom DSL to describe configuration in terms of “types”. These are the primitives that you can build infrastructure upon, things like “file”, “package”, and “service”. Third-party modules can extend that language with custom types, allowing you to abstract over the “raw” types. For example, the RabbitMQ module has a type for declaring rabbitmq_user resources, which do not correspond to a particular config file, but instead can only be added by special invocations of the rabbitmqctl command.
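As a sketch of the difference (resource titles and attribute values below are illustrative, assuming the puppetlabs-rabbitmq module is installed):

```puppet
# "Raw" built-in type: maps directly to a file on disk
file { '/etc/sensu/conf.d/client.json':
  ensure  => file,
  content => '{"client": {"name": "web1"}}',
}

# Custom type from the puppetlabs-rabbitmq module: there is no single
# config file behind it; the provider shells out to rabbitmqctl
rabbitmq_user { 'sensu':
  password => 'correct-horse',
}
```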

Puppet strongly emphasizes code reuse. The Puppet Forge is the registry where you can upload and share modules, and it has a number of signals to help indicate code quality. It also marks modules as “officially supported” and “officially approved”, for extra approval stamps. While the Forge has a very “long tail” of modules of varying quality, the officially-supported and officially-approved sets provide a good selection of high-quality modules ready for reuse.

A common criticism of Puppet is that it does not apply resources in the order that they are declared in the manifest. Instead, Puppet internally calculates a directed graph of resources and their dependencies, and executes them in a dependent order, which is not necessarily in the order in which they are parsed. This is similar to how Linux package managers install packages. If you run apt-get install apache libc libssl, the packages will not necessarily get installed in the order that they were specified on the command line.

Puppet also comes with Hiera, a convenient hierarchical key/value store. This store allows users to override and set site-specific settings for Puppet modules without having to fork or modify them. Hiera encourages custom hierarchies that meet your business needs, allowing users to specify settings in a way that makes the most sense for their environments. An example hierarchy might look something like:

├── common.yaml
├── environment
│   ├── dev.yaml
│   └── prod.yaml
├── datacenter
│   ├── dc1.yaml
│   └── dc2.yaml
└── hostname
    ├── web1.yaml
    └── web2.yaml

Then Hiera looks up parameters from most-specific (hostname) to least-specific (common), and returns the first value that is available.
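A hiera.yaml describing the tree above might look something like this (this is my own sketch in the Hiera 1.x format that matched Puppet 3.x; the %{...} variables are facts, and the datadir and fact names are assumptions for illustration):

```yaml
# hiera.yaml: most-specific sources first, least-specific last
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "hostname/%{::hostname}"
  - "datacenter/%{::datacenter}"
  - "environment/%{::environment}"
  - common
```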

Review of the Sensu Puppet Module

The sensu-puppet module is a first-class citizen in the Sensu world. It has native types for the Sensu JSON files that it manages, as well as a sensu-gem type for easily installing rubygems with the embedded Sensu ruby.

The Sensu Puppet module only manages Sensu, and has no integration with RabbitMQ, Redis, or any other module. To me this is expected; in the Puppet world it would be the job of a profile to combine the Sensu module with RabbitMQ and other things. For the most part this integration is left as an exercise to the reader.

The Sensu Puppet module also doesn’t manage Uchiwa. That requires a different Puppet module. Again, to me this is a good thing; I hate it when tools try to do too much.

The actual codebase is actively maintained and reasonably active, with a few releases per year. The Puppet Forge rates it almost perfectly for module quality. The code has excellent unit test and acceptance test coverage. As far as Puppet modules go, the Sensu Puppet module is a great example of a well-maintained piece of code.

One downside of the module’s “completeness” is that when new features of Sensu are released, the Puppet module can lag behind. The configuration inputs to the Puppet module are well-typed, not just free-form hashes. This gives a lot of guardrails and helps ensure config files are correct before they hit the disk, but it means that some new features are not usable until the Puppet module accounts for them.

Although the configuration it produced worked, there was a significant bug that prevented the module from ever converging cleanly. This was annoying, but I was still able to test the code. The bug looks to be fixed in master.


Chef in General

Chef is not as old as Puppet, but is certainly a mature product. Chef is “just ruby” when it comes to its configuration language. The upside is that Ruby developers can theoretically dive in and hack on stuff. The downside is that being “just ruby” “leaves a lot of rope to hang yourself with”.

One nice feature provided by the Chef company is their hosted chef solution, which allows people to get started without hosting a Chef-server.

The Chef toolset also comes with the knife command, a great command-line tool for interacting with the Chef server. It is also a parallel-ssh tool, manipulates Chef cookbooks, and can launch EC2 (and other) instances. (Did they take the kitchen-sink metaphor too far?)

The Chef Supermarket serves as the public registry for Chef cookbooks. There are not many quality indicators to help find which cookbooks are any good; the best metric I could see is sorting by “followers”. This is made up for by the fact that there are over a hundred officially supported cookbooks.

Probably the most difficult aspect of Chef for me to understand was how attributes interact. This confusion is probably most obvious when you look at Chef’s 15 levels of attribute precedence. It seems to me that there should be a more obvious way for intent to flow, but I could be just spoiled by Puppet’s Hiera.
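To illustrate the confusion, the same attribute can be set at several of those precedence levels (the attribute names and values below are made up for this example):

```ruby
# attributes/default.rb in a cookbook: "default" precedence (lowest)
default['sensu']['version'] = '0.26.5'

# A role or wrapper cookbook can set "override" precedence, which
# beats any default no matter where the default was declared
override['sensu']['version'] = '0.27.0'

# "automatic" attributes collected by ohai sit at the very top of the
# precedence ladder and cannot be overridden at all
```

Working out which of the 15 levels wins for a given key requires consulting the full precedence table, which is exactly the friction described above.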

Review of the Chef-Sensu Cookbook

The Sensu Chef cookbook is also a first-class citizen in the Sensu world. Chef is the “native config language” of Sean Porter, the main author of Sensu. This gives the cookbook a lot of credibility, and it shows in the contributor page.

The Cookbook itself is feature complete, with recipes for installing and configuring all aspects of Sensu.

The scope of the cookbook includes all Sensu related technologies, including RabbitMQ, Redis, and Uchiwa. It is certainly “batteries included” and on by default. It even downloads and compiles Redis from source for you.

Another example of this “batteries included” design is the RabbitMQ module setting Apt attributes. Like the above Redis example, this behavior surprised me, but technically it is not related to the Sensu chef cookbook.

At the same time, wrapper cookbooks are recommended as a method to combine multiple cookbooks together in a coherent way. I think in general I just expected the wrapper cookbooks to do more and the main Sensu cookbook to do less.

The cookbook does have an integration test suite, but it is not run via Travis. The code is under active development, with multiple releases a year. It has native support for Chef data bags for distributing the RabbitMQ SSL credentials, which is a nice touch (not tested in this review).


Ansible in General

Ansible is a relative newcomer to the configuration management space. Ansible uses yaml files to define desired state. The yaml files are a nice way to represent things, but it would be misleading to think that Ansible is just yaml files. Ansible has its own DSL and uses Jinja2 templating, which is parsed over the contents of the yaml.

The Ansible Galaxy is the community registry for uploading shared roles. You can sort by rating to try to get a better idea about which roles are potentially higher quality than others.

There doesn’t seem to be any official roles/playbooks. The closest there is to official roles is the ansible-examples repository. But click the link and look at the lamp_simple example. There is no code-reuse at all! Every example re-invents how to install apache, install ntp, configure iptables, etc. What’s up with that?

While the yaml files may make it very easy for beginners to make playbooks that get things done quickly, I don’t think they will work out great as infrastructure expands. The abstractions just are not there.

Another sign, to me, that Ansible has the wrong abstractions is that so many roles are distro specific. Not many have the necessary code to work on both “CentOS” and “Debian”. There is a generic package type, but very few roles use it? Check out the original author’s opinion on the subject. Look at the examples! They all only work on yum based distributions.

I’ve read lots of posts of people migrating to Ansible and loving it. Personally, I don’t get it. The abstractions are too low-level. If you are lucky, then the Ansible core has a Module to manipulate the resources on the host, like RabbitMQ stuff. If you are unlucky, then the only primitives you have available are yaml files and running commands and parsing stdout. Or you can write your own module.

Ansible Sensu Playbook Review

There is no official Sensu Ansible playbook. I was not able to find any playbooks that support RedHat-based distributions.

Luckily, I was able to use Mayeu’s ansible playbook, in conjunction with this RabbitMQ playbook on my Ubuntu server.

The sensu_check module is part of the “Extras”, but it is only a very small part of deploying Sensu, and it has no cohesion with the playbook that actually deploys Sensu itself. There is no way to extend sensu_check without forking ansible-modules-extras. It can’t consume arbitrary check metadata.

In the end, to meet my needs I had to construct hashes myself and deploy them to disk as JSON. The playbook-provided way to deploy sensu checks is to have them all contained in the single sensu_checks variable.
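That workaround looked roughly like this (a sketch in Ansible yaml; the variable names and destination path are illustrative, not copied from the playbook):

```yaml
# Render a check definition straight to JSON on disk, since there is
# no first-class sensu_check abstraction in the playbook itself
- name: Deploy a Sensu check as raw JSON
  copy:
    dest: /etc/sensu/conf.d/check_disk.json
    content: "{{ check_disk | to_nice_json }}"
  vars:
    check_disk:
      checks:
        check_disk:
          command: check-disk.rb -w 80 -c 90
          subscribers: [default]
          interval: 60
```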


Salt in General

Salt is also a relative newcomer to the configuration management world. As a user, Salt feels very similar to Ansible. Both use yaml files to represent the desired state of the system. Both use Jinja templates. Both require the “advanced” system interaction to happen in the core software, so Salt formulas can be just yaml with no real code.

Salt takes a different approach to sharing community code compared to the other configuration management systems. Salt keeps all the official formulas in one GitHub project. The docs recommend forking the formula for your own use. On the plus side, having “canonical” formulas for common tasks reduces duplication and encourages code re-use. The downside is that… it encourages forking? These formulas in general are not that extensive. They don’t have releases or any kind of testing in place.

Salt’s Pillar is a powerful tool for separating configuration from code. It is similar to Puppet’s Hiera. Pro: separate config from code; keep the site-specific variables in a separate folder than the formulas. Con: formulas have to be “pillar-aware”. There is no equivalent to Puppet’s automatic parameter lookup.
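A sketch of what “pillar-aware” means in practice (the key names here are illustrative):

```yaml
# pillar/sensu.sls: site-specific data lives here, not in the formula
sensu:
  rabbitmq:
    host: localhost
    vhost: /sensu

# In the formula, a state or template has to explicitly reach into
# pillar for each value, for example:
#   {{ salt['pillar.get']('sensu:rabbitmq:host', 'localhost') }}
# There is no automatic lookup; each formula must opt in, key by key.
```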

Sensu Salt Formula Review

For my testing, I used the official Salt-formula. There is a sensu-salt repo on the official Sensu project, but it is not really suitable for production use in my opinion.

For the most part, the formula did what it said on the tin. Of course, like with Ansible, the only way I was able to deploy checks in a flexible way was to construct my own hashes and deploy them as JSON directly. There is no such thing as a sensu_check type in Salt.

I was not able to get rid of the hard-coded cron check. I guess this goes with the idea that they expect you to fork the repo and make local changes to meet your needs. I thought about opening an issue for this, but the file has been there for a year and nobody else has complained. I figured it was just me, and maybe I should get over myself and accept the fact that I got a free cron check!

In my own testing, I used the native gem provider with a special path to Sensu’s embedded gem binary to install Sensu gems. But then I discovered that the formula did this too, in two different ways, using another method instead of the native gem method. I didn’t really like this, but at the same time, this was the first time I had ever used Salt.

As far as I can tell, to do more advanced Sensu config things, like filters or mutators, you are expected to fork the formula and drop in the json file into the right directory.


A rough opinionated comparison between the tools, with regard to the tool itself and the tool in conjunction with Sensu. “High” doesn’t necessarily mean “good” here:

                                              Puppet   Chef                  Ansible          Salt
  Review of the Config Management Tool in General
  Version used                                3.4.3    12.4.1                1.5.4            2015.5.3
  Third-party module ease of use              High     High                  Medium           Low
  Official Sensu support for the tool         High     High                  Low              Low
  Reproducibility                             High     High                  High             High
  Ease of getting started                     Medium   Medium                High             Medium
  Language extensibility                      High     High                  Low              Low
  Separation between config data and code     Hiera    Databags/Attributes   Just variables?  Pillar
  Module re-usability                         High     High                  Low              Low
  Review of the Sensu Module/Cookbook/Etc
  Version of the module used                  1.5.5    2.10.0                0.1.0            c6324b3
  Sensu module feature completeness           High     High                  Medium           Medium
  Sensu module integration with other modules Low      Extreme?              None             None
  Sensu module flexibility                    High     High                  Medium           Low
  Sensu module re-usability                   High     High                  High             Low
  How opinionated was it?                     Low      High                  Low              Medium
  Usability with Sensu’s embedded Ruby        Yes      Yes                   Not natively     Sorta


The way I see it, there are two camps. Chef and Puppet both provide a rich language to build modules with. For example, the PuppetLabs RabbitMQ module contains all the code to interact with RabbitMQ. The main Puppet codebase doesn’t know anything about RabbitMQ. The same goes for Chef. Both Chef and Puppet also have their own DSL. Puppet uses yaml files for Hiera, but they are for config only, unlike Ansible/Salt.

In the other camp are Ansible and Salt. They have a simplified config language, and require help from the core software to do the “heavy lifting” of the raw types. For example, the Salt RabbitMQ formula requires the core Salt RabbitMQ module to provide the primitives.

Final Thoughts

  • Puppet
    • Directed graph dependency ordering, not parse-order driven
    • Type/Provider system and defined types provide the right abstraction layers to build upon.
    • Hiera provides a good separation of config/code, making it easier to reuse modules without modification.
    • Strong culture of testing
    • Lots of good supported modules
    • High deployment overhead and language learning curve
  • Chef
    • LWRP system provides the right abstraction layers to build upon.
    • Knife tool does do a lot of cool stuff
    • Lots of good supported cookbooks
    • Strong culture of testing
    • “Just ruby”
    • 15 levels of attribute precedence is insane
  • Ansible
    • Low deployment overhead and low learning curve
    • “Just yaml files”
    • Lack of type/providers means that playbooks use “apt” and “yum” directly, which kinda sucks
  • Salt
    • Pillar provides a nice separation of config/code, which is good for formula-reuse, if the formula is pillar-aware
    • Centralized formulas emphasize consolidated development effort
    • No strong state testing emphasis or framework

Going Further

If you want to know more about Sensu, of course you can take my training course:

Or you can tell me I’m wrong. You can raise an issue or make a pull request for the blog post, or investigate my actual training material and code on GitHub.

A Comparison of Image to ASCII Conversion Tools

Inspired by ponysay, I think wicked ascii/ansi artwork on the terminal is great.

I decided to survey all the tools I could find that aid in this conversion to see if there were any dramatic differences in results.


For these tests I used an image with a 160px width, twice that of a standard 80-column terminal. Then I cat’d the converted output in a plain xterm and took a screenshot of the results.

The original has been scaled up (6X) to be the same relative size as the resulting screenshots.

My entire methodology is on GitHub if you wish to see exactly how I made these images. In theory it is 100% reproducible from make (assuming a Linux desktop).

Tools Compared



(Screenshots: bender.png as converted by the original, img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)


(Screenshots: lenna.png as converted by the original, img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)


(Screenshots: nyan.png as converted by the original, img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)


img2xterm stands out to me as the most accurate and true to the original, with util-say as a close second. Both of these tools understand “half-block” characters with two colors, effectively doubling the vertical resolution of the output. (Two colors per character cell.)
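The half-block trick is simple enough to sketch in a few lines of Python (my own illustration of the technique, not code from either tool, and using 24-bit color escapes for simplicity): each character cell draws U+2580 (upper half block) with the top pixel as the foreground color and the bottom pixel as the background color.

```python
def halfblock_row(top_row, bottom_row):
    """Render two rows of (r, g, b) pixels as one line of terminal text.

    Each cell is an upper-half-block character whose foreground color
    is the top pixel and whose background color is the bottom pixel,
    so one row of characters displays two rows of pixels.
    """
    cells = []
    for (tr, tg, tb), (br, bg, bb) in zip(top_row, bottom_row):
        cells.append("\x1b[38;2;%d;%d;%dm"   # foreground = top pixel
                     "\x1b[48;2;%d;%d;%dm"   # background = bottom pixel
                     "\u2580" % (tr, tg, tb, br, bg, bb))
    return "".join(cells) + "\x1b[0m"  # reset colors at end of line

# One red-over-blue pixel pair becomes a single character cell
line = halfblock_row([(255, 0, 0)], [(0, 0, 255)])
```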

catimg and img-cat both have good color representation, but lack the additional resolution of the half-block tools, giving them a more “pixelated” look.

img2txt and jp2a are “true ASCII” tools, so they are really not in the same league as the others. I included them here for completeness.

Playing With IPv6 Over Bluetooth Low Energy (6LoWPAN)

I like Bluetooth Low Energy (BTLE). I also like IPv6. Did you know you can put both together?

Technically 6LoWPAN

First, load the 6LoWPAN kernel module on both machines and make it persist across reboots:

modprobe bluetooth_6lowpan
echo 'bluetooth_6lowpan' >> /etc/modules

Establishing the Connection

Set the Bluetooth L2CAP PSM

First you need to set the Protocol/Service Multiplexer (PSM) value to “62” (0x3E) on both sides:

echo 62 > /sys/kernel/debug/bluetooth/6lowpan_psm

This PSM value lets the driver know that you are going to multiplex this special new protocol on top of whatever your bluetooth device might also be doing.

0x25 is the assigned value for the “Internet Protocol Support Profile”, which I think is supposed to be the correct value.

0x3E is some sort of temporary value I had to use to get this working, as 0x25 ended up being unsupported per the messages in my wireshark dump.

I’m not aware of any other way to set it other than this kernel debug setting.

Making the slave advertise

The slave must be doing Low-Energy advertisements in order for the master to connect to it.

hciconfig hci0 leadv


On the master you should be able to watch the slave advertise:

hcitool lescan
LE Scan ...
C4:85:08:31:XX:XX (unknown)
C4:85:08:31:XX:XX ubuntu-0

Establish a connection from the master to the slave:

echo "connect C4:85:08:31:XX:XX 1" >/sys/kernel/debug/bluetooth/6lowpan_control

Afterwards a bt0 device should show up in ifconfig. Run hcitool con to verify a connection is actually established. Use wireshark in Bluetooth monitor mode on the hci device to confirm commands are being sent.

The proof is in the ping:

ping6 fe80::1610:9fff:fee0:1432%bt0
PING fe80::1610:9fff:fee0:1432%bt0(fe80::1610:9fff:fee0:1432) 56 data bytes
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=1 ttl=64 time=158 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=2 ttl=64 time=236 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=3 ttl=64 time=113 ms


After a small number of packets, the connection seems to drop, and on the master side I get:

[  368.947193] Bluetooth: hci0 link tx timeout
[  368.947202] Bluetooth: hci0 killing stalled connection c4:85:08:31:XX:XX

No matter what rmmod-ing or service-stopping I tried, a reboot was the only thing that could rebuild the connection. Obviously this is pretty new stuff; hopefully it will stabilize in later versions of the kernel.

At this time though, on 3.19.0-21-generic (Ubuntu Vivid), this feature is not yet usable.

Etherhouse Part 2 - Software

The software that powers the Etherhouse project is open source. This blog post describes that software and how it interacts with all the pieces.


The client software that runs on the Arduino is open source. It uses one external library and is written in native Arduino C++.

The Arduino runs a limited TCP/IP stack and interacts with the http api.

The code has plenty of defensive measures in place to ensure the client continues to run without interruption or interaction. No one should need to “turn it off and on again.”


The Server software is also open source.

In designing the software, I aimed for longevity. I want the software to continue to run for many years without maintenance. I decided to use golang.

  • Go binaries are statically compiled, which means the same binary I compile now will continue to run on new platforms for years to come.
  • With godeps I can include all compatible libraries together with no external dependencies, regardless of their long term state.
  • I use Heroku to deploy the code. Heroku is free for small installs and a stable platform. They can probably keep this server up better than I can.
  • I use a DNS name I can control for service discovery. This gives me the flexibility to change platforms over time if necessary.

Etherhouse Part 1 - Hardware

Etherhouse is a project of mine involving eight Christmas gifts. Each gift is a display of model houses made from folded paper, each representing the home of a friend or family member.

The houses light up depending on whether that family member is home. Their presence is detected by checking whether their smartphone is on the same network as the etherhouse.

See the GitHub page for more details.

Getting Started Puppet Acceptance Tests With Beaker

Beaker is a test framework created by Puppetlabs to run tests against Puppet modules on real servers (VMs, containers, whatever) and verify that they do what they say they should do.

This is a quick tutorial on how to use this framework. At the time of this writing, Beaker is under heavy development, so this could all change.

The Gem

The first thing you need to do is install beaker. Usually this is as simple as adding it to your Gemfile and running bundle install.

gem 'beaker'
gem 'beaker-rspec'

I recommend using garethr’s puppet module skeleton Gemfile, which includes Beaker already.

Now install it:

bundle install

Acceptance Boilerplate

Rspec and the Puppetlabs Helper

This tutorial assumes you already have the puppetlabs_spec_helper installed, rake, rspec, etc.

Folder For Tests

You need a place to put acceptance tests. They must go in spec/acceptance inside your module.


See puppetlabs-mysql for an example of what it looks like.


Nodesets

You must have at least a default.yml in the nodesets folder inside your acceptance folder. Here is an example:

# consul/spec/acceptance/nodesets/default.yml
HOSTS:
  ubuntu-12-04:
    platform: ubuntu-12.04-x64
    image: solarkennedy/ubuntu-12.04-puppet
    hypervisor: docker
CONFIG:
  type: foss

You can have different yaml files for different platforms you wish to test against. The format is described in the Beaker wiki
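For example, a CentOS nodeset might look like this (the filename and image name here are illustrative, not from the consul module):

```yaml
# consul/spec/acceptance/nodesets/centos-7.yml
HOSTS:
  centos-7:
    platform: el-7-x86_64
    image: centos:7
    hypervisor: docker
CONFIG:
  type: foss
```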

Note: I use my own docker files for speed, as they come preinstalled with the Beaker Host Requirements

Warning: If you use docker, you cannot test service-related things because there is no init system running inside the container. For comprehensive testing of things like services, firewalls, etc., you must use a full VM via Vagrant.

Acceptance Spec Helper

# consul/spec/spec_helper_acceptance.rb
require 'beaker-rspec'

# Not needed for this example as our docker files have puppet installed already
# hosts.each do |host|
#   # Install Puppet
#   install_puppet
# end

RSpec.configure do |c|
  # Project root
  proj_root = File.expand_path(File.join(File.dirname(__FILE__), '..'))

  # Readable test descriptions
  c.formatter = :documentation

  # Configure all nodes in nodeset
  c.before :suite do
    # Install module and dependencies
    puppet_module_install(:source => proj_root, :module_name => 'consul')
    hosts.each do |host|
      # Needed for the consul module to download the binary per the modulefile
      on host, puppet('module', 'install', 'puppetlabs-stdlib'), { :acceptable_exit_codes => [0,1] }
      on host, puppet('module', 'install', 'nanliu/staging'), { :acceptable_exit_codes => [0,1] }
    end
  end
end

The spec helper performs the tasks needed to prepare your SUT (system under test). This might include installing puppet, installing your puppet module dependencies, etc.

Example Acceptance Test

# module_root/spec/acceptance/standard_spec.rb
require 'spec_helper_acceptance'

describe 'consul class' do

  context 'default parameters' do
    # Using puppet_apply as a helper
    it 'should work with no errors based on the example' do
      pp = <<-EOS
        file { '/opt/consul/':
          ensure => 'directory',
          owner  => 'consul',
          group  => 'root',
        } ->
        class { 'consul':
          config_hash => {
            'datacenter' => 'east-aws',
            'data_dir'   => '/opt/consul',
            'log_level'  => 'INFO',
            'node_name'  => 'foobar',
            'server'     => true,
          },
        }
      EOS

      # Run it twice and test for idempotency
      expect(apply_manifest(pp).exit_code).to_not eq(1)
      expect(apply_manifest(pp).exit_code).to eq(0)
    end

    describe service('consul') do
      it { should be_enabled }
    end

    describe command('consul version') do
      it { should return_stdout /Consul v0\.2\.0/ }
    end
  end
end

The filename is important: it must end in _spec.rb in order for the test harness to detect it. You can see that there are many matchers you can use to run pretty much any kind of test you can think of.

See the puppetlabs-mysql collection again for some great examples.

Running Them

bundle exec rake acceptance

This command will spin up the servers described in your nodesets, install your puppet module and its dependencies, and test your assertions.


Acceptance tests should be used sparingly; they sit at the top of the testing pyramid.

They are slow, touch the disks and network, and depend on external resources. The example mysql acceptance tests literally install mysql, install and configure databases, and assert that they exist.

They may be slow, but they can be very helpful, and are potentially the only way to really test the functionality of a puppet module in an end-to-end way.

Puppet is a system configuration management tool. Unit tests can only go so far to make sure the compiled catalog is “correct”. Puppet acceptance tests close the remaining gap and ensure that your module literally does what it says it does, by running tests against actual systems, files, packages, and services.

Managing SSH Known Hosts With Serf

Serf is a very interesting service discovery mechanism. Its dynamic membership and tagging capabilities make it very flexible. Can we use it to generate a centralized ssh_known_hosts file?

Installing and Configuring Serf

I like to use configuration management to manage servers. Here I use a Puppet module to install and configure Serf:

class { 'serf':
  config_hash => {
    'node_name' => $::fqdn,
    'tags'      => {
      'sshrsakey' => $::sshrsakey,
    },
    'discover'  => 'cluster',
  },
}

This particular module uses a hash to translate directly into the config.json file on disk. Notice how I’m using the new tags feature, and adding a sshrsakey tag, populated by Puppet’s facts.
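For reference, that hash would render on disk to a config.json roughly like the following (the node name and key are placeholders, filled in by the facts at catalog compile time):

```json
{
  "node_name": "node1.example.com",
  "tags": {
    "sshrsakey": "AAAAB3Nza..."
  },
  "discover": "cluster"
}
```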

Querying The Cluster

Once the servers have Serf installed and configured, the cluster can be queried using the serf command line tool:

$ serf members
    alive    sshrsakey=AAAA...
    alive    sshrsakey=AAAA...

Using the Data

Let’s use this data to write out our /etc/ssh/ssh_known_hosts file, emulating the functionality of ssh-keyscan:

$ serf members -format=json | jq -r '.members | .[] | "\(.name) ssh-rsa \(.tags[])"' | tee /etc/ssh/ssh_known_hosts
 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTfPpmHhc+LoD05puxC...
 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmzk+Chzrq73c5ytU9I...

You can see I’m using jq to manipulate the JSON output of the serf command. I’m not super proud of it, but it works.
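The same transformation can also be done in plain Ruby, without jq. A minimal sketch, where the sample JSON mimics the shape of `serf members -format=json` output (the hostnames and keys are made up):

```ruby
require 'json'

# Turn serf's JSON member list into ssh_known_hosts lines
def known_hosts_lines(serf_json)
  JSON.parse(serf_json)['members'].map do |m|
    "#{m['name']} ssh-rsa #{m['tags']['sshrsakey']}"
  end
end

# Sample data; on a real host, feed in the output of `serf members -format=json`
sample = <<-JSON
  { "members": [
    { "name": "node1.example.com", "tags": { "sshrsakey": "AAAAB3Nza..." } },
    { "name": "node2.example.com", "tags": { "sshrsakey": "AAAAB3Nzb..." } }
  ] }
JSON

puts known_hosts_lines(sample)
```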

Let’s see if we can use a script instead. Serf provides an RPC protocol to interact with it programmatically:

#!/usr/bin/env ruby
require 'serf/client'
client = Serf::Client.connect address: '', port: 7373
members = client.members.value.body['Members']
puts members.collect { |x| x['Name'] + ' ssh-rsa ' +  x['Tags']['sshrsakey'] }

Of course, there is no error handling or anything, but this script achieves the same result using the serf-client ruby gem.

There are libraries for many languages that connect to the Serf RPC directly, or you can do it yourself by using a msgpack RPC library to communicate directly over the TCP socket.


This is just the beginning. Serf not only allows retrieving the status of members, it can also spawn programs (event handlers) whenever members join or leave.

Additionally you can invoke custom events for your own uses, like code deploys.

If you can deal with an AP (available, partition-tolerant) discovery and orchestration system, then Serf could be a foundation for building great things!