Postmodern Sysadmin

A blog about servers and junk

Another Comparison of Image to ASCII Conversion Tools (2017)

Time for another round of terminal-based ASCII/ANSI art image conversion tools. Check out my last post from 2015, which compares an earlier set of tools.

This year I compare the best tool from 2015, img2xterm, against a new set of tools that I've stumbled across. Email me with other tools you would like to see included in the next round.

Methodology

For these tests I used an image with a 160px width, twice that of a standard terminal. Then I cat'd the image in plain xterm and took a screenshot of the results.

The original has been scaled up (6X) to be the same relative size as the resulting screenshots.

My entire methodology is on GitHub if you wish to see exactly how I made these images. In theory it is 100% reproducible with make (assuming a Linux desktop).
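As a rough sketch, generating one screenshot looks something like the following. The tool, filename, geometry, and window-matching here are illustrative; the real targets live in the linked Makefile.

# Run the converter inside a fresh xterm, give it a moment to draw, then grab
# the window with ImageMagick's import (window title matching is simplified here).
xterm -T capture -geometry 160x50 -e 'img2xterm bender.png; sleep 5' &
sleep 2
import -window capture bender-img2xterm.png
wait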

Tools Compared

Results

bender.png

(Screenshots: the original, then conversions with img2xterm, termplay, TerminalImageViewer, pixterm, and timg.)

lenna.png

(Screenshots: the original, then conversions with img2xterm, termplay, TerminalImageViewer, pixterm, and timg.)

nyan.png

(Screenshots: the original, then conversions with img2xterm, termplay, TerminalImageViewer, pixterm, and timg.)

Conclusion

img2xterm is the returning contender from the last comparison. Next to this second wave of tools, its lack of 24-bit color shows. It still has the most advanced colorspace conversion capabilities of the group, but if you want true color, look elsewhere.

termplay is a converter written in Rust, capable of converting static images as well as videos. It has native youtube-dl support, and a unique feature to mirror an X11 window in real time in the terminal. This converter had the highest performance of all the tools, probably because it carries extra optimizations in order to support video. Its lack of half-block character support shows when put side by side with the other converters.

TerminalImageViewer (tiv) has the unique ability to use many different block shapes, beyond the half-block character, to create an interesting output. Look at the list of unique Unicode characters available for use. Due to this feature, I had to provide an upscaled version of the source images in order to produce the desired output.

pixterm’s most prominent feature is its speed. Taking advantage of Go’s concurrency features, it can convert images quickly using all available cores. It also supports a wide range of input formats.

timg’s unique feature is that it can show animated gifs directly in the terminal. It can handle true color using the half-block character for double the vertical resolution.
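To make the half-block trick concrete, here is a minimal sketch (the colors are arbitrary examples): each character cell prints U+2580 UPPER HALF BLOCK with a 24-bit foreground color for the top “pixel” and a 24-bit background color for the bottom one.

# One double-height "pixel": red on top (foreground), blue on the bottom (background), then reset.
printf '\033[38;2;255;0;0m\033[48;2;0;0;255m▀\033[0m\n'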

Tool                | Color       | Block Type   | Language     | Performance (overall, data) | Extra
img2xterm           | 256         | Half-Block   | C            | Medium (20, 13, 11)         | Bash version available
termplay            | True or 256 | Single Block | Rust         | High (6, 5, 4)              | Video support
TerminalImageViewer | True or 256 | 4x8 Unicode  | C++ and Java | Low (48, 46, 45)            | Extra unicode characters for multi-pixel matching
pixterm             | True        | Half-Block   | Go           | Medium (17, 17, 17)         | Multi-core processing
timg                | True        | Half-Block   | C++          | High (10, 8, 7)             | Animated gif support

(Performance data is real time in ms for rendering bender, lenna, and nyan)

Winner: TerminalImageViewer

Even though it has relatively slow performance, its unique Unicode rendering can give the highest “resolution” images available on a terminal. It does look a little weird on some images, so I’m glad it has the -0 option to force it back into half-block mode. In this mode it is just as good as pixterm or timg.
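For example, when the extra Unicode shapes look off, something like the following should fall back to plain half-block output (binary name and flag as described above; other options not shown):

tiv -0 lenna.png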

Runner up: timg

With true color and half-block support, timg creates very high quality versions of images in terminals. Combined with its high performance and animated gif support, it is this year’s runner-up.

Cruising From Port San Luis to San Francisco in a Gemini 105m

Over the last 5 days my wife and I cruised from Port San Luis to San Francisco, a total distance of 228 nautical miles (nm). We recently purchased the boat and decided to relocate it to a marina closer to where we live. We knew the trip would be relatively difficult: the majority of the journey would be against the wind and waves. For this reason we decided to motor the entire distance up north.

Due to technical difficulties, the first ~12 hours of the trip are missing from the GPS track.

The track is available in Google Earth and GPX format, and directly in Google Maps.

Day 1 (2017-07-05)

On the first day we took BART to Caltrain to Amtrak from San Francisco to Avila Beach. From there we took the water taxi to the boat, which was on a mooring ball. I made the call for us to leave that night, with me taking the night shift.

On Amtrak
Port San Luis Pier
All Smiles Right off the bat
Gi Gi: A 1997 Gemini 105m

5pm: Leave Port San Luis

Off this part of the coast, the winds at this time were against us at 16kts, gusting to 21kts.

(Picture not taken by me) First Landmark: Diablo Canyon Power Plant (CCA “Mike” Michael L. Baird)

The seas overnight were not too rough, but the combination of the pounding and the headwind meant we could only sustain 4kts.

The Gemini pounds the seas pretty badly. This is mostly due to a combination of a solid (not netted) forward deck, a relatively low bridge deck clearance, and the structural nature of a catamaran. Cody got very seasick.

Day 2 (2017-07-06)

2am: A late night conversation

At ~2am I was hailed by a 300ft research vessel heading north towards Portland. He was showing up on my radar, and he called to let me know he was passing on our port side and that we should continue on our respective courses. It was nice to hear from someone in the middle of the night. We had a conversation, and his general advice for avoiding the rough seas was to hug the coast. He mentioned the waves don’t get too bad until Point Sur.

7am: Cody takes over

I sleep after Cody takes over for the day shift
Cody carefully drives into the foggy morning
Difficult to imagine doing this trip without a co-pilot

9am: A snagged prop

In the morning we had a prop fouled by a line that had been pounded out of its holding bag on the forward part of the deck. It had found its way under the hull and was just long enough for the tip of the line to get fouled in the prop.

The Gemini has a raisable drive leg made by Sillette:

Gemini Drive leg (copyright Chris Gruno)

The drive leg adds complexity to the rig, but does make fixing fouled props easy. The difficulty here was actually figuring out it was fouled in the first place. Every time I would raise the leg to inspect it, there was nothing in it. The line was only long enough to get in the prop while down and at speed (so the suction would draw in the line). I had to watch the water carefully and wait for the line to find its way into the prop to confirm my suspicion about the cause of the extreme vibration.

Chewed up line, only at the tip

3-5pm: Point Sur

By the time we hit Point Sur there was a small craft advisory in effect. There were very strong headwinds and 10ft seas.

This was the most difficult part of the trip. Usually the buoyancy of the Gemini allows it to “only” pound waves and ride over swells. This was the only part of the trip where the waves were so bad that the boat would pound and dig deep into the wave, with seawater crashing over the top. Unsurprisingly the autohelm was not able to keep us on course and I took over the wheel. I drove the engine hard and pressed forward for a few hours to get us past this difficult point. I was only able to maintain 2-3kts.

At around 6pm the weather subsided enough to allow me to throttle back the engine and re-enable autopilot, giving me and the engine a break while we continued to travel up the coast, with our intermediate destination of anchoring at Whalers Cove for the night.

Day 3 (2017-07-07)

12am: Give up on Whalers Cove

Whaler’s Cove is a small cove for boats to anchor in, “protected” by surrounding rocks. It is only a few hundred feet across, with an opening of about 200 feet.

With the slow progress we had made thus far, we arrived at around midnight in the darkness. The cove was indistinguishable from the surrounding rocks and black land. With very little anchoring experience and a dead spotlight, there was just no way we were going to be able to make this landing, especially given our fatigue.

Luckily Cody volunteered to drive three more hours north to Monterey Bay Harbor.

3am: Arrive at Monterey Harbor

After a small nap, I had enough brainpower to remember that I had the foresight to load OpenCPN, with the appropriate vector maps for this area, onto my cell phone. This was an amazingly powerful tool for guiding us into a marina at night. A detailed map with a real-time GPS overlay gives you the confidence that you are actually heading in the right direction and looking at the right lights. Luckily the friendly harbormaster on the night shift gave us an end-tie and a good night’s sleep.

9am: Depart Monterey Harbor

Monterey Harbor is a small city harbor. Nothing really notable: the service was nice, the bathrooms were clean, and it was fairly priced.

Docked at Monterey Harbor

For this day we decided to make an easy trip of only 22nm north to Santa Cruz.

On to Santa Cruz after a real night’s sleep

5pm: Arrive at Santa Cruz

Santa Cruz is a nice harbor, but for a transient berth we paid ~$50, twice as expensive as the other marinas we stayed at. On top of that, shower access required a non-refundable “deposit” and access to shore power was extra. No thanks. Since we had a short day, we treated ourselves to Betty’s Burgers for dinner.

Day 4 (2017-07-08)

6am: Depart Santa Cruz

Early morning easy seas selfie
Taking turns driving
Passing a marker
Calm seas for once

I say calm, but it is still the Pacific Ocean. Calm in this context means no water is spraying in your face:

5pm: Arrive at Pillar Point

Tied up at Pillar Point
Washing down the salt water

Pillar Point Marina is primarily a fishing harbor, but it has lots of shops and restaurants nearby. It is a surprisingly happening place.

Day 5 (2017-07-09)

6am: Depart Pillar Point Marina

Ready for the last day

Departing Pillar Point was relatively easy in the early morning, especially assisted with Radar. We were accompanied by many fishing boats out to get their catch. A kayaker warned us of a pair of Humpbacks just outside the harbor barrier wall, and indeed I had to make evasive maneuvers to avoid a whale surfacing ~30ft in front of the boat!

9am: Drive leg leak

This morning the drive train of the boat sounded a lot more “whiny” than normal. The worrying sound was coming from the drive leg, and not the diesel engine. Luckily I had been lurking on the Gemini Owners Mailing List long enough to have a hunch that there might be an oil leak, and that the bellows was the most likely culprit:

Drive leg oil leak

The bellows is a rubber part that gets a lot of exercise as the drive leg turns, raises, and lowers. This leak allowed enough oil to drain out to expose the top CV joint, leaving it less lubricated and whiny. After a quick phone call to the previous owner to confirm the correct oil was onboard and available (thanks so much Jerry! This was a huge lifesaver!), I topped up the drive oil to get us the rest of the way home. The bellows will be first on my list of repairs to make on the boat.

11am: Entering San Francisco Bay

A foggy approach to the Golden Gate Bridge

And a classic “dad move” of mashing stop when you think you are mashing record:

A foggy San Francisco
A foggy Alcatraz
Pier 39 or so
Downtown Skyline

We timed it correctly to come in with the tide, which gave us a nice 3kt speed boost and a peak speed of 9.3kts under the bridge.

1pm: Arrive at Oyster Point Marina

And finally we arrived at our actual destination: Oyster Point Marina

Tied up at the Oyster Point guest dock

Home Made Lichtenberg Figures

For winter 2016 I made Lichtenberg figures. I used a 5kV 10mA (50W) neon light transformer. I also experimented with a 2kW microwave oven transformer, but found that the lower-powered neon transformer produced finer, better, and safer results.

To produce the figures, I would first apply the electricity to the wood, often at the corners. Initially the dry wood is not conductive enough to allow any burning. I would then use a spray bottle full of water and baking soda to moisten the surface of the wood until the electricity could find a path of least resistance and start the burning reaction. With the low-power neon transformer the burning is slow and takes hours.

To guide the reaction in an “aesthetically pleasing way”, I used a heat gun to temporarily dry out parts of the wood, leaving channels of low-resistance surface water for the current to follow. This technique is most evident on piece #10. It was also used on #15 to evenly cover the entire (large) piece.

After the electrical treatment, each piece was finished with varnish, matted, framed, and shipped. Below is a gallery of final results. Each was given to a friend or family member as a winter gift:

Piece 01: 14”x7” Mahogany


Piece 02: 12”x7” Birch


Piece 03: 12”x7” Birch


Piece 04: 12”x7” Mahogany


Piece 05: 14”x7” Mahogany


Piece 06: 7.5”x7.5” Birch


Piece 07: 7.5”x7.5” Mahogany


Piece 08: 7.5”x7.5” Mahogany


Piece 09: 7.5”x7.5” Mahogany
(Image not available)


Piece 10: 18”x7.5” Mahogany


Piece 11: 18”x7.5” Mahogany


Piece 12: 24”x7.5” Mahogany


Piece 13: 24”x7.5” Mahogany


Piece 14: 24”x5.5” Oak


Piece 15: 24”x24” MDF


Piece 16: 24”x5.5” Poplar


Piece 17: 24”x7.5” Birch


Piece 18: 37”x5.5” Mahogany


Piece 20: 18”x5.5” Mahogany


A Comparison of Text-Based Web Browsers

Intro

Who browses on the terminal nowadays? Whoever you are, you are crazy, but you might appreciate this comparison of text-based web browsers, with screenshots of a few different popular sites.

I wanted to test these browsers with more than just simple pages, so where possible I actually logged into places and took screenshots of the actual webpage in a realistic state.

Methodology

All browsers were set to use xterm with TERM=xterm-256color. The following browsers were used with these settings:

  • retawq (0.2.6c)
    • Enable SSL support
  • elinks (0.12~pre6-11build2)
    • underline
    • linux frames
    • 256 color
    • utf8
  • links2 (2.12-1)
    • Linux frames
    • Color
  • w3m (0.5.3-26build1)
    • Render frames
  • lynx (2.8.9dev8-4ubuntu1)
    • underline links
    • Always allow cookies

Check out the code for the exact commands used to generate everything.
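For example, each screenshot boils down to launching one browser inside xterm with the right TERM set; something like the following (the URL and browser here are just one example, and the exact flags live in the repo):

TERM=xterm-256color xterm -e elinks 'https://en.wikipedia.org/wiki/Rule_110'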

Comparison

Wikipedia Rule_110

Wikipedia has great text-based browsing support in general. I did not try editing anything. All browsers had no trouble rendering the data in a readable way.

(Screenshots of https://en.wikipedia.org/wiki/Rule_110 rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Hacker News

Hacker News is mostly text-based, so these browsers had no trouble with it in general. I appreciate elinks’s support for colors that are true to the original.

(Screenshots of http://news.ycombinator.com rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Facebook

I could not actually log into Facebook with any text-based browser.

(Screenshots of https://facebook.com rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Twitter

Twitter looks “ok” on text-based browsers, although for that particular application you might want to consider a dedicated application built for the terminal.

retawq was unable to log in for some reason.

(Screenshots of https://twitter.com rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Gmail

Gmail is a tall order for a text based browser. Only elinks, w3m, and lynx could pull it off.

elinks shines again with great CSS support, with w3m in second place. These were all rendered using the basic HTML version. Luckily I didn’t get a CAPTCHA.

(Screenshots of https://mail.google.com rendered by retawq, elinks, links2, w3m, lynx, and the original in surf.)

Summary

elinks is my favorite of the bunch because of color support.

This blog post is about 10 years too late, and mostly serves to remind myself which version of “links” I like and why.

Kyle’s (Fashion) Style Guide

I recently read “Why Are SO Many Millennials SO Uncool?”. Let’s start with a quote:

For the purpose of this writing, I’m defining “cool” as those who don’t conform, who don’t always fit in nor do they try to, and who follow their own path; and “uncool” as those who dress, act, and have the same tastes as the masses and are vulnerable to corporate influences.

Now, I’m by no means some sort of authority on coolness. By this definition there is certainly some degree of subjectivity, but this definition has a hint of personal-values embedded into it.

In other words, this is more than “I don’t like black socks and sandals”; it is more like “I value non-corporate-sellouts.” At least this value extends beyond just personal taste.

Individuality Versus Popularity

Anyone can choose to adopt this value. I can appreciate it.

If fully adopted, it seems like this would encompass normal corporate branding stuff, as well as things that are simply “popular”. By this definition, wearing a popular brand name or adopting a trendy style is “uncool”. This is at odds with the definition of “cool” that I learned in middle-school. In fact, in middle-school the definition of cool was the exact opposite of the author’s definition.

This is fine. As we mature into adults, some people outgrow this definition of coolness. Others do not.

Corporate Gucci Bag: Uncool. Handmade Etsy Bag: Cool.

I can get behind this. I also value individuality over popularity, and I dislike corporate influences (or heck, external influences in general).

Examining My (Tech) Wardrobe

One of my other personal values is consistency. If I’m going to adopt this value and be consistent, then perhaps I should examine my wardrobe…

What external corporate ends am I promoting with my wardrobe? Well, let’s start with all these technology t-shirts:

Docker Shirt: Uncool. OpenSSL Shirt: Cool.

Both Docker and OpenSSL are open source, but wearing a Docker shirt implicitly promotes the Docker company. On the other hand, OpenSSL is governed by the OpenSSL Software Foundation. Is wearing a Docker shirt on par with showing off your Gucci bag?

Ubuntu Shirt: Uncool. Debian Shirt: Cool.

Ubuntu is a product of Canonical. Debian doesn’t have any corporate counterpart. Is wearing an Ubuntu shirt uncool because you are providing free advertising for a corporate entity?

AWS Shirt: Uncool. Openstack Shirt: Uncool too.

I don’t know man, I don’t think Openstack shirts are cool either….

Non-tech

The above examples are given mostly because they represent a large portion of my wardrobe. In general the same principle of rejecting corporate sponsors carries over to non-tech shirts.

I dare say that even wearing shirts with logos of your current or previous employers is not cool.

Conclusion

In general, wearing something that promotes another company’s products is, I guess, uncool, even if you like the product or even contribute to it. The root cause is that you are allowing yourself to be used as a means of their promotion?

Of course, the act of trying to be cool is uncool in itself, so I’m pretty sure I’m forever destined to remain… uncool.

A Configuration Management Rosetta Stone: Configuring Sensu With Puppet, Chef, Ansible and Salt

I recently finished my Intermediate Sensu Training on Udemy. It was a ton of work but I’m glad I got it all together. Part of that training includes how to deploy and configure Sensu with four of the most popular open-source configuration management tools: Puppet, Chef, Ansible, and Salt.

The Sensu Decree

In order to do the training I had to learn each of these tools enough so I could install a baseline Sensu installation. Here is what I reproduced with each iteration:

  • A Sensu client, server, and API set up and running
  • RabbitMQ Server, User, and Sensu Vhost ready for use. (no SSL)
  • Redis installed and running for state
  • A Sensu check (check_disk and/or check_apache)
  • The Sensu Mail handler to send emails for alerts
  • The Uchiwa Dashboard
  • All on one host (localhost)

This was no small feat, and required using a non-trivial number of features of each configuration management system to get the job done.

Here were some other guidelines that I followed in this exercise:

  • Always use 3rd party modules/cookbooks/etc. Use official ones if possible.
  • Use the local-execution mode provided by each configuration management tool (no client/server setup; see the invocation sketch after this list)
  • Follow official docs when available for general guidelines for things like installation.
  • Differences in things like config file names or versions of Redis are inconsequential. As long as Sensu behaved the same I considered it complete.
  • No considerations for security (out of scope for this exercise)

Review of Each Tool

Puppet

Puppet In General

Puppet is my “native language” when it comes to configuration management, so it is a little hard for me to imagine what it is like to not already know how it works.

Puppet has a custom DSL to describe configuration in terms of “types”. These are the primitives that you can build infrastructure upon, things like “file”, “package”, and “service”. Third-party modules can extend that language with custom types, allowing you to abstract over the “raw” types. For example, the RabbitMQ module has a type for providing rabbitmq_user resources, which do not correspond to a particular config file or anything, but instead can only be added by special invocations of the rabbitmqctl command.

Puppet strongly emphasizes code reuse. The Puppet Forge is the registry where you can upload and share modules, and it has a number of methods to help indicate code quality. It also marks “officially supported” and “officially approved” modules with extra approval stamps. While the Forge has a very “long tail” of modules that do very common tasks, the officially-supported and officially-approved sets give you a good selection of high-quality modules ready for re-use.

A common criticism of Puppet is that it does not apply resources in the order that they are declared in the manifest. Instead, Puppet internally calculates a directed graph of resources and their dependencies, and executes them in a dependent order, which is not necessarily in the order in which they are parsed. This is similar to how Linux package managers install packages. If you run apt-get install apache libc libssl, the packages will not necessarily get installed in the order that they were specified on the command line.

Puppet also comes with Hiera, a convenient hierarchical key/value store. This store allows users to override and set site-specific settings for Puppet modules without having to fork or modify them. Hiera encourages custom hierarchies that meet your business needs, allowing users to specify settings in a way that makes the most sense for their environments. An example hierarchy might look something like:

hieradata/
├── common.yaml
├── environment
│   ├── dev.yaml
│   └── prod.yaml
├── datacenter
│   ├── dc1.yaml
│   └── dc2.yaml
└── hostname
    ├── web1.yaml
    └── web2.yaml

Then Hiera looks up parameters from most-specific (hostname) to least-specific (common), and returns the first value that is available.
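With the standalone hiera CLI you can sanity-check a lookup by supplying the hierarchy variables by hand (the key and fact values below are made up for illustration):

# Returns the value from the most-specific hierarchy level that defines the key
hiera sensu::rabbitmq_password hostname=web1 datacenter=dc1 environment=prod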

Review of the Sensu Puppet Module

The sensu-puppet module is a first-class citizen in the Sensu world. It has native types for the Sensu JSON files that it manages, as well as a sensu-gem type for easily installing rubygems with the embedded Sensu ruby.

The Sensu Puppet module only manages Sensu, and has no integration with RabbitMQ, Redis, or any other module. To me this is expected; in the Puppet world it would be the job of a profile to combine the Sensu module with RabbitMQ and everything else. For the most part this integration is left as an exercise to the reader.

The Sensu Puppet module also doesn’t manage Uchiwa. That requires a different puppet module. Again to me this is a good thing, I hate it when tools try to do too much.

The codebase is actively maintained, with a few releases per year. The Puppet Forge rates it almost perfectly for module quality. The code has excellent unit-test and acceptance-test coverage. As far as Puppet modules go, the Sensu Puppet module is a great example of a well-maintained piece of code.

One downside to the “completeness” of the module is that when new features of Sensu are released, the Puppet module can lag behind. The configuration inputs to the Puppet module are well-typed, and not just free-form hashes. This gives a lot of guardrails and helps ensure config files are correct before they hit the disk, but it means that some features are not usable until the Puppet module accounts for them.

Although the code worked, there was a significant bug that prevented the module from ever converging. This was annoying, but it still allowed me to test the code. This bug looks to be fixed in master.

Chef

Chef in General

Chef is not as old as Puppet, but it is certainly a mature product. Chef is “just Ruby” when it comes to its configuration language. The upside to this is that Ruby developers can theoretically dive in and hack on stuff. The downside is that being “just Ruby” leaves a lot of rope to hang yourself with.

One nice feature provided by the Chef company is their hosted chef solution, which allows people to get started without hosting a Chef-server.

The Chef toolset also comes with the knife command, a great command-line tool for interacting with the Chef server. It is also a parallel-SSH tool, manipulates Chef cookbooks, and can even launch EC2 (and other) instances. (Did they take the kitchen-sink metaphor too far?)

The Chef Supermarket serves as the public registry for Chef cookbooks. There are not many quality indicators to help you find which cookbooks are any good; the best metric I could see is just sorting by “followers”. This is made up for by the fact that there are over a hundred officially supported cookbooks.

Probably the most difficult aspect of Chef for me to understand was how attributes interact. This confusion is probably most obvious when you look at Chef’s 15 levels of attribute precedence. It seems to me that there should be a more obvious way for intent to flow, but I could be just spoiled by Puppet’s Hiera.

Review of the Chef-Sensu Cookbook

The Sensu Chef Cookbook is also a first-class citizen in the Sensu-world. Chef is the “native config language” of Sean Porter, the main author of Sensu. This gives a lot of credibility to the Cookbook, and shows in the contributor page.

The Cookbook itself is feature complete, with recipes for installing and configuring all aspects of Sensu.

The scope of the cookbook includes all Sensu related technologies, including RabbitMQ, Redis, and Uchiwa. It is certainly “batteries included” and on by default. It even downloads and compiles Redis from source for you.

Another example of this “batteries included” design is the RabbitMQ cookbook setting Apt attributes. Like the Redis example above, this behavior surprised me, but technically it is not related to the Sensu Chef cookbook.

At the same time, wrapper cookbooks are recommended as a method to combine multiple cookbooks together in a coherent way. I think in general I just expected the wrapper cookbooks to do more and the main Sensu cookbook to do less.

The cookbook does have an integration test suite, but it is not run via Travis. The code is under active development, with multiple releases a year. It has native support for Chef data bags for transporting the RabbitMQ SSL credentials, which is a nice touch (not tested in this review).

Ansible

Ansible in General

Ansible is a relative newcomer to the configuration management space. Ansible uses YAML files to define desired state. The YAML files are a nice way to represent things, but it would be misleading to think that Ansible is just YAML files: Ansible has its own DSL and uses Jinja2 templating, which is applied to the contents of the YAML.

The Ansible Galaxy is the community registry for uploading shared roles. You can sort by rating to try to get a better idea about which roles are potentially higher quality than others.

There don’t seem to be any official roles or playbooks. The closest thing to official roles is the ansible-examples repository. But click the link and look at the lamp_simple example. There is no code reuse at all! Every example re-invents how to install apache, install ntp, configure iptables, etc. What’s up with that?

While the YAML files may make it very easy for beginners to make playbooks that get things done quickly, I don’t think they will work out great as infrastructure expands. The abstractions just are not there.

Another sign, to me, that Ansible has the wrong abstractions is that so many roles are distro specific. Not many have the necessary code to work on both “CentOS” and “Debian”. There is a generic package type, but very few roles use it? Check out the original author’s opinion on the subject. Look at the examples! They all only work on yum based distributions.

I’ve read lots of posts of people migrating to Ansible and loving it. Personally, I don’t get it. The abstractions are too low-level. If you are lucky, then the Ansible core has a Module to manipulate the resources on the host, like RabbitMQ stuff. If you are unlucky, then the only primitives you have available are yaml files and running commands and parsing stdout. Or you can write your own module.

Ansible Sensu Playbook Review

There is no official Sensu Ansible playbook. I was not able to find any playbooks that support RedHat-based distributions.

Luckily, I was able to use Mayeu’s ansible playbook, in conjunction with this RabbitMQ playbook on my Ubuntu server.

The sensu_check module is part of the “Extras”, but it is only a very small part of deploying Sensu, and it has no cohesion with the playbook that actually deploys Sensu itself. There is no way to extend sensu_check without forking ansible-modules-extras. It can’t consume arbitrary check metadata.

In the end, to meet my needs I had to construct hashes myself and deploy them to disk as JSON. The playbook-provided way to deploy sensu checks is to have them all contained in the single sensu_checks variable.

Salt

Salt in General

Salt is also a relative newcomer to the configuration management world. As a user, Salt feels very similar to Ansible. They both use YAML files to represent the desired state of the system. Both use Jinja templates. Both rely on the core software for the “advanced” system interaction, and Salt formulas can be just YAML with no real code.

Salt takes a different approach to sharing community code compared to the other configuration management systems. Salt keeps all the official formulas in one GitHub project. The docs recommend forking the formula for your own use. On the plus side, having “canonical” formulas for common tasks reduces duplication and encourages code re-use. The downside is that… it encourages forking? These formulas in general are not that extensive. They don’t have releases or any kind of testing in place.

Salt’s Pillar is a powerful tool for separating configuration from code. It is similar to Puppet’s Hiera. Pro: it separates config from code, keeping the site-specific variables in a separate folder from the formulas. Con: formulas have to be “pillar-aware”. There is no equivalent to Puppet’s automatic parameter lookup.

Sensu Salt Formula Review

For my testing, I used the official Salt-formula. There is a sensu-salt repo on the official Sensu project, but it is not really suitable for production use in my opinion.

For the most part, the formula did what it said on the tin. Of course, like Ansible, the only way I was able to deploy checks in a flexible way was to construct my own Hashes and deploy them as JSON directly. There is no such thing as a sensu_check type in Salt.

I was not able to get rid of the hard-coded cron check. I guess this goes with the idea that they expect you to fork the repo and make your own local changes to meet your needs. I thought I should maybe open an issue for this, but the file has been there for a year and nobody else has complained. I figured it was just me, and maybe I should get over myself and accept the fact that I got a free cron check!

In my own testing, I used the native gem provider with a special path to Sensu’s gem binary to install Sensu gems. But then I discovered that the formula did this too, but in two different ways, using the cmd.run method instead of the native gem method. I didn’t really like this, but at the same time, this is the first time I’ve ever used Salt.

As far as I can tell, to do more advanced Sensu config things, like filters or mutators, you are expected to fork the formula and drop in the json file into the right directory.

Comparison

A rough, opinionated comparison between the tools, with regard to the tool itself and the tool in conjunction with Sensu. “High” doesn’t necessarily mean “good” here:

                                                | Puppet | Chef                | Ansible         | Salt
Review of the Config Management Tool in General |        |                     |                 |
Version used                                    | 3.4.3  | 12.4.1              | 1.5.4           | 2015.5.3
Third-Party Module Ease of Use                  | High   | High                | Medium          | Low
Official Sensu Support for the Tool             | High   | High                | Low             | Low
Reproducibility                                 | High   | High                | High            | High
Ease of Use Getting Started                     | Medium | Medium              | High            | Medium
Language Extensibility                          | High   | High                | Low             | Low
Separation Between Config Data and Code         | Hiera  | Databags/Attributes | Just variables? | Pillar
Module Re-usability                             | High   | High                | Low             | Low
Review of the Sensu Module/Cookbook/Etc         |        |                     |                 |
Version of the Module Used                      | 1.5.5  | 2.10.0              | 0.1.0           | c6324b3
Sensu Module Feature Completeness               | High   | High                | Medium          | Medium
Sensu Module Integration with Other Modules     | Low    | Extreme?            | None            | None
Sensu Module Flexibility                        | High   | High                | Medium          | Low
Sensu Module Re-usability                       | High   | High                | High            | Low
How Opinionated Was It?                         | Low    | High                | Low             | Medium
Usability with Sensu’s Embedded Ruby            | Yes    | Yes                 | Not natively    | Sorta

Conclusion

The way I see it, there are two camps. Chef and Puppet both provide a rich language to build modules with. For example, the PuppetLabs RabbitMQ module contains all the code to interact with RabbitMQ. The main Puppet codebase doesn’t know anything about RabbitMQ. The same goes for Chef. Both Chef and Puppet also have their own DSL. Puppet uses yaml files for Hiera, but they are for config only, unlike Ansible/Salt.

In the other camp are Ansible and Salt. They have a simplified config language, and require help from the core software to do the “heavy lifting” of the raw types. For example, the Salt RabbitMQ formula requires the help of the core Salt RabbitMQ module to provide the primitives.

Final Thoughts

  • Puppet
    • Directed graph dependency ordering, not parse-order driven
    • Type/Provider system and defined types provide the right abstraction layers to build upon.
    • Hiera provides a good separation of config/code, making it easier to reuse modules without modification.
    • Strong culture of testing
    • Lots of good supported modules
    • High deployment overhead and language learning curve
  • Chef
    • LWRP system provides the right abstraction layers to build upon.
    • Knife tool does do a lot of cool stuff
    • Lots of good supported cookbooks
    • Strong culture of testing
    • “Just ruby”
    • 15 levels of attribute precedence is insane
  • Ansible
    • Low deployment overhead and low learning curve
    • “Just yaml files”
    • Lack of type/providers means that playbooks use “apt” and “yum” directly, which kinda sucks
  • Salt
    • Pillar provides a nice separation of config/code, which is good for formula-reuse, if the formula is pillar-aware
    • Centralized formulas emphasize consolidated development effort
    • No strong state testing emphasis or framework

Going Further

If you want to know more about Sensu, of course you can take my training course:

Or you can tell me I’m wrong. You can raise an issue or make a pull request for the blog post, or investigate my actual training material and code on GitHub.

A Comparison of Image to ASCII Conversion Tools

Inspired by ponysay, I think wicked ascii/ansi artwork on the terminal is great.

I decided to survey all the tools I could find that aid in this conversion to see if there were any dramatic differences in results.

Methodology

For these tests I used an image with a 160px width, twice that of a standard terminal. Then I cat'd the image in plain xterm and took a screenshot of the results.

The original has been scaled up (6X) to be the same relative size as the resulting screenshots.

My entire methodology is on GitHub if you wish to see exactly how I made these images. In theory it is 100% reproducible with make (assuming a Linux desktop).

Tools Compared

Results

bender.png

(Screenshots: the original, then conversions with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)

lenna.png

(Screenshots: the original, then conversions with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)

nyan.png

(Screenshots: the original, then conversions with img2xterm, util-say, catimg, catimg-bash, img-cat, img2txt, and jp2a.)

Conclusion

img2xterm stands out to me as the most accurate and true to the original, with util-say as a close second. Both of these tools understand “half-block” characters with two colors, effectively doubling the vertical resolution of the output (two “pixels” per character cell).

catimg and img-cat both have good color representation, but lack the additional resolution of the other tools, giving them a more “pixelated” look.

img2txt and jp2a are “true ASCII” tools; they are really not in the same league as the others. I included them here for completeness.

Playing With IPv6 Over Bluetooth Low Energy (6LoWPAN)

I like Bluetooth Low Energy (BTLE). I also like IPv6. Did you know you could put both together?

Technically 6LoWPAN

Requirements

modprobe bluetooth_6lowpan
echo 'bluetooth_6lowpan' >> /etc/modules

Establishing the Connection

Set the Bluetooth L2CAP PSM

First you need to set the Protocol/Service Multiplexer (PSM) value to 62 (0x3E) on both sides:

echo 62 > /sys/kernel/debug/bluetooth/6lowpan_psm

This PSM value lets the driver know that you are going to multiplex this special new protocol on top of whatever your bluetooth device might also be doing.

0x25 is the magic value for “Internet Protocol Support Profile” https://www.bluetooth.org/en-us/specification/assigned-numbers/logical-link-control which I think is supposed to be the correct value?

0x3E is some sort of temporary value I had to use to get this working, as 0x25 ended up being unsupported, per the messages in my wireshark dump.

I’m not aware of any other way to set it other than this kernel debug setting.

Making the slave advertise

The slave must be doing Low-Energy advertisements in order for the master to connect to it.

hciconfig hci0 leadv

Connect

On the master you should be able to watch the slave advertise:

hcitool lescan
LE Scan ...
C4:85:08:31:XX:XX (unknown)
C4:85:08:31:XX:XX ubuntu-0

Establish a connection from the master to the slave:

echo "connect C4:85:08:31:XX:XX 1" >/sys/kernel/debug/bluetooth/6lowpan_control

Afterwards a bt0 device should show up in ifconfig. Run hcitool con to verify a connection is actually established. Use Wireshark in Bluetooth monitor mode on the hci device to confirm commands are being sent.
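A couple of quick sanity checks at this point (assuming the interface comes up as bt0, as above):

ip -6 addr show dev bt0    # should list a link-local fe80:: address
hcitool con                # should list the active LE connection to the slave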

The proof is in the ping:

ping6 fe80::1610:9fff:fee0:1432%bt0
PING fe80::1610:9fff:fee0:1432%bt0(fe80::1610:9fff:fee0:1432) 56 data bytes
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=1 ttl=64 time=158 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=2 ttl=64 time=236 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=3 ttl=64 time=113 ms

Problems

After a small number of packets, the connection seems to drop, and on the master side I get:

[  368.947193] Bluetooth: hci0 link tx timeout
[  368.947202] Bluetooth: hci0 killing stalled connection c4:85:08:31:XX:XX

No matter what rmmod or stopping of services I tried, a reboot was the only thing that would rebuild the connection. Obviously this is pretty new stuff; hopefully it will stabilize in later versions of the kernel.

At this time though, on 3.19.0-21-generic (Ubuntu Vivid), this feature is not yet usable.

Etherhouse Part 2 - Software

The software that powers the Etherhouse project is open source. This blog post describes that software and how it interacts with all the pieces.

Client

The client software that runs on the Arduino is open source. It uses one external library and is written in native Arduino C++.

The Arduino runs a limited TCP/IP stack and interacts with the http api.

The code has plenty of defensive measures in place to ensure the client continues to run without interruption or interaction. No one should need to “turn it off and on again.”

Server

The Server software is also open source.

In designing the software, I aimed for longevity. I want the software to continue to run for many years without maintenance. I decided to use golang.

  • Go binaries are statically compiled, which means the same binary I compile now will continue to run on new platforms for years to come.
  • With godeps I can include all compatible libraries together with no external dependencies, regardless of their long term state.
  • I use Heroku to deploy the code. Heroku is free for small installs and a stable platform. They can probably keep this server up better than I can.
  • I use a DNS name I control for service discovery. This gives me the flexibility to change platforms over time if necessary.

Etherhouse Part 1 - Hardware

Etherhouse is a project of mine involving eight Christmas gifts. Each gift involved a display of some model houses made from folded paper, each representing the home of a friend or family member.

The houses light up depending on whether that family member is home or not. Their presence is detected based on whether their smartphone is on the same network as the Etherhouse.

See the GitHub page for more details.