img2xterm is the contender from the last
comparison. Compared to this second wave of tools, its lack of 24-bit color
shows. It does have the most advanced colorspace conversion capabilities, but
if you want true color, look elsewhere.
termplay is a converter written in
Rust, capable of converting static images as well as videos. It has native
youtube-dl support. It also has a unique feature to mirror an X11 window in
real time in the terminal. This converter had the highest performance of all
the tools, likely because it needs extra performance optimizations to support video.
video. The lack of half-block character support show when put side-by-side with
has the unique feature of being able to use many different block formats, more
than the half-block character, to create an interesting output. Look at the
list of unique unicode characters
available for use. Due to this feature, I had to provide an upscaled
version of the source images in order to produce the the desired output.
pixterm’s most prominent feature is its
speed. Taking advantage of Go’s concurrency features, it can convert images
quickly using all available cores. It also supports a wide range of input formats.
timg’s unique feature is that it can show
animated gifs directly in the terminal. It can handle true color using the
half-block character for double the vertical resolution.
Performance

| Tool | Colors | Performance | Notes |
|------|--------|-------------|-------|
| img2xterm | 256 | Medium (20, 13, 11) | Bash version available |
| termplay | True or 256 | High (6, 5, 4) | Video and X11 mirroring |
| TerminalImageViewer | True or 256 | Low (48, 46, 45) | C++ and Java; extra unicode characters for multi-pixel matching |
| pixterm | True | Medium (17, 17, 17) | Fast concurrent conversion |
| timg | True | High (10, 8, 7) | Animated gif support |
(Performance data is real time in ms for rendering bender, lenna, and nyan)
Winner: TerminalImageViewer
Even though it has relatively slow performance, its unique unicode rendering can
give the highest “resolution” images available on a terminal. It does look a
little weird on some images, so I’m glad it has the -0 option to force it back
into half-block mode. In this mode it is just as good as pixterm or timg.
Runner up: timg
With true color and half-block support, timg creates very high quality versions
of images in terminals. Combine this with its high performance and animated
gif support, and it is this year’s runner up.
Over the last 5 days my wife and I cruised from Port San Luis to San Francisco.
The total distance was 228 Nautical Miles (nm). We recently purchased the boat
and decided to relocate it to a marina closer to where we live. We knew the
trip would be relatively difficult: the majority of the journey would be against
the wind and waves. For this reason we decided to motor the entire distance.
Due to technical difficulties, the first ~12 hours of the trip are missing from the GPS track.
On the first day we took BART to
Amtrak from San Francisco to
Avila Beach. From there we took the water taxi out to the boat, which was on a mooring ball. I made the call for us to leave that night, with me taking the night shift.
Port San Luis Pier
All Smiles Right off the bat
Gi Gi: A 1997 Gemini 105m
5pm: Leave Port San Luis
Off this part of the coast, during this time, the winds were against us at 16kts,
gusting to 21kts.
(Picture not taken by me) First Landmark: Diablo Canyon Power Plant (CCA “Mike” Michael L. Baird)
The seas overnight were not too rough, but the combination of the pounding and the headwind meant we could only sustain 4kts.
The Gemini pounds the seas pretty badly. This is mostly due to a combination of a solid (not netted) forward deck, a relatively low bridge deck clearance, and the structural nature of a catamaran. Cody got very seasick.
Day 2 (2017-07-06)
2am: A late night conversation
At ~2am I was hailed by a 300ft research vessel heading North towards Portland.
He was showing up on my radar and called to let me know he was passing on
our port side and that we should continue on our respective courses. It was nice to
hear from someone in the middle of the night. I had a conversation with him,
and his general advice for avoiding the rough seas was to hug the coast.
He mentioned the waves don’t get too bad until Point Sur.
7am: Cody takes over
I sleep after Cody takes over for the day shift
Cody carefully drives into the foggy morning
Difficult to imagine doing this trip without a co-pilot
9am: A snagged prop
In the morning the prop was fouled by a line that
had been pounded out of its holding bag on the forward part of the deck. It
had found its way under the hull and was just long enough to get the tip of the
line fouled in the prop.
The Gemini has a raisable drive leg made by Sillette:
Gemini Drive leg (copyright Chris Gruno)
The drive leg adds complexity to the rig, but does make fixing fouled props
easy. The difficulty here was actually figuring out it was fouled in the first
place. Every time I would raise the leg to inspect it, there was nothing in it.
The line was only long enough to get in the prop while down and at speed (so
the suction would draw in the line). I had to watch the water carefully and
wait for the line to find its way into the prop to confirm my suspicion about
the cause of the extreme vibration.
Chewed up line, only at the tip
3-5pm: Point Sur
By the time we hit Point Sur there was a small craft advisory in effect, with very strong headwinds and 10ft seas.
This was the most difficult part of the trip. Usually the buoyancy of the
Gemini allows it to “only” pound waves and ride over swells. This was the only
part of the trip where the waves were so bad that the boat would pound and dig
deep into the wave, with seawater crashing over the top. Unsurprisingly the
autohelm was not able to keep us on course and I took over the wheel. I drove the engine hard and pressed
forward for a few hours to get us past this difficult point. I was only able to maintain 2-3kts.
At around 6pm the weather subsided enough to allow me to throttle back the
engine and re-enable autopilot, giving me and the engine a break while we
continued to travel up the coast, with our intermediate destination of
anchoring at Whalers Cove for the night.
Day 3 (2017-07-07)
12am: Give up on Whalers Cove
Whaler’s Cove is a small cove for boats to anchor in, “protected” by surrounding rocks. It is only a few hundred feet across, with an opening of about 200 feet.
With the slow progress we made thus far, we arrived here at around midnight in
the darkness. The cove was indistinguishable from the surrounding rocks
and black land. With very little anchoring experience and a dead spotlight, there was
just no way we were going to be able to make this landing, especially given our fatigue.
Luckily Cody volunteered to drive three more hours north to Monterey Bay Harbor.
3am: Arrive at Monterey Harbor
After a small nap, I had enough brainpower to remember that I had the foresight
to load OpenCPN with the appropriate vector maps for
this area on my cell phone. This was an amazingly powerful tool for guiding
into a marina at night. A detailed map with a real-time GPS overlay gives you
the confidence that you are actually heading in the right direction and looking at the
right lights. Luckily the friendly harbormaster on the night shift gave us an
end-tie and a good night’s sleep.
9am: Depart Monterey Harbor
Monterey Harbor is a small city harbor. Nothing really notable: the service was nice, the bathrooms were clean, and it was fairly priced.
Docked at Monterey Harbor
For this day we decided to make an easy trip of only 22nm north to Santa Cruz.
On to Santa Cruz after a real night’s sleep
5pm: Arrive at Santa Cruz
Santa Cruz is a nice harbor, but for a transient berth we paid ~$50, twice as much as the other marinas we stayed at. On top of that, shower access required a non-refundable “deposit” and shore power was extra; no thanks. Since we had a short day, we treated ourselves to Betty’s Burgers for dinner.
Day 4 (2017-07-08)
6am: Depart Santa Cruz
Early morning easy seas selfie
Taking turns driving
Passing a marker
Calm seas for once
I say calm, but it is still the Pacific Ocean. Calm in this context means no water is spraying in your face:
5pm: Arrive at Pillar Point
Tied up at Pillar Point
Washing down the salt water
Pillar Point Marina is primarily a fishing harbor, but it has lots of shops and restaurants nearby. It is a surprisingly happening place.
Day 5 (2017-07-09)
6am: Depart Pillar Point Marina
Ready for the last day
Departing Pillar Point was relatively easy in the early morning, especially with the assistance of radar. We were accompanied by many fishing boats heading out to get their catch. A kayaker warned us of a pair of humpbacks just outside the harbor barrier wall, and indeed I had to make evasive maneuvers to avoid a whale surfacing ~30ft in front of the boat!
9am: Drive leg leak
This morning the drive train of the boat sounded a lot more “whiny” than normal. The worrying sound was coming from the drive leg, not the diesel engine. Luckily I had been lurking on the Gemini Owners Mailing List long enough to have a hunch that there might be an oil leak, and that the bellows was the most likely culprit:
Drive leg oil leak
The bellows is a rubber part that gets a lot of exercise as the drive leg turns, raises, and lowers. This leak let enough oil drain out to expose the top CV joint, leaving it under-lubricated and whiny. After a quick phone call to the previous owner to confirm the correct oil was onboard and available (thanks so much Jerry! This was a huge lifesaver!), I topped up the drive oil to get us the rest of the way home. The bellows will be first on my list of repairs to make on the boat.
11am: Entering San Francisco Bay
A foggy approach to the Golden Gate Bridge
And a classic “dad move” of mashing stop when you think you are mashing record:
A foggy San Francisco
A foggy Alcatraz
Pier 39 or so
We timed it correctly to come in with the tide, giving us a nice 3kt speed boost and a peak speed of 9.3kts under the bridge.
For winter 2016 I made Lichtenberg figures by burning wood with high-voltage electricity.
I used a 5kV 10mA (50W) neon light transformer. I also experimented with
a 2kW microwave oven transformer, but found that the lower-powered neon
transformer produced finer, better, and safer results.
To produce the figures, I would first apply the electricity to the wood, often
at the corners. Initially the resistance of the wood is too high to allow
any burning. Then I would use a spray bottle full of water/baking soda to
moisten the surface of the wood until the electricity could find the path of
least resistance and start the burning reaction. With the low-power neon
transformer the burning is slow and takes hours.
To guide the reaction in an “aesthetically pleasing way”, I used a heat gun to
temporarily dry out parts of the wood, leaving channels of low-resistance
surface water for the burn to follow. This technique is most evident on piece #10. It was also
used on #15 to evenly cover the entire (large) piece.
After the electrical treatment, each piece was finished with varnish, matted,
framed, and shipped. Below is a gallery of final results. Each was given to a
friend or family member as a winter gift:
*For the purpose of this writing, I’m defining “cool” as those who don’t
conform, who don’t always fit in nor do they try to, and who follow their own
path; and “uncool” as those who dress, act, and have the same tastes as the
masses and are vulnerable to corporate influences.
Now, I’m by no means some sort of authority on coolness. By this definition
there is certainly some degree of subjectivity, but it has a hint
of personal values embedded in it.
In other words, this is more than “I don’t like black socks and sandals”; it is
more like “I value non-corporate-sellouts.” At least this value extends beyond
just personal taste.
Individuality Versus Popularity
Anyone can choose to adopt this value. I can appreciate it.
If fully adopted, it seems like this would encompass normal corporate branding
stuff, as well as things that are simply “popular”. By this definition, wearing
a popular brand name or adopting a trendy style is “uncool”. This is at odds
with the definition of “cool” that I learned in middle school. In fact, in
middle school the definition of cool was the exact opposite of the author’s.
This is fine. As we mature into adults, some people outgrow this definition of
coolness. Others do not.
Corporate Gucci Bag: Uncool
Handmade Etsy Bag: Cool
I can get behind this. I value individuality over popularity. I also dislike
corporate influences (or heck, external influences in general).
Examining My (Tech) Wardrobe
One of my other personal values is consistency. If I’m going to adopt this
value and be consistent, then perhaps I should examine my wardrobe…
What external corporate ends am I promoting with my wardrobe? Well let’s start
with all these technology tshirts:
Docker Shirt: Uncool
OpenSSL Shirt: Cool
Both Docker and OpenSSL are open source, but wearing a Docker shirt implicitly
promotes the Docker company. On the other hand, OpenSSL is governed by the
OpenSSL Software Foundation. Is wearing a Docker shirt on par with showing
off your Gucci bag?
Ubuntu Shirt: Uncool
Debian Shirt: Cool
Ubuntu is a product of Canonical. Debian doesn’t have any corporate
counterpart. Is wearing an Ubuntu shirt uncool because you are providing free
advertising for a corporate entity?
AWS Shirt: Uncool
Openstack Shirt: Uncool too
I don’t know man, I don’t think Openstack shirts are cool either….
These examples are given mostly because they represent a large portion
of my wardrobe. In general the same principle of rejecting corporate sponsors
carries over to non-tech shirts.
I dare say that even wearing shirts with logos of your current or previous
employers is not cool.
In general, I guess wearing something that promotes another company’s products
is uncool, even if you like the product or even contribute to it. The root cause
is that you are allowing yourself to be used as a means of their promotion.
Of course the act of trying to be cool is uncool in itself, so I’m pretty sure
I’m forever destined to remain… uncool.
I recently finished my Intermediate Sensu Training on Udemy.
It was a ton of work but I’m glad I got it all together. Part of that training
includes how to deploy and configure Sensu with four
of the most popular open-source configuration management tools:
Puppet, Chef, Ansible, and Salt.
In order to do the training I had to learn each of these tools well enough to stand up
a baseline Sensu installation. Here is what I reproduced with each iteration:
A Sensu client, server, and API set up and running
RabbitMQ server, user, and Sensu vhost ready for use (no SSL)
Redis installed and running for state
A Sensu check (check_disk and/or check_apache)
The Sensu Mail handler to send emails for alerts
The Uchiwa Dashboard
All on one host (localhost)
This was no small feat, and required using a non-trivial number of features of each configuration management system to get the job done.
Here were some other guidelines that I followed in this exercise:
Always use 3rd party modules/cookbooks/etc. Use official ones if possible.
Use the local-execution mode provided by the configuration management tool
(no client/server setup)
Follow official docs, when available, for general guidelines
Differences in things like config file names or versions of Redis are
inconsequential. As long as Sensu behaved the same I considered it complete.
No considerations for security (out of scope for this exercise)
Review of Each Tool
Puppet In General
Puppet is my “native language” when it comes to configuration management, so it
is a little hard for me to imagine what it is like to not already know how it works.
Puppet has a custom DSL to describe configuration in terms of “types”. These are
the primitives that you can build infrastructure upon: things like “file”, “package”,
and “service”. Third party modules can extend that language with custom types,
allowing you to abstract over the “raw” types. For example, the RabbitMQ module has a
rabbitmq_user type; these users do not correspond to any particular
config file, and can only be added by special invocations
of the rabbitmqctl command.
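As a sketch of what that looks like (the resource values here are made up), a module-provided type is declared just like a core type:

```puppet
# A core type and a custom type from the puppetlabs-rabbitmq module.
package { 'rabbitmq-server':
  ensure => installed,
}

# rabbitmq_user has no config file behind it; the provider shells out
# to rabbitmqctl to converge the user.
rabbitmq_user { 'sensu':
  password => 'hunter2',                  # illustrative value
  require  => Package['rabbitmq-server'], # explicit dependency edge
}
```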
Puppet strongly emphasizes code-reuse. The Puppet Forge is the registry where you can upload and
share modules. The Forge has a number of methods to help indicate code quality.
It also flags “officially supported” and “officially approved” modules as
extra stamps of approval. While the Forge has a very “long tail” of modules
that all do the same common tasks, the officially-supported and
officially-approved sets leave you with a good selection of high-quality
modules ready for re-use.
A common criticism of Puppet is that it does not apply resources in the order that
they are declared in the manifest. Instead, Puppet internally calculates a directed
graph of resources and their dependencies, and executes them in a dependent order, which
is not necessarily in the order in which they are parsed. This is similar to how Linux
package managers install packages. If you run apt-get install apache libc libssl,
the packages will not necessarily get installed in the order that they were
specified on the command line.
Puppet also comes with Hiera,
a convenient hierarchical key/value store. It allows users to override
and set site-specific settings for Puppet modules without having to fork or modify
them. Hiera encourages custom hierarchies that meet your business needs, allowing
users to specify settings in a way that makes the most sense for their environments.
An example hierarchy might look something like this (Hiera 3 syntax; the names here are illustrative):
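```yaml
# hiera.yaml (Hiera 3 style); the hierarchy itself is site-specific
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::fqdn}"
  - "environment/%{::environment}"
  - common
:yaml:
  :datadir: /etc/puppetlabs/code/hieradata
```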
Then Hiera looks up parameters from most-specific (hostname) to least-specific
(common), and returns the first value that is available.
Review of the Sensu Puppet Module
The sensu-puppet module is a
first-class citizen in the Sensu world. It has native types for the Sensu JSON
files that it manages, as well as a sensu_gem package provider for easily installing
rubygems with Sensu’s embedded ruby.
The Sensu Puppet module only manages Sensu, and has no integration with
RabbitMQ, Redis, or any other module. To me this is expected: in the
Puppet world it would be the job of a profile to combine the Sensu module
with RabbitMQ and the rest. For the most part this integration is left as
an exercise to the reader.
The Sensu Puppet module also doesn’t manage Uchiwa; that requires a
different Puppet module. Again, to me
this is a good thing: I hate it when tools try to do too much.
The codebase is actively maintained, with a few
releases per year. The Puppet Forge rates it almost perfectly for module quality.
The code has excellent unit test and acceptance test coverage. As far
as Puppet modules go, the Sensu Puppet module is a great example of a
well-maintained piece of code.
One downside of the module’s “completeness” is that sometimes new features of
Sensu are released and the Puppet module will lag. The configuration inputs to
the puppet module are well-typed, and not just free-form hashes. This gives a
lot of guardrails and helps ensure config files are correct before they hit
the disk, but it means that some features are not usable until the Puppet
module can account for them.
Although the module basically worked, there was a significant bug that prevented it
from ever converging cleanly. This was annoying, but it still allowed me to test the code.
This bug looks to be fixed in master.
Chef in General
Chef is not as old as Puppet, but is certainly a mature product. Chef is “just
ruby” when it comes to its configuration language. The upside to this is that
Ruby developers can theoretically dive in and hack on stuff. The downside is
that being “just ruby” leaves a lot of rope to hang yourself with.
One nice feature provided by the Chef company is their hosted Chef solution, which
allows people to get started without running their own Chef server.
The Chef toolset also comes with the knife command, which is a great command
line tool for interacting with the Chef server. It is also a parallel-ssh tool,
manipulates Chef cookbooks, and can even launch EC2 (and other) instances. (Did
they take the kitchen-sink metaphor too far?)
The Chef Supermarket serves as the public
registry for Chef cookbooks. There are not many quality indicators to help you
find which cookbooks are any good; the best metric I could find was sorting by
“followers”. This is made up for by the fact that there are over
a hundred officially supported cookbooks.
Probably the most difficult aspect of Chef for me to understand was how
attributes interact. This confusion is probably most obvious when you look at
Chef’s 15 levels of attribute precedence. It seems
to me that there should be a more obvious way for intent to flow, but I could
be just spoiled by Puppet’s Hiera.
Review of the Chef-Sensu Cookbook
The Sensu Chef Cookbook is also a
first-class citizen in the Sensu-world. Chef is the “native config language” of
Sean Porter, the main author of Sensu. This
gives a lot of credibility to the cookbook, and it shows in the contributor page.
The Cookbook itself is feature complete, with recipes for installing and
configuring all aspects of Sensu.
The scope of the cookbook includes all Sensu related technologies, including
RabbitMQ, Redis, and Uchiwa. It is certainly “batteries included” and on by
default. It even downloads and compiles Redis from source for you.
Another example of this “batteries included” design is the RabbitMQ cookbook
setting apt attributes. Like the
above Redis example, this behavior surprised me, but technically it is not related to
the Sensu Chef cookbook.
At the same time, wrapper cookbooks are recommended as a
method to combine multiple cookbooks together in a coherent way. I think in
general I just expected the wrapper cookbooks to do more and the main Sensu
cookbook to do less.
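As a sketch of what I mean (the attribute and recipe names here are my guesses, not taken from the cookbook’s docs), a wrapper cookbook’s default recipe just tweaks attributes and pulls in the upstream recipes:

```ruby
# my_sensu_wrapper/recipes/default.rb -- a hypothetical wrapper cookbook.
# Override upstream attributes before including the upstream recipes.
node.default['sensu']['use_embedded_ruby'] = true # assumed attribute name

include_recipe 'sensu::default' # the upstream cookbook's entry point
# ...then layer site-specific resources on top here.
```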
The cookbook does have an integration test suite, but it is not run via Travis.
The code is under active development, with multiple releases a year. It has
native support for Chef data bags for
transporting the RabbitMQ SSL credentials, which is a nice touch (not tested in
this exercise).
Ansible in General
Ansible is a relative newcomer to the configuration management space. Ansible
uses yaml files to define desired state. The yaml files are a nice way to
represent things, but it would be misleading to think that Ansible is just yaml
files. Ansible has its own DSL and uses Jinja2 templating, which is applied
over the contents of the yaml.
The Ansible Galaxy is the community registry for
uploading shared roles. You can sort by rating
to try to get a better idea of which roles are potentially higher quality.
There don’t seem to be any official roles or playbooks. The closest thing to
official roles is the ansible-examples
repository. But click the link and look at the lamp_simple example. There is
no code-reuse at all! Every example re-invents how to install apache, install
ntp, configure iptables, etc. What’s up with that?
While the yaml files may make it very easy for beginners to make playbooks that
get things done quickly, I don’t think they will work out great as
infrastructure expands. The abstractions just are not there.
Another sign, to me, that Ansible has the wrong abstractions is that
so many roles are distro specific.
Not many have the necessary code to work on both “CentOS” and “Debian”. There
is a generic package
module, but very few roles use it. Check out the original author’s opinion on the subject.
Look at the examples! They all only work on yum-based distributions.
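For reference, the distro-agnostic version is barely any harder to write; something like this works on both apt- and yum-based systems:

```yaml
# Uses Ansible's generic "package" module instead of apt/yum directly.
- name: Install ntp on any distro
  package:
    name: ntp
    state: present
```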
I’ve read lots of posts of people migrating to Ansible and loving it.
Personally, I don’t get it. The abstractions are too low-level. If you are lucky,
the Ansible core has a module to manipulate the resources on the host,
like the RabbitMQ modules. If you are
unlucky, the only primitives you have available are yaml files,
running commands, and parsing stdout.
Or you can write your own module.
Ansible Sensu Playbook Review
There is no official Sensu Ansible playbook. I was not able to find any playbooks
that support RedHat-based distributions.
The sensu_check module is part of the “Extras”, but it is only a very small
part of deploying Sensu, and it has no cohesion with the playbook that actually
deploys Sensu itself. There is no way to extend sensu_check without forking it,
and it can’t consume arbitrary check metadata.
In the end, to meet my needs I had to construct hashes myself and deploy them
to disk as JSON. The playbook-provided way to deploy Sensu checks is to have
them all contained in a single place.
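Here is roughly what that workaround looked like (the check definition itself is illustrative):

```yaml
# Render a plain hash straight to disk as a Sensu check definition.
- name: Deploy a sensu check as raw JSON
  copy:
    dest: /etc/sensu/conf.d/check_disk.json
    content: "{{ check_disk | to_nice_json }}"
  vars:
    check_disk:
      checks:
        check_disk:
          command: check-disk-usage.rb
          subscribers: [all]
          interval: 60
```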
Salt in General
Salt is also a relative newcomer to the configuration management world. As a
user, Salt feels very similar to Ansible. Both use yaml files to represent
the desired state of the system. Both use Jinja templates. Both require the
“advanced” system interaction to happen in the core software, so
Salt formulas can be just yaml with no real code.
Salt takes a different approach to sharing community code compared to the
other configuration management systems. Salt keeps all the official formulas in
one GitHub project. The docs recommend
forking the formula
for your own use. On the plus side, having “canonical” formulas for common
tasks reduces duplication and encourages code re-use. The downside is that… it
encourages forking? These formulas in general are not that extensive. They
don’t have releases or any kind of testing in place.
Salt’s Pillar is a powerful tool for separating configuration from code.
It is similar to Puppet’s Hiera. Pro: it separates config from code, keeping
the site-specific variables in a separate folder from the formulas. Con:
formulas have to be “pillar-aware”. There is no equivalent to Puppet’s
automatic parameter lookup.
Sensu Salt Formula Review
For my testing, I used the official
Salt formula. There is
a sensu-salt repo under the official Sensu project, but it is not really suitable
for production use in my opinion.
For the most part, the formula did what it said on the tin. Of course, like
Ansible, the only way I was able to deploy checks in a flexible way was to
construct my own hashes and deploy them as JSON directly. There is no such thing
as a sensu_check type in Salt.
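The workaround looks something like this, using Salt’s file.serialize state (again, the check contents are illustrative):

```yaml
# Serialize a plain data structure to disk as a Sensu check definition.
sensu_check_disk:
  file.serialize:
    - name: /etc/sensu/conf.d/check_disk.json
    - formatter: json
    - dataset:
        checks:
          check_disk:
            command: check-disk-usage.rb
            subscribers:
              - all
            interval: 60
```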
I was not able to get rid of the hard-coded cron check.
I guess this goes with the idea that they expect you to fork the repo and make your
own local changes to meet your needs. I thought I should maybe open an issue for
this, but the file has been there for a year and nobody else has complained. I
figured it was just me, and maybe I should get over myself and accept the fact
that I got a free cron check!
In my own testing, I used the native gem provider with a special path to
Sensu’s gem binary to install Sensu gems. But then I discovered that the
formula did this too, but in two different ways,
using the cmd.run method instead of the native gem method. I didn’t really
like this, but then again, this was the first time I had ever used Salt.
As far as I can tell, to do more advanced Sensu config things, like filters or
mutators, you are expected to fork the formula and drop the json file into
the right directory.
A rough, opinionated comparison between the tools, with regard to both the tool itself
and the tool in conjunction with Sensu. “High” doesn’t necessarily mean “good” here:
Review of The Config Management Tool in General
Third Party Module Ease of Use
Official Sensu Support for the Tool
Ease of Use Getting Started
Separation between config data and code
Review of the Sensu Module/Cookbook/Etc
Version of the module Used
Sensu Module Feature Completeness
Sensu Module Integration with Other Modules
Sensu Module Flexibility
Sensu Module Re-usability
How Opinionated Was It?
Usability with Sensu’s Embedded Ruby
The way I see it, there are two camps. Chef and Puppet both provide a rich
language to build modules with. For example, the PuppetLabs RabbitMQ module contains all the code
to interact with RabbitMQ. The main Puppet codebase doesn’t know anything about
RabbitMQ. The same goes for Chef. Both Chef and Puppet also have their own DSLs.
Puppet uses yaml files for Hiera, but they are for config only, unlike the yaml
in Ansible and Salt.
In the other camp are Ansible and Salt. They have a simplified config language,
and require help from the core software to do the “heavy lifting” of the
raw types. For example, the Salt RabbitMQ formula
requires the help of the core Salt RabbitMQ module
to provide the primitives.
Puppet
- Directed graph dependency ordering, not parse-order driven
- Type/provider system and defined types provide the right abstraction layers to build upon
- Hiera provides a good separation of config/code, making it easier to reuse modules without modification
- Strong culture of testing
- Lots of good supported modules
- High deployment overhead and language learning curve

Chef
- LWRP system provides the right abstraction layers to build upon
- Knife tool does do a lot of cool stuff
- Lots of good supported cookbooks
- Strong culture of testing
- 15 levels of attribute precedence is insane

Ansible
- Low deployment overhead and low learning curve
- “Just yaml files”
- Lack of types/providers means that playbooks use “apt” and “yum” directly, which kinda sucks

Salt
- Pillar provides a nice separation of config/code, which is good for formula-reuse, if the formula is pillar-aware
- Centralized formulas emphasize consolidated development effort
- No strong state testing emphasis or framework
If you want to know more about Sensu, of course you can take my training course:
img2xterm stands out to me as the most
accurate and true to the original, with util-say
as a close second. Both of these tools understand “half-block”
characters with two colors, effectively doubling the vertical resolution of the
resulting output (two colors per character cell).
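To make the trick concrete, here is a minimal sketch (not taken from any of these tools; it assumes Pillow is installed, a true-color terminal, and an illustrative file name) that packs two image rows into every text row using the upper half block:

```python
# Minimal half-block renderer: foreground = top pixel, background = bottom pixel.
from PIL import Image

img = Image.open("bender.png").convert("RGB")  # illustrative input file
img = img.resize((80, 50))                     # 80 columns -> 25 text rows

for y in range(0, img.height - 1, 2):
    row = []
    for x in range(img.width):
        r1, g1, b1 = img.getpixel((x, y))      # top pixel
        r2, g2, b2 = img.getpixel((x, y + 1))  # bottom pixel
        row.append(f"\x1b[38;2;{r1};{g1};{b1}m"  # 24-bit foreground
                   f"\x1b[48;2;{r2};{g2};{b2}m"  # 24-bit background
                   "\u2580")                     # upper half block
    print("".join(row) + "\x1b[0m")            # reset attributes per line
```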
catimg and img-cat
both have good color representation, but lack the additional resolution compared to the
other tools, giving them a more “pixelated” look.
img2txt and jp2a
are “true ascii” tools, so they are really not in the same league as the others. I included them
here for completeness.
Afterwards, a bt0 device should show up in ifconfig. Run hcitool conn to verify
a connection is actually established. Use wireshark in bluetooth monitor mode on the
hci device to confirm commands are being sent.
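In other words, the sanity checks look something like this (interface and output will vary):

```
$ ifconfig bt0    # the 6lowpan interface should be listed
$ hcitool conn    # should show the active LE connection
```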
The proof is in the ping:
# ping6 fe80::1610:9fff:fee0:1432%bt0
PING fe80::1610:9fff:fee0:1432%bt0(fe80::1610:9fff:fee0:1432) 56 data bytes
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=1 ttl=64 time=158 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=2 ttl=64 time=236 ms
64 bytes from fe80::1610:9fff:fee0:1432: icmp_seq=3 ttl=64 time=113 ms
After a small number of packets the connection seems to drop, and on the master side
no amount of rmmod’ing or stopping could rebuild the connection; only a reboot
would. Obviously this is pretty new stuff; hopefully it will
stabilize in later versions of the kernel.
At this time though, on 3.19.0-21-generic (Ubuntu Vivid), this feature is not
ready for prime time.