Postmodern Sysadmin

A blog about servers and junk

Managing Ssh Known Hosts With Serf

Serf is a very interesting service discovery mechanism. Its dynamic membership and tagging capabilities make it very flexible. Can we use it to generate a centralized ssh_known_hosts file?

Installing and Configuring Serf

I like to use configuration management to manage servers. Here I use a Puppet module to install and configure Serf:

class { 'serf':
  config_hash   => {
    'node_name'  => $::fqdn,
    'tags'       => {
      'sshrsakey' => $::sshrsakey
    },
    'discover'   => 'cluster',
  }
}

This particular module uses a hash that translates directly into the config.json file on disk. Notice how I’m using the new tags feature, adding an sshrsakey tag populated by Puppet’s facts.

Querying The Cluster

Once the servers have Serf installed and configured, the cluster can be queried using the serf command line tool:

$ serf members
server1.xkyle.com    192.168.1.67:7946    alive    sshrsakey=AAAA...
server2.xkyle.com    192.168.1.69:7946    alive    sshrsakey=AAAA...

Using the Data

Let’s use this data to write out our /etc/ssh/ssh_known_hosts file, replacing the functionality of ssh-keyscan:

$ serf members -format=json | jq -r '.members | .[] | "\(.name) ssh-rsa \(.tags[])" ' | tee /etc/ssh/ssh_known_hosts
server1.xkyle.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTfPpmHhc+LoD05puxC...
server2.xkyle.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmzk+Chzrq73c5ytU9I...

So… you can see I’m using jq to manipulate the JSON output of the serf command. I’m not super proud of it, but it works.

Let’s see if we can use a script instead. Serf provides an RPC protocol to interact with it programmatically:

#!/usr/bin/env ruby
require 'serf/client'
client = Serf::Client.connect address: '127.0.0.1', port: 7373
members = client.members.value.body['Members']
puts members.collect { |x| x['Name'] + ' ssh-rsa ' +  x['Tags']['sshrsakey'] }

Of course, no error handling or anything. This script achieves the same result using the serf-client ruby gem.

There are libraries to connect to the Serf RPC directly for many languages, or you can do it yourself using the msgpack RPC library to communicate directly on the tcp socket.
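If none of that appeals to you, the same transformation is easy in any language with a JSON parser. Here is a Python sketch fed with made-up data in the shape `serf members -format=json` emits (a real version would read the command’s output and write /etc/ssh/ssh_known_hosts):

```python
import json

# Made-up sample in the shape `serf members -format=json` emits.
sample = '''{"members": [
  {"name": "server1.xkyle.com", "addr": "192.168.1.67:7946",
   "status": "alive", "tags": {"sshrsakey": "AAAAexample1"}},
  {"name": "server2.xkyle.com", "addr": "192.168.1.69:7946",
   "status": "alive", "tags": {"sshrsakey": "AAAAexample2"}}
]}'''

def known_hosts_lines(members_json):
    """Turn serf members JSON into ssh_known_hosts lines."""
    members = json.loads(members_json)["members"]
    # Only emit entries for members that are currently alive.
    return ["%s ssh-rsa %s" % (m["name"], m["tags"]["sshrsakey"])
            for m in members if m["status"] == "alive"]

print("\n".join(known_hosts_lines(sample)))
```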

Conclusion

This is just the beginning. Serf allows retrieving the status of members, but also can spawn programs (handlers) whenever members join or leave.

Additionally you can invoke custom events for your own uses, like code deploys.

What Happens When You Run Puppet Tests

Breaking down bundle exec rake spec

What is happening when you run:

bundle exec rake spec

Bundle

The first command you are running is bundle. Bundler is kinda like virtualenv for Ruby. It makes sure that you use the same Ruby libraries as everyone else, including your puppetmasters.

Bundler uses a Gemfile, searching upwards from the current directory until it finds one. As long as you have a Gemfile in the puppet repo, it will work.

Exec

The second part is exec. Exec is a subcommand of bundle; it simply means run a command. Because you are running it in a “bundled” environment, it runs the next command with the ruby libraries in your Gemfile.

Rake

The third part is rake. Rake is like Make for Ruby. It requires a Rakefile. Each puppet module needs a Rakefile.

You don’t need to re-invent the Rakefile, simply have this in it:

require 'puppetlabs_spec_helper/rake_tasks'

This ensures that we are all running tests in the same way.

Spec

Spec is a “rake task” that runs Rspec. Rspec is a ruby testing framework. Rspec + rspec-puppet is a whole other thing, described in the next section.

How does Rspec Test Puppet Code?

If you are running bundle exec rake spec, rspec takes over in the environment provided by bundler. It gives you all the gems necessary to do the job, but how does Rspec know about Puppet code?

If you are including puppetlabs_spec_helper/rake_tasks, your spec task includes the prep/test/clean stuff.

You need some boilerplate files in place for rspec-puppet tests to run. You can either run

rspec-puppet-init

Or you can manually set up the files and folders. Here I will describe the minimal set of files you need:

.fixtures.yml

.fixtures.yml is a puppetlabs_spec_helper construct that allows you to symlink in other modules that might be required to test your code. For example you might require functions from stdlib. How does Rspec know where stdlib is?

fixtures:
  repositories:
    stdlib: "git://github.com/puppetlabs/puppetlabs-stdlib.git"
  symlinks:
    your_module: "#{source_dir}"

When rspec runs the preparation parts, the spec_helper will create symlinks, or clone repos, or whatever.

spec/spec_helper.rb

spec/spec_helper.rb is a file you need in place for your rspec tests to reference. If you are using the puppetlabs_spec_helper gem, it is only one line:

require 'puppetlabs_spec_helper/module_spec_helper'

This spec_helper.rb file can now be referenced, and by doing so will allow Ruby to import all of the puppet-specific Rspec matchers it needs to function.

For example, at the top of every Rspec ruby file you should see something like this:

require 'spec_helper'

describe 'my_module' do

  it { should compile }

end

Directory structure

Putting files in the right places allows Rspec to autodetect them. Giving them a conventional name allows rspec to glob them.

As the scope of your testing increases, a well-organized directory structure is essential:

├── spec
│   ├── classes
│   │   └── example_spec.rb
│   ├── defines
│   ├── functions
│   ├── hosts
│   ├── spec_helper.rb
│   ├── spec_helper_system.rb
│   └── system
│       └── basic_spec.rb

The Tests

How to write puppet tests is outside the scope of this particular blog post.

I recommend looking at solid examples from puppetlabs’ GitHub, or right from the official documentation.

But essentially, Rspec runs Puppet in a noop mode, only generating a catalog of what it would do. Then the rspec tests use matchers to describe assertions against the catalog.
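To make that idea concrete, here is a toy model in Python (not real rspec-puppet, just an illustration): the compiled catalog is plain data, and a matcher like contain_package('apache') is simply an assertion against it.

```python
# Toy model: a compiled catalog is just a mapping of resources to parameters.
catalog = {
    ("Package", "apache"): {"ensure": "installed"},
    ("Service", "apache"): {"ensure": "running"},
}

def contains(catalog, rtype, title, **params):
    """Mimic an rspec-puppet matcher: is this resource in the catalog,
    with these parameter values?"""
    resource = catalog.get((rtype, title))
    if resource is None:
        return False
    return all(resource.get(k) == v for k, v in params.items())

print(contains(catalog, "Package", "apache", ensure="installed"))  # True
print(contains(catalog, "Package", "php5"))                        # False
```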

Writing Purgable Puppet Code

Whenever possible, I try to write Puppet code that is purgable and “Comment Safe”. That is not a very good description. What I mean is Puppet code that removes resources from a system when the corresponding Puppet code is “commented out” of a manifest. Let’s look at a few examples.

Example: Managed Sudo

Let’s say you used this popular sudo module with the following params:

class { 'sudo':
  purge => true,
}

Great start. Any sudo::conf blocks you later comment out will automatically disappear from the host:

sudo::conf { 'web':
  source => 'puppet:///files/etc/sudoers.d/web',
}

# Commenting out for now. It is automatically purged from the server.
# sudo::conf { 'admins':
#   priority => 10,
#   content  => "%admins ALL=(ALL) NOPASSWD: ALL",
# }

Good stuff. Do this.

Example: Managed Firewall

How about another example with the Puppetlabs Firewall module?

# Automatically remove rules that are not declared
resources { "firewall":
  purge => true
} 

# Production needs 111 open
firewall { '111 open port 111':
  dport => 111
}
# Tried this but didn't work. Commenting out for now
# Automatically removed from the server when I commented it out
# firewall { '112 open port 112':
#   dport => 112
# }

The Point?

The point here is that we should encourage a culture of purging. Having resources get automatically purged when you comment them out from puppet is great.

Of course, this matters less in the short-lived world of Docker, or possibly Amazon EC2. But for those engineers who work on long lived servers, this prevents cruft.

Going Further: Purging Packages

I want to purge packages. If someone installs a package not controlled by Puppet, I want puppet to purge it. Crazy, I know.

package { 'apache': ensure => installed }

# No longer using php
# But puppet leaves this behind!
# package { 'php5': ensure => installed }

Of course puppet will leave the package behind. I should be doing ensure => purged, right?

But what if the package is deep within nested classes or simply manually installed?

Some day I would like to get to the point where I at least get notified when puppet detects packages that don’t need to be there. I’m open to suggestions on how to do this.
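The notification I have in mind is little more than a set difference. Here is a Python sketch with hard-coded package lists (a real version would parse `dpkg -l` output and the list of Puppet-managed packages, which these hypothetical sets stand in for):

```python
# Hypothetical inputs: what the package manager says is installed,
# versus what Puppet manages.
installed = {"apache2", "openssh-server", "php5", "vim"}
managed = {"apache2", "openssh-server", "vim"}

# Packages present on the system that Puppet knows nothing about.
unmanaged = sorted(installed - managed)
if unmanaged:
    print("Installed but not managed by Puppet: %s" % ", ".join(unmanaged))
```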

Going Further: Purging /etc/

Most of the time stale configuration leftover in /etc/ causes no harm.

But what about cron jobs in /etc/cron.d? I would love to purge them, but there are non-puppet controlled things installed by system packages. If everything was a puppet module this could eventually be achieved, but it would be too hard to keep in sync with upstream package changes.

Purging on a per-app basis with things like sensu, apache, and sudo is a great start.

Crossing the Line: Purging /var/lib/mysql

Seems like if you asked puppet to install mysql databases, and then commented them out, you would not want puppet to purge them.

The subtle difference here might be the difference between configuration and data.

Conclusion

Whenever possible I try to purge => true on whatever I can. I would like to see this as the default in new puppet modules.

Someday I would like us to purge more than just files and iptables rules.

Introducing Sensu-shell-helper!

The Problem

The barrier to writing Nagios checks is high. I dare say very high. You have to think about check intervals, host groups, service groups, config files, etc.

But I know my servers are not behaving, if only there were a way to check them! They run commands for me all the time. In the worst case they fail and no one knows. The best case is that they end up in my cron spam folder…

A Solution!

Sensu-shell-helper. It is a small script I wrote to make it easier to monitor arbitrary commands with Sensu. Here is how you use it:

sensu-shell-helper apt-get update

Yes. That is it. No mandatory config options. Good defaults. Minimal overhead. What does this check look like in the dashboard when it fails?

Exactly what I wanted. And of course, when apt-get update begins to work again, the check will resolve itself.

Under The Hood

sensu-shell-helper really just takes the output of the command you give it, tails it, then sends the result to localhost:3030, which the sensu-client listens on.

By default it does not specify any handlers (but they can be specified on the command line with -H). For the check name it takes the full command and munges it to pass the sensu validator. Duplicate instances of the exact same command on a particular host will be seen as a single “check”.

Most commands do not return 0,1,2,3 according to the Sensu / Nagios plugin API, so the sensu-shell-helper will emit 2 (critical) in the event that the shell command returns anything non-zero. This behavior can be overridden with -N in the case that your command does conform to the 0,1,2,3 spec.
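Putting that together, the core logic amounts to something like this Python sketch (the function names and munging details are my guesses from the behavior described above, not the actual sensu-shell-helper source):

```python
import json
import re

def munge_check_name(command):
    # Sensu check names are restricted to a safe character set, so replace
    # anything that isn't a word character, dot, or dash.
    return re.sub(r'[^\w.-]+', '_', command)

def status_for(exit_code, conforms=False):
    # Without -N, any non-zero exit code is reported as 2 (critical).
    if conforms:
        return exit_code
    return 0 if exit_code == 0 else 2

def build_event(command, exit_code, output):
    # The JSON payload handed to the sensu-client socket on localhost:3030.
    return json.dumps({
        "name": munge_check_name(command),
        "output": output,
        "status": status_for(exit_code),
    })

print(build_event("apt-get update", 100, "E: Some index files failed to download."))
```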

Saying Goodbye to Wordpress

It’s Been a Great Ride

There is no doubt that Wordpress is a great piece of software. As much as people love to hate on PHP, it runs a lot of the internet.

I’ve been running Wordpress personally and professionally for years. It only gets better. I was only hacked once :)

Rethinking What I Need

Since moving to a Low End Box, my resources have been tight. Even on a tuned system, I can’t run much more than my Nginx+PHP-FPM+MySQL stack.

Right now I also have 32000 spam comments in my queue. Akismet does a great job, but I wonder if I even need it. All I really need is a tiny corner of the web, read only is ok.

What I Lose

  • Comments
  • Cool plugins
  • Well trusted codebase
  • Easy to use gui
  • Familiar Workflow

What I Gain

  • Immutability and hackproof deployment
  • Entire classes of server maintenance issues disappear
  • Git!
  • Grep-able blogs

Making The Change

Sensu Reports in Your Motd With Puppet!

Intro

Sensu is a pretty cool monitoring framework. The authors designed it to be configured by a configuration management system from the beginning. Check out how easily I can make it put a report in my motd with a little bit of python and puppet.

The Report Script

Sensu’s API is super easy to work with. For this I will be using the Events endpoint. Here is a quick script to get the events for a host (gist):

#!/usr/bin/env python2
import json,sys,urllib2,socket

GREEN = '\033[92m'
RED = '\033[91m'
CLEAR = '\033[0m'

from optparse import OptionParser
parser = OptionParser()
parser.add_option("-s", "--server", dest="server",
                  help="sensu api server hostname", default='sensu')
parser.add_option("-p", "--port", dest="port",
                  help="sensu server api port", default='4567')
(options, args) = parser.parse_args()

response = urllib2.urlopen('http://' + options.server + ':' + options.port + '/events/' + socket.getfqdn())
data = json.load(response)
print
if len(data) > 0:
  print "Failed Sensu checks on this host:"
  for entry in data:
      sys.stdout.write("   " + RED + entry['check'] + ': ' + entry['output'] + CLEAR )
else:
  print "All Sensu checks " + GREEN + "green " + CLEAR + "for this host."
print

Puppet Glue

file { '/usr/bin/sensu_report':
  mode   => '0555',
  source => 'puppet:///files/sensu/sensu_report',
} ->
cron { 'sensu_report':
  command => "/usr/bin/sensu_report -s $sensu_api_server > /etc/motd",
  minute  => fqdn_rand(60),
} ->
sensu::check { "sensu_report":
  handlers    => 'default',
  command     => '/usr/lib/nagios/plugins/check_file_age -w 7200 -c 21600 -f /etc/motd',
  subscribers => 'sensu-test'
}

You can see that there are three things going on here (gist here):

  1. Puppet drops in the python report script file.
  2. Only if the script is in place does it set up the cron job to populate the motd.
  3. And only if the cron job is in place, a sensu check is installed to verify that it is indeed working (test driven system administration?).


Coolness

  • Puppet and Sensu make it easy to construct things like this. Wiring something like this manually with nagios would be a pain.
  • Adding failed checks right in the MOTD increases visibility for them, while decreasing the brain overload of looking at a huge sensu dashboard with tons of red that a random user may not care about.
  • Putting checks in the MOTD makes it easy to disseminate information about what might be down on a host, to minimize support requests and increase transparency.

Managing DNS Automatically With Puppet

Why

So you have a decent amount of things configured in Puppet. Great!

Are you finding that you have to manually update your DNS entries when things change, like when new hosts are added or additional services are created?

Why? Your DNS zone files will forever be out of date, waiting for humans to update them. Just say no. Puppet already knows the IP addresses and hostnames of your servers, so why not take advantage of that existing data?

How

Most of the credit for this has to go to Adam Jahn for his original work. But there is a lot of work to be done and many outstanding pull requests. Until things are more unified, I’m going to recommend installing my version of the module:

puppet module install KyleAnderson/dns

Once the module is installed, you can setup bind on your nameserver:

node 'ns1.example.com' {
  include dns::server
  ...

Warning: Don’t try to use this on top of an existing configuration, Puppet will take control and break your existing stuff.

You can also create zones, right from puppet:

dns::zone { 'example.com':
  soa         => $::fqdn,
  soa_email   => "admin.${::domain}",
  nameservers => ["${::hostname}"],
}

Now you can add A records:

dns::record::a { $hostname:
  zone => 'example.com',
  data => $::ipaddress,
}

Going Further

Using the exported resources pattern and stored configs with say, PuppetDB, you can create records on different hosts and then collect them on your name server. For example:

node 'mycoolserver.example.com' {
  @@dns::record::a { $hostname: zone => $::domain, data => $::ipaddress, }
}

node 'ns1.example.com' {
  dns::zone { $::domain:
    soa         => $::fqdn,
    soa_email   => "admin.${::domain}",
    nameservers => [ 'ns1' ],
  }
  # Collect all the records from other nodes
  Dns::Record::A <<||>>
}

In this example, an A record was requested on the mycoolserver node, but it could have been included in any class that applies to lots of servers. In the end the records show up on the ns1.example.com node via the <<||>> operator.

Other Possibilities

  • Have your HAProxy or F5 load balancer configs automatically generate the new CNAMEs and A records they need to operate.

  • Setup your Apache vhosts to automatically point to the right server.

  • Never have to remember to update IPMI addresses by combining this with the BMCLib module.

  • Setup new hosts in DHCP, and have them automatically get an A record to go with them.

  • Have NTP servers? Did you remember to update their DNS records? Oh wait, puppet does that for you.

Future Work

I will continue sending pull requests and maintaining my own fork. Join the fun!

Getting Started With Sensu Using Puppet. For Real.

Nagios. So familiar. I feel like I’ve run Nagios at every job I have ever had.

Talk to most ops people, even at really big places, and they will probably admit to using it.

Puppet’s exported resources take away some of the pain, but sometimes I think to myself, there must be a better way to do this. Sensu might be that better way.

Let’s try it out, but gosh, I am SO lazy. I cannot be bothered to read the installation instructions. All I want to do is install the puppet module, add a couple of lines to my manifest, and let puppet do the rest. Then I can run puppet agent in debug mode so when my boss comes by it looks like I’m REALLY busy.


Step 1: Game plan

I’ve got a test server I know I want to be my sensu server. I know I’m going to have to enable the sensu client on the servers I want monitored. Here are my goals:

  • Have sensu-server configured on my server (call it mon1)
  • Have sensu-client configured on my client (call it client1)
  • I want a dashboard
  • I want an email alert
  • I don’t want to have to ssh to my clients to do anything. (I have puppet to do that for me, duh.)

Step 2: Puppet Module

My puppet master is not mon1, but it doesn’t matter. I run this on the puppetmaster:

puppet module install example42/redis
puppet module install puppetlabs/rabbitmq
puppet module install sensu/sensu

Ok, good start. So… the “For Real” part in the blog post title is about those other things that most howtos don’t mention. Unless you already have RabbitMQ and Redis installed, you will need those modules. Don’t know how to run Redis or configure RabbitMQ? It’s ok, neither do I.

Step 2A: SSL Certs

Yea, I know what you are thinking. Kyle, I already have SSL certs for my infrastructure, do I have to make another set? Yes, I think so. I’m not smart enough to use existing certs.

Joe Miller has made a pretty easy script to generate some. For RabbitMQ you can basically use a single client and server key and let puppet distribute them:

git clone git://github.com/joemiller/joemiller.me-intro-to-sensu.git
cd joemiller.me-intro-to-sensu
./ssl_certs.sh generate
mkdir -p /etc/puppet/files/sensu/
cp *.pem testca/*.pem /etc/puppet/files/sensu/

You can see that I just stick all the files in my “files/sensu” directory for puppet to distribute for me.

Step 2B: Puppet config

Here is the configuration I needed to get a full system running:

node mon1 {
  file { '/etc/rabbitmq/ssl/server_key.pem':
    source => 'puppet:///files/sensu/server_key.pem',
  }
  file { '/etc/rabbitmq/ssl/server_cert.pem':
    source => 'puppet:///files/sensu/server_cert.pem',
  }
  file { '/etc/rabbitmq/ssl/cacert.pem':
    source => 'puppet:///files/sensu/cacert.pem',
  }
  class { 'rabbitmq':
    ssl_key    => '/etc/rabbitmq/ssl/server_key.pem',
    ssl_cert   => '/etc/rabbitmq/ssl/server_cert.pem',
    ssl_cacert => '/etc/rabbitmq/ssl/cacert.pem',
    ssl => true,
  }
  rabbitmq_vhost { '/sensu': }
  rabbitmq_user { 'sensu': password => 'password' }
  rabbitmq_user_permissions { 'sensu@/sensu':
    configure_permission => '.*',
    read_permission => '.*',
    write_permission => '.*',
  }
  class {'redis': }
  class {'sensu':
    server => true,
    purge_config => true,
    rabbitmq_password => 'password',
    rabbitmq_ssl_private_key => "puppet:///files/sensu/client_key.pem",
    rabbitmq_ssl_cert_chain => "puppet:///files/sensu/client_cert.pem",
    rabbitmq_host => 'mon1',
    subscriptions => 'sensu-test',
  }
}

Take note that the Sensu module lets you stick in a puppet:/// URL for the certs, but the RabbitMQ module does not. Distributing them with file resources is pretty easy though.

I personally believe that purge_config should default to true. We are using puppet here. If you are hand placing json, you are doing it wrong.

Step 3: Clients

With your SSL certs in place, adding clients is pretty easy:

node client1 {
  class { 'sensu':
    purge_config => true,
    rabbitmq_password => 'password',
    rabbitmq_host => 'mon1',
    subscriptions => 'sensu-test',
    rabbitmq_ssl_private_key => "puppet:///files/sensu/client_key.pem",
    rabbitmq_ssl_cert_chain => "puppet:///files/sensu/client_cert.pem",
  }
}

Not too bad. Notice that there is nothing server-side to generate the config for this host.

After your puppet runs converge, you should be able to access the Sensu dashboard. By default it is on the sensu server, in this example it would be http://sensu:secret@mon1:8080.

If all of this is working, you should see client1 in the clients list.

Step 4: Handlers

Sensu handlers are scripts that are called with event data. For getting started I use the simplest example:

sensu::handler { 'default':
  command => 'mail -s "sensu alert" kyle@xkyle.com',
}

You are going to get raw JSON in the email body, but we can make it pretty later.

Step 5A: Your first client-side check

This type of check is what you might consider an NRPE check; it runs on the client:

node client1 {
...
  package { 'nagios-plugins-basic': ensure => latest }
  sensu::check { "cron":
    handlers    => 'default',
    command     => '/usr/lib/nagios/plugins/check_procs -C cron -c 1:10',
    subscribers => 'sensu-test'
  }
}

Run puppet, stop cron, and you should get an email.

Step 5B: Your first server-side check

Sometimes you need to have the servers do the checking. Not everything can be a client-side check. Sometimes you really do want your monitor server to be able to ping your clients (or check http, etc).

The Sensu documentation doesn’t seem to have examples of this. The only way I know how to do it is with stored configs and something like PuppetDB:

node client1 {
...
@@sensu::check { "check-ping-$fqdn":
    handlers    => 'default',
    command     => "/usr/lib/nagios/plugins/check_ping -H $::ipaddress -w 100.0,60% -c 200.0,90% ",
    subscribers => 'sensu-test'
  }
}
node mon1 {
...
  Sensu::Check <<||>>
}

In this case, the @@ in front of the sensu check tells puppet to not actually make it, just store it. Then the <<||>> operator on the server side will take those stored configs, and make them.

Conclusion

Sensu is still new, but it shows a lot of promise. It is built from the ground up to be configured by machines, not by humans. It is also designed to scale, allowing you to grow your RabbitMQ cluster and your Sensu-servers at will.

Absent from Sensu (at the time of this writing) is the infrastructure for complicated time periods, escalations, etc. Maybe it is better that way? It does feel a little more unixy, with each individual Sensu piece handling a very particular function.

Not mentioned in this post is how to manage subscriptions, making new handlers, adding mutators, supplementing the checks with metrics and having Sensu handle them by shipping them off to a metric system, sensu-admin, having Sensu automatically detect downed AWS nodes and not alert on them, etc.

In the brave new elastic-compute-config-management-controlled world, Sensu looks like a lot better option than Nagios in my opinion.

Dropbear With Mosh on a Low End Server

I love my low end boxes. I also love mosh.

Low end boxes usually are tight on resources, so Dropbear is often used as a lightweight ssh server. Mosh is mostly tested with openssh-client/server, so I think there are some bugs.

But it can work, just make sure:

  1. You are using the same version of mosh on the server as you are on your client. (otherwise they may not support the same command line options)

  2. Make sure you have the en_US.UTF-8 locale. Mosh requires UTF-8, and low end boxes usually have a bare install without this locale. Run:

locale-gen --no-archive en_US.UTF-8

For a reproducible puppet snippet:

package { 'mosh': ensure => latest }
ensure_packages(['locales'])
exec { "/usr/sbin/locale-gen --no-archive en_US.UTF-8":
  creates => '/usr/lib/locale/en_US.utf8',
}

  3. Run mosh more than once. There is some sort of race condition or bug which prevents mosh from grabbing a tty. Running it multiple times will get it to work eventually. I haven’t tracked down the root cause.

Goodbye Intel - My Favorite Commands

Working at Intel has been a great experience. I wish I could have stayed longer, but in the end we decided to part ways.

During my stay I learned lots of stuff. I would like to boil my experience down to my top Linux commands.

The List

  • git: Lots of git.
  • syscfg: Managing bios settings from within Linux. Nice. (Intel platforms)
  • setupbios: More bios settings from within Linux. (Dell platforms)
  • puppet: I actually enjoy manually running puppet. --noop makes me feel warm and fuzzy.
  • micctrl: Borking a lot of kernel installs on mic cards, you end up using this command. Lots.
  • flashupdt/flashrom: Soooo many bios’ flashed. Intel is bios-crazy.
  • amplxe-gui: VTune Amplifier is a super awesome profiling tool. I could spend hours playing around in that gui exploring all programs trying to track down bottlenecks.
  • ipmitool: Everyone needs more ipmitool in their life, totally underrated. Sadly, I would bet most sysadmins don’t even know IPMI exists. :(
  • numactl: I’m waiting for numad, personally. My server should be smart enough to understand its own architecture.

Honorable Mentions

  • icc: I didn’t run icc many times at Intel, but I was impressed with it.
  • objdump: The few times I needed to run this, I felt like a wizard.
  • bsub: On occasion I was required to submit jobs to LSF.
  • lscpu: I felt like I ran this more than at other companies. Could be just selection bias.