Installing Ubuntu 16.04.x with the HWE Kernel stack

Ubuntu LTS releases have two different kernel streams, and deciding which one to use, and when and how, can seem daunting at first.  It’s actually quite simple.  In the next few paragraphs, I’ll briefly explain the different kernels available for an Ubuntu LTS release and give some basic guidance on how and why to use each.  Then I’ll show you how to install Ubuntu Server with the HWE (Hardware Enablement) kernel so that your hardware is fully supported.

First, let’s talk about the available kernels for a moment.  There is the stock GA kernel, which for Xenial is 4.4 and is supported for the full five years of the LTS (until 2021).  Second, there is the HWE, or Hardware Enablement, kernel stack, which as of this writing is based on 4.13.  Each HWE kernel is only supported by Canonical until the next HWE kernel is released, up through the XX.YY.5 point release.  The HWE kernel in the .5 point release is always the kernel from the next LTS release, and THAT kernel is then supported for the remaining life of the LTS.  It’s not as complex as it sounds, and this chart makes it a little clearer.

[Diagram: Ubuntu 16.04 HWE kernel support schedule]

As you can see, once 16.04.5 has been released, it will be based on Bionic (18.04)’s 4.15 kernel, and that HWE kernel will be supported until the end of 16.04’s supported lifetime in April of 2021, along with the original 4.4 GA kernel.
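
If you want to check which stream a given Xenial machine is actually running, the kernel metapackage names give it away.  Here’s a quick sketch, assuming the standard Ubuntu archive metapackages (linux-generic for GA, linux-generic-hwe-16.04 for HWE):

    # Which kernel is booted right now?
    uname -r

    # GA stack: tracked by the linux-generic metapackage
    apt-cache policy linux-generic

    # HWE stack: tracked by the linux-generic-hwe-16.04 metapackage
    apt-cache policy linux-generic-hwe-16.04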

Most people probably won’t need the HWE kernel.  These kernels exist to introduce new hardware support into the LTS, so unless you have hardware that is too new to be adequately supported by the 4.4 kernel, you don’t even need to worry about this.  But if your system has hardware that isn’t supported by 4.4, it’s worth installing and booting into the current HWE kernel to see if that gets you up and running.  One very good example of where this matters is the new Skylake-SP and later CPUs from Intel’s Purley platform.  The 4.4 kernel provided only minimal support for Purley CPUs, ensuring functionality to roughly the same level as older Intel CPUs.  Full support for the advanced Purley CPU features, such as AVX-512, did not land until the 4.10 kernel in 16.04.3.  So any workloads on Purley systems compiled to use AVX-512 need to run on an Ubuntu server using the latest HWE kernel, rather than the 4.4 GA kernel.
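
If you’re not sure whether your hardware is in that boat, a couple of quick checks can help.  This is just a sketch; the CPU flags shown in /proc/cpuinfo are whatever the running kernel knows how to report:

    # Which kernel is running?
    uname -r

    # Which AVX-512 feature flags does the kernel report for this CPU?
    # (A kernel that predates a given feature won't list its flag.)
    grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u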

So how do you install these?  There are two ways.  The Easy Way, and The Hard Way.

First, let’s check out The Easy Way: use MAAS, Ubuntu’s Metal-as-a-Service hardware orchestration tool.  MAAS lets you quickly and easily install various operating systems, including all current Ubuntu releases, CentOS, and even Windows Server.

Deployment is fast and simple.  Select your node, select Deploy from the “Take Action” menu, select the Ubuntu LTS you wish to deploy, and select the HWE kernel from the kernel options (for Xenial, it’s called “16.04-hwe”).  Click Deploy, and in a few moments your node will be provisioned with Ubuntu Server, have the HWE kernel installed, and be ready to go.
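
If you drive MAAS from the command line rather than the web UI, the same deployment looks roughly like the following.  Treat this as a sketch only: $PROFILE, $API_KEY, and $SYSTEM_ID are placeholders for your own setup, and the exact hwe_kernel value can vary between MAAS versions:

    # Log in to the MAAS API with your profile and API key
    maas login $PROFILE http://<maas-host>:5240/MAAS/api/2.0/ $API_KEY

    # Allocate the machine, then deploy Xenial with the HWE kernel
    maas $PROFILE machines allocate system_id=$SYSTEM_ID
    maas $PROFILE machine deploy $SYSTEM_ID distro_series=xenial hwe_kernel=hwe-16.04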

Now let’s look at The Hard Way.  This requires the most recent Ubuntu Server ISO; you’ll need to build a bootable USB key from it, or use your server’s DVD/CD drive, if equipped.  Once you’ve booted and chosen the language to use for installation, you’ll very quickly see “Install Ubuntu Server with the HWE kernel” in the list of install options.  Choose that, then continue the installation as you normally would.
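
And if you already have a 16.04 system installed and running the GA kernel, you don’t need to reinstall at all; the HWE stack can be pulled in afterwards.  A sketch, assuming a standard Xenial server (desktop installs also want the matching xserver-xorg-hwe-16.04 packages):

    sudo apt-get update
    # Pull in the current HWE kernel stack; reboot to actually run it
    sudo apt-get install --install-recommends linux-generic-hwe-16.04
    sudo reboot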

[Image: Ubuntu Server installation menu]

[Image: first login after install; note the kernel version shown is 4.13, the current HWE kernel]

This installs the HWE kernel stack on your system.  Once installation has finished, you can verify it by logging into your newly deployed system and checking the installed kernel version.  As you can see from the image above, we are now running Ubuntu 16.04.4 (the daily image now shows .4 rather than .3 because the .4 release is tomorrow) with the 4.13 HWE kernel.  You can further verify this by checking the installed kernel packages, as shown below:

[Image: list of installed kernel packages; note all are from the 4.13 HWE kernel, not the 4.4 GA kernel]
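
If you’d rather verify from the terminal than from a screenshot, a couple of quick checks like these will do it (the exact versions will vary with the point release you installed):

    # The running kernel should report a 4.13 HWE version, not 4.4
    uname -r

    # All installed kernel packages should be from the HWE stack
    dpkg -l | grep -E 'linux-(image|headers|generic)'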

Remember when I said there were two ways, the Easy Way and the Hard Way?  I lied.  It’s actually ALL easy, and not nearly as confusing as it may seem.

As you can see, getting the HWE kernel at install time is simple, whether you deploy via MAAS or straight from the Ubuntu ISO images.  It’s just a matter of picking the right kernel when you launch the installer.  It is always advisable to run either the latest GA kernel or the latest HWE kernel.  You should not run an expired HWE kernel, as those kernels (currently 4.8 and 4.10) no longer receive any security or bug-fix updates.  Also, in light of Spectre and Meltdown, for Xenial only the 4.4 and 4.13 (or later) kernels include the mitigations; the older HWE kernels like 4.8 and 4.10 are NOT patched and will never receive these critical security fixes.
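
On kernels new enough to expose it, you can also check the Spectre / Meltdown mitigation status directly from sysfs.  A quick sketch; if the files aren’t there, the kernel simply predates the reporting interface:

    # One line per known vulnerability, showing whether a mitigation is active
    grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null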

Getting started with Juju 2.0 on AWS

This is a bit of a first-timer’s guide to setting up Juju 2.0 to work with AWS.  To be honest, it’s been quite a while since I really messed with Juju and AWS (or anything outside of MAAS), and this is the first time I’ve really looked at Juju 2.0 anyway.  So this is me sharing the steps I had to take to get Juju 2 talking to my AWS Virtual Private Cloud (VPC) to spin up instances for prototyping.

First, let’s talk about the parts here.  You should already know what Amazon Web Services is and what a Virtual Private Cloud is, and have an Amazon account to use them.  You should know a bit about what Juju is as well, but as this is a first-timer’s guide, here’s all you really need to know to get started.  Juju is an amazing tool for modeling complex workload deployments.  It takes care of all the difficult bits of deploying any workload that has a “Charm”, getting it up and running in minutes.  All the brain-share needed to install and configure these workloads is encapsulated in the Charm.  To configure, you can pass YAML files with full deployment configuration options, use Juju commands to set individual configuration options, or set them via the Juju GUI.  Juju is very extensively documented at https://jujucharms.com, and I highly recommend you RTFM a bit to learn what Juju can do.
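
To give a flavour of what’s coming, the first few steps look roughly like this with the Juju 2.x client.  This is only a sketch; the argument order for some commands shifted between 2.x minor releases, so check juju help for your version:

    # Teach Juju about your AWS credentials (it prompts for the access key and secret)
    juju add-credential aws

    # Bootstrap a controller into AWS (older 2.0 clients expect the controller name first)
    juju bootstrap aws aws-controller

    # Deploy a charm and watch it come up
    juju deploy mysql
    juju status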