Setting up KVM to PXE boot virtual machines from a local TFTP server

I had occasion to test some failures that a partner is seeing in the field when trying to PXE boot the Ubuntu Server installer.  I'd never set up KVM to do this before, as I almost exclusively use MAAS for all server deployments (in fact, that's required for my work).  The partner is doing this as a side project, and it seemed like a nice way to waste an afternoon.  The following is somewhat specific to my desktop running Ubuntu 18.04 LTS; 16.04 systems require a few extra steps to get a qemu that supports PXE booting.

On Ubuntu 16.04.x, qemu-kvm from the Pike Ubuntu Cloud Archive (UCA) needs to be installed.
The Pike Cloud Archive can be enabled like this:

$ sudo add-apt-repository cloud-archive:pike
$ sudo apt update

If you have 16.04 LTS,  after adding the cloud-archive repo and updating, proceed as you would for 17.04 and later:

$ sudo apt install -qq -y libvirt-daemon-system qemu-kvm virt-manager
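Either way, it doesn't hurt to confirm that libvirt is up and answering before continuing (a quick sanity check; qemu:///system is the standard system URI):

$ virsh --connect qemu:///system version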

Next, we need to get tftp installed and verify it’s running.

$ sudo apt install tftpd-hpa
$ sudo service tftpd-hpa status
● tftpd-hpa.service - LSB: HPA's tftp server
   Loaded: loaded (/etc/init.d/tftpd-hpa; generated)
   Active: active (running) since Thu 2018-09-13 09:35:49 EDT; 5s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 16715 ExecStop=/etc/init.d/tftpd-hpa stop (code=exited, status=0/SUCC
  Process: 16720 ExecStart=/etc/init.d/tftpd-hpa start (code=exited, status=0/SU
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/tftpd-hpa.service
           └─16745 /usr/sbin/in.tftpd --listen --user tftp --address :69 --secur

Sep 13 09:35:49 galactica systemd[1]: Starting LSB: HPA's tftp server...
Sep 13 09:35:49 galactica tftpd-hpa[16720]: * Starting HPA's tftpd in.tftpd
Sep 13 09:35:49 galactica tftpd-hpa[16720]: ...done.
Sep 13 09:35:49 galactica systemd[1]: Started LSB: HPA's tftp server.

Now we need to start setting up the tftpboot directories.  I’m using /srv/tftp to host the files:

$ sudo mkdir /srv/tftp
$ cd /srv/tftp
$ sudo wget -nH -r --cut-dirs=8 http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/

This will also leave a lot of cruft in the directory; perhaps that's fixable with other wget options, but it gets the files and directories you need.  You can clean up the cruft:

$ sudo rm -f *.gif index.html* MANIFEST* MD5SUMS* SHA*
$ ls
boot.img.gz ldlinux.c32 pxelinux.0 udeb.list
debian-cd_info.tar.gz mini.iso pxelinux.cfg vmlinuz
initrd.gz netboot.tar.gz ubuntu-installer xen
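As an aside, you may be able to skip the recursive wget and its cruft entirely: the same directory publishes netboot.tar.gz (visible in the listing above), which contains the pxelinux files and installer tree.  A rough, untested sketch:

$ sudo wget http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/netboot.tar.gz
$ sudo tar -xzf netboot.tar.gz -C /srv/tftp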

Now we need to set permissions and ownership of the files:

/srv/tftp$ cd ../
/srv$ sudo chmod -R 777 tftp/
/srv$ sudo chown -R nobody: tftp/

Next, make sure tftpd is serving the right directory by pointing TFTP_DIRECTORY at /srv/tftp in /etc/default/tftpd-hpa:

$ cat /etc/default/tftpd-hpa
# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

Finally, restart tftpd and verify it's listening on the correct IP address (the gateway address for KVM's bridge):
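$ sudo service tftpd-hpa restart

With the daemon restarted, probe the TFTP port from the bridge side to see whether anything is answering: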

$ nc -uvz 192.168.123.1 69
Connection to 192.168.123.1 69 port [udp/tftp] succeeded!

This means that tftpd is active and listening on port 69 on my libvirt network (my IP addressing may differ from yours).  Now there are a couple of things to set for libvirt that will allow the VMs to grab the right boot files.  Add the <tftp> and <bootp> lines to the config:

$ virsh net-edit default
<network>
  <name>default</name>
  <uuid>c1ea51e9-ac51-455b-a6ff-7a222b6f94eb</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:42:de:cf'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <tftp root='/srv/tftp'/>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
      <bootp file='pxelinux.0' server='192.168.123.1'/>
    </dhcp>
  </ip>
</network>

Again, note that my IP addressing may be different.  Once the config has been modified to serve up PXE files, restart the virtual network:

$ virsh net-destroy default
Network default destroyed

$ virsh net-start default
Network default started

Now let's create a VM to PXE boot the installer.  I'll be using virt-manager for this; when dealing with qemu, I prefer the simplicity of a GUI to remembering a large swath of command-line arguments.
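If you'd rather not click through the GUI, something like this virt-install invocation should be roughly equivalent to the steps below (a sketch; the VM name, RAM, and disk size are arbitrary choices, not requirements):

$ virt-install --name pxe-demo --memory 2048 --vcpus 2 \
    --disk size=20 --network network=default \
    --pxe --os-variant generic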

Image of the initial config screen

This is the first config screen. Be sure to select PXE boot.

Image of the VM OS template setting screen

The Generic settings are fine here.

Image of the VM Memory and vCPU config screen

I upped the RAM to 2GB just to make it a little sturdier.

Image of the VM Config Disk Setup Screen

20GB should be enough for this experiment.

Image of the VM Config Summary

Summary Screen: Be sure that the “Default” network has been chosen

Image of the bootup screen showing the VM PXE Booting

The VM is now PXE booting

Image of the netboot installer menu

The VM has now successfully booted via PXE

Installing Ubuntu 16.04.x with the HWE Kernel stack

Ubuntu LTS releases have two different kernel streams.  Deciding which to use, and when and how to use them, can seem daunting and possibly confusing.  But it's actually quite simple, really.  In the next few paragraphs, I'll briefly explain the different kernels available for an Ubuntu LTS release and provide some basic information on how and why to use each.  Then I'll quickly show you how to install Ubuntu Server choosing the HWE (Hardware Enablement) kernel so that your hardware is fully supported.

First, let's talk about the available kernels for a moment.  There is the stock GA kernel, which for Xenial is 4.4 and is supported for the full 5 years of the LTS (until 2021).  Second, there is the HWE, or Hardware Enablement, kernel stack, which as of this writing is 4.13-based.  The HWE kernels are only supported by Canonical until the next HWE kernel is released, up to the XX.YY.5 point release.  The .5 point release is always based on the newest LTS, and THAT HWE kernel is then supported for the remaining life of the LTS release.  It's not as complex as it sounds, and this chart makes it a little clearer.

Diagram of the Ubuntu 16.04 HWE support schedule

16.04 HWE support schedule

As you can see, once 16.04.5 has been released, it will be based on Bionic (18.04)’s 4.15 kernel, and that HWE kernel will be supported until the end of 16.04’s supported lifetime in April of 2021, along with the original 4.4 GA kernel.

Most people probably won't need the HWE kernel.  These kernels exist to introduce new hardware support into the LTS, so unless you have hardware that is too new to be supported adequately by the 4.4 kernel, you don't even need to worry about this.  But if your system has hardware that isn't supported by 4.4, it's worth installing and booting into the current HWE kernel to see if that gets you up and running.  One very good example of where this matters is the new Skylake-SP and later CPUs from Intel's Purley platform.  The 4.4 kernel provided only minimal support for Purley CPUs, ensuring functionality to the same level as any other older Intel CPU.  Full support for the advanced Purley CPU features, such as AVX512, did not land until 4.10 in 16.04.3.  So any workloads on Purley systems compiled to use AVX512 would need to be run on an Ubuntu server using the latest HWE kernel, rather than the 4.4 GA kernel.
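If you're curious what the running kernel knows about those CPU features, a quick check of the flags it reports works as a sanity test (a sketch; the exact flag names vary by CPU generation).  Comparing the output under the GA and HWE kernels shows the difference in hardware enablement:

$ grep -o 'avx512[a-z]*' /proc/cpuinfo | sort -u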

So how do you install these?  There are two ways.  The Easy Way, and The Hard Way.

First, let's check out The Easy Way.  Use MAAS, Ubuntu's Metal-as-a-Service hardware orchestration tool.  MAAS allows you to quickly and easily install various OSes, including all current Ubuntu versions, CentOS, and even Windows Server.

Deployment is fast and simple.  Select your node, select Deploy from the "Take Action" menu, select the Ubuntu LTS you wish to deploy, and select the HWE kernel from the kernel options (for Xenial, it's called "16.04-hwe").  Click Deploy, and in a few moments your node will be provisioned with Ubuntu Server, with the HWE kernel installed and ready to go.

Now let's look at The Hard Way.  This requires the most recent Ubuntu Server ISO; you will need to build a bootable USB key from it, or use your server's DVD/CD drive, if equipped.  Once you boot and have chosen the language to use for installation, you'll very quickly see the option "Install Ubuntu Server with the HWE kernel" in the list of install options.  Choose that, then continue the installation as you normally would.

image of the Ubuntu Server installation menu

Ubuntu Server Installation Menu

image showing first login after install

Note the kernel version listed above is 4.13, the current HWE kernel.

This will install the HWE kernel stack on your system.  Once installation has finished, you can verify this by logging into your newly deployed system and checking the installed kernel version.  As you can see from the image above, we are now running Ubuntu 16.04.4 (the daily image now shows .4 rather than .3 because the .4 release is tomorrow) with the 4.13 HWE kernel.  You can further verify this by checking the installed kernel packages, as shown below:

Image showing list of installed kernel packages

Note all installed kernel packages are the 4.13 HWE kernel, not the 4.4 GA kernel.
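For reference, the checks behind those screenshots amount to something like this (assuming a standard amd64 install):

$ uname -r
$ dpkg -l 'linux-image*' | grep ^ii

The first should report a 4.13 kernel rather than 4.4, and the second should list only 4.13 linux-image packages.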

Remember when I said there were two ways, the Easy Way and the Hard Way?  I lied.  It's actually ALL easy, and not nearly as confusing as it may seem.

As you can see, getting the HWE kernel at install time is simple, whether you deploy via MAAS or straight from the Ubuntu ISO images.  It's just a matter of picking the right kernel when you launch the installer.  It is always advisable to run either the latest GA or the latest HWE kernel.  You should not run an expired HWE kernel, as those kernels (4.8 and 4.10 now) no longer receive any security or bug fix updates.  Also, with current events in mind, for Xenial only the 4.4 and 4.13 (or later) kernels include Spectre / Meltdown mitigations.  The older HWE kernels like 4.8 and 4.10 are NOT patched and will never be updated with these critical security fixes.

Actually Useful Getting Started Guide to LXD on Ubuntu

OK, this will still be kinda brief, but hopefully it helps get you going with LXC containers (via LXD) quickly, in a way that's actually useful.

I have typically used things like Digital Ocean and AWS to quickly launch a testbed, deploy some modified packages, check the changes, and then tear it all down quickly.  This works well for me, but I've recently been trying to break my dependence on foreign services for this work.  So I've been using LXD more and more, which is just as fast and is local, so I can do this sort of work without an internet connection if need be.  Below, I'll outline a few very quick things to make using containers a bit easier.  Note: all of the info below assumes you are using Ubuntu 16.04 LTS or later, with LXD installed (LXD is installed by default on 16.04 and newer).  Also, you should have at least some familiarity with lxc and lxd.  For more information on those, see https://linuxcontainers.org/lxc/introduction/.
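One caveat worth flagging: if LXD has never been configured on your host, you may need to run its one-time setup first to create the storage and network bridge (the exact prompts vary by version):

$ sudo lxd init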

Tip 1:  import images locally with useful aliases.

By default, when you launch a container, the image will be pulled from the internet if it does not already exist locally.  Also, if you want to use that container base again locally, you sometimes need to find an ugly fingerprint ID to reference it with.  I prefer to import the images I want locally.  Not only does this let me create my own easily remembered names for them, I can also pull a variety of images from various sources and keep my own local, offline catalog of LXD images to create containers from.

First, see what images are available.  Since I do all my work on Ubuntu, I only need to check the default ubuntu: remote.  This is done with the 'image' command for lxc:

bladernr@galactica:~$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 536ea2799fc7 | yes | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/arm64 (2 more) | 26b9b1fb1b15 | yes | ubuntu 14.04 LTS arm64 (release) (20170405) | aarch64 | 110.96MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/armhf (2 more) | 5e367a0ad31c | yes | ubuntu 14.04 LTS armhf (release) (20170405) | armv7l | 111.58MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/i386 (2 more) | 38df07c91eac | yes | ubuntu 14.04 LTS i386 (release) (20170405) | i686 | 118.24MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+

There are a LOT of images available, so I've trimmed the output significantly.  I'm mostly interested in Trusty for now, which has the alias 't', so let's import that image locally using the 'copy' subcommand of the 'image' lxc command:

bladernr@galactica:~$ lxc image copy ubuntu:t local: --alias=ubuntu-trusty
Image copied successfully!
bladernr@galactica:~$ lxc image list
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-trusty | 536ea2799fc7 | no | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 24, 2017 at 10:58pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-xenial | f452cda3bccb | no | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 310.30MB | Jul 15, 2016 at 5:55pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+

What this does is download a copy of the arch-appropriate Trusty container image hosted on the default Ubuntu image store and make it available locally on my desktop.  As you can see, I now have both Trusty and Xenial images, with nice aliases that can be easily remembered later on when deploying containers.

I have the release versions of the images, and that's all I need.  Because I'm just prototyping and testing locally, I don't worry too much about the latest package updates being installed on my containers.

Ubuntu has two different remotes (streams) to get images from:

bladernr@galactica:~$ lxc image list ubuntu: |head -10
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (release) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |


bladernr@galactica:~$ lxc image list ubuntu-daily: |head -10
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 12bb0982a94b | yes | ubuntu 12.04 LTS amd64 (daily) (20170424) | x86_64 | 155.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | d95c2d1be3f8 | yes | ubuntu 12.04 LTS armhf (daily) (20170424) | armv7l | 136.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 4f516ec69c8f | yes | ubuntu 12.04 LTS i386 (daily) (20170424) | i686 | 139.71MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (daily) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |

The first of those contains only the "release" versions of the Ubuntu images; that is, the versions that appear on each GA release day or LTS point-release day.  The second, ubuntu-daily, provides images from the daily builds of Ubuntu, which are updated far more frequently.  This also gives you access to daily builds of the latest development / interim release, such as the soon-to-be-opened Ubuntu 17.10.
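So, assuming the single-letter aliases follow the same pattern as 'p' and 't' above, pulling a daily Xenial image into your local catalog works the same way as before:

$ lxc image copy ubuntu-daily:x local: --alias=ubuntu-xenial-daily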

Tip 2: Configuring a user for easy login and actually getting work done.

The default Ubuntu images are missing two very important things: ssh keys and a default password for the 'ubuntu' user.  There are a few different ways to tackle this.  If root access is all you need, then this will suffice:

bladernr@galactica:~$ lxc exec subtle-marlin /bin/bash
root@subtle-marlin:~#

This will get you a root login, but I often need a non-privileged login.  So the first thing we need to do is configure the user.  This is accomplished using cloud-init and can be set via the profiles for lxc.  Specifically, I'm setting this in the default profile.  To access/edit this profile, as of lxc version 2.0.7-0ubuntu1~16.04.2, you need to use the lxc profile command to edit the default profile and add a few things.

To edit it, use the command lxc profile edit <name>.  (Note: this command may be different on other versions of lxc, such as lxc edit profile <name>.)

bladernr@galactica:~$ lxc profile list
default
docker
juju-controller
juju-default

Note that there are several profiles already created by default.  We're only interested in the 'default' profile, so let's edit that with lxc profile edit default:

### This is a yaml representation of the profile.
### Any line starting with a '# will be ignored.
###
### A profile consists of a set of configuration items followed by a set of
### devices.
###
### An example would look like:
### name: onenic
### config:
###   raw.lxc: lxc.aa_profile=unconfined
### devices:
###   eth0:
###     nictype: bridged
###     parent: lxdbr0
###     type: nic
###
### Note that the name is shown but cannot be changed

config:
  user.vendor-data: |
    #cloud-config
    users:
      - name: ubuntu
        ssh-import-id: bladernr
        lock_passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: default

In that example, I have modified the user.vendor-data section to set a few items for the "ubuntu" user.  First, I used ssh-import-id to import my own ssh keys.  I believe this pulls from Launchpad, but it may pull locally; I'm honestly not sure which.  Next, I set lock_passwd to 'false'.  If you leave this unset, it defaults to 'true', which will prevent password logins.  Of course, ssh logins via key are MUCH more secure, but as I mentioned before, these are very short-lived development instances, so security is of no concern to me, as proven in the next line.

On the next line, I tell cloud-init to set up sudo privileges for the 'ubuntu' user so that no password is required when performing ANY task via sudo.  That is about as close as you can get to simply using the root user.  It is VERY dangerous, because anyone who gains access to 'ubuntu' now has full, unfettered root access.  So don't do this at home.  Again, for my use, these are short-lived test and dev instances where security is not important.  I would NEVER do this on anything that is even close to production level.

In fact, on a production system you should probably consider leaving only ssh-import-id set, to allow logins only via ssh with key-based authentication.  You should also definitely NOT set sudo as I have done here.

Finally, I set the shell to /bin/bash so when I ssh in, I’ll have a nice bash shell.

There are other items you can set in here, such as a password, ssh authorized_keys, group membership, and so on.  You can find out more in the cloud-init documentation.
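As a quick sketch of what that can look like (the group names and the key below are placeholders, not working values), those extra items slot into the same user.vendor-data block:

config:
  user.vendor-data: |
    #cloud-config
    users:
      - name: ubuntu
        ssh-import-id: bladernr
        lock_passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
        groups: [adm, sudo]
        ssh_authorized_keys:
          - ssh-rsa AAAAB3NzaC1yc2E... me@workstation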

Conclusion

So there you go.  Those two tips should make LXC/LXD much easier and less of a hassle when launching instances for testing your code, prototyping, and other needs.  Please do remember that I do some fairly ugly things here (security-wise), and you should make better choices for production.

Once you have those things configured, you should be able to quickly launch instances and connect to them via SSH and be able to perform whatever tasks you need.

bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ lxc launch ubuntu-trusty demo
Creating demo
Starting demo
bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| demo | RUNNING | 10.148.80.217 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ ssh ubuntu@10.148.80.217
The authenticity of host '10.148.80.217 (10.148.80.217)' can't be established.
ECDSA key fingerprint is SHA256:gyn682YAhs+LyZc7i0s9akfBoZCOnSYErMeds4MbaKI.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.148.80.217' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-70-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Tue Apr 25 13:30:23 UTC 2017

System load: 0.77 Memory usage: 0% Processes: 15
 Usage of /home: unknown Swap usage: 47% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@demo:~$