Setting up KVM to PXE boot virtual machines from a local TFTP server

I had occasion to test some failures that a partner is seeing in the field when trying to PXE boot the Ubuntu Server installer.  I'd never set up KVM to do this before, as I almost exclusively use MAAS for all server deployments (and in fact, that's required for my work).  The partner is doing this as a side project, and it seemed like a nice way to spend an afternoon.  The following is somewhat specific to my desktop running Ubuntu 18.04 LTS; 16.04 systems require a few extra steps to get a qemu that supports PXE booting.

On Ubuntu 16.04.x, qemu-kvm from the Pike Ubuntu Cloud Archive (UCA) needs to be installed.
The Pike Cloud Archive can be enabled like this:

$ sudo add-apt-repository cloud-archive:pike
$ sudo apt update

If you have 16.04 LTS,  after adding the cloud-archive repo and updating, proceed as you would for 17.04 and later:

$ sudo apt install -qq -y libvirt-daemon-system qemu-kvm virt-manager
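Before moving on, it's worth a quick sanity check that libvirt is running and that the default NAT network exists (standard systemd and virsh commands; you may need to log out and back in first to pick up membership in the libvirt group):

$ sudo systemctl status libvirtd
$ virsh net-list --all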

Next, we need to get tftp installed and verify it’s running.

$ sudo apt install tftpd-hpa
$ sudo service tftpd-hpa status
● tftpd-hpa.service - LSB: HPA's tftp server
Loaded: loaded (/etc/init.d/tftpd-hpa; generated)
Active: active (running) since Thu 2018-09-13 09:35:49 EDT; 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 16715 ExecStop=/etc/init.d/tftpd-hpa stop (code=exited, status=0/SUCC
Process: 16720 ExecStart=/etc/init.d/tftpd-hpa start (code=exited, status=0/SU
Tasks: 1 (limit: 4915)
CGroup: /system.slice/tftpd-hpa.service
└─16745 /usr/sbin/in.tftpd --listen --user tftp --address :69 --secur

Sep 13 09:35:49 galactica systemd[1]: Starting LSB: HPA's tftp server...
Sep 13 09:35:49 galactica tftpd-hpa[16720]: * Starting HPA's tftpd in.tftpd
Sep 13 09:35:49 galactica tftpd-hpa[16720]: ...done.
Sep 13 09:35:49 galactica systemd[1]: Started LSB: HPA's tftp server.

Now we need to start setting up the tftpboot directories.  I’m using /srv/tftp to host the files:

$ sudo mkdir /srv/tftp
$ cd /srv/tftp
$ sudo wget -nH -r --cut-dirs=8 http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/

This will also leave a lot of cruft in the directory; that's probably fixable with other wget options (one possibility is sketched after the listing below), but this gets the files and directories you need.  You can clean up the cruft afterwards:

$ sudo rm -f *.gif index.html* MANIFEST* MD5SUMS* SHA*
$ ls
boot.img.gz ldlinux.c32 pxelinux.0 udeb.list
debian-cd_info.tar.gz mini.iso pxelinux.cfg vmlinuz
initrd.gz netboot.tar.gz ubuntu-installer xen
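If you'd rather avoid most of the cruft in the first place, wget's --reject option can filter it out during the mirror step.  This is an untested variation on the command above; the pattern list is just an example, so adjust it to taste:

$ sudo wget -nH -r --cut-dirs=8 --reject "index.html*,*.gif,MANIFEST*,MD5SUMS*,SHA*" http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/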

Now we need to set permissions and ownership of the files:

/srv/tftp$ cd ../
/srv$ sudo chmod -R 777 tftp/
/srv$ sudo chown -R nobody: tftp/

Next, make sure tftpd is serving the right directory by pointing TFTP_DIRECTORY at /srv/tftp in /etc/default/tftpd-hpa:

$ cat /etc/default/tftpd-hpa
# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

Finally, restart tftpd and verify it's listening on the correct IP address (the gateway address for KVM's bridge).
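The restart itself is the usual systemd invocation (the unit name matches the status output earlier); nc can then confirm something is answering on the TFTP port:

$ sudo systemctl restart tftpd-hpa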

$ nc -uvz 192.168.123.1 69
Connection to 192.168.123.1 69 port [udp/tftp] succeeded!

This means that tftpd is active and listening on port 69 on my libvirt network (my IP addressing may be different from yours).  Now there are a couple of things to set for libvirt that will allow the VMs to grab the right boot files.  Add the <tftp> and <bootp> lines to the config:

$ virsh net-edit default
<network>
  <name>default</name>
  <uuid>c1ea51e9-ac51-455b-a6ff-7a222b6f94eb</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:42:de:cf'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <tftp root='/srv/tftp'/>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
      <bootp file='pxelinux.0' server='192.168.123.1'/>
    </dhcp>
  </ip>
</network>

Again, note that my IP addressing may be different.  Once the config has been modified to serve up PXE files, restart the virtual network:

$ virsh net-destroy default
Network default destroyed

$ virsh net-start default
Network default started
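To double-check that the new settings took effect, dump the live network definition and look for the <tftp> and <bootp> elements added above:

$ virsh net-dumpxml default | grep -E 'tftp|bootp'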

Now let’s create a VM to PXEBOOT the installer.  I’ll be using virt-manager for this, as when dealing with qemu, I prefer the simplicity of a GUI over remembering a large swath of command line arguments.

Image of the initial config screen

This is the first config screen. Be sure to select PXE boot.

Image of the VM OS template setting screen

The Generic settings are fine here.

Image of the VM Memory and vCPU config screen

I upped the RAM to 2 GB just to make it a little sturdier.

Image of the VM Config Disk Setup Screen

20 GB should be enough for this experiment.

Image of the VM Config Summary

Summary Screen: Be sure that the “Default” network has been chosen

Image of the bootup screen showing the VM PXE Booting

The VM is now PXE booting

Image of the netboot installer menu

The VM has now successfully booted via PXE

Installing Ubuntu 16.04.x with the HWE Kernel stack

Ubuntu LTS releases have two different kernel streams.  Deciding which to use, and when and how to use them, can seem daunting and possibly confusing.  But it's actually quite simple, really.  In the next few paragraphs, I'll briefly explain the different kernels available for an Ubuntu LTS release, and provide some basic information on how and why to use each.  Then I'll quickly show you how to install Ubuntu Server choosing the HWE (Hardware Enablement) kernel so that your hardware is fully supported.

First, let's talk about the available kernels for a moment.  There is the stock GA kernel, which for Xenial is 4.4 and is supported for the full 5 years of the LTS (until 2021).  Second, there is the HWE, or Hardware Enablement, kernel stack, which as of this article is 4.13 based.  The HWE kernels are only supported by Canonical until the next HWE kernel is released, up to the XX.YY.5 point release.  The .5 point release always ships the kernel from the next LTS, and THAT HWE kernel is then supported for the remaining life of the LTS release.  It's not as complex as it sounds, and this chart makes it a little more clear.

Diagram of the Ubuntu 16.04 HWE support schedule

16.04 HWE support schedule

As you can see, once 16.04.5 has been released, it will be based on Bionic (18.04)’s 4.15 kernel, and that HWE kernel will be supported until the end of 16.04’s supported lifetime in April of 2021, along with the original 4.4 GA kernel.

Most people probably won’t need the HWE kernel.  These kernels exist to introduce new hardware support into the LTS, so unless you have hardware that is too new to have been supported adequately by the 4.4 kernel, you don’t need to even worry about this.  But if your system has hardware that isn’t supported by 4.4, it’s worth installing and booting into the current HWE kernel to see if that gets you up and running.  One very good example of where this matters is with the new Skylake-SP and later CPUs from Intel’s Purley platform.  The 4.4 kernel only provided minimal support for Purley CPUs, only ensuring functionality to the same level as any other older Intel CPU.  Full support for the Purley advanced CPU features, such as AVX512, did not land until 4.10 in 16.04.3.  So any workloads on Purley systems compiled to use AVX512 would need to be performed on an Ubuntu server using the latest HWE kernel, rather than the 4.4 GA kernel.
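If a system is already installed and running the GA kernel, you don't have to reinstall to try the HWE stack; the enablement meta-package can be added after the fact.  A minimal sketch for a Xenial server (reboot afterwards to boot into the new kernel):

$ sudo apt install --install-recommends linux-generic-hwe-16.04
$ sudo reboot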

So how do you install these?  There are two ways.  The Easy Way, and The Hard Way.

First, let's check out The Easy Way.  Use MAAS, Ubuntu's Metal-as-a-Service hardware orchestration tool.  MAAS allows you to quickly and easily install various OSes, including all current Ubuntu versions, CentOS, and even Windows Server.

Deployment is fast and simple.  Select your node, choose Deploy from the "Take Action" menu, select the Ubuntu LTS you wish to deploy, and pick the HWE kernel from the kernel options (for Xenial, it's called "16.04-hwe").  Click Deploy, and in a few moments your node will have been provisioned with Ubuntu Server, have the HWE kernel installed, and be ready to go.

Now let’s look at The Hard Way.  This will require the most recent Ubuntu Server ISO and you will need to build a bootable USB key from that, or use your server’s DVD/CD drive, if equipped.  Once you boot and have chose the language to use for installation, you’ll very quickly see the option for “Install Ubuntu Server with the HWE kernel” in the list of install options.  Choose that.  Then continue the installation as you normally would.

image of the Ubuntu Server installation menu

Ubuntu Server Installation Menu

image showing first login after install

Note the kernel version listed above is 4.13, the current HWE kernel.

This will install the HWE kernel stack on your system and once installation has finished, you can verify this by logging into your newly deployed system and checking the installed kernel version.  As you can see from the image above, we are now running Ubuntu 16.04.4 (the daily image now shows .4 rather than .3 because the .4 release is tomorrow) with the 4.13 HWE kernel.  You can further verify this by simply checking the installed kernel packages as shown below:

Image showing list of installed kernel packages

Note all installed kernel packages are the 4.13 HWE kernel, not the 4.4 GA kernel.
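The same check can be run from a shell; the output will vary with the point release and kernel you installed:

$ uname -r
$ dpkg -l 'linux-image*' | grep ^ii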

Remember when I said there were two ways, the Easy Way and the Hard Way?  I lied.  It's actually ALL easy, and not nearly as confusing as it may seem.

As you can see, getting the HWE kernel at install time is simple, whether you deploy via MAAS or straight from the Ubuntu ISO images.  It's just a matter of picking the right kernel when you launch the installer.  It is always advisable to run either the latest GA or the latest HWE kernel.  You should not run an expired HWE kernel, as those kernels (4.8 and 4.10 now) no longer receive any security or bug fix updates.  Also, given current events, for Xenial only the 4.4 and 4.13 (or later) kernels include Spectre / Meltdown mitigations.  The older HWE kernels like 4.8 and 4.10 are NOT patched and will never be updated with these critical security fixes.
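On kernels recent enough to expose it (the patched Xenial kernels backported this sysfs interface), you can also query the mitigation status directly; this is a generic kernel interface rather than anything Ubuntu-specific:

$ grep . /sys/devices/system/cpu/vulnerabilities/*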

A mutt by any other name…

Jeff and Mutt in the SnowI’ve been thinking a bit about dogs lately, and that’s all been triggered by sorrow.  The passing of a beloved dog is a hard burden to bear, especially because that passing is usually at my own request.  But with each passing, I’m reminded anew of all the dogs I’ve loved in my life and how they’ve changed me, loved me in return and made my life richer for it.  That I am a dog person, there is no doubt.  And that goes way, way back to before I was born.  Several lifetimes ago now.

I still don’t exactly know where she came from.  Part of me always thought my dad found her in a sack on the side of the highway.  Part thinks she was a shelter dog that they adopted from the local animal shelter.  But regardless of how she came into my family, she was there before me.  She was a gorgeous Collie / German Shepherd mix, all long legs and long fur with GSD colors.  She was my dad’s dog, until she wasn’t any more.  That part came later, though.

At first, to say she and I got along would be an egregious lie.  Truth is, as a mewling, pink little pup, I had no idea at all, but she did.  I was an invader.  I was a new thing that her people had brought home.  I was not to be trusted.  I was coming between her and her people.  And so she’d give me looks.  She’d bare her teeth and threaten me with a low growl.  Eventually, as I grew, and became more mobile, she’d turn it up a bit until finally she was snarling and snapping at me.  She was the alpha dog and was going to put me in my place in the pecking order.  I was too young to really remember any of this, mind you, so this is based on what my parents have told me.  One day she snapped at me and slunk off when my dad turned towards her.

“Let me catch you again,” he warned, “and you’ll regret it.”

He caught her again, snarling and snapping at me, a dog far larger than I was at that age, capable of killing me should she have put her mind to it.  And she snapped and my dad kicked her hard in the side, launching her across the room with his big work boots that I remember so well.  And from that moment on, she was no longer my dad’s dog.  From that moment on, she was my dog.  In that moment, her place in the pecking order was solidified, beneath me, and she came to accept that, and became my dog.

Growing up, Mutt was my best friend.  When my parents were at work, or sleeping, she was there to keep me company.  When my grandmother was watching soaps and sipping coffee from that old brown mug she had, Mutt was there.  She lived mostly outdoors in an old kennel made of what seemed to be 50 foot high fencing wire attached to old T-Bar iron posts.  She had a dog house that both of us could easily fit inside, made of ply-board, two by fours and roofing shingles.  Every day I’d race out to free her from her confines and we’d be off.  Some days she’d hike with me in the woods where I grew up.  Some days we’d play in the creek together.  Sometimes we’d just lay in the grass, counting puffy white clouds in the Virginia sky, napping in the warmth of the sun.  But no matter what, we had each other.

Sometimes, she’d wander off to who knows where on her own and be gone for a couple days or so.  But she always came back to me, sometimes bringing back something she’d stolen from wherever it was she ventured.  Sometimes it was a carcass from a kill she’d made hunting.  But she always came back to me.

Eventually, I went off to Kindergarten.  I’m sure she missed me during those long days, but she was always there to greet me when I returned.  Tail wagging, ready for me to play, to run, to hike.  She was always there to greet me, rushing to the end of our drive to sit patiently when the bus dropped me off.  As I grew and made friends, sometimes I’d be late coming back and not get to see her that day.  But she’d always greet me with the same joy and enthusiasm every day I got off that bus.

I’m sure that this is why I’m a dog person to this day.  Because I had such a healthy life with a dog so early on, a creature that started out resenting me but then adopted me and became my most loyal and treasured friend.  Tens of thousands of years ago, this was made possible because someone in a camp, huddled next to a fire to ward off the frightening creatures in the dark, gave some food to a starving wolf that wandered in close.  That wolf returned the next night, and the next, always getting some food from the human.  Eventually that wolf had pups and the pups learned to stay near the humans and they too were fed.  And eventually the dog became domesticated and became the definition of “Man’s Best Friend.”  And Mutt was mine.

All those thousands of years of slowly growing a bond with dogs likely saved my life.  I came home from school one day, got off the bus and headed up the driveway as I always did.  It was a nice late spring day, the gravels of the drive nicely heated from the sun.  The kind of heat that snakes love to lay on to warm their bodies.  And a snake was doing just that as I headed up the drive to my home.  As I grew nearer, I didn't even see it until it reared back into striking position, hissing at me angrily.  I could say I was terrified, but the truth is, I was too young to really grasp the danger represented by that snake.  All I knew was surprise.  But before that surprise could even register, Mutt flew between us like a blur, all fangs and fur and intent to do harm.  Hackles raised, head lowered, growling and snarling, my best friend came between me and an angry, or more likely startled, snake.  The snake decided it was not a battle to be fought with this large descendant of the wolf and slithered off into the grass.  Mutt just looked at me, licked my hand and trotted to the house as I followed her.

The problem with young dogs is that they become old dogs.  Mutt grew old without me even realizing it.  I was still too young to really grasp the concept of growing old and death.  My grandmother was ancient and presumably had always been ancient.  My dad was a giant and would always be so.  Even though I’d been to the funerals of family members, including my Grandfather, I really didn’t understand.  That lesson came to me suddenly one evening.

I don’t even remember how I came to be standing there in the local Kroger’s.  Nor do I remember much at all about that time.  What I do remember is this.

We were grocery shopping one evening.  As we made our way from aisle to aisle, I insistently tried dragging my parents to the pet section.  Mutt needed food.  I know I had noticed that her bag of food was gone, so we must be out, and we must need more.  It was there, in the middle of Krogers, that I, a 7 year old little boy (maybe I was eight?), learned the weight of life.  That was where I learned that my best friend, my loyal companion, my protector was gone and was never coming back.  I don’t know why they waited to tell me.  I suppose the plan was to tell me later that night, so I could cry myself to sleep, but I called their bluff in my own ignorance by insisting that we needed more dog food.  I’m good at crashing plans like that.

But Mutt had, without me even noticing it, grown old and tired.  Her hips were sore, and gone were the days of running tirelessly through the field or up in the hollow.  It was her time to go, and my dad had taken care of it.  All I knew was anger at them for taking my dog away.  I knew grief, I knew sorrow, and I blamed them for being so cold and heartless.  It wasn't until much later in life, when our next dog, Brittany, was at the end of hers, that I really understood.  It was then, as I snuggled Brittany for the last time, said my goodbyes and told her how great a dog she was, that I finally understood the weight of it all.  It was even later than that before I finally understood just how difficult it is to make that decision, and years later than even that before I finally came to terms with the fact that while I could order up death with a phone call, doing so was the greatest act of love you can give someone who has given their entire life in devotion to you.

A piece of me dies each time I have to make that call myself, and every time I do, I'm that little boy standing in Krogers, and that twenty-something sitting on the floor with an old Brittany Spaniel, and I'm that thirty-something holding my boy Jack tightly to me as the doc pushes the plunger on the syringe, and I'm that inconsolable mess as my heart dog, Patches, stops breathing, giving out that one last sigh as I whisper in her ear, and I'm that crying, blubbering forty-something bearded biker carrying an empty collar out of the vet clinic, having ended the incredible pain that Jazz was suffering, and I'm the gentle giant, crying into the cold, dark night, wrapped in a sleeping bag on the hard wooden deck to comfort an old Husky who was finally, after 16 years, ready to leave this world.  I'm all of those at once.

And it all started with a mutt named Mutt.

A Tale of Two Model Ms

It was the best of keyboards, it was the worst of… wait, no it was the Best of Keyboards, period.

Unicomp Classic and IBM Model M

A long time ago, someone brilliant at IBM (IBM was full of brilliant people) came up with an idea for a keyboard.  This keyboard was constructed of steel and plastic.  It was murderously heavy.  No, seriously, you could kill someone with it.  It had replaceable, cleanable key caps.  It used a buckling spring mechanism over a capacitive PCB and was designed in the early 80s to allow people to type all day long on these new “Personal Computers” that IBM was producing.  It was called the Model F.

The Model F’s were pretty complex bits of machinery themselves, and as with any bit of complicated engineering, there are always ways to make them better, or less expensive to produce.  Thus mid-way through the 80’s, IBM re-designed the Model F slightly and debuted the Model M.  The Model M was nearly identical to the Model F, thought it used a membrane rather than a PCB underneath the keys.  Additionally, the body of the Model M was made from injection molded plastic, rather than the painted plastic of the Model F that was prone to cracks and failure through abuse.  There were other improvements or cost saving measures (depending on how you view it) and that became, arguably, the greatest keyboard ever made for computers.

The Model M was introduced around 1985 and built by IBM until the early 90s, when IBM sold off parts of its manufacturing and design teams, with Lexmark picking up the bits of the company that made keyboards.  Lexmark continued producing the Model M into the 90s.

I picked one of these up at the IBM junk shop at the RTP campus when I worked there.  The Junk Shop (my term) was a wonderland of outdated equipment of all manner.  You could pick up oscilloscopes, microscopes, old computers, office furniture, equipment and all manner of things that the company no longer needed.  As I perused the aisles of junk one day, I noticed a curled cord with a PS/2 plug on the end sticking out from a pile of old boxes.  Underneath that was a worn IBM Model M keyboard.  I paid, if I recall correctly, about $3.00 for this old workhorse.  It was born on 13 July 1989 and was model 1391401, the most common variant of the Model M.

I returned to my desk in the SuperLab and, after digging up a PS/2 to USB adapter, plugged the old Model M into my Thinkpad and never looked back.  I've used that keyboard daily for nearly 10 years now whenever I am at my desk, be it at IBM, or now at my home office working for Canonical.  I've used my Model M on Thinkpads, on desktop machines, servers and even on a MacBook Air, and it worked flawlessly until a week or so ago when, frustratingly, the M, B and Space keys stopped working.  Sadly, after 26 years of faithful service, my IBM Model M was ready for retirement.

This started the quest for an adequate replacement.  I searched high and low for another Model M, finding them everywhere, but often for significantly more than what I paid for mine back then at the Junk Shop.  Frustration mounted until I stumbled across a company called Unicomp, which, as it turns out, now owns the rights and designs to the IBM Model M that both IBM and Lexmark produced.  The decision was made, and that afternoon I had ordered and received the shipping notice for a new Unicomp Classic M, based on the same designs as the IBM Model M that I loved.

The Unicomp Ms are based on the same designs as the old IBM Ms, as Unicomp owns the designs, having purchased them from IBM.  So with that in mind, they should be identical, and they very nearly are.  The issues I've encountered are more in build quality than anything else.  My IBM Ms were all very heavy, very solid keyboards.  The Unicomp models are also weighty, and honestly not that bad in terms of quality.  However, the cases have some fitment issues making them just a bit creaky, especially in the corners.  But they're fresh, new, and have that satisfying clicky sound that makes buckling spring keyboards so great.  And the keys and mechanisms are solid, and stand up to all manner of abuse.

My Unicomp Model M lasted for a year and a half before I broke it.  Unfortunately, unlike the IBM Ms, which came with drain holes and could stand up to a lot of abuse, the Unicomp M I had seemed a bit weak when it came to spilled coffee.  My morning cuppa was ultimately its downfall.  Half the keys stopped working after that, so I was forced to revert to a cheap, wireless backup keyboard until I could source a replacement.

Ultimately, I ended up with a Unicomp M of a slightly different design.  The new one is just a bit slimmer (it's not quite as deep as the Classic M) and includes a TrackPoint built into the keyboard.  I had hoped that the TrackPoint would be the same as the IBM TrackPoints, which were pretty solid pointing devices once you got used to using them.  Unfortunately, the Unicomp parts are less well built, so the TrackPoint feels loose and inaccurate.  It's annoying, and I don't care to use it for anything needing precision, but it makes it easy to swap between console windows without having to reach over for my trackball, so I won't complain too much.

I’ve now had four Model M keyboards.  They are arguably the best keyboards ever made, the design is ancient but still relevant, especially in this age of cruddy chiclet keyboards and non-feedback designs that have no soul.

Actually Useful Getting Started Guide to LXD on Ubuntu

OK, this will still be kinda brief, but hopefully helps get you going with LXC containers (via LXD) quickly in a way that’s actually useful.

I have typically used things like Digital Ocean and AWS to quickly launch a testbed, deploy some modified packages, check the changes, and then tear it all down quickly.  This works well for me, but I've recently been trying to break my dependence on foreign services for this work.  So I've been using LXD more and more, which is just as fast, and is local, so I can do this sort of work without an internet connection if need be.  Below, I'll outline a few very quick things to make using containers a bit easier.  Note, all of the info below assumes you are using Ubuntu 16.04 LTS or later, with LXD installed (LXD is installed by default on 16.04 and newer).  Also, you should have at least some familiarity with lxc and lxd.  For more information on those, see https://linuxcontainers.org/lxc/introduction/.

Tip 1: Import images locally with useful aliases.

By default, when you launch a container, the image will be pulled from the internet if it does not already exist locally.  Also, if you want to use that container base again locally, you sometimes need to find an ugly fingerprint ID to reference it with.  I prefer to locally import the images I want.  Not only does this let me create my own, easily remembered names for them, I can also pull a variety of images from various sources and have my own local, offline catalog of LXD images to create containers from.

First, see what images are available.  Since I do all my work on Ubuntu, I only need to check the default ubuntu: remote.  This is done with the 'image' command for lxc:


bladernr@galactica:~$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 536ea2799fc7 | yes | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/arm64 (2 more) | 26b9b1fb1b15 | yes | ubuntu 14.04 LTS arm64 (release) (20170405) | aarch64 | 110.96MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/armhf (2 more) | 5e367a0ad31c | yes | ubuntu 14.04 LTS armhf (release) (20170405) | armv7l | 111.58MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/i386 (2 more) | 38df07c91eac | yes | ubuntu 14.04 LTS i386 (release) (20170405) | i686 | 118.24MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+

There are a LOT of images available so I’ve trimmed the output significantly.  I’m mostly interested in Trusty for now, which has the alias ‘t’, so let’s import that image locally using the ‘copy’ subcommand of the ‘image’ lxc command:

bladernr@galactica:~$ lxc image copy ubuntu:t local: --alias=ubuntu-trusty
Image copied successfully!
bladernr@galactica:~$ lxc image list
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-trusty | 536ea2799fc7 | no | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 24, 2017 at 10:58pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-xenial | f452cda3bccb | no | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 310.30MB | Jul 15, 2016 at 5:55pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+

What this does is download a copy of the arch-appropriate Trusty container image hosted on the default Ubuntu image store and make it available locally on my desktop.  As you can see, I have both Trusty and Xenial images, with nice aliases that can be easily remembered later on when deploying containers.

I have the release versions of the images, and that's all I need.  Because I'm just prototyping and testing locally, I don't really worry too much about the latest package updates being installed on my containers.
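If you do want the local copy to track the remote, newer lxc clients accept an --auto-update flag on the copy so the cached image is refreshed when the remote publishes a new build (check lxc image copy --help on your version before relying on it):

$ lxc image copy ubuntu:t local: --alias ubuntu-trusty --auto-update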

Ubuntu has two different remotes (streams) to get images from:

bladernr@galactica:~$ lxc image list ubuntu: |head -10
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (release) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |


bladernr@galactica:~$ lxc image list ubuntu-daily: |head -10
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 12bb0982a94b | yes | ubuntu 12.04 LTS amd64 (daily) (20170424) | x86_64 | 155.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | d95c2d1be3f8 | yes | ubuntu 12.04 LTS armhf (daily) (20170424) | armv7l | 136.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 4f516ec69c8f | yes | ubuntu 12.04 LTS i386 (daily) (20170424) | i686 | 139.71MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (daily) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |

The first of those contains only the "release" versions of the Ubuntu images, that is, the versions that appear on each GA release day or LTS point release day.  The second, ubuntu-daily, provides images from the daily builds of Ubuntu, which are updated far more frequently.  This also gives you access to daily builds of the latest development / interim release, such as the soon-to-be-opened Ubuntu 17.10.
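Importing from the daily stream works exactly the same way; just reference the ubuntu-daily remote and give the local copy its own alias, for example:

$ lxc image copy ubuntu-daily:x local: --alias ubuntu-xenial-daily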

Tip 2: Configuring a user for easy login and actually getting work done.

The default Ubuntu images are missing two very important things: ssh keys and a default password for the 'ubuntu' user.  There are a few different ways to tackle this.  If root access is all you need, then this will suffice:

bladernr@galactica:~$ lxc exec subtle-marlin /bin/bash
root@subtle-marlin:~#

This will get you a root login, but I often need to have a non-privileged login.  So the first thing we need to do is configure the user.  This is accomplished using cloud-init and can be set using the profiles for lxc.  Specifically, I'm setting this in the default profile.  To access/edit this profile, as of lxc version 2.0.7-0ubuntu1~16.04.2, you need to use the lxc profile command to edit the default profile and add a few things.

To edit it, use the command lxc profile edit <name> (note: this command may be different on other versions of lxc, such as lxc edit profile <name>).

bladernr@galactica:~$ lxc profile list
default
docker
juju-controller
juju-default

Note that there are several profiles already created by default.  We’re only interested in the ‘default‘ profile, so let’s edit that:

### This is a yaml representation of the profile.
### Any line starting with a '# will be ignored.
###
### A profile consists of a set of configuration items followed by a set of
### devices.
###
### An example would look like:
### name: onenic
### config:
### raw.lxc: lxc.aa_profile=unconfined
### devices:
### eth0:
### nictype: bridged
### parent: lxdbr0
### type: nic
###
### Note that the name is shown but cannot be changed

config:
 user.vendor-data: |
  #cloud-config
  users:
  - name: ubuntu
    ssh-import-id: bladernr
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
description: ""
devices:
 eth0:
  name: eth0
  nictype: bridged
  parent: lxdbr0
  type: nic
name: default

In that example, I have modified the user.vendor-data section to set a few items for the "ubuntu" user.  First, I used ssh-import-id to import my own ssh keys; this pulls them from Launchpad by default (it can also pull from GitHub with a gh: prefix).  Next, I set lock_passwd to 'false'.  If you leave this unset, it defaults to 'true', which will prevent password logins.  Of course, ssh logins via key are MUCH more secure, but as I mentioned before, these are very short lived development instances, so security is of no concern to me, as proven in the next line.

On this next line, I tell cloud-init to set up sudo privileges for the 'ubuntu' user so that no password is required when performing ANY task via sudo.  That is about as close as you can get to just using the root user instead.  It is VERY dangerous, because anyone who gains access to 'ubuntu' now has full, unfettered root access.  So don't do this at home.  Again, for my use, these are short lived test and dev instances where security is not important.  I would NEVER do this on anything that is even close to production level.

In fact, on a production system you should probably leave only ssh-import-id set, so that logins are only possible via ssh with key-based authentication, and you should definitely NOT set sudo as I have done here.

Finally, I set the shell to /bin/bash so when I ssh in, I’ll have a nice bash shell.

There are other items you can set in here, such as a password, ssh authorized_keys, group membership and so on.  You can find out more about cloud-config in the cloud-init documentation.

Conclusion

So there you go.  Those two tips should make setting up LXC/LXD much easier and less of a hassle when launching instances for testing your code, prototyping, and other needs.  Please do remember that I do some fairly ugly things (security-wise) and you should make better choices there for production.

Once you have those things configured, you should be able to quickly launch instances and connect to them via SSH and be able to perform whatever tasks you need.


 bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ lxc launch ubuntu-trusty demo
Creating demo
Starting demo
bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| demo | RUNNING | 10.148.80.217 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ ssh ubuntu@10.148.80.217
The authenticity of host '10.148.80.217 (10.148.80.217)' can't be established.
ECDSA key fingerprint is SHA256:gyn682YAhs+LyZc7i0s9akfBoZCOnSYErMeds4MbaKI.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.148.80.217' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-70-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Tue Apr 25 13:30:23 UTC 2017

System load: 0.77 Memory usage: 0% Processes: 15
 Usage of /home: unknown Swap usage: 47% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@demo:~$

What grown man doesn’t want a sixteen year old girlfriend?

On the way home from a doctor’s appointment, I was rocking out to the 50s channel on SiriusXM. I grew up listening to 50s and 60s rock and developed a fondness for it early on. I can probably sing more 50’s songs than I can 90’s songs, truth be told. But one thing I also enjoyed about the “innocent days of rock and roll” was the amount of creepiness that pops up if you pay a bit of attention to the songs.

Case in point: "You're Sixteen," originally sung by Johnny Burnette.  Right off the bat, we know he's in love with a teenage girl who just turned sixteen, and we know he appreciates her… physical attributes:

You come on like a dream, peaches and cream
Lips like strawberry wine
You’re sixteen, you’re beautiful and you’re mine

So his now sixteen year old lover tastes like strawberry wine, and she’s all his, coming on like a dream, something about peaches and cream…  It sounds sweet.

You’re all ribbons and curls, ooh, what a girl
Eyes that sparkle and shine
You’re sixteen, you’re beautiful and you’re mine

This paints a slightly different picture, a young girl all “ribbons and curls”, but “ooh, what a girl”.  Does he prefer little girls all ribbons and curls, something traditionally attributed to children?

You’re my baby, you’re my pet,
We fell in love on the night we met.
You touched my hand, my heart went pop,
Ooh, when we kissed, I could not stop.

Now we see that she's his pet, a bit of ownership there.  But they fell in love on the night they met.  Things turn darker, because he's singing about how she just turned sixteen, so presumably they met and fell in love and there was a lot of kissing when she was much younger.  How much younger?  We never really know, but certainly younger than sixteen.

You walked out of my dreams, into my arms,
Now you’re my angel divine.
You’re sixteen, you’re beautiful, and you’re mine.
You’re my baby, you’re my pet,
We fell in love on the night we met.
You touched my hand, my heart went pop,
Ooh, when we kissed, I could not stop.
You walked out of my dreams, into my car,
Now you’re my angel divine.

So at some point prior to her sixteenth birthday, she walked into his arms and she’s all his.  She’s his pet.  His baby.  Makes his heart go pop.  Again, lots of kissing with the 14 or 15 year old girl he’s in love with.  In his car… car.  She got into his car.  The plot thickens.  Given the innocence of the 50s, this sounds like a sweet tale of teenage love, two kids out for dates at the malt shop, seeing a movie, getting into her older boyfriend’s car and driving out to “Lover’s Lane” where I’m sure more than kissing was going on.  It’s sweet, and reflective of young love.  Until you realize that Johnny Burnette was 26 when he released this song.

Twenty.  Six.

Here’s a man of 26 years, singing of love for a girl who just turned 16, whom he’s been at least making out with since before she was sixteen.  And before you say that he’s just singing the song, someone else wrote it, that’s true.  It was written by the Sherman Brothers, who, if the song was written as late as possible, 1960, were even older than Johnny Burnette.  Johnny was 26 at the time he sang and released the song.  Robert Sherman would have been 35, and  Richard Sherman would have been 32.  So what exactly did two thirty year olds in the late 1950s know about kissing the strawberry wine lips of 14 and 15 year old girls?

(Note, the above is accurate, but meant to be somewhat tongue in cheek.  The world was a different place in the late 1950s and early 1960s, and this was the equivalent of modern teenage pop songs).

Getting started with Juju 2.0 on AWS

This is a bit of a first-timer's guide to setting up Juju 2.0 to work with AWS.  To be honest, it's been quite a while since I really messed with Juju and AWS (or anything outside of MAAS), and this is the first time I've really looked at Juju 2.0 anyway.  So this is me sharing the steps I had to take to get Juju 2 talking to my AWS Virtual Private Cloud (VPC) to spin up instances for prototyping things.

First, let’s talk about the parts here.   You should already know what Amazon Web Services is, what a Virtual Private Cloud is, and have an account at Amazon to use them.  You should know a bit about what Juju is as well, but as this is a first timer guide, here’s all you really need to know to get started.  Juju is an amazing tool for modeling complex workload deployments.  It takes all the difficult bits of deploying any workload that has a “Charm” in minutes.  All the brain-share needed to install and configure these workloads is encapsulated in the “Charm”.  To configure, you can pass YAML files with full deployment configuration options, use Juju commands to set individual configuration options, or pass them via the juju gui.  Juju is very extensively documented at https://jujucharms.com, and I highly recommend you RTFM a bit to learn what Juju can do. Continue reading

Let's get dangerous…

Here I sit, spitting out this quick missive before I get into my car and drive, possibly into the end.  We shall see.  There's less than a 2% chance that anything will go wrong, and even less chance than that that whatever goes wrong will see me shuffle off this mortal coil.

(Hint: if I do kick off, some friends of mine are about to inherit some pretty awesome stuff).

My brother needs stem cells. I have those stem cells. It’s as simple as that. So some nice surgeons are going to DRILL, BABY, DRILL! into my rear parts, and literally suck out all the marrow of life, or at least a couple liters of the marrow of life. I guess I won’t be going to the woods to live deliberately for a while.

If successful, I wake up, get some killer pain meds, and a week at home resting up and healing from having so many holes drilled into my body while THEY. DRINK. MY. MILKSHAKE!. If successful, my brother gets a stem cell transplant tomorrow from me, and he inherits the moustache gene, gets better and goes home to his family.

I had a career at one time being a lifesaver, an ALS provider on both county and privately operated ambulances. This is just a natural extension of that, I guess. I, the guy who still carries a stocked trauma kit, who still stops to render aid to strangers. I, who have faced death and said “Not today.” Now I go forward once more so that others may live.

So here’s to hope. And here’s to being completely knocked the fuck out while my ass is being drilled.

Once more unto the breach, dear friends, once more.

Also, check out James’ blog:  thejameslane.rocks