Setting up KVM to PXE boot virtual machines from a local TFTP server

I had occasion to test some failures that a partner is seeing in the field when trying to PXE boot the Ubuntu Server installer.  I’d never set up KVM to do this before, as I almost exclusively use MAAS to do all server deployments, and in fact, that is required for my work.  The partner is doing this as a side project and it seemed like a nice way to spend an afternoon.  The following is somewhat specific to my desktop running Ubuntu 18.04 LTS; 16.04 systems require a few extra steps to get a qemu that supports PXE booting.

On Ubuntu 16.04.x, qemu-kvm from the Pike Ubuntu Cloud Archive (UCA) needs to be installed.
The Pike Cloud Archive can be enabled like this:

$ sudo add-apt-repository cloud-archive:pike
$ sudo apt update

If you have 16.04 LTS,  after adding the cloud-archive repo and updating, proceed as you would for 17.04 and later:

$ sudo apt install -qq -y libvirt-daemon-system qemu-kvm virt-manager

Next, we need to get tftp installed and verify it’s running.

$ sudo apt install tftpd-hpa
$ sudo service tftpd-hpa status
● tftpd-hpa.service - LSB: HPA's tftp server
Loaded: loaded (/etc/init.d/tftpd-hpa; generated)
Active: active (running) since Thu 2018-09-13 09:35:49 EDT; 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 16715 ExecStop=/etc/init.d/tftpd-hpa stop (code=exited, status=0/SUCC
Process: 16720 ExecStart=/etc/init.d/tftpd-hpa start (code=exited, status=0/SU
Tasks: 1 (limit: 4915)
CGroup: /system.slice/tftpd-hpa.service
└─16745 /usr/sbin/in.tftpd --listen --user tftp --address :69 --secur

Sep 13 09:35:49 galactica systemd[1]: Starting LSB: HPA's tftp server...
Sep 13 09:35:49 galactica tftpd-hpa[16720]: * Starting HPA's tftpd in.tftpd
Sep 13 09:35:49 galactica tftpd-hpa[16720]: ...done.
Sep 13 09:35:49 galactica systemd[1]: Started LSB: HPA's tftp server.

Now we need to start setting up the tftpboot directories.  I’m using /srv/tftp to host the files:

$ sudo mkdir /srv/tftp
$ cd /srv/tftp
$ sudo wget -nH -r --cut-dirs=8 http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/

This will also leave a lot of cruft in the directory; perhaps that’s fixable with other wget options (see the aside after the listing below), but this gets the files and directories you need.  You can clean up the cruft:

$ sudo rm -f *.gif index.html* MANIFEST* MD5SUMS* SHA*
$ ls
boot.img.gz ldlinux.c32 pxelinux.0 udeb.list
debian-cd_info.tar.gz mini.iso pxelinux.cfg vmlinuz
initrd.gz netboot.tar.gz ubuntu-installer xen
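
As an aside, if you’d rather avoid the cleanup step entirely, wget’s reject option can probably filter most of the cruft up front.  I haven’t verified this exact invocation (wget still downloads the index pages in order to follow links, then deletes the rejected files), but something along these lines should do it:

$ sudo wget -nH -r --cut-dirs=8 -R "index.html*,*.gif,MANIFEST*,MD5SUMS*,SHA*" http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/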

Now we need to set permissions and ownership of the files:

/srv/tftp$ cd ../
/srv$ sudo chmod -R 777 tftp/
/srv$ sudo chown -R nobody: tftp/

Next, make sure tftpd is serving the right directory by pointing TFTP_DIRECTORY at /srv/tftp in /etc/default/tftpd-hpa:

$ cat /etc/default/tftpd-hpa
# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

Finally, restart tftpd and verify it’s listening on the correct IP address (the gateway address for KVM’s bridge).
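
The restart is just the usual service restart (systemctl restart tftpd-hpa does the same thing):

$ sudo service tftpd-hpa restart

Then check that the daemon is actually answering on the bridge address: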

$ nc -uvz 192.168.123.1 69
Connection to 192.168.123.1 69 port [udp/tftp] succeeded!

This means that tftpd is active and listening on port 69 on my libvirt network (your IP addressing may differ from mine).  Now there are a couple of things to set for libvirt that will allow the VMs to grab the right boot files.  Add the <tftp> and <bootp> lines shown below to the config:

$ virsh net-edit default
<network>
  <name>default</name>
  <uuid>c1ea51e9-ac51-455b-a6ff-7a222b6f94eb</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:42:de:cf'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <tftp root='/srv/tftp'/>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
      <bootp file='pxelinux.0' server='192.168.123.1'/>
    </dhcp>
  </ip>
</network>

Again, note that my IP addressing may be different.  Once the config has been modified to serve up PXE files, restart the virtual network:

$ virsh net-destroy default
Network default destroyed

$ virsh net-start default
Network default started
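
If you want to double check that the changes stuck, dumping the live network definition should show the tftp and bootp entries:

$ virsh net-dumpxml default | grep -E 'tftp|bootp'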

Now let’s create a VM to PXE boot the installer.  I’ll be using virt-manager for this, as when dealing with qemu, I prefer the simplicity of a GUI over remembering a large swath of command line arguments.

Image of the initial config screen

This is the first config screen. Be sure to select PXE boot.

Image of the VM OS template setting screen

The Generic settings are fine here.

Image of the VM Memory and vCPU config screen

I upped the RAM to 2GB just to make it a little sturdier.

Image of the VM Config Disk Setup Screen

20GB should be enough for this experiment.

Image of the VM Config Summary

Summary Screen: Be sure that the “Default” network has been chosen

Image of the bootup screen showing the VM PXE Booting

The VM is now PXE booting

Image of the netboot installer menu

The VM has now successfully booted via PXE

Installing Ubuntu 16.04.x with the HWE Kernel stack

Ubuntu LTS releases have two different kernel streams. Deciding which to use, and when and how to use them can seem daunting and possibly confusing.  But it’s actually quite simple, really.  In the next few paragraphs, I’ll explain briefly about the different kernels available for an Ubuntu LTS release, and provide some basic information on how and why to use each.  Then I’ll show you quickly how to install your system with Ubuntu Server choosing the HWE (Hardware Enablement) kernel so that your hardware is fully supported.

First, let’s talk about the available kernels for a moment.  There is the stock GA kernel, which for Xenial is 4.4 and is supported for the full 5 years of the LTS (until 2021).  Second, there is the HWE, or Hardware Enablement, kernel stack, which as of this article is 4.13 based.  The HWE kernels are only supported by Canonical until the next HWE kernel is released.  The final HWE kernel, which arrives with the XX.YY.5 point release, is always based on the next LTS release’s kernel, and THAT HWE kernel is then supported for the remaining life of the LTS.  It’s not as complex as it sounds, and this chart makes it a little more clear.

Diagram of the Ubuntu 16.04 HWE support schedule

16.04 HWE support schedule

As you can see, once 16.04.5 has been released, it will be based on Bionic (18.04)’s 4.15 kernel, and that HWE kernel will be supported until the end of 16.04’s supported lifetime in April of 2021, along with the original 4.4 GA kernel.

Most people probably won’t need the HWE kernel.  These kernels exist to introduce new hardware support into the LTS, so unless you have hardware that is too new to have been supported adequately by the 4.4 kernel, you don’t need to even worry about this.  But if your system has hardware that isn’t supported by 4.4, it’s worth installing and booting into the current HWE kernel to see if that gets you up and running.  One very good example of where this matters is with the new Skylake-SP and later CPUs from Intel’s Purley platform.  The 4.4 kernel only provided minimal support for Purley CPUs, only ensuring functionality to the same level as any other older Intel CPU.  Full support for the Purley advanced CPU features, such as AVX512, did not land until 4.10 in 16.04.3.  So any workloads on Purley systems compiled to use AVX512 would need to be performed on an Ubuntu server using the latest HWE kernel, rather than the 4.4 GA kernel.

So how do you install these?  There are two ways.  The Easy Way, and The Hard Way.

First, let’s check out The Easy Way: use MAAS, Ubuntu’s Metal-as-a-Service hardware orchestration tool.  MAAS allows you to quickly and easily install various OSes, including all current Ubuntu versions, CentOS, and even Windows Server.

Deployment is fast and simple.  Select your node, select Deploy from the “Take Action” menu, select the Ubuntu LTS you wish to deploy, and select the HWE kernel from the kernel options (for Xenial, it’s called “16.04-hwe”).  Click Deploy, and in a few moments your node will have been provisioned with Ubuntu Server, have the HWE kernel installed, and be ready to go.

Now let’s look at The Hard Way.  This requires the most recent Ubuntu Server ISO; you will need to build a bootable USB key from it, or use your server’s DVD/CD drive, if equipped.  Once you boot and have chosen the language to use for installation, you’ll very quickly see the option “Install Ubuntu Server with the HWE kernel” in the list of install options.  Choose that, then continue the installation as you normally would.

image of the Ubuntu Server installation menu

Ubuntu Server Installation Menu

image showing first login after install

Note the kernel version listed above is 4.13, the current HWE kernel.

This will install the HWE kernel stack on your system and once installation has finished, you can verify this by logging into your newly deployed system and checking the installed kernel version.  As you can see from the image above, we are now running Ubuntu 16.04.4 (the daily image now shows .4 rather than .3 because the .4 release is tomorrow) with the 4.13 HWE kernel.  You can further verify this by simply checking the installed kernel packages as shown below:

Image showing list of installed kernel packages

Note all installed kernel packages are the 4.13 HWE kernel, not the 4.4 GA kernel.
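
If you want to run the same check yourself after logging in, the usual suspects will do it (the version numbers will obviously track whichever HWE kernel is current at the time):

$ uname -r
$ dpkg -l 'linux-image*' | grep ^ii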

Remember when I said there were two ways, the Easy Way and the Hard Way?  I lied.  It’s actually ALL easy, and not nearly as confusing as it may seem.

As you can see, getting the HWE kernel at install time is simple, whether you deploy via MAAS or straight from the Ubuntu ISO images.  It’s just a matter of picking the right kernel when you launch the installer.  It is always advisable to run either the latest GA or the latest HWE kernel.  You should not run an expired HWE kernel, as those kernels (4.8 and 4.10 now) no longer receive any security or bug fix updates.  Also, with current events in mind, for Xenial only the 4.4 and 4.13 (or later) kernels include Spectre / Meltdown mitigations.  The older HWE kernels like 4.8 and 4.10 are NOT patched and will never be updated with these critical security fixes.

A Tale of Two Model Ms

It was the best of keyboards, it was the worst of… wait, no it was the Best of Keyboards, period.

Unicomp Classic and IBM Model M

A long time ago, someone brilliant at IBM (IBM was full of brilliant people) came up with an idea for a keyboard.  This keyboard was constructed of steel and plastic.  It was murderously heavy.  No, seriously, you could kill someone with it.  It had replaceable, cleanable key caps.  It used a buckling spring mechanism over a capacitive PCB and was designed in the early 80s to allow people to type all day long on these new “Personal Computers” that IBM was producing.  It was called the Model F.

The Model Fs were pretty complex bits of machinery themselves, and as with any bit of complicated engineering, there are always ways to make them better, or less expensive to produce.  Thus, mid-way through the 80s, IBM re-designed the Model F slightly and debuted the Model M.  The Model M was nearly identical to the Model F, though it used a membrane rather than a PCB underneath the keys.  Additionally, the body of the Model M was made from injection molded plastic, rather than the painted plastic of the Model F that was prone to cracks and failure through abuse.  There were other improvements or cost saving measures (depending on how you view it), and the result became, arguably, the greatest keyboard ever made for computers.

The Model M was introduced around 1985 and built by IBM until the early 90s, when IBM sold off parts of its manufacturing and design teams, with Lexmark picking up the bits of the company that made keyboards.  Lexmark continued producing the Model M into the 90s.

I picked one of these up at the IBM junk shop at the RTP campus when I worked there.  The Junk Shop (my term) was a wonderland of outdated equipment of all manner.  You could pick up oscilloscopes, microscopes, old computers, office furniture, equipment and all manner of things that the company no longer needed.  As I perused the aisles of junk, one day, I noticed a curled cord with a PS/2 plug on the end sticking out from a pile of old boxes.  Underneath that was a worn IBM Model M keyboard.  I paid, if I recall correctly, about $3.00 for this old workhorse.  It was born on 13 July 1989 and was model 1391401, the most common variant of the Model M.

I returned to my desk in the SuperLab and, after digging up a PS/2 to USB adapter, plugged the old Model M into my Thinkpad and never looked back.  I’ve used that keyboard daily for nearly 10 years now whenever I am at my desk, be it at IBM, or now at my home office working for Canonical.  I’ve used my Model M on Thinkpads, on desktop machines, servers and even on a MacBook Air, and it worked flawlessly until a week or so ago when, frustratingly, the M, B and Space keys stopped working.  Sadly, after 26 years of faithful service, my IBM Model M was ready for retirement.

This started the quest for an adequate replacement.  I searched high and low for another Model M, finding them everywhere, but often for significantly more than what I paid for mine back then at the Junk Shop.  Frustration mounted until I stumbled across a company called Unicomp, which, as it turns out, now owns the rights and designs to the IBM Model M that both IBM and Lexmark produced.  The decision was made, and that afternoon I had ordered and received the shipping notice for a new Unicomp Classic M, based on the same designs as the IBM Model M that I loved.

The Unicomp Ms are based on the same designs as the old IBM Ms, as Unicomp owns the designs, having purchased them from IBM.  So with that in mind, they should be identical, and they very nearly are.  The issues I’ve encountered are more in build quality than anything else.  My IBM Ms were all very heavy, very solid keyboards.  The Unicomp models are also weighty, and honestly not that bad in terms of quality.  However, the cases have some fitment issues making them just a bit creaky, especially in the corners.  But they’re fresh, new, and have that satisfying clicky sound that makes buckling spring keyboards so great.  And the keys and mechanisms are solid, and stand up to all manner of abuse.

My Unicomp Model M lasted for a year and a half before I broke it.  Unfortunately, unlike the IBM Ms, which came with drain holes and could stand up to a lot of abuse, the Unicomp M I had seemed a bit weak when it came to spilled coffee.  My morning cuppa was ultimately its downfall.  Half the keys stopped working after that, so I was forced to revert to a cheap, wireless backup keyboard until I could source a replacement.

Ultimately, I ended up with a Unicomp M of a slightly different design.  The new one is just a bit slimmer (it’s not quite as deep as the Classic M) and includes a TrackPoint built into the keyboard.  I had hoped that the TrackPoint would be the same as IBM’s TrackPoints, which were pretty solid pointing devices once you got used to using them.  Unfortunately, the Unicomp parts are less well built, so the TrackPoint feels loose and inaccurate.  It’s annoying, and I don’t care to use it for anything needing precision, but it makes it easy to swap between console windows without having to reach over for my trackball, so I won’t complain too much.

I’ve now had four Model M keyboards.  They are arguably the best keyboards ever made, the design is ancient but still relevant, especially in this age of cruddy chiclet keyboards and non-feedback designs that have no soul.

Actually Useful Getting Started Guide to LXD on Ubuntu

OK, this will still be kinda brief, but hopefully helps get you going with LXC containers (via LXD) quickly in a way that’s actually useful.

I have typically used things like Digital Ocean and AWS to quickly launch a testbed, deploy some modified packages and, check the changes and then tear it all down quickly.  This works well for me but I’ve recently been trying to break my dependence on foreign services for this work.  So I’ve been using LXD more and more which is just as fast, and is local so I can do this sort of work without an internet connection if need be.  Below, I’ll outline a few very quick things to make using containers a bit more easy.  Note, all of the info below assumes you are using Ubuntu 16.04 LTS or later, with LXD installed (LXD is installed by default on 16.04 and newer).  Also, you should have at least some familiarity with lxc and lxd.  For more information on those, see https://linuxcontainers.org/lxc/introduction/.

Tip 1:  import images locally with useful aliases.

By default, when you launch a container, the image will be pulled from the internet if it does not already exist locally.  Also, if you want to use that container base again locally, you sometimes need to find an ugly fingerprint ID to reference it.  I prefer to import the images I want locally.  Not only does this let me create my own, easily remembered names for them, I can also pull a variety of images from various sources and have my own local, off-line catalog of LXD images to create containers from.

First, see what images are available.  Since I do all my work on Ubuntu, I only need to check the default ubuntu: remote.  This is done with the ‘image’ command for lxc:


bladernr@galactica:~$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 536ea2799fc7 | yes | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/arm64 (2 more) | 26b9b1fb1b15 | yes | ubuntu 14.04 LTS arm64 (release) (20170405) | aarch64 | 110.96MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/armhf (2 more) | 5e367a0ad31c | yes | ubuntu 14.04 LTS armhf (release) (20170405) | armv7l | 111.58MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t/i386 (2 more) | 38df07c91eac | yes | ubuntu 14.04 LTS i386 (release) (20170405) | i686 | 118.24MB | Apr 5, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+

There are a LOT of images available so I’ve trimmed the output significantly.  I’m mostly interested in Trusty for now, which has the alias ‘t’, so let’s import that image locally using the ‘copy’ subcommand of the ‘image’ lxc command:

bladernr@galactica:~$ lxc image copy ubuntu:t local: --alias=ubuntu-trusty
Image copied successfully!
bladernr@galactica:~$ lxc image list
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-trusty | 536ea2799fc7 | no | ubuntu 14.04 LTS amd64 (release) (20170405) | x86_64 | 119.89MB | Apr 24, 2017 at 10:58pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ubuntu-xenial | f452cda3bccb | no | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 310.30MB | Jul 15, 2016 at 5:55pm (UTC) |
+---------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+

What this does is download a copy of the arch appropriate trusty container image hosted on the default Ubuntu image store and make it available locally on my desktop.  As you can see, I have both Trusty and Xenial images, with nice aliases that can be easily remembered later on when deploying containers.

I have the release versions of the images, that’s all I need.  Because I’m just prototyping and testing locally, I don’t really worry too much about the latest package updates being installed on my containers.

Ubuntu has two different remotes (streams) to get images from:

bladernr@galactica:~$ lxc image list ubuntu: |head -10
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 4c1e4092ead8 | yes | ubuntu 12.04 LTS amd64 (release) (20170417) | x86_64 | 156.78MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | 68a83fae9fd3 | yes | ubuntu 12.04 LTS armhf (release) (20170417) | armv7l | 135.58MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 056784ac045d | yes | ubuntu 12.04 LTS i386 (release) (20170417) | i686 | 141.27MB | Apr 17, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (release) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |


bladernr@galactica:~$ lxc image list ubuntu-daily: |head -10
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p (5 more) | 12bb0982a94b | yes | ubuntu 12.04 LTS amd64 (daily) (20170424) | x86_64 | 155.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/armhf (2 more) | d95c2d1be3f8 | yes | ubuntu 12.04 LTS armhf (daily) (20170424) | armv7l | 136.64MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| p/i386 (2 more) | 4f516ec69c8f | yes | ubuntu 12.04 LTS i386 (daily) (20170424) | i686 | 139.71MB | Apr 24, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
| t (5 more) | 9e0493502f9d | yes | ubuntu 14.04 LTS amd64 (daily) (20170424) | x86_64 | 120.03MB | Apr 24, 2017 at 12:00am (UTC) |

The first of those contains only the “release” versions of the Ubuntu images, that is, the versions that appear on each GA release day or LTS point release day.  The second, ubuntu-daily, provides images from the daily builds of Ubuntu, which are updated far more frequently.  This also gives you access to daily builds of the latest development / interim release, such as the soon-to-be-opened Ubuntu 17.10.
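
Importing from the daily stream works exactly the same way as the release stream did earlier; for example, to grab the current Xenial daily under its own alias (assuming the ‘x’ shorthand for Xenial behaves like the ‘t’ shorthand for Trusty above):

bladernr@galactica:~$ lxc image copy ubuntu-daily:x local: --alias=ubuntu-xenial-daily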

Tip 2: Configuring a user for easy login and actually getting work done.

The default Ubuntu images are missing two very important things: ssh keys and a default password for the ‘ubuntu’ user.  There are a few different ways to tackle this.  If root access is all you need, then this will suffice:

bladernr@galactica:~$ lxc exec subtle-marlin /bin/bash
root@subtle-marlin:~#

This will get you a root login, but I often need to have a non-privileged login.  So the first thing we need to do is configure the user.  This is accomplished using cloud-init and can be set using the profiles for lxc.  Specifically, I’m setting this in the default profile.  To access/edit this profile, as of lxc version 2.0.7-0ubuntu1~16.04.2, you need to use the lxc profile command to edit the default profile and add a few things.

To edit it, use the command lxc profile edit <name>.  (Note: this command may be different on other versions of lxc, such as lxc edit profile <name>.)

bladernr@galactica:~$ lxc profile list
default
docker
juju-controller
juju-default

Note that there are several profiles already created by default.  We’re only interested in the ‘default‘ profile, so let’s edit that:

### This is a yaml representation of the profile.
### Any line starting with a '# will be ignored.
###
### A profile consists of a set of configuration items followed by a set of
### devices.
###
### An example would look like:
### name: onenic
### config:
### raw.lxc: lxc.aa_profile=unconfined
### devices:
### eth0:
### nictype: bridged
### parent: lxdbr0
### type: nic
###
### Note that the name is shown but cannot be changed

config:
 user.vendor-data: |
  #cloud-config
  users:
  - name: ubuntu
    ssh-import-id: bladernr
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
description: ""
devices:
 eth0:
  name: eth0
  nictype: bridged
  parent: lxdbr0
  type: nic
name: default

In that example, I have modified the user.vendor-data section to set a few items for the “ubuntu” user.  First, I used ssh-import-id to import my own ssh keys.  I believe this pulls from Launchpad, but it may pull locally; I’m honestly not sure which.  Next, I set lock_passwd to ‘false’.  If you leave this unset, it defaults to ‘true’, which will prevent password logins.  Of course, ssh logins via key are MUCH more secure, but as I mentioned before, these are very short lived development instances, so security is of no concern to me, as proven in the next line.

On the next line, I tell cloud-init to set up sudo privileges for the ‘ubuntu’ user so that no password is required when performing ANY task via sudo.  That is about as close as you can get to just using the root user.  It is VERY dangerous, because anyone who gains access to ‘ubuntu’ now has full, unfettered root access.  So don’t do this at home.  Again, for my use, these are short lived test and dev instances where security is not important.  I would NEVER do this on anything that is even close to production level.

In fact, on a production system you should probably consider leaving only the ssh-import-id set, to allow logins only via ssh and key-based authentication.  You should also definitely NOT set sudo as I have done here.

Finally, I set the shell to /bin/bash so when I ssh in, I’ll have a nice bash shell.

There are other items you can set in here, such as password, ssh authorized_keys, group membership and so on.  You can find out more about cloud config in the cloudinit documentation.
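
As a purely illustrative sketch, a slightly more filled-out user entry might look like the following; the password, key and group values are placeholders, and the option names are the ones documented by cloud-init’s users module, so double check them against the docs before relying on this:

config:
 user.vendor-data: |
  #cloud-config
  users:
  - name: ubuntu
    ssh-import-id: bladernr
    ssh_authorized_keys:
    - ssh-rsa AAAA...your-key-here... you@yourhost
    plain_text_passwd: 'insecure-but-handy'
    lock_passwd: false
    groups: sudo
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash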

Conclusion

So there you go.  Those two tips should help setting up LXC/LXD to be much easier and less hassle when launching instances for testing your code, prototyping and other needs.  Please do remember that I do some fairly ugly things (security wise) and you should make better choices there for production.

Once you have those things configured, you should be able to quickly launch instances and connect to them via SSH and be able to perform whatever tasks you need.


 bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ lxc launch ubuntu-trusty demo
Creating demo
Starting demo
bladernr@galactica:~$ lxc list
+---------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+------------+-----------+
| demo | RUNNING | 10.148.80.217 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
| subtle-marlin | RUNNING | 10.148.80.232 (eth0) | | PERSISTENT | 0 |
+---------------+---------+----------------------+------+------------+-----------+
bladernr@galactica:~$ ssh ubuntu@10.148.80.217
The authenticity of host '10.148.80.217 (10.148.80.217)' can't be established.
ECDSA key fingerprint is SHA256:gyn682YAhs+LyZc7i0s9akfBoZCOnSYErMeds4MbaKI.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.148.80.217' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-70-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Tue Apr 25 13:30:23 UTC 2017

System load: 0.77 Memory usage: 0% Processes: 15
 Usage of /home: unknown Swap usage: 47% Users logged in: 0

Graph this data and manage this system at:
 https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@demo:~$

Getting started with Juju 2.0 on AWS

This is a bit of a first timers guide to setting up Juju 2.0 to work with AWS.  To be honest, it’s been quite a while since I really messed with Juju and AWS (or anything outside of MAAS), and this is the first time I’ve really looked at Juju 2.0 anyway.  So this is me sharing the steps I had to take to get Juju 2 talking to my AWS Virtual Private Cloud (VPC) to spin up instances to prototype things.

First, let’s talk about the parts here.  You should already know what Amazon Web Services is, what a Virtual Private Cloud is, and have an account at Amazon to use them.  You should know a bit about what Juju is as well, but as this is a first timer guide, here’s all you really need to know to get started.  Juju is an amazing tool for modeling complex workload deployments.  It takes care of all the difficult bits of deploying any workload that has a “Charm”, in minutes.  All the brain-share needed to install and configure these workloads is encapsulated in the Charm.  To configure, you can pass YAML files with full deployment configuration options, use Juju commands to set individual configuration options, or set them via the Juju GUI.  Juju is very extensively documented at https://jujucharms.com, and I highly recommend you RTFM a bit to learn what Juju can do.
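
The rest of the write-up walks through the details, but the overall shape of it is only a handful of commands once the juju client is installed: register your AWS credentials, bootstrap a controller into your VPC’s region, then deploy charms.  Roughly, and from memory (the region, controller name and charm here are just examples):

$ juju add-credential aws
$ juju bootstrap aws/us-east-1 aws-controller
$ juju deploy ubuntu
$ juju status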

Setting up Intel AMT to act as a remote KVM in Linux

I’ve been running MAAS for some time now on an Intel NUC, specifically the DC53427HYE model from a few years ago.  There are very few NUC models that offer vPro support, which is why I chose this one to begin with.  It had been running well for quite a while with Ubuntu 14.04 and MAAS up to version 1.9 from the MAAS stable PPA.  And then tragedy struck…

It all started when I decided I was going to try to upgrade my server to Xenial, the coming Ubuntu 16.04 LTS Alpha.  I realize the perils of running Alpha level stuff, but this box is as much a development box as it is a production server at home, so I chose the risk, and paid the price.  My upgrade, using do-release-upgrade, left me with a half-configured, semi-functional machine and no easy way to recover.  Thus, I decided it was time to just reinstall from scratch.

Truth be told, this was coming anyway.  I’m working on a set of training slides involving my team’s custom MAAS setup for Ubuntu Server Certification, so I wanted to get some good screen shots from various stages of building the server and using it.  Thus, I chose to really put AMT to the test on my NUC and see if it could deliver.  Keep in mind, this is a pretty quick run-through, and if it fails, I’ll likely resort to something more manual, like the iPhone screen shot method.

First, I needed to reset AMT/MEBx as I had long forgotten the password I originally set.  This is fairly easy on this board (and most NUCs).  You unplug the NUC, crack the case, find the BIOS/MEBx reset pins near the RAM sockets, and jumper pins 1 and 2 for 5 seconds.  Put it all back together, and when the NUC boots, hit CTRL+P to get the MEBx login and you’re back to the original password of “admin”.  You’ll need to change the password on first entry, as before.  You may also need to check the BIOS settings as well to make sure nothing has changed.  Mine did register a CMOS battery failure alert that I cleared from the System Event Log.  Beyond that, at this point AMT is available remotely once you have set the network settings in the MEBx console and enabled Remote Management.

AMT Web Interface

AMT Web Interface

Note, if you have trouble finding the reset pins I mentioned earlier or are not sure which pins are 1 and 2 (there are three pins) refer to the technical manual for your NUC board.  For this particular board, you’re looking for this guide.

Now that MEBx is reconfigured, I wanted to set up remote KVM via the AMT engine.  This proved to be a little more obscure.  There are ways to easily enable this in Windows: you simply download a Management Console binary, install that along with RealVNC, do some config, and off you go.  Doing this on Linux proved to be a bit more difficult.  To do this, you’ll need to start with wsmancli.  Thankfully, the painful parts (deciphering WSMAN) were already done, and I learned the process by following this post at serverfault.com.

First, you need to install wsmancli which is available from the Ubuntu repos.

bladernr@sulaco:~$ sudo apt-get install wsmancli

Next, it helps to set up a few shell variables for important things:

bladernr@sulaco:~$ AMT_IP=192.168.0.20
bladernr@sulaco:~$ AMT_PASSWORD=1Nsecure!
bladernr@sulaco:~$ VNC_PASSWORD="1N\$ecure"

Note that the VNC password must be EXACTLY 8 characters, consisting of at least one of each of these: lower case, upper case, number, special character.  Additionally, I had to escape the ‘$’ since that, itself, is a shell variable.  We’ll see if that way leads to madness.  Remember that you need to escape the ‘$’ here, but when you reference it outside the shell, the escape character ‘\’ shouldn’t be necessary.  Also, it’s probably a better idea to not use ‘$’ as part of the password anyway.  Just sayin’.

Now, we need to configure things using wsmancli:

bladernr@sulaco:~$ # Enable KVM
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=${VNC_PASSWORD}

bladernr@sulaco:~$ # Enable KVM redirection to port 5900
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k Is5900PortEnabled=true

bladernr@sulaco:~$ # Disable opt-in policy
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k OptInPolicy=false

Now, on that third line, I ran into trouble.  I’m not sure why that one kept failing with this error:

Connection failed. response code = 0
Couldn't resolve host name

The others worked fine.  Since this one is just disabling the default requirement for opt-in on KVM connections, I went back into MEBx itself.  You can find the setting under User Consent.  Set Opt-In from KVM to None, so that no opt-in is required for remote connections.

bladernr@sulaco:~$ # Disable session timeout
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k SessionTimeout=0

bladernr@sulaco:~$ # Enable KVM
bladernr@sulaco:~$ wsman invoke -a RequestStateChange http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_KVMRedirectionSAP -h ${AMT_IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RequestedState=2

These last two first set the session timeout to 0, so that you are never disconnected, and then finally turn KVM on.
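
If you want to sanity check what AMT actually stored before moving on, a plain get against the same schema should dump the current KVM redirection settings (same shell variables as above):

bladernr@sulaco:~$ wsman get http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h ${AMT_IP} -P 16992 -u admin -p ${AMT_PASSWORD}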

I use Remmina for VNC, RDP and similar connections, so I configured a new connection in Remmina and gave it a go.  It failed.  Looking at the debug log, the connection was quickly terminated.  So first I changed the VNC password.

bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=Vnc-p@55

The new password was accepted (that was my third attempt at a memorable password that meets all of Intel’s ridiculous requirements).  All things considered, however, I don’t believe the issue was the password; see the end for the root cause of my issues connecting initially.  Once I had set a new password, I re-configured Remmina and tried again.  And again, failure:

[VNC]VNC server supports protocol version 4.0 (viewer 3.8)
[VNC]We have 2 security types to read
[VNC]0) Received security type 2
[VNC]Selecting security type 2 (0/2 in the list)
[VNC]1) Received security type 128
[VNC]Selected Security Scheme 2
[VNC]VNC authentication succeeded
[VNC]Desktop name "Intel(r) AMT KVM"
[VNC]Connected to VNC server, using protocol version 3.8
[VNC]VNC server default format:
[VNC] 16 bits per pixel.
[VNC] Least significant byte first in each pixel.
[VNC] TRUE colour: max red 31 green 63 blue 31, shift red 11 green 5 blue 0
[VNC]read (104: Connection reset by peer)

But it looks like the password is accepted, so we’re good there.  We’re just missing one piece.

It turns out it was one more bit of annoyance from Intel.  I finally was able to dig up the KVM settings info from Intel after some Googling.  Right there on that page is the following:

If the Intel AMT platform is currently displaying the MEBx, it is not possible to open a KVM session.

I still had MEBx open on the NUC so I could manually change settings if I needed to.  I still don’t know why I was unable to set the Opt-In policy remotely.  That should have worked, but in the end, the result was what I wanted.  I now have KVM control via AMT on my NUC.

Remote KVM via AMT


Adventures with Ubuntu Snappy: Prologue

A short while back at IoT World, we introduced a neat little bundle of kit to demonstrate Ubuntu Snappy.  This kit consisted of a Raspberry Pi 2 (the updated Pi that Ubuntu can run on), a PiGlow and really sharp Pibow case, both provided by Pimoroni.  Needless to say, it was a real hit.

An offer came up to get my hands on one of these great, specially made Ubuntu branded versions of the kit and I had to jump on that, because it just looks so darned cool.  Happy coincidence, I was already planning on obtaining a Pi 2 to replace my older RPi that I had toyed with off and on over the last couple years.

Today, my new Ubuntu Snappy Core Raspberry Pi Fun Pack arrived and I had to just stop working and start playing a bit, because, “Hey, new toys!”

So this first post will be an introduction and first steps.

Image of the finished kit

Finished!  What a sharp looking Raspberry Pi 2.  Case and PiGlow by Pimoroni (http://www.pimoroni.com).  OS is Ubuntu Snappy Core (http://developer.ubuntu.com/en/snappy).

First, the hardware.  I won’t go into detail on the Raspberry Pi 2 hardware itself.  By now, it’s well known, and it’s well discussed and documented elsewhere to the point where I have nothing Earth shattering to add to that discussion.  You can see some basic information at RaspberryPi.org or search the Goog.

I will point out the two add-ons though.  First, the Pibow case is a plastic case specially made for the Pi 2 and B+ systems.  It consists of two clear panels with several layers of custom designed frames in between.  They sell different configs that range from skeleton cases to full box.  For us, they created one in Ubuntu Orange and laser engraved the Ubuntu logo and Circle of Friends on the top cover.  It looks pretty sharp.

The PiGlow is another neat addition.  It plugs into the pin array on the Pi2 and provides a programmable LED light show.  There are ample instructions for making them flash and I’ll get to a brief demo in another post, perhaps.  The PiGlow is easily programmable with python, and they provide instructions for installing the necessary libraries to interface with the PiGlow via a Python script.  This means it should be quite easy to add a colorful, animated status marker to any python script by adding a few lines of code.  There is video of the PiGlow as well on the Pimoroni site, so enjoy!

Now let’s talk about the OS.  Because of the upgraded ARMv7 (Cortex-A7) chip, the Pi 2 can finally run Ubuntu.  Previous incarnations of the RPi used an older ARMv6 chip that Ubuntu was not ported to (though you could get various Debian versions to run on that chip).  In April 2015, we launched Ubuntu 15.04 (Vivid Vervet) and with it came Ubuntu Snappy.  Snappy is a new transactional version of the OS formerly known as Ubuntu Core, thus, Ubuntu Snappy Core.  The key difference is that with Snappy, we no longer use apt to manage package installations.  Instead, packages come in Snaps, which are transactional packages that we first developed for the Ubuntu Phone.  If you’ve never heard of transactional packages, that’s ok, because you’ve more than likely used them any time you install or update an application on your smartphone.

In a brief nutshell, transactional packages are an all-or-nothing installation that consists of bits of unmodifiable code as well as user-modifiable data.  The unmodifiable parts are the core elements of the application: the UI, the libraries, the binary executables, and so forth.  The user-modifiable parts are things like custom user configurations, downloaded or added data like documents, photos, icons, and other similar things.  When you upgrade an app, you essentially completely replace the unmodifiable bits with the newer version of those bits, unlike traditional package updates that may only update a single library file or binary executable.  The first of the two biggest benefits of this is that you never have to worry about update creep, where update after update after update could, possibly, cause things to break as they leave behind old, conflicting or unneeded files.  The other is that if an update breaks, you can very easily roll back to the last working version.  So, for example, if you have Candy Crush version 1.2.3 installed, install 1.2.5, and discover that 1.2.5 is actually broken and you can no longer play Candy Crush, you could simply roll right back to version 1.2.3 and continue on crushing those candies.  And neither the update nor the rollback has touched your user-modifiable data (your records, progress and such in this case).

So the first thing I wanted to do was get this sucker on my network.  My network, however, is very tightly controlled at home.  All IP addresses are handed out by DHCP, on an assigned basis, so if your device’s MAC address isn’t specifically listed in dhcp.conf on my server, you won’t get an address.  And for wireless devices, even connecting to the access point is MAC controlled, so if your phone, tablet or laptop is not on the ACL on the WiFi access point, you can’t even join the network to ask for an IP address.  Sure, it’s not 100% secure, but nothing ever is, and this is “Secure Enough” for my needs and location.

Now, for my first idea, I wanted to replace my home DNS/DHCP server with a small IoT type device like a Raspberry Pi.  I currently use a hacked together Shuttle PC for this purpose, with a giant external 400W power supply that was made necessary because the small Shuttle PSUs are notoriously flaky, and mine didn’t last three months before it started shutting itself off randomly trying to keep up with the power needs of the CPU on the machine.  So that’s a lot of energy being used for a system that essentially does nothing but hand out DHCP requests and answer DNS queries.  Transit also serves as a bastion point so I can access my LAN remotely so I will try to set this up to do likewise.

First, however, I need the Snappy Pi to have a static IP address.  By default, it comes configured for dynamic addressing on its onboard ethernet device, so I needed to modify /etc/network/interfaces.d/eth0:

Image of the original eth0 interfaces file

To change the file, a couple things had to happen.  First, I needed to be the root user.  Sudo may have worked, but it’s just as easy to become root to do a lot of things.  So sudo -i to become root.

Now, we just need to edit the file.  There is a problem though.  The root filesystem is mounted read-only by default.  This is great for security, but when you need to edit core files, it makes things a bit difficult.  So, you need to remount the root filesystem read-write so you can edit the core files that are not already designed to be user-writable (learn about the Snappy Core filesystem).  This is simply accomplished like so:

root@localhost:/etc/network/interfaces.d# mount -o remount,rw /

After this, you should be able to edit the eth0 file to use a static address.

Image of the edited eth0 interfaces file

I edited the file to add the address, netmask, broadcast and gateway lines and saved the eth0 file (a sketch of what it ends up looking like appears just after the list below).  Then, I shut the interface down and brought it back up and I was in business.  I also took a moment to verify that it all worked by rebooting the RPi.  This highlights a couple things:

  1. On reboot, the root filesystem is again mounted read-only, so if you need to modify more core files, you will need to remount the filesystem once more.
  2. allow-hotplug is an interesting thing.  Its intention is to only bring the device up on hotplug events.  However, because the ethernet device is always plugged in, on reboot the kernel detects a hotplug event and brings the device up at boot time regardless.  That means, if you don’t have it plugged in to a switch and turn it on, you could wait a bit while it tries to obtain a DHCP address.  I noted that after rebooting the RPi with the ethernet cable removed, the eth0 device was still shown as up with the prescribed IP address.
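
For reference, here’s roughly what my edited file looked like; the addresses are examples, so substitute values that fit your own network:

ubuntu@localhost:~$ cat /etc/network/interfaces.d/eth0
allow-hotplug eth0
iface eth0 inet static
    address 10.0.0.53
    netmask 255.255.255.0
    broadcast 10.0.0.255
    gateway 10.0.0.1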

One thing I forgot to mention was that because I’m not using DHCP, I also need to list some nameservers.  Eventually, this file will list itself as it will be running Bind9, but for now, I’m going to give it Google’s public DNS addresses by creating a file called “tail” in /etc/resolvconf/resolv.conf.d that looks simply like this:

ubuntu@localhost:~$ cat /etc/resolvconf/resolv.conf.d/tail 
nameserver 8.8.8.8
nameserver 8.8.4.4
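
If I remember right, resolvconf won’t pick the tail file up until it regenerates /etc/resolv.conf, so either reboot or prod it directly:

ubuntu@localhost:~$ sudo resolvconf -u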

Now that I’m rebooted and have an IP address, let’s take a brief whirlwind tour of Snappy before I end this prologue to my Snappy Adventure.

As I mentioned before, Snappy doesn’t use apt to manage packages; it uses a new tool called, not surprisingly, “snappy”.  So first, let’s get just a bit of information about my system:

ubuntu@localhost:~$ snappy info
release: ubuntu-core/devel
frameworks: webdm
apps:
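
The day-to-day package operations follow the same pattern.  From memory, so treat these as rough examples rather than gospel, installing, listing, updating and rolling back look something like this:

ubuntu@localhost:~$ sudo snappy install hello-world
ubuntu@localhost:~$ snappy list
ubuntu@localhost:~$ sudo snappy update
ubuntu@localhost:~$ sudo snappy rollback ubuntu-core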


So there you have it… there is much more available at https://developer.ubuntu.com/en/snappy/tutorials/ about snappy and its use.

One of these things is not like the others

I thought something was a bit odd ’round here…

bladernr@klaatu:/var/log$ iperf -c 10.0.0.1
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.50 port 58747 connected with 10.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 935 Mbits/sec

bladernr@klaatu:~$ iperf -c 192.168.0.20
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.100 port 59579 connected with 192.168.0.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 380 MBytes 318 Mbits/sec

The first result is from a D-Link DGS-108 8-port Gigabit switch that I’ve been using only for the test servers connected to my NUC.  The second result is from a TRENDnet TEG-S80g 8-port Gigabit switch that handles traffic for the rest of my home.

The TRENDnet switch connects the NUC, the DNS/DHCP server, the router, my main workstation and an uplink to a switch in the living room that hosts the TV, Wii, and so forth.  Obviously, it’s time to buy a new switch that is NOT made by TRENDnet.  The ~320Mbps speed is what I’ve been getting for a long while now from that cheap-o switch.  I should have used the D-Link for the home LAN when I re-cabled.

Old school glass on a new school camera

I’ve loved photography since I was very, very young.  Going way back to my first camera, an old Kodachrome 110 to a 126 and an old 60’s model Yashica TL-Electro X 35mm.  Later on I found myself shooting Polaroid because instamatics were just fun and satisfied a need for instant gratification that wasn’t repeated until today’s digital age where photos can be uploaded or shared at the touch of a button.  I’ve dabbled with medium format, pinhole and other 35mm bodies like the Minolta X-700 and Canon AE-1.

Of all those, I only held onto the Yashica and the Minolta.  Eventually I found myself exploring with digital going back to an early Canon Point-and-Shoot that shot 640×480 maximum and required an ungainly cable and horrific software to even get the picture off.  I’ve bought and tossed many small digitals over the years, cameras I’d bought for one reason or another.  Oddly, I’ve only owned one digital SLR, a Canon 350D that I’ve loved carrying around the world, taking pictures of my adventures in places like Taipei, Tokyo, London, Oxford, Prague, Switzerland and around the USA.

But I must admit I started to grow a bit tired of it all.  I found myself more often than not leaving my Canon at home and just shooting things with an iPhone.  The iPhone is not a good camera.  Well, it’s not bad, but it’s not great.  What it IS, however, is convenient.  It’s small, light weight, and has a reasonably decent sensor that, even if it’s not the best out there, is workable.  What it lacks, though, is physical zoom, low-light ability and anything remotely satisfying to a photographer who likes landscapes and architectural shots and panoramics and such.  It IS good for street photography, as evidenced by the literally millions of photos of everyday life uploaded to the world on a daily basis.

But because of its limitations, I found myself again looking for something better.  Something more professional.  More capable.  But not big and bulky.  I wanted a compact camera with the benefits of an SLR without the weight, yet still capable of using those big, heavy lenses if I wanted.  I really, truly, would love to have a full frame SLR one day, but the size of the things is off-putting, as I don’t make a living taking pictures, and the cost is astronomical in many cases.

So that set me down the path of looking at Micro 4/3’s and APS-C mirrorless cameras.  They’re small, light but use interchangeable lenses allowing room to grow as well as the ability to shoot in any situation.  They are basically baby SLRs, only no flappy mirror, no giant body, no big battery grip (unless you want one).  They still accept hotshoe attachments (mostly flashes, video light panels and microphones) and can be used for everything from portraiture and street photography to landscapes and sports or action.

After a lot of research, reading and agonizing over details, I settled on the Sony Alpha 6000, right now almost the top of the line mirrorless from Sony.  The next level up is the A7 II, a full frame (!!) mirrorless smaller than any full frame dSLR and just as capable.  The a6000 is an amazing little camera.  It’s capable of HD video at 60 fps with amazing tracking AF and lenses with power zoom for smooth transitions from close in to wide.  But I don’t shoot a lot of video at all, and on the rare occasion I do, it’s with a GoPro.  My interest is in the fact that it’s as capable as any prosumer SLR but half the weight and bulk.

The 16-50mm kit lens is underrated in my opinion.  Of course it’s not perfect, it IS, after all, an inexpensive kit lens, but just search around a site like Flickr for photos taken with the lens and you’d be amazed at what it’s actually capable of.  If that’s still not good enough, there are some very well reviewed Zeiss lenses out there that will set you back more than $1K and some in-betweeners as well like the Sony 18-105mm f3.5-5.6.  There are also some very nice primes that get as fast as f2.8 or perhaps f2.

It has a good 24MP APS-C sensor, 179 AF points, great color and contrast, and a basket full of other features.  One thing the Sony E-Mount cameras don’t have yet is a very large lens selection.  That’s slowly changing, but there still aren’t as many as you’d find from Canon or Fuji or other companies out there.  That’s where the adapter comes in.

I could have done this with my dSLR, and to be honest it never really clicked in my mind how much fun it could be, but for $14.00 I picked up a rather well built adapter that allows me to attach my old Minolta MD mount lenses to the a6000’s E-Mount frame.  And for the first time, I have actually been giddy, even if I was just taking pictures of junk around my house.  The old 50mm f/1.7 is simply astounding on this camera.  My somewhat cheap Quantaray 35-200mm Macro was surprisingly good, perhaps better than I remember it being.

These are things I will still need to practice a lot with, to get back into the habit of using a purely manual focus lens without any of the benefits of modern lenses, but it’s an adventure I’m looking forward to.  It’s small enough, even if I carry the heavier older Minolta glass that I can carry it around the globe and not feel like I’m lugging a suitcase just for camera gear and electronics.  It’s a far better performer than my 350D and so far I’ve quite enjoyed shooting with it.  I still have my old Canon 350D, but to be honest, the more I shoot with this little Sony, the less and less I feel the desire for a 5D or 1D or larger body.