Getting started with Juju 2.0 on AWS

This is a bit of a first-timer's guide to setting up Juju 2.0 to work with AWS.  To be honest, it's been quite a while since I really messed with Juju and AWS (or anything outside of MAAS), and this is the first time I've really looked at Juju 2.0 at all.  So this is me sharing the steps I had to take to get Juju 2.0 talking to my AWS Virtual Private Cloud (VPC) so I could spin up instances to prototype things.

First, let's talk about the parts here.  You should already know what Amazon Web Services is, what a Virtual Private Cloud is, and have an account at Amazon to use them.  You should know a bit about what Juju is as well, but as this is a first-timer guide, here's all you really need to know to get started.  Juju is an amazing tool for modeling complex workload deployments.  It takes all the difficult bits out of deploying any workload that has a "Charm", getting you up and running in minutes.  All the brain-share needed to install and configure these workloads is encapsulated in the Charm.  To configure, you can pass YAML files with full deployment configuration options, use Juju commands to set individual configuration options, or set them via the Juju GUI.  Juju is very extensively documented at https://jujucharms.com, and I highly recommend you RTFM a bit to learn what Juju can do.
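To give you a feel for where this is headed, the basic Juju 2.0 flow ends up looking roughly like the sketch below.  Treat it as a sketch only: the charm name and YAML file are placeholders I made up, and the exact bootstrap syntax varied a bit between the Juju 2.0 betas.

juju add-credential aws                      # interactively stores your AWS access key and secret key
juju bootstrap aws                           # spins up a controller instance in your default region
juju deploy mediawiki --config local.yaml    # example deploy, with options supplied from a YAML file
juju status                                  # watch the model converge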

Setting up Intel AMT to act as a remote KVM in Linux

I've been running MAAS for some time now on an Intel NUC, specifically the DC53427HYE model from a few years ago.  There are very few NUC models that offer vPro support, which is why I chose this one to begin with.  It had been running well for quite a while with Ubuntu 14.04 and MAAS up to version 1.9 from the MAAS stable PPA.  And then tragedy struck…

It all started when I decided I was going to try to upgrade my server to Xenial, the upcoming Ubuntu 16.04 LTS, currently in Alpha.  I realize the perils of running Alpha-level stuff, but this box is as much a development box as it is a production server at home, so I took the risk, and paid the price.  My upgrade, using do-release-upgrade, left me with a half-configured, semi-functional machine and no easy way to recover.  Thus, I decided it was time to just reinstall from scratch.

Truth be told, this was coming anyway.  I'm working on a set of training slides involving my team's custom MAAS setup for Ubuntu Server Certification, so I wanted to get some good screenshots from various stages of building the server and using it.  Thus, I chose to really put AMT to the test on my NUC and see if it could deliver.  Keep in mind, this is a pretty quick run-through, and if it fails, I'll likely resort to something more manual, like the iPhone screenshot method.

First, I needed to reset AMT/MEBx, as I had long forgotten the password I originally set.  This is fairly easy on this board (and most NUCs).  You unplug the NUC, crack the case, find the BIOS/MEBx reset pins near the RAM sockets, and jumper pins 1 and 2 for 5 seconds.  Put it all back together, and when the NUC boots, hit CTRL+P to get the MEBx login and you're back to the original password of "admin".  You'll need to change the password on first entry, as before.  You may also need to check the BIOS settings to make sure nothing has changed.  Mine did register a CMOS battery failure alert that I cleared from the System Event Log.  Beyond that, AMT is available remotely once you have set the network settings in the MEBx console and enabled Remote Management.

AMT Web Interface

Note, if you have trouble finding the reset pins I mentioned earlier, or are not sure which pins are 1 and 2 (there are three pins), refer to the technical manual for your NUC board.  For this particular board, you're looking for this guide.

Now that MEBx is reconfigured, I wanted to set up remote KVM via the AMT engine.  This proved to be a little more obscure.  There are ways to easily enable this in Windows: you simply download a Management Console binary, install that along with RealVNC, do some config, and off you go.  Doing this on Linux proved to be a bit more difficult.  To do it, you'll need to start with wsmancli.  Thankfully, the painful parts (deciphering WSMAN) were already done, and I learned the process by following this post at serverfault.com.

First, you need to install wsmancli which is available from the Ubuntu repos.

bladernr@sulaco:~$ sudo apt-get install wsmancli

Next, it helps to set up a few shell variables for important things:

bladernr@sulaco:~$ AMT_IP=192.168.0.20
bladernr@sulaco:~$ AMT_PASSWORD=1Nsecure!
bladernr@sulaco:~$ VNC_PASSWORD="1N\$ecure"

Note that the VNC password must be EXACTLY 8 characters, consisting of at least one of each of these: lower case, upper case, number, special character.  Additionally, I had to escape the '$' since that, itself, introduces a shell variable.  We'll see if that way leads to madness.  Remember that you need to escape the '$' here, but when you reference the password outside the shell, the escape character '\' shouldn't be necessary.  Also, it's probably a better idea to not use '$' as part of the password anyway.  Just sayin'.
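(Aside: one way to dodge the escaping problem entirely, if you insist on a '$' in the password, is to use single quotes, which the shell never expands.  A quick sketch:)

bladernr@sulaco:~$ VNC_PASSWORD='1N$ecure'
bladernr@sulaco:~$ echo "$VNC_PASSWORD"    # prints 1N$ecure; the value is stored literally, no expansion happens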

Now, we need to configure things using wsmancli:

bladernr@sulaco:~$ # Enable KVM
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=${VNC_PASSWORD}

bladernr@sulaco:~$ # Enable KVM redirection to port 5900
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k Is5900PortEnabled=true

bladernr@sulaco:~$ # Disable opt-in policy
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k OptInPolicy=false

Now, on that third command, I ran into trouble.  I'm not sure why, but it kept failing with this error while the others worked fine:

Connection failed. response code = 0
Couldn't resolve host name

Since this command just disables the default requirement for opt-in on KVM connections, I went back into MEBx itself and changed it there.  You can find the setting under User Consent: set Opt-In from KVM to None, so that no opt-in is required for remote connections.

bladernr@sulaco:~$ # Disable session timeout
bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k SessionTimeout=0

bladernr@sulaco:~$ # Enable KVM
bladernr@sulaco:~$ wsman invoke -a RequestStateChange http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_KVMRedirectionSAP -h ${AMT_IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RequestedState=2

These last two first set the session timeout to 0, so that you are never disconnected, and then finally turn KVM on.

I use Remmina for VNC, RDP and similar connections, so I configured a new connection in Remmina and gave it a go.  It failed.  Looking at the debug log, the connection was quickly terminated.  So first I changed the VNC password.

bladernr@sulaco:~$ wsman put http://intel.com/wbem/wscim/1/ips-schema/1/IPS_KVMRedirectionSettingData -h $AMT_IP -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=Vnc-p@55

The new password was accepted (that was my third attempt at a memorable password that meets all of Intel's ridiculous requirements).  All things considered, however, I don't believe the issue was the password; see the end for the root cause of my initial connection troubles.  Once I had set a new password, I re-configured Remmina and tried again.  And again, failure:

[VNC]VNC server supports protocol version 4.0 (viewer 3.8)
[VNC]We have 2 security types to read
[VNC]0) Received security type 2
[VNC]Selecting security type 2 (0/2 in the list)
[VNC]1) Received security type 128
[VNC]Selected Security Scheme 2
[VNC]VNC authentication succeeded
[VNC]Desktop name "Intel(r) AMT KVM"
[VNC]Connected to VNC server, using protocol version 3.8
[VNC]VNC server default format:
[VNC] 16 bits per pixel.
[VNC] Least significant byte first in each pixel.
[VNC] TRUE colour: max red 31 green 63 blue 31, shift red 11 green 5 blue 0
[VNC]read (104: Connection reset by peer)

But it looks like the password is accepted, so we’re good there.  We’re just missing one piece.

It turns out it was one more bit of annoyance from Intel.  I finally was able to dig up the KVM settings info from Intel after some Googling.  Right there on that page is the following:

If the Intel AMT platform is currently displaying the MEBx, it is not possible to open a KVM session.

I still had MEBx open on the NUC so I could manually change settings if I needed to.  I still don’t know why I was unable to set the Opt-In policy remotely.  That should have worked, but in the end, the result was what I wanted.  I now have KVM control via AMT on my NUC.
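If you'd rather not use Remmina, any plain VNC client should be able to connect at this point as well.  Assuming a stock vncviewer is installed and the shell variables from earlier are still set, something like this should work:

bladernr@sulaco:~$ vncviewer ${AMT_IP}:0    # display :0 maps to port 5900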

Remote KVM via AMT

 

Adventures with Ubuntu Snappy: Prologue

A short while back at IoT World, we introduced a neat little bundle of kit to demonstrate Ubuntu Snappy.  This kit consisted of a Raspberry Pi 2 (the updated Pi that Ubuntu can run on), a PiGlow and a really sharp Pibow case, both provided by Pimoroni.  Needless to say, it was a real hit.

An offer came up to get my hands on one of these great, specially made Ubuntu-branded versions of the kit, and I had to jump on it, because it just looks so darned cool.  Happy coincidence: I was already planning on obtaining a Pi 2 to replace my older RPi that I had toyed with off and on over the last couple of years.

Today, my new Ubuntu Snappy Core Raspberry Pi Fun Pack arrived and I had to just stop working and start playing a bit, because, “Hey, new toys!”

So this first post will be an introduction and first steps.

Finished!  What a sharp-looking Raspberry Pi 2.  Case and PiGlow by Pimoroni (http://www.pimoroni.com); OS is Ubuntu Snappy Core (http://developer.ubuntu.com/en/snappy).

First, the hardware.  I won’t go into detail on the Raspberry Pi 2 hardware itself.  By now, it’s well known, and it’s well discussed and documented elsewhere to the point where I have nothing Earth shattering to add to that discussion.  You can see some basic information at RaspberryPi.org or search the Goog.

I will point out the two add-ons, though.  First, the Pibow case is a plastic case specially made for the Pi 2 and B+ systems.  It consists of two clear panels with several layers of custom-designed frames in between.  They sell different configs that range from skeleton cases to a full box.  For us, they created one in Ubuntu Orange and laser-engraved the Ubuntu logo and Circle of Friends on the top cover.  It looks pretty sharp.

The PiGlow is another neat addition.  It plugs into the pin array on the Pi2 and provides a programmable LED light show.  There are ample instructions for making them flash and I’ll get to a brief demo in another post, perhaps.  The PiGlow is easily programmable with python, and they provide instructions for installing the necessary libraries to interface with the PiGlow via a Python script.  This means it should be quite easy to add a colorful, animated status marker to any python script by adding a few lines of code.  There is video of the PiGlow as well on the Pimoroni site, so enjoy!

Now let's talk about the OS.  Because of the upgraded ARMv7 chip (a Cortex-A7), the Pi 2 can finally run Ubuntu.  Previous incarnations of the RPi used an older ARMv6 chip that Ubuntu was never ported to (though you could get various Debian builds to run on it).  In April 2015, we launched Ubuntu 15.04 (Vivid Vervet) and with it came Ubuntu Snappy.  Snappy is a new transactional version of the OS formerly known as Ubuntu Core, thus, Ubuntu Snappy Core.  The key difference is that with Snappy, we no longer use apt to manage package installations.  Instead, packages come in Snaps, which are transactional packages that we first developed for the Ubuntu Phone.  If you've never heard of transactional packages, that's OK, because you've more than likely used them any time you install or update an application on your smartphone.

In a brief nutshell, transactional packages are an all-or-nothing installation that consists of unmodifiable code as well as user-modifiable data.  The unmodifiable parts are the core elements of the application: the UI, the libraries, the binary executables, and so forth.  The user-modifiable parts are things like custom user configurations and downloaded or added data like documents, photos, icons, and other similar things.  When you upgrade an app, you completely replace the unmodifiable bits with the newer version of those bits, unlike traditional package updates that may only update a single library file or binary executable.  The first of the two biggest benefits to this is that you never have to worry about update creep, where update after update could, possibly, cause things to break as they leave behind old, conflicting or unneeded files.  The other is that if an update breaks, you can very easily roll back to the last working version.  So, for example, if you have Candy Crush version 1.2.3 installed, install 1.2.5, and discover that 1.2.5 is actually broken and you can no longer play Candy Crush, you could simply roll right back to version 1.2.3 and continue on crushing those candies.  And neither the update nor the rollback has touched your user-modifiable data (your records, progress and such in this case).
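On Snappy, that update-and-rollback cycle looks something like the sketch below.  I'm going from memory of the 15.04-era snappy tool here, and using ubuntu-core itself as the example package, so treat the exact subcommands as approximate:

ubuntu@localhost:~$ sudo snappy update                  # transactionally apply any pending updates
ubuntu@localhost:~$ sudo snappy rollback ubuntu-core    # didn't like the result?  step back to the previous version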

So the first thing I wanted to do was get this sucker on my network.  My network, however, is very tightly controlled at home.  All IP addresses are handed out by DHCP, on an assigned basis, so if your device's MAC address isn't specifically listed in dhcpd.conf on my server, you won't get an address.  And for wireless devices, even connecting to the access point is MAC controlled, so if your phone, tablet or laptop is not on the ACL on the WiFi access point, you can't even join the network to ask for an IP address.  Sure, it's not 100% secure, but nothing ever is, and this is "Secure Enough" for my needs and location.
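(For the curious, a MAC-pinned lease in ISC dhcpd looks roughly like this; the hostname, MAC and address below are made up purely for illustration:)

host snappy-pi {
  hardware ethernet b8:27:eb:12:34:56;   # the device's MAC address
  fixed-address 192.168.0.25;            # the address it will always receive
}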

Now, for my first idea, I wanted to replace my home DNS/DHCP server with a small IoT-type device like a Raspberry Pi.  I currently use a hacked-together Shuttle PC for this purpose, with a giant external 400W power supply that was made necessary because the small Shuttle PSUs are notoriously flaky; mine didn't last three months before it started shutting itself off randomly trying to keep up with the power needs of the CPU.  So that's a lot of energy being used for a system that essentially does nothing but answer DHCP requests and DNS queries.  That Shuttle (Transit) also serves as a bastion point so I can access my LAN remotely, so I will try to set the Pi up to do likewise.

First, however, I need the Snappy Pi to have a static IP address.  By default, it comes configured for dynamic addressing on its onboard ethernet device, so I needed to modify /etc/network/interfaces.d/eth0.

 

To change the file, a couple things had to happen.  First, I needed to be the root user.  Sudo may have worked, but it’s just as easy to become root to do a lot of things.  So sudo -i to become root.

Now, we just need to edit the file.  There is a problem though.  The root filesystem is mounted read-only by default.  This is great for security, but when you need to edit core files, it makes things a bit difficult.  So, you need to remount the root filesystem read-write so you can edit the core files that are not already designed to be user-writable (learn about the Snappy Core filesystem).  This is simply accomplished like so:

root@localhost:/etc/network/interfaces.d# mount -o remount,rw /

After this, you should be able to edit the eth0 file to use a static address.  I edited the file to add the address, netmask, broadcast and gateway lines and saved the eth0 file (a minimal example of the result appears after the list below).  Then I shut the interface down and brought it back up, and I was in business.  I also took a moment to verify that it all worked by rebooting the RPi.  This highlights a couple of things:

  1. On reboot, the root filesystem is again mounted read-only, so if you need to modify more core files, you will need to remount the filesystem once more.
  2. allow-hotplug is an interesting thing.  Its intention is to only bring the device up on hotplug events.  However, because the ethernet device is always plugged in, on reboot the kernel detects a hotplug event and brings the device up at boot time regardless.  That means, if you don't have it plugged in to a switch and turn it on, you could wait a bit while it tries to obtain a DHCP address.  I noted that after rebooting the RPi with the ethernet cable removed, the eth0 device was still shown as up with the prescribed IP address.
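For reference, the resulting eth0 file looked something along these lines (the addresses here are examples rather than my actual network):

ubuntu@localhost:~$ cat /etc/network/interfaces.d/eth0
allow-hotplug eth0
iface eth0 inet static
    address 192.168.0.25
    netmask 255.255.255.0
    broadcast 192.168.0.255
    gateway 192.168.0.1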

One thing I forgot to mention: because I'm not using DHCP, I also need to list some nameservers.  Eventually this box will list itself, as it will be running Bind9, but for now I'm going to give it Google's public DNS addresses by creating a file called "tail" in /etc/resolvconf/resolv.conf.d that looks simply like this:

ubuntu@localhost:~$ cat /etc/resolvconf/resolv.conf.d/tail 
nameserver 8.8.8.8
nameserver 8.8.4.4

Now that I'm rebooted and have an IP address, let's take a brief whirlwind tour of Snappy before I end this prologue to my Snappy Adventure.

As I mentioned before, Snappy doesn't use apt to manage packages; it uses a new tool called, not surprisingly, "snappy".  So first, let's get just a bit of information about my system:

ubuntu@localhost:~$ snappy info
release: ubuntu-core/devel
frameworks: webdm
apps:
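The same tool handles installing new Snaps.  From memory, adding something like the docker snap was a one-liner along these lines (treat the exact subcommands as approximate; they changed a fair bit in the early Snappy days):

ubuntu@localhost:~$ sudo snappy install docker
ubuntu@localhost:~$ snappy list    # shows ubuntu-core, webdm and anything you've added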


So there you have it… there is much more available at https://developer.ubuntu.com/en/snappy/tutorials/ about snappy and its use.

One of these things is not like the others

I thought something was a bit odd ’round here…

bladernr@klaatu:/var/log$ iperf -c 10.0.0.1
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.50 port 58747 connected with 10.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 935 Mbits/sec

bladernr@klaatu:~$ iperf -c 192.168.0.20
------------------------------------------------------------
Client connecting to 192.168.0.20, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.100 port 59579 connected with 192.168.0.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 380 MBytes 318 Mbits/sec

The first result is from a D-Link DGS-108 8-port Gigabit switch that I've been using only for the test servers connected to my NUC.  The second result is from a TrendNet TEG-S80G 8-port Gigabit switch that handles traffic for the rest of my home.

The TrendNet switch connects the NUC, the DNS/DHCP server, the router, my main workstation and an uplink to a switch in the living room that hosts the TV, Wii, and so forth.  Obviously, it's time to buy a new switch that is NOT made by TrendNet.  The ~320Mbps speed is what I've been getting for a long while now from that cheap-o switch.  I should have used the D-Link for the home LAN when I re-cabled.

Old school glass on a new school camera

I've loved photography since I was very, very young, going way back to my first camera, an old Kodachrome 110, then a 126, and an old '60s-model Yashica TL-Electro X 35mm.  Later on I found myself shooting Polaroid, because instant cameras were just fun and satisfied a need for instant gratification that wasn't repeated until today's digital age, where photos can be uploaded or shared at the touch of a button.  I've dabbled with medium format, pinhole and other 35mm bodies like the Minolta X-700 and Canon AE-1.

Of all those, I only held onto the Yashica and the Minolta.  Eventually I found myself exploring digital, going back to an early Canon point-and-shoot that shot 640×480 maximum and required an ungainly cable and horrific software to even get the pictures off.  I've bought and tossed many small digitals over the years, cameras I'd bought for one reason or another.  Oddly, I've only owned one digital SLR, a Canon 350D that I've loved carrying around the world, taking pictures of my adventures in places like Taipei, Tokyo, London, Oxford, Prague, Switzerland and around the USA.

But I must admit I started to grow a bit tired of it all.  I found myself more often than not leaving my Canon at home and just shooting things with an iPhone.  The iPhone is not a good camera.  Well, it's not bad, but it's not great.  What it IS, however, is convenient.  It's small, lightweight, and has a reasonably decent sensor that, even if it's not the best out there, is workable.  What it lacks, though, is physical zoom, low-light ability and anything remotely satisfying to a photographer who likes landscapes and architectural shots and panoramas and such.  It IS good for street photography, as evidenced by the literally millions of photos of everyday life uploaded to the web on a daily basis.

But because of its limitations, I found myself again looking for something better.  Something more professional.  More capable.  But not big and bulky.  I wanted a compact camera with the benefits of an SLR without the weight, yet still capable of using those big, heavy lenses if I wanted.  I really, truly, would love to have a full-frame SLR one day, but the size of the things is off-putting, as I don't make a living taking pictures, and the cost is astronomical in many cases.

So that set me down the path of looking at Micro 4/3 and APS-C mirrorless cameras.  They're small and light but use interchangeable lenses, allowing room to grow as well as the ability to shoot in any situation.  They are basically baby SLRs, only with no flappy mirror, no giant body, no big battery grip (unless you want one).  They still accept hotshoe attachments (mostly flashes, video light panels and microphones) and can be used for everything from portraiture and street photography to landscapes and sports or action.

After a lot of research, reading and agonizing over details, I settled on the Sony Alpha 6000, right now nearly the top of Sony's mirrorless line.  The next level up is the A7 II, a full-frame (!!) mirrorless smaller than any full-frame dSLR and just as capable.  The a6000 is an amazing little camera.  It's capable of HD video at 60 fps with amazing tracking AF and lenses with power zoom for smooth transitions from close in to wide.  But I don't shoot a lot of video at all, and on the rare occasion I do, it's with a GoPro.  My interest is in the fact that it's as capable as any prosumer SLR at half the weight and bulk.

The 16-50mm kit lens is underrated, in my opinion.  Of course it's not perfect, it IS, after all, an inexpensive kit lens, but just search around a site like Flickr for photos taken with this lens and you'd be amazed at what it's actually capable of.  If that's still not good enough, there are some very well reviewed Zeiss lenses out there that will set you back more than $1K, and some in-betweeners as well, like the Sony 18-105mm f/3.5-5.6.  There are also some very nice primes that get as fast as f/2.8 or perhaps f/2.

It has a good 24MP APS-C sensor, 179 AF points, great color and contrast, and a basket full of other features.  One thing the Sony E-mount cameras don't have, yet, is a very large lens selection.  That's slowly changing, but there still aren't as many as you'd find from Canon or Fuji or other companies out there.  That's where the adapter comes in.

I could have done this with my dSLR, and to be honest it never really clicked in my mind how much fun it could be, but for $14.00 I picked up a rather well-built adapter that allows me to attach my old Minolta MD-mount lenses to the a6000's E-mount.  And for the first time in a while, I have actually been giddy, even if I was just taking pictures of junk around my house.  The old 50mm f/1.7 is simply astounding on this camera.  My somewhat cheap Quantaray 35-200mm Macro was surprisingly good, perhaps better than I remember it being.

These are things I will still need to practice a lot with, to get back into the habit of using a purely manual-focus lens without any of the benefits of modern lenses, but it's an adventure I'm looking forward to.  The camera is small enough, even if I carry the heavier old Minolta glass, that I can carry it around the globe and not feel like I'm lugging a suitcase just for camera gear and electronics.  It's a far better performer than my 350D, and so far I've quite enjoyed shooting with it.  I still have the old Canon 350D, but to be honest, the more I shoot with this little Sony, the less and less I feel the desire for a 5D or 1D or larger body.

[UPDATE] MPow Bluetooth 4.0 Stereo Headset – Far better than it has any right to be.

I recently picked up a new set of Bluetooth earbuds for listening to music while at the gym or riding my bike or jogging. I needed the new set to replace a set of Plantronics BackBeat Go II earbuds that had died (The battery was no longer taking a charge). Because I had a warranty claim in with Plantronics on my Go II’s already, I bought the cheapest thing I could find on Amazon with fair reviews. What I ended up with was the MPow Bluetooth 4.0 Stereo Headset:

http://www.amazon.com/gp/product/B00NZTHGN2/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

MPow Bluetooth Earbuds

I will admit, I bought these with only two expectations in mind:

1: They will pair with my iPhone so I can listen to music while working out
2: I will be able to hear music, even if it is tinny, or too low, or scratchy, or poppy, or generally low-fidelity.

I will preface this with the fact that I have not used, nor do I plan to use, the phone features of this headset for making calls.  I bought it specifically to replace a broken pair of Plantronics BackBeat Go II Bluetooth buds for working out in the gym, jogging, or whatever.  So this review only focuses on that use case.

As the title said, I was quite surprised by these. After using the medium sized hook inserts and putting the smaller ear tips on, they fit far more comfortably than I expected them to. In fact, I could conceivably wear these for hours without issue, they are as comfortable as my Bose QC-20i and Shure SE215s. They also do an admirable job of blocking noise (non-actively). I don’t hear a lot of rubbing noises as I move, they fit and sit firmly in my ears.

The sound reproduction was amazing. At least, amazing in the sense of “HOLY CRAP I DIDN’T EXPECT THAT”. The sound reproduction actually is quite good. They aren’t as good as my Shures or Bose, or other high end headsets, but you really don’t expect that from something at this price point. What you do get, however, is good, clear audio reproduction. My only real complaint there is that the sound is a bit bassy but not distorted. Just bassy with a bit of loss on the upper range. Mid-range is satisfying.

The battery life seems to be good.  I really don't know yet how long the battery will last; I've only used them on two gym trips so far, about an hour and 15 minutes each trip, on the same charge.  This is already worlds above the Plantronics set that I paid twice as much for, which only had battery life for about 2 hours max.

So there are the big pros for this set of Bluetooth earbuds.  They are comfortable, have reasonable (actually surprisingly good) audio quality, and so far exemplary battery life.

The only con I've come across so far is that the controls don't seem to control everything on my iPhone.  I can use the + and – buttons with a click-and-hold to skip songs, but when I use them to adjust volume, they adjust the headset volume, NOT the overall phone volume.  This isn't as big a deal as it seems, though I imagine that lack of control may be an issue when using them to make phone calls.

There you have it. For the price, these are far better than they have any right to be. They’re already better than the more expensive set from the very respected Plantronics line at half the cost. These have definitely turned out to be a great purchase.

UPDATE 4-3-2015:

I've had these things for a couple of weeks now, and in the interim, Plantronics has shipped my replacement set of BackBeat Go II headphones to me.  After careful consideration of the merits of each, and after thinking long and hard, I have decided that…

… the Plantronics replacements are going on Craigslist.

I am really surprised by these things.  They sound good.  They're comfortable for my use, an hour or two at a time at the gym.  And the battery seems to go on forever.  I'm at two weeks of use now (at about 1 hour 30 minutes a pop) and they are STILL on the original charge.  Even when new, the Plantronics needed to be recharged after every two workouts.  So far, I'm 7 workouts in and still haven't had these quit on me.  I'm not going to recharge them until they do finally die on me, either.  I want to see just how long they'll go.

So all in all, this is still one of the most surprising purchases I’ve made in a while.  I really didn’t have high expectations when I bought these headphones, but they are doing everything I expected of the Plantronics set in the first place, and they’re doing those things better.

I can deploy a full Openstack cloud in under 75 minutes… We’re that f#@$ing cool.

I've been in Taiwan for the last week and a half working with a customer on a proof-of-concept solution.  All that time was spent setting up, tearing down and rebuilding a fully HA OpenStack Icehouse deployment using Juju, MAAS and Ubuntu 14.04.  But the highlight of the week is one of the coolest things we have done in a while.

So you have the background: what I'm about to describe uses the three biggest tools that Canonical produces for the server space.  MAAS (Metal-As-A-Service) is a bare-metal management system that handles provisioning of Ubuntu (and other OSes) in a quick and repeatable way on bare metal, be that traditional servers or the newer scale-out density server systems like the SeaMicro SM15000, HP Moonshot, and similar.

The next piece is Juju. Juju does for software what MAAS does for hardware. It’s a service orchestration tool that allows you to rapidly deploy services like Hadoop, LAMP stacks, OpenStack, Nagios, OpenVSwitch and pretty much anything you can imagine. And even better, with Juju you can deploy these services just about anywhere you’d want to, from AWS to Azure to Digital Ocean, on Virtual Machines, on a single machine, anywhere in LXC or in this case, to bare metal using MAAS as the provisioning system.
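To make that concrete, a typical Juju session from that era looked roughly like the sketch below (Juju 1.x syntax; the charm names are just an example of wiring two services together, including dropping one into an LXC container on an existing machine):

juju bootstrap                                    # against whichever environment (MAAS, AWS, ...) is selected
juju deploy rabbitmq-server --to lxc:1            # place the message queue in an LXC container on machine 1
juju deploy nova-cloud-controller
juju add-relation nova-cloud-controller rabbitmq-server
juju status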

The final piece of the puzzle is Canonical's datacenter management tool, Landscape.  Landscape provides a means to manage all your hardware in one location, handling system updates, security alerts, users and so forth.  And now, it provides a means to very, VERY easily deploy OpenStack, a task that is not at all trivial.

Recently, Canonical announced The Canonical Distribution of Openstack. This is our way of giving you a cloud, on your hardware, in the easiest way you can imagine. The first time I saw this work, sitting at UTSA to set up a demo for the OpenCompute Summit in Paris, I felt giddy and thought, “This is fucking cool.” And it is.

Using Landscape, MAAS and Juju, we have reduced an OpenStack deployment to a few mouse clicks.  Once you have MAAS configured and hardware commissioned and ready to roll, you install Landscape using our openstack-installer tool.  After that, getting OpenStack running is simply a matter of registering the MAAS server so Landscape knows about the hardware, choosing a few options and away you go.

For now, it’s in Public Beta and the options are limited but more are scheduled and coming in the near future. In essence, if you can plug it into an OpenStack deployment, we want you to be able to do so as easily and smoothly as possible.  Currently, the only hypervisor is KVM though support for other hypervisors is planned. For networking, OpenVSwitch is the way we roll. Object storage is provided by one of either Ceph or Swift and Block Storage is provided by either Ceph or Cinder (via iSCSI). Once you’ve selected your components, you simply need to select the hardware you want to deploy.

For Hardware, you need a minimum of 5 registered systems in MAAS, which need 2 hard drives each and one of which needs to have two NICs on valid networks. The 2-NIC system is used for Quantum Gateway. The second hard disk in each node is used for the Ceph or Swift storage clusters. Once you’ve selected your hardware, you simply click “Install” and wait.

Landscape will give you a status/progress report and when it’s done it will show you your new Cloud, give you the URL and access credentials for the OpenStack Dashboard (Horizon) as well as the openstack credentials (private key). From there you can log into the dashboard, create an instance (it downloads Ubuntu 14.04 and 12.04 by default, but you can add any cloud image you want via Glance) and you’re in business.

And all that, from the time you click Install to the time you log into the Dashboard, takes less than 75 minutes. And that time was on a system with small 2.5″ hard drives spinning at 5400 RPM. On a system with pure SSD storage, it would be a whole lot faster.  Use a local mirror of the Ubuntu archives rather than the internet and the speed increases even more.

So there we sat, walking a Vice President through 6 or 7 mouse clicks to a full OpenStack Juno deployment and yeah, it was that fucking cool.

Horizon

Purism Librem 15 Linux laptop blends high-end hardware with totally free software | PCWorld

This is an interesting idea from the world of crowdsourcing.  It’s based on Trisquel, which is based on Ubuntu.  Seems to be made from high end components but it also seems to have a high end price tag.

Looking at the prices, $2199 for 8GB and a 250GB SSD is more than I paid for my MacBook Air with 8GB and a 512GB mSATA SSD.  The CPUs are similar, though.

Of course, I always find it interesting how people thumb their nose at Apple Products only to flock to similar products from other makers that just happen to look surprisingly similar to a Mac product.

But if you’re interested in a high end Linux laptop, this is worth a look to be sure.  I think it’s probably in the same realm as the Dell XPS 13, though slightly larger at 15″.  Read more at the link.

Purism Librem 15 Linux laptop blends high-end hardware with totally free software | PCWorld.

 

It’s A Return To The Azure-Alypse–Microsoft Azure Suffers Global Outage

Go big, or go home.  When they have a catastrophic failure, they REALLY have a catastrophic failure.  Seems something has caused a global Azure outage that is impacting many of MSFT’s own services, in addition to the cloud services.  I’m looking forward to reading the postmortem on this.

It’s A Return To The Azure-Alypse–Microsoft Azure Suffers Global Outage.