[UPDATE] MPow Bluetooth 4.0 Stereo Headset – Far better than it has any right to be.

I recently picked up a new set of Bluetooth earbuds for listening to music while at the gym, riding my bike, or jogging. I needed the new set to replace a set of Plantronics BackBeat Go II earbuds that had died (the battery was no longer taking a charge). Because I already had a warranty claim in with Plantronics on my Go II’s, I bought the cheapest thing I could find on Amazon with fair reviews. What I ended up with was the MPow Bluetooth 4.0 Stereo Headset:

http://www.amazon.com/gp/product/B00NZTHGN2/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

MPow Bluetooth Earbuds

I will admit, I bought these with only two expectations in mind:

1: They will pair with my iPhone so I can listen to music while working out
2: I will be able to hear music, even if it is tinny, or too low, or scratchy, or poppy, or generally low-fidelity.

I will preface this with the fact that I have not used, nor do I plan to use, the phone features of this headset for making calls. I bought this specifically to replace a broken pair of Plantronics BackBeat Go II Bluetooth buds for working out in the gym, jogging, or whatever. So this review only focuses on that use case.

As the title said, I was quite surprised by these. After using the medium-sized hook inserts and putting on the smaller ear tips, they fit far more comfortably than I expected. In fact, I could conceivably wear these for hours without issue; they are as comfortable as my Bose QC-20i and Shure SE215s. They also do an admirable job of blocking noise (non-actively). I don’t hear a lot of rubbing noises as I move; they fit and sit firmly in my ears.

The sound reproduction was amazing. At least, amazing in the sense of “HOLY CRAP I DIDN’T EXPECT THAT”. It actually is quite good. They aren’t as good as my Shures or Bose, or other high-end headsets, but you really don’t expect that from something at this price point. What you do get, however, is good, clear audio reproduction. My only real complaint is that the sound is a bit bassy, with a bit of loss on the upper range, though never distorted. Mid-range is satisfying.

The battery life seems to be good. I really don’t know yet how long the battery will last; I’ve only used them on two gym trips so far, about an hour and 15 minutes each trip, on the same charge. This is already worlds above the Plantronics set that I paid twice as much for, which only had battery life for about 2 hours max.

So there are the big pros for this set of Bluetooth earbuds. They are comfortable, have reasonable (actually, surprisingly good) audio quality, and so far exemplary battery life.

The only con I’ve come across so far is that the controls don’t seem to control everything on my iPhone. I can use the + and – buttons with a click and hold to skip songs, but when I use them to adjust volume, they adjust the headset volume, NOT the overall phone volume. This isn’t as big a deal as it seems, though I imagine that lack of control may be an issue when using them to make phone calls.

There you have it. For the price, these are far better than they have any right to be. At half the cost, they’re already better than the more expensive set from the very respected Plantronics line. These have definitely turned out to be a great purchase.

UPDATE 4-3-2015:

I’ve had these things for a couple of weeks now, and in the interim, Plantronics has shipped my replacement set of BackBeat Go II headphones to me.  After careful consideration of the merits of each, and after thinking long and hard, I have decided that…

… the Plantronics replacements are going on Craigslist.

I am really surprised by these things.  They sound good.  They’re comfortable for my use, an hour or two at a time at the gym.  And the battery seems to go on forever.  I’m at two weeks of use now (at about 1 hour 30 minutes a pop) and they are STILL on the original charge.  Even when new, the Plantronics needed to be recharged after every two workouts.  So far, I’m 7 workouts in and still haven’t had these quit on me.  I’m not going to recharge them until they finally die on me, either.  I want to see just how long they’ll go.

So all in all, this is still one of the most surprising purchases I’ve made in a while.  I really didn’t have high expectations when I bought these headphones, but they are doing everything I expected of the Plantronics set in the first place, and they’re doing those things better.

I can deploy a full Openstack cloud in under 75 minutes… We’re that f#@$ing cool.

I’ve been in Taiwan for the last week and a half working with a customer on a proof of concept solution. All that time was spent setting up, tearing down, and rebuilding a fully HA OpenStack Icehouse deployment using Juju, MAAS, and Ubuntu 14.04. But the highlight of the week is one of the coolest things we have done in a while.

So you have the background: what I’m about to describe uses the three biggest tools that Canonical produces for the server space. MAAS (Metal-As-A-Service) is a bare-metal management system that handles provisioning of Ubuntu (and other OSes) in a quick and repeatable way on bare metal, be that on traditional servers or the newer scale-out density server systems like the SeaMicro SM15000, HP Moonshot, and similar.

The next piece is Juju. Juju does for software what MAAS does for hardware. It’s a service orchestration tool that allows you to rapidly deploy services like Hadoop, LAMP stacks, OpenStack, Nagios, Open vSwitch and pretty much anything else you can imagine. Even better, with Juju you can deploy these services just about anywhere you’d want to: from AWS to Azure to Digital Ocean, on virtual machines, on a single machine, in LXC containers, or, in this case, to bare metal using MAAS as the provisioning system.
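To give a feel for how lightweight that workflow is, here is a rough sketch of deploying a simple two-service stack with Juju. The charm names (mysql, wordpress) are just common examples, and the DRY_RUN wrapper is my own addition so the script prints the commands instead of running them, since the real thing needs a bootstrapped Juju environment:

```shell
#!/bin/sh
# Sketch of a minimal Juju workflow (charm names are illustrative).
# DRY_RUN=1 (the default here) prints each command instead of running
# it, since actually running them requires a bootstrapped environment.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "juju $*"
    else
        juju "$@"
    fi
}

run deploy mysql                  # deploy the MySQL charm
run deploy wordpress              # deploy the WordPress charm
run add-relation wordpress mysql  # connect the blog to its database
run expose wordpress              # open the firewall so the site is reachable
```

The point of the sketch is the shape of the workflow: deploy services, relate them, expose what should be public, and let the charms handle the configuration underneath.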

The final piece of the puzzle is Canonical’s datacenter management tool, Landscape. Landscape provides a means to manage all your hardware in one location, handling system updates, security alerts, users and so forth. And now, it provides a means to very, VERY easily deploy OpenStack, a task that is not at all trivial.

Recently, Canonical announced The Canonical Distribution of OpenStack. This is our way of giving you a cloud, on your hardware, in the easiest way you can imagine. The first time I saw this work, sitting at UTSA to set up a demo for the OpenCompute Summit in Paris, I felt giddy and thought, “This is fucking cool.” And it is.

Using Landscape, MAAS and Juju, we have reduced an OpenStack deployment to a few mouse clicks. Once you have MAAS configured and hardware commissioned and ready to roll, you install Landscape using our openstack-installer tool. After that, getting OpenStack running is simply a matter of registering the MAAS server so Landscape knows about the hardware, choosing a few options and away you go.

For now, it’s in Public Beta and the options are limited, but more are coming in the near future. In essence, if you can plug it into an OpenStack deployment, we want you to be able to do so as easily and smoothly as possible.  Currently, the only hypervisor is KVM, though support for other hypervisors is planned. For networking, Open vSwitch is the way we roll. Object storage is provided by either Ceph or Swift, and block storage is provided by either Ceph or Cinder (via iSCSI). Once you’ve selected your components, you simply need to select the hardware you want to deploy.

For hardware, you need a minimum of 5 registered systems in MAAS, each with 2 hard drives, and one of which needs to have two NICs on valid networks. The 2-NIC system is used for the Quantum Gateway. The second hard disk in each node is used for the Ceph or Swift storage clusters. Once you’ve selected your hardware, you simply click “Install” and wait.

Landscape will give you a status/progress report, and when it’s done it will show you your new cloud, giving you the URL and access credentials for the OpenStack Dashboard (Horizon) as well as the OpenStack credentials (private key). From there you can log into the dashboard, create an instance (it downloads Ubuntu 14.04 and 12.04 by default, but you can add any cloud image you want via Glance) and you’re in business.

And all that, from the time you click Install to the time you log into the Dashboard, takes less than 75 minutes. And that time was on a system with small 2.5″ hard drives spinning at 5400 RPM. On a system with pure SSD storage, it would be a whole lot faster.  Use a local mirror of the Ubuntu archives rather than the internet and the speed increases even more.

So there we sat, walking a Vice President through 6 or 7 mouse clicks to a full OpenStack Juno deployment and yeah, it was that fucking cool.

Horizon

Purism Librem 15 Linux laptop blends high-end hardware with totally free software | PCWorld

This is an interesting idea from the world of crowdfunding.  It’s based on Trisquel, which is based on Ubuntu.  Seems to be made from high end components but it also seems to have a high end price tag.

Looking at the prices, $2199 for 8GB and a 250GB SSD is more than I paid for my MacBook Air with 8GB and a 512GB mSATA SSD.  CPUs are similar, though.

Of course, I always find it interesting how people thumb their nose at Apple Products only to flock to similar products from other makers that just happen to look surprisingly similar to a Mac product.

But if you’re interested in a high end Linux laptop, this is worth a look to be sure.  I think it’s probably in the same realm as the Dell XPS 13, though slightly larger at 15″.  Read more at the link.

Purism Librem 15 Linux laptop blends high-end hardware with totally free software | PCWorld.

 

It’s A Return To The Azure-Alypse–Microsoft Azure Suffers Global Outage

Go big, or go home.  When they have a catastrophic failure, they REALLY have a catastrophic failure.  Seems something has caused a global Azure outage that is impacting many of MSFT’s own services, in addition to the cloud services.  I’m looking forward to reading the postmortem on this.

It’s A Return To The Azure-Alypse–Microsoft Azure Suffers Global Outage.

Multiple NICs on the same Subnet – Avoiding ARP Flux

This is an interesting testing dilemma I haven’t had to deal with in a long time. The story starts out like so:

I have a machine with 2 or more NICs. I have a target system that will be used to do network testing. All NICs on my Server Under Test (SUT) need to be on the same subnet to talk to my target server.

One way of accomplishing this could be to have each NIC on the SUT on its own subnet, like so:

eth0: 10.0.0.100/24
eth1: 10.0.1.100/24

and have the inbound NIC on my target server support two addresses like this:

eth0:0: 10.0.0.1/24
eth0:1: 10.0.1.1/24

While that’s easy in theory, in practice it becomes cumbersome if you don’t know how many SUTs you’ll have, nor how many NICs are in each. I’ve run network tests before on servers with up to 200 NIC ports, meaning my target server would have needed over 200 alias IP addresses on its inbound port.
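To make that scale problem concrete, here is a small sketch of what generating those aliases on the target server would look like. The interface name (eth0) and the 10.0.x.0/24 address plan are just illustrative, and the commands are echoed rather than executed, since the real thing needs root and actual hardware:

```shell
#!/bin/sh
# Print the alias commands the target server would need to talk to
# N SUT NICs, one /24 per NIC. Purely illustrative: eth0 and the
# 10.0.x.1 plan are assumptions, and nothing is actually configured.
gen_aliases() {
    n=$1   # number of SUT NICs
    i=0
    while [ "$i" -lt "$n" ]; do
        echo "ip addr add 10.0.$i.1/24 dev eth0 label eth0:$i"
        i=$((i + 1))
    done
}

gen_aliases "${NICS:-200}"
```

Two hundred lines of that, kept in sync with however many SUT ports happen to be racked this week, is exactly the kind of bookkeeping the single-subnet approach below avoids.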

So the easiest way to handle this is with all the SUT NICs on a single subnet along with my target machine. This allows me to hook up as many NICs as I have address space for, meaning I could hook up a LOT of SUTs and run them all simultaneously.

The problem with this approach, however, is ARP flux. This occurs when a packet is sent to one address, but the target responds with the MAC address of a different NIC. To demonstrate this, see the following. This is me using arping to ping both NICs on my 1U test server:

ubuntu@critical-maas:~$ sudo arping -I eth0 10.0.0.123
ARPING 10.0.0.123 from 10.0.0.1 eth0
Unicast reply from 10.0.0.123 [00:30:48:65:5E:0C] 0.745ms
Unicast reply from 10.0.0.123 [00:30:48:65:5E:0C] 0.779ms
Unicast reply from 10.0.0.123 [00:30:48:65:5E:0C] 0.757ms

ubuntu@critical-maas:~$ sudo arping -I eth0 10.0.0.128
ARPING 10.0.0.128 from 10.0.0.1 eth0
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0C] 0.887ms
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0C] 0.901ms
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0C] 0.849ms

As you can see, no matter which interface IP address I ping, the MAC from eth0 responds, saying “Yeah, that’s me!” This can lead to ARP cache poisoning, especially when you’re looking at a system with more than just 2 NICs. All manner of fun things happen when that occurs. But primarily, the problem when testing is that in this scenario you’re not fully testing your target NIC; you’re testing packets going out on eth1 and coming in on eth0. So we need to fix this issue.

The first things we need to fix involve ARP handling in the Linux kernel. The default behaviour that induces ARP flux is actually safe in most cases and gives your packets a better chance of reaching their target. It’s not so good for testing, however. So we use sysctl to change some kernel settings.


$ sysctl -w net.ipv4.conf.all.arp_announce=1
$ sysctl -w net.ipv4.conf.all.arp_ignore=2

First, setting arp_announce to 1 does this:

Try to avoid local addresses that are not in the target’s subnet for this interface. This mode is useful when target hosts reachable via this interface require the source IP address in ARP requests to be part of their logical network configured on the receiving interface. When we generate the request we will check all our subnets that include the target IP and will preserve the source address if it is from such subnet. If there is no such subnet we select source address according to the rules for level 2.

Then, setting arp_ignore to 2 does this:

Reply only if the target IP address is local address configured on the incoming interface and both with the sender’s IP address are part from same subnet on this interface.

Now, on older kernels (2.4 and earlier), that was enough. But on newer kernels (2.6 onward through the current 3.x versions), an additional change is necessary due to changes in how rp_filter is handled. So to make this work on 2.6+ kernels, we set the additional rp_filter value:

$ sysctl -w net.ipv4.conf.all.rp_filter=0

And then, magically, it all starts working the way it should. NOW when we arping the SUT’s eth1 address, we can see that eth1 is actually responding:

ubuntu@critical-maas:~$ sudo arping -I eth0 10.0.0.128
[sudo] password for ubuntu:
ARPING 10.0.0.128 from 10.0.0.1 eth0
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0D] 0.937ms
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0D] 0.888ms
Unicast reply from 10.0.0.128 [00:30:48:65:5E:0D] 0.844ms

Now let’s demonstrate this with some traffic load.

Before changing the kernel settings:

ubuntu@supermicro:~$ netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 2864 0 0 0 583 0 0 0 BMRU
eth1 1500 0 58 0 0 0 2035 0 0 0 BMRU
lo 65536 0 0 0 0 0 0 0 0 0 LRU
ubuntu@supermicro:~$ sudo ping -I eth1 -f -c 10000 10.0.0.1
PING 10.0.0.1 (10.0.0.1) from 10.0.0.128 eth1: 56(84) bytes of data.
--- 10.0.0.1 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 298ms
rtt min/avg/max/mdev = 0.210/0.258/0.497/0.024 ms, ipg/ewma 0.295/0.260 ms
ubuntu@supermicro:~$ netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 12897 0 0 0 600 0 0 0 BMRU
eth1 1500 0 60 0 0 0 12035 0 0 0 BMRU
lo 65536 0 0 0 0 0 0 0 0 0 LRU

See here that we send 10000 ICMP packets out eth1 on the SUT, but the reply packets are all accepted on eth0 instead.

Now after changing:

ubuntu@supermicro:~$ netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 47106 0 0 0 1483445 0 0 0 BMRU
eth1 1500 0 42805 0 0 0 8179 0 0 0 BMRU
lo 65536 0 0 0 0 0 0 0 0 0 LRU
ubuntu@supermicro:~$ sudo ping -c 10000 -I eth1 -f 10.0.0.1
PING 10.0.0.1 (10.0.0.1) from 10.0.0.128 eth1: 56(84) bytes of data.
--- 10.0.0.1 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 2496ms
rtt min/avg/max/mdev = 0.166/0.216/0.367/0.010 ms, ipg/ewma 0.249/0.216 ms
ubuntu@supermicro:~$ netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 57046 0 0 0 1503462 0 0 0 BMRU
eth1 1500 0 52811 0 0 0 18179 0 0 0 BMRU
lo 65536 0 0 0 0 0 0 0 0 0 LRU

Now you can see that the 10000 ICMP packets exit eth1 and the replies are all accepted on eth1, as they should be.
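One last practical note: the sysctl changes above only last until reboot. A sketch of making them persistent, assuming the conventional /etc/sysctl.d drop-in layout on Ubuntu (the filename is my own choice, and the script writes to a local file by default so you can inspect it before installing it):

```shell
#!/bin/sh
# Write the three anti-ARP-flux settings to a sysctl drop-in file.
# Writes to the current directory by default; copy the result to
# /etc/sysctl.d/ and run `sudo sysctl -p <file>` to actually apply
# and persist the settings.
CONF=${CONF:-./60-arp-flux.conf}
cat > "$CONF" <<'EOF'
# Avoid ARP flux when multiple NICs share a subnet (test rigs only)
net.ipv4.conf.all.arp_announce = 1
net.ipv4.conf.all.arp_ignore = 2
net.ipv4.conf.all.rp_filter = 0
EOF
```

Remember that these settings trade general-purpose safety for test fidelity, so keep them confined to lab machines rather than baking them into production images.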

The future already started. You missed it, but it’s coming soon!

It started with a post to an e-mail list I’m on, full of fellow disenfranchised geeks who wanted a place where we could crack wise, talk of off-topic things, and be most assuredly politically incorrect, feelings be damned. In a typical thread derailment, this was said:

PC aren’t dying, they’re in transition. There won’t be a time where you don’t need a PC. Tablets and smart phones have gotten good, but they still can’t replace a powerful general purpose PC.

This is not a difficult statement to make. The ubiquitous PC has been around for longer than many of us today have been alive. However, some of us DO still predate the PC and have seen it grow from the boxy white bricks like the IBM PS/2 to today’s myriad shapes and sizes, full of glowing neon, water or oil cooling systems and more power than older supercomputers.

But then, another member of the group made this simple statement

People used to say that about watches. I haven’t worn a watch in 15 years.

That is some sort of indication that both PCs and watches are anachronisms, only fit to serve as curiosity pieces or statements made by elitists and hipsters. And yet millions of watches are still sold every year. I wear a watch almost daily even though I have the time displayed on every computer I own, plus my iPhone. Heck, even my coffee maker has a clock built in. Then again, I collect analog watches, so I have a biased opinion on the death of the watch. But again, millions are sold every year.

I do agree that PCs are in transition. You can’t really say that smart phones and tablets are going to supplant PCs any more than you can say that PCs as we know them are never going to die. PC means Personal Computer, and for all intensive purposes (*yes, I know it’s incorrect) iPhones and the bigger Android-based smart phones are quickly becoming the standard “Personal Computer”. After all, they are computers, there is no doubt about that, and they are very, very personal. My iPhone has more computing power than my first desktop machine did.

What I think is actually going to happen is that phones and tablets will become more and more powerful, faster, and simply better than what we know today. They will come with faster processors and with the advent of Systems on Chip (SoC) more and more cores are going to be jammed onto smaller and smaller devices. The current ARM phones already sport dual core chips. Quad core is not too far away.

What we think of as the PC today will continue becoming smaller and smaller, and eventually those lines will intersect and we’ll have the next big thing that WILL replace both. That, like it or not, is really inevitable. Sure, there will be those holdouts that insist on a big clunky desktop. I have a Tandy TRS-80 Color Computer II sitting on the other desk, hooked up to a 13 inch television, that I still write short programs on from time to time. But eventually those two lines, phones and tablets getting bigger and better, PCs getting smaller and smaller, WILL intersect, and that will be the moment everything changes.

Think how much we take for granted today. Just 6 years ago there was no such thing as a “Smart Phone”. People had cellular phones, people had PDAs. Some lucky few had palm-top computer type devices. Some had PDAs that were phones. But none of them worked terribly well. Then, as they did in 1984 in their Orwellian commercial introducing the Macintosh, Apple changed the world with the launch of the iPhone.

They had already changed the way we entertain ourselves with the iPod, and now the iPhone was a shot heard round the world, kicking off the never ending Smart Phone arms race that has been fought on the streets, in stores, in advertisement and in court. From there the Smart Phone gained overnight traction and the chimes of inevitability rang. Already computers were getting smaller with the Laptop, the Notebook, the Sub-Notebook and then the Net-book. Tablets were just around the corner.

So where does this lead? It is leading to a collision of cosmic proportions. For a good glimpse of this, think back to 2011. A man is working his way through the airport security line when he is pulled aside to explain the strange device he’s trying to sneak through security. The man goes on to explain the Motorola Atrix, a phone that’s a computer. Or is it a computer that’s a phone? It’s a cell phone with a fast (at the time) processor and, more importantly, a dock that turns it into a notebook computer.

It’s not super fast, but I think it’s a pretty good first glimpse at what we’re going to see in the future. Think of how many people (aside from computer geeks like me, because like it or not, we are NOT the super-set, we are a reasonably tiny subset) buy “desktop” machines versus laptops.

Just using my own life as an anecdote: at my desk I have a 1U rack server and a Shuttle PC. The 1U serves as a test machine for when I’m testing server-type stuff like virtualization hosting systems, cloud infrastructure, etc. The Shuttle is my media server at home that provides music, movies and photos to the rest of the personal computers in my home. It has no keyboard, no monitor, and can only be accessed via the network. The rest of my personal computers are all laptops, one netbook and one thin client. Plus appliances like my Blu-ray player, my Xbox 360 and my Wii. I have ONE traditional tower case, and it’s used as an end table. I set my beer on it when I’m working. That’s its sole function these days.

My parents have not owned a desktop in years. Today, they only own a pair of laptops. In fact, outside of the Power User/Admin/Hacker set, I don’t know anyone who owns a desktop, and I know a LOT who own tablets.

We are headed to a point where “Computer” and “Form Factor” are irrelevant, I think. We’re going to get to a point where you can wear a computer on your wrist, if you choose, that is as powerful as the one you carry in your shoulder bag, which is as powerful as the one you use at your desk at work, and they may all be the same computer with swappable physical interfaces.

Think for a moment about Apple’s AirPlay. You can queue up a video on your Mac Laptop, then pick it up on your iPhone or iPad mid-stream, then shoot it over to your AppleTV and finish watching it on your HDTV. This works. Today. This works well. I have seen it in action.

Now imagine that you have a smart phone type device. You’re on the train headed to work, or riding the bus, or whatever, surfing the internet, reading email, doing the normal “stuff” we do when we’re bored. Facebook. News. You get the idea. You get to work and sit down at your desk. You push a button on the 24″ display panel on your desk, and as soon as it has turned on, all the data you were looking at on your phone, and more, instantly starts appearing on the desk monitor. You use the wireless keyboard to type e-mail, write code, or who knows what. You use a mouse, or perhaps a touch device, to move the pointer around the giant display.

You’re done for the day, so it’s time to head home. You pick up your smart phone thing and turn off that big 24″ monitor, and out the door you go. Back on the bus, you pick up where you left off at work and continue doing things on the device as you head home. At home, it’s time to catch up a bit with your friends. You sit down at your desk and open up a thing that looks a bit like the laptops we have today. Power that on and, again, your smart phone begins sending all its data to that machine instead. Now you have a small laptop-like device to type on. The built-in camera lets you video chat with your mom, 1000 miles away, before dinner, by wirelessly sending that data through your phone.

Dinner is over and it’s time to watch some television. You look up the show guide on your phone device and notice that there’s a new episode of “Ow! My Balls” on tonight. You click that and it begins playing, not on your phone, but across the room on your 60″ 3D HDTV. You spend the rest of the night watching mind numbing reality television and then head off to bed, to start the cycle over again tomorrow.

We’re not there yet, but the individual bits and pieces of all that already exist today. From things like the Atrix to IPTV to FiOS and ubiquitous wireless data access (wi-fi or cellular). It’s all there. Every last bit. The trick is bringing it all together. Smashing one atom of Personal Computer together with one atom of Smart Phone and one atom of Tablet. That will kick off an explosion of interconnected technology that many cannot even imagine. Just like the way IBM couldn’t imagine why people at home would want a computer of their own and thus turned away a young Bill Gates. And we all know where that led.