Doctors in the Clouds

At the recent London Cloud Camp, there was a lot of discussion about DevOps on the UnPanel; as the discussion went on, I half expected the stage to be stormed by some of the older members of the audience. Certainly some of the tweets and back-channel conversations expressed incredulity at some of the statements from the panel.

Then, over beer and pizza, there were a few more conversations on the subject and I had a great chat with Florian Otel, who, for a man who tries to position HP as a Cloud company, is actually a reasonable and sane guy (although he does have the slightly morose Scandinavian thing down pat, but that might just be because he works for HP). The conversation batted the subject around until I hit on an analogy for DevOps that I liked; over the past twenty-four hours I have knocked it around a bit more in my head, and although it doesn't quite work, I can use it as the basis for an illustration.

Firstly, I am not anti-DevOps at all; the whole DevOps movement reminds me of when I was a fresh-faced mainframe developer: we were expected to know an awful lot about our environment and infrastructure. We also tended to interact with and configure our infrastructure through code; EXITs of many forms were part of our life.

DevOps, however, is never going to kill the IT department (note: when did the IT department become exclusively linked with IT Operations?) and you are always going to need specialists to make and fix things.

So here goes: instantiating a human being is really a very simple process. The inputs are well known and the process is repeatable. This rather simple process, however, instantiates a complicated thing which can go wrong in many ways.

When it goes wrong, often the first port of call is your GP; they will poke and prod, ask questions, and the good GP will listen and treat the person as a person. They will fix many problems and you go away happy and cured. But most GPs actually have only a rather superficial knowledge of everything that can go wrong; this is fine, as many problems are rather trivial. It is important, however, that the GP knows the limits of their knowledge and knows when to send the patient to a specialist.

The specialist is a rather different beast; what they generally see is a component that needs fixing, they often have a lousy bedside manner and they will do drastic things to get things working again. They know their domain really well and you really wouldn't want to be without them. However, to be honest, are they really a good investment? If a GP can treat 80% of the cases they are faced with, why bother with specialists? Because having people drop dead from something that could have been treated is not especially acceptable.

As Cloud and dynamic infrastructures make it easier to throw up new systems with complicated interactions with other systems, unforeseeable consequences may become more frequent. Your General Practitioner might be able to fix 80% of the problems with a magic white pill or a tweak here or there… but when your system is about to collapse in a heap, you might be quite thankful that you still have your component specialists to make it work again. Yes, they might be grumpy and miserable, and their bedside manner might suck, but you will be grateful that they are there.

Yes, they might work for your service provider, but the IT Ops guys aren't going away; in fact, you DevOps folks have taken away a lot of the drudgery of the Ops role. And when the phone rings, we know it is going to be something interesting and not just an ingrown toenail.

Of course, the really good specialist also knows when the problem presented is not their speciality and passes it on. And then there is the IT Diagnostician: a grumpy Vicodin addict and really not very nice!

The Right Stuff?

I must be doing something right, or perhaps very wrong, but the last few months have seen this blog pick up a couple of 'accolades' that have left me feeling pretty chuffed.

Firstly, Chris Mellor asked whether El Reg could carry my blog; as a long-term reader of Chris's work and of The Register, this made my year. To be picked up by the scurrilous El Reg is pretty cool.

And yesterday I got an email from EMC telling me that I had been voted into EMC Elect! Now, that's a pretty good start to this year.

This doesn’t mean that I’m going to go easy on EMC; I don’t think that’s what they want from me and if I did, El Reg wouldn’t want me either.

So I guess I’ll keep doing what I’m doing and hope you continue to enjoy it.

Scale-Out Fun For Everyone?

Recently I've been playing with a new virtual appliance; well, new to me, in that I've only just got my hands on it. It's one of the many that our friends at EMC have built, and it's one which could do with a wider audience.

A few years ago Chad Sakac managed to make the Celerra virtual appliance available to one and all; a little sub-culture grew up around it and many VMware labs have been built on it. When the Celerra and Clariion morphed into the VNX range, the virtual appliance followed, and Nicholas Weaver further enhanced it and made it better and easier to use. It's a great way for the amateur to play with an Enterprise-class NAS and get some experience; I suspect it is also a great way for EMC to get community feedback and input on the usability and features of the VNX. A win/win, as we like to say.

But EMC have another NAS product, one that I suspect will over the long term become the foundation of their NAS offerings and which is certainly important to their Big Data aspirations: yes, the Isilon range of Scale-Out NAS. I'd always suspected that there must be an appliance version kicking around; anyone who has ever played with an Isilon box will have realised that it really is just an Intel server. You can order the SuperMicro motherboard it is built on and pretty much build your own if you wanted to.

At a recent meeting, I was talking about the need for a training/test system for some of my guys to play on and lamenting that I probably could not justify the cost; our Isilon TC said ‘Why don’t I send you links to the Virtual Appliances?’

I bit his hand off and now I have a little virtual Scale-Out NAS to play with. It's pretty much as easy to set up as the real thing, without all the hassle of racking and stacking; I've got it running with five virtual nodes and a small amount of disk, and I can mess around with it to my heart's content.

I wish that you guys could also have a play, but perhaps the guys from the Isilon team are a bit nervous that we might do some silly things, like putting it into a production environment. I guess some of you might be that stupid, but it didn't stop them putting out the Celerra/Clariion version. So, EMC, can you give the community an early Christmas present and get the Isilon appliance out there?

Scale-Out NAS is going to be a really important growth sector; OneFS is a great product, it takes away a lot of the pain of building scale-out NAS and it helps to demystify the whole thing.

At worst, a few geeks like me get to have some fun and you get some interesting feedback; but I suspect you might find some people doing some interesting things with it and building a decent community.

And IBM, perhaps you could do the same and build a SONAS appliance and get that out as well?

I’d love to see EMC make the Enginuity appliance generally available but that does have stupid memory and CPU requirements, so I’m not holding my breath for that….

New Lab Box: HP ML110 G7

I keep meaning to do a blog post on the Bod home office, something like the Lifehacker Workspace posts, but I never quite get round to it. Still, suffice it to say, my home workspace is pretty nice; it's kind of the room I wanted when I was thirteen, but cooler! The heart of the workspace, though, is the tech: I have tech for gaming, working, chilling and general geeking; desktops of every flavour and a few servers for good measure.

Recently, as you will know, I have been playing with Razor and Puppet, and I found that my little HP Microserver was struggling a bit with the load I was putting on it, so I started to think about getting something with a bit more oomph. I had decided to put something together based on Intel's Xeon technology and began to put together a shopping list.

Building PCs is something I kind of enjoy, but then, as luck would have it, an email dropped into my mailbox from Servers Plus in the UK offering £150 cashback on the HP ProLiant ML110 G7 Tower Server with an E3-1220 quad-core processor; this brought the price down to £240 including VAT. And I was sold… no PC building for me this time.

As well as the aforementioned E3-1220, the G7 comes equipped with a 2GB ECC UDIMM, two Gigabit Ethernet ports, iLO3 sharing the first Ethernet port, a 250GB hard disk, a 350W power supply and generally great build quality (although I could do a better job with the cable routing, I reckon).

The motherboard can support up to six SATA devices and there are four non-hotswap caddies for screwless hard-disk installation, one of which holds the 250GB hard disk. Installing additional drives was a doddle and involved no cursing or hunting for SATA cables. I did not bother to install an optical drive as I intended to network-boot and install from my Razor server.

Maximum supported memory is an odd 16GB; the chipset definitely supports 32GB but there are very mixed reports of running an ML110 G7 with 32GB. I just purchased a couple of generic 4GB ECC DIMMs for about £50 to bring it up to 10GB for the time being, and I'd be interested to hear if anyone has got an ML110 G7 running successfully with 32GB; there seems to be no technical reason for HP to limit the capability and it does seem strange. The DIMM slots are easily accessible and no contortions are required to install the additional memory.

There are four PCIe slots available: one x16, two x4 and one x1. This should be ample for most home servers, as the box already comes with two onboard Ethernet ports.

After installing the additional memory and hard-disks, I powered the box up and let it register with my Razor server; added a policy to install ESXi on it and let it go.

A quick note about the iLO3: it comes with the basic licence, which allows you to power the box up and down and do some basic health checking and monitoring, but gives no remote console. This is not a huge problem for me, as the server is in the same room and I can easily put a monitor on it if required.

The ML110 is pretty damn quiet considering the number of fans in it, but start putting it under load and you will know it's there; still, it's no noisier than my desktop when I am gaming and all the fans are spinning. It is certainly noisier than the Microserver, though.

Once ESXi was installed, bringing up the vSphere client let me see that all the components were recognised as expected; the temperature monitors and fans were also being seen. Power management is available and can be set to Low Power if you want that for your home lab.

So I would say that if you want a home lab box with a little more oomph than the HP Microserver, the ML110 G7, especially with the £150 cash-back, takes some beating. If it could be upgraded all the way to 32GB, it would be awesome.

Razor – An Idiot’s Guide to Getting Started!

My role at work does not really allow me to play with tech very often and, unless I have a real target in mind, I'm not as much of an inveterate hacker as I used to be; I generally find a problem, fix it and then leave it alone until the solution breaks. So I don't tend to go into much depth on anything, but every now and then I see something and think: that's cool, can I get it to work?

When I saw Razor, the new tool from Nicholas Weaver and EMC built on Puppet to provision 'bare-metal' machines, I decided it was something cool enough to have a play with. But there were a few problems: I knew what Puppet was but I'd never used it, and I really didn't have a clue what I was doing.

Still, I followed the links to the documentation and started to hack; after a couple of failed attempts due to missing prerequisites, I decided to be a bit more methodical and document what I was doing.

So this is what I did…more or less. And hopefully this might be helpful to someone else. I am assuming a certain amount of capability tho’! So more an Idiot Savant than just an Idiot.

I have a number of core services already running at home: my own internal DNS and DHCP server running on Ubuntu, a couple of NAS servers and a couple of machines running ESXi 5.

All the documentation for Razor is based around Ubuntu 12.04 (Precise Pangolin), so the first thing to do was to build a Precise VM on one of the ESXi 5 boxes. This was provisioned with 2GB of RAM and two virtual cores.

1) Install Ubuntu 12.04 Server; I always select the OpenSSH server option at installation and leave everything else until after I've done a base install.

2) I added a static assignment for the server in my DHCP box and created a DNS entry for it.

3) After the server was installed, I did my normal 'break all the security' trick and used sudo to set a root password and allow myself to log on directly as root (roughly as shown in the snippet just after this list). I'm at home and can't be bothered to use sudo for everything.

4) I started installing packages. I'm not sure whether the order matters, but this is the order I did things in, and all of this was done as root.
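
For the avoidance of doubt, and very much not something to copy onto a machine you care about, step 3 amounted to something like the following (the sshd tweak is only needed if you want to ssh in directly as root):

sudo passwd root                   # give root a password
sudo vi /etc/ssh/sshd_config       # set 'PermitRootLogin yes' to allow direct root logins over ssh
sudo service ssh restart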

EDIT: According to Nick, and he should know, the Razor module installs Node and MongoDB automagically… I had some problems the first couple of times and decided to do it myself; this is possibly because I'm an extremely clever idiot who breaks idiot-proof processes.

Node.js

apt-get install nodejs npm

MongoDB

I didn't use the standard packaged version; instead I pulled the package down from mongodb.org:

apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
vi /etc/apt/sources.list
# add the line: deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen
apt-get update
apt-get install mongodb-10gen
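
It's worth a quick check that MongoDB actually came up before going any further (the service name below is the one the 10gen package registers via its upstart job; adjust if yours differs):

service mongodb status                 # the 10gen package registers an upstart job called 'mongodb'
mongo --eval "db.serverStatus().ok"    # should print 1 if the database is answering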

Puppet

Yet again, I didn't use the standard packaged version; I pulled the release package down from puppetlabs.com:

wget http://apt.puppetlabs.com/puppetlabs-release_1.0-3_all.deb
dpkg -i puppetlabs-release_1.0-3_all.deb
apt-get update
apt-get install puppetmaster
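
A quick sanity check here can save some head-scratching later; these commands are just my habit rather than anything from the official instructions, and they assume the stock puppetmaster service name and default port:

puppet --version              # confirm which Puppet actually got installed
service puppetmaster status   # the master service should be running
netstat -tlnp | grep 8140     # and listening on the default master port, 8140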

Ruby 1.9.3

Note that the above installs Ruby as a dependency, but it appears to bring down Ruby 1.8; Razor wants a later version.

apt-get install ruby1.9.3

This seems to do what you want!
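
One thing that can confuse here, and this is a quirk of Ubuntu's packaging rather than anything to do with Razor: the 1.9.3 package installs its binary under the 1.9.1 ABI name, so a quick check is worthwhile:

ruby --version          # see which interpreter the plain 'ruby' command resolves to
ruby1.9.1 --version     # the ruby1.9.3 package ships its binary under the 1.9.1 ABI name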

At this point you should be in the position to start installing Razor.

Razor

This is very much cribbed from the Puppet Labs documentation.

puppet module install puppetlabs-razor

chown -R puppet:puppet /etc/puppet/modules

puppet apply /etc/puppet/modules/razor/tests/init.pp --verbose

This should run cleanly and at this stage you should have some confidence that Razor will probably work.

/opt/razor/bin/razor

This shows that it does work.

/opt/razor/bin/razor_daemon.rb start

/opt/razor/bin/razor_daemon.rb status

This starts the razor daemon and confirms it is running. Our friends at Puppet Labs forgot to tell you to start the daemon; it's kind of obvious, I guess, but it made this idiot pause for a few minutes.

Configuring Your DHCP Server

I run my own DHCP server and this needed to be configured to point netbooting machines at the Puppet/Razor server.

I amended the /etc/dhcp/dhcpd.conf file and added the following

filename "pxelinux.0";
next-server ${tftp_server_ipaddress};

in the subnet declaration.
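
For illustration, here is a minimal sketch of what the whole subnet declaration might then look like; every address and the MAC below are made up, and ${tftp_server_ipaddress} should be replaced with the real IP of whichever box serves the PXE boot files over TFTP (in a simple setup like this, the Razor server itself):

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;        # pool for machines waiting to be picked up by Razor
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.2;

    filename "pxelinux.0";                    # boot loader fetched over TFTP
    next-server 192.168.1.10;                 # the Razor/TFTP box

    host razor {                              # the static assignment from step 2 earlier
        hardware ethernet 00:50:56:aa:bb:cc;
        fixed-address 192.168.1.10;
    }
}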

At this point, you should be ready to really start motoring and it should be plain sailing, I hope. Certainly this idiot managed to follow the rest of the Example Usage instructions on the Puppet Labs site.

Of course, now I've got lots of reading to do around Puppet and the like, but at the moment it does appear to work.

So great work Nick and everyone at EMC.

Flash Changed My Life

All the noise about all-flash arrays and acquisitions set me thinking a bit about SSDs and flash, and how they have changed things for me.

To be honest, the flash discussions haven't yet really impinged on my reality in my day-to-day job; we do have the odd discussion about moving metadata onto flash, but we don't need it quite yet. Most of the stuff we do is large sequential I/O and spinning rust is mostly adequate; streaming rust, i.e. tape, is actually adequate for a great proportion of our workload. But we keep a watching eye on the market and on where the various manufacturers are going with flash.

But flash has made a big difference to the way that I use my personal machines, and if I were going to deploy flash in a way that would make the largest material difference to my user base, I would probably put it in their desktops.

Firstly, I now turn my desktop off; I never used to unless I really had to, because waiting for it to boot or even wake from sleep was at times painful. And sleep had a habit of not sleeping, or of flipping out on a restart; turning the damn thing off is much better. As a consequence, I now have my desktops on an eco-plug which turns off all the peripherals as well: good for the planet and good for my bills.

Secondly, the fact that the SSD is smaller means that I keep less crap on it and am a bit more sensible about what I install. Much of my data is now stored on the home NAS environment, which means I am reducing the number of copies I hold; I find myself storing less data. There is another contributing factor: fast Internet access means that I tend to keep less stuff backed up and can stream a lot from the Cloud.

Although the SSD is smaller and probably needs a little more disciplined housekeeping, running a full virus check (which I do on occasion) is a damn sight quicker, and there are no more lengthy defrags to worry about.

Thirdly, applications load a lot faster; although my desktop has lots of 'chuff' and can cope with lots of applications open, I am more disciplined about not keeping applications open because their loading times are that much shorter. This helps keep my system running snappily, as does shutting down nightly, I guess.

I often find on my non-SSD work laptop that I have stupid numbers of documents open; some have been open for days and even weeks. This never happens on my desktop.

So, all in all, I think if you really want bang for your buck and to put smiles on many of your users' faces, the first thing you'll do is flash-enable the stuff that they do every day.

Your Life, Their Product

So whilst the UK was recovering from over-indulging in chocolate eggs, across the Atlantic Facebook were splashing out $1 billion on Instagram. And still the world continued to spin and orbit the Sun. So what does this mean for us all? There will be a lot of soul-searching and discussion, but ultimately this just continues the productisation of your life and your experience.

I watched Google's Project Glass video prior to the Facebook announcement and was thinking that if Google were to buy someone like Instagram, the anonymity of the crowd would be gone; the glasses could identify the person you were looking at immediately. Of course, Facebook could do the same thing and create their own Project Social Glass. You would no longer be able to sit in a coffee shop quietly unrecognised; you would be instantly identifiable. Would you be entirely comfortable with that? I know I wouldn't be.

There are times in our lives when we just want to be alone and not identified; to remove that opportunity, and to have the constant feeling that we are being watched, will change our natures. In allowing our lives to be productised, we may lose something which is essential to our well-being; Facebook is arguably already removing the right to make mistakes and the ability to forget.

Could it remove the right to be anonymous? Are we heading towards the perfect storm which shatters our illusions of privacy? For even if it is an illusion, it is an important one.

We have to be very careful as to where this road takes us.

Of course, Facebook could have just spent $1 billion on an app to make crap photos look like they were taken 40 years ago.

Do you need a desktop?

Work provides me with a laptop which spends most of its time locked to my desk. It's quite a nice business laptop, but really I can't be bothered to carry it around. On occasion, when I'm working from home and realise that I am going to need access to some of the corporate applications which require VPN access, it'll come home with me; but mostly not.

To be quite honest, even my MBA doesn't travel that much; up and down the stairs is about as far as it goes. It is quite the nicest and most practical laptop that I've ever owned, but I think we are getting close to the stage where a tablet can do almost everything that I need wherever I am.

I was thinking as I worked today about whether what I was doing required the traditional desktop experience, or whether I could simply use my iPad as the access device instead. The answer is mostly yes: almost all the applications that I use are generic enough that there are good enough replacements on the iPad, or they are accessed through a web interface anyway.

There are a few blockers at present, tho':

1) I can't get my iPad onto the corporate wireless; this means that I can't access a number of key applications due to 'security' restrictions, though I can access email, which appears to be our preferred file delivery/transfer mechanism.

2) I need a real keyboard to type on; there is a limit to how much I am prepared to type on an on-screen keyboard. I could overcome this relatively easily by bringing in a Bluetooth keyboard.

3) Wired Ethernet is a necessity when working in some of our data centres or secure areas.

4) Unfortunately, I struggle without PowerPoint and Visio; I can cope without Word, and while Excel is a little more problematic, it's manageable. Keynote is nice, but it makes a real mess of rendering PowerPoint in my experience.

5) Working on an external display is often a much nicer experience than using the tablet screen, even tho' the Retina display is wonderful. I have both the HDMI and VGA dongles, which gets round this, but I wish that Apple could find a way to put a Mini DisplayPort on the iPad, as using the adapters means that I lose any chance of using a USB device. Not important most of the time, but very useful for transferring files from cameras and other devices.

But then I started thinking some more: perhaps I don't really need a tablet for work either. Perhaps a smartphone which I dock would do? What we could do with is a standard dock for all mobile devices which charges the device, drives an external screen and allows input from a standard keyboard and mouse.

Planes, trains, hotels and the like could simply provide a dock and you would end up carrying even less. At that point a device the size of a Samsung Note or Kindle Fire becomes a very interesting proposition.

And yet, I still expect to keep my PC desktop for some time….why? It’s still the best serious gaming platform out there. But for almost everything else I could probably manage with a mobile device.

N

I was relaxing in the bath pondering the bubbles; clusters of bubbles are quite interesting: you pop one and the structure re-organises to compensate, you add another in and everything shifts about to make room. It got me thinking about Cloud and Cloud architectures. Now, being an infrastructure-type person, I tend to focus on infrastructure and how you make an infrastructure as robust as possible.

We tend to design to an 'N+' model, more often 'N+1', but sometimes 'N+n', where 'n' reflects how important we think an environment is. This sort of model suits the applications and infrastructure we find in the traditional data-centre; it certainly suits applications which are not especially aware of their surroundings and their resources. These applications generally have no situational awareness and don't really care. If all of your applications are like this, you will probably be looking at Infrastructure as a Service at best: you want reliable hardware to support your dumb, unreliable applications.

Now this brings me to 'N'; I think one of the key characteristics of a Cloud application is that you design it to run on 'N' nodes, where 'N' is subject to change and that change is often going to be negative. In fact, you probably ought to design and code for 'N-1' or even 'N-n': your infrastructure will change and fail more often than it does in a traditional data-centre, and you cannot rely on anything. This means that your applications need to be a lot more sophisticated in dealing with concepts such as state, and they also need ways of discovering services and resources and of scaling both up and down. Your applications need to be reliable and intelligent; they need to be like the bubbles in a bath.
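
To make that concrete with a trivial sketch (purely illustrative; the hostnames and the health endpoint are made up, and in a real system the node list would come from service discovery rather than being hard-coded):

#!/bin/bash
# The 'N-1' mindset: don't assume a fixed partner node exists, walk whatever pool is alive right now.
NODES="app-node1 app-node2 app-node3"

for node in $NODES; do
    # A node that has gone away is a normal event, not an outage; just move on to the next one.
    if curl -sf --max-time 2 "http://${node}:8080/health" > /dev/null; then
        echo "Using ${node}"
        exit 0
    fi
done

echo "No healthy nodes found; degrade gracefully rather than fall over" >&2
exit 1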

By the way, this does not negate the need for infrastructure people; you may need fewer of them, but they are going to be working at a different level. They need to be thinking about environments and not individual machines: architecting availability zones, scalable networks, storage and so on, rather than providing an individual service to support a specific application.

Cache Splash

It's funny: I had a brief discussion about my blog with my IT director at work today. I wasn't aware that he was aware that I blogged, but it seems a couple of people outside work had outed me, in what appear to be very complimentary terms. He was pretty relaxed about the blog, and one of his comments was that he assumed I discussed new products; I said I did.

But on the way home I thought about it and, to be quite frank, I used to talk a lot about new products but I don't really do so these days. So it is ironic that today I'm going to knock out a quick post about EMC's VFCache announcement; they don't need the publicity, but I'm going to talk about it anyway.

VFCache is very much a version 1.0 product from what I can see; EMC appear to have set their bar quite low in what they are trying to achieve with this release. They've targeted Fusion-io pretty much directly and decided to go after them from the get-go: trash them early and don't let another NetApp happen.

Go for engineering simplicity and don't fill the product full of features… yet! Keeping it simple means that EMC can accelerate any array, not just an EMC array, but in the future, when new features come along, many of them may well only be available with an EMC back-end array. You've bought your flash card; if you really want value… you need to partner it with an EMC array.

And in fact, to really leverage any server-side flash product, you probably do need array awareness, to ensure that you don't do economically silly things like storing multiple copies of the same information in different caches; how many times do you want to cache the same data?

You need an efficient way of telling the array, 'Oi, I've cached this, you don't need to'; this would allow you to utilise the array cache for workloads which might not easily support server-side caching currently. Perhaps at some point we'll see a standard, but standards are rarely fast-moving in storage.

I also expect to see EMC build in some intelligence to leverage the split-card capability; perhaps using PowerPath to flag that you might want to consider using it to gain performance?

I'd also be interested in seeing modelling tools which allow you to identify the servers and workloads that would most benefit from VFCache, and what the impact would be on the other workloads in the data-centre. If you accelerate one workload with VFCache and hence free up cache on the shared array, do all workloads benefit? Can I target the deployment at key servers?

Deduplication is coming, but it must not come at the expense of latency.

And of course there is the whole cluster-awareness and cache-consistency thing to sort out; perhaps this whole approach is a cul-de-sac while we move to all-flash shared storage arrays… at least until the next super-fast technology comes along.

Yes, EMC's announcement is very much a 1.0 product and a bit 'ho-hum', but the future is more interesting. Storage, snorage? Sometimes, but its impact occasionally wakes you up with a bit of a shudder.

I wonder who is going to announce next or what the next announcement might be. 2012 might be a bit more interesting.