
Virtualisation

Dude – You’re Getting An EMC

Just a few thoughts on the Dell/EMC takeover/merger or whatever you want to call it. 

  1. In a world where IT companies have been busy splitting themselves up (think HP, Symantec, IBM divesting its server business), it seems a brave move to build a new IT behemoth.
  2. However, some of the restructuring already announced hints at a potential split in how Dell do business, with Dell Enterprise to be run out of Hopkinton, using EMC’s enterprise smarts in this space.
  3. Dell have struggled to build a genuine storage brand since the two companies went their separate ways; arguably, their acquisitions have under-performed.
  4. VMware is already under attack from various technologies. VMware under the control of a server-hardware vendor would have been a problem a decade ago, but it might be less so now that people have more choices for virtualising both heritage applications and cloud-scale ones. VMware absolutely now have to get their container strategy right.
  5. EMC can really get to grips with how they build their hyper-converged appliances, now with access to Dell’s supply chain.
  6. That EMC have been picked up by a hardware vendor just shows how hard it is to transition from a hardware company to a software company. 
  7. A spell in purdah seems necessary for any IT company trying to transition their business model. Meeting the demands of the market seems to really hamper innovation and change; EMC were so driven by the reporting cycle that it drove some very poor behaviours.
  8. All those EMC guys who transitioned away from using Dell laptops to various MacBooks…oh dear!
  9. I doubt this is yet a done deal and expect more twists and turns! But good luck to all my friends working at both companies! May it be better!


Wish I was there?

It’s an unusual month that sees me travel to conferences twice, and October was that month; but there is a part of me that wishes I was on the road again and off to the OpenStack Summit in Paris. At the moment, it seems that OpenStack has the real momentum and it would have been interesting to compare and contrast it with VMworld.

There does seem to be a huge overlap in the vendors attending, and even the people, but OpenStack feels like the more vibrant community at the moment. And as the OpenStack services continue to extend and expand, it seems a community that is only going to grow and embrace all aspects of infrastructure.

But I have a worry that some people are looking at OpenStack as a cheaper alternative to VMware; it’s not, it’s a long way off that, and hopefully it never will be. OpenStack needs to be looked at as a different way of deploying infrastructure and applications, not as a way to virtualise your legacy applications. I am sure that at some point we will get case-studies where someone has virtualised their Exchange infrastructure on it, but for every success in virtualising legacy, there are going to be countless failures.

If you want an alternative to VMware for your legacy, Hyper-V is probably it, and it may be cheaper in the short term. Hyper-V is still woefully ignored by many, lacking in cool and credibility, but it is certainly worth looking at as an alternative; Microsoft have done a good job and, whisper it, I hear good things from people I trust about Azure.

Still, OpenStack has that Linux-type vibe and, with every Tom, Dick and Harriet offering their own distribution, it feels very familiar…I wonder which distribution is going to be Red Hat and which is going to be Yggdrasil.


ESXi Musings…

VMware need to open-source ESXi and move on; by open-sourcing ESXi, they could start to concentrate on becoming the dominant player in the future delivery of the 3rd platform.

If they continue with the current development model for ESXi, their interactions with the OpenStack community and others will always be treated with slight suspicion. And their defensive moves around VIO (VMware Integrated OpenStack) to try to keep the faithful happy will not stop larger players abandoning them for more open technologies.

A full open-sourcing of ESXi could bring a new burst of innovation to the product; it would allow the integration of new storage modules, for example. Some will suggest that they just need to provide a pluggable architecture, but that will inevitably leave people with the feeling that preferential access is granted to core partners such as EMC.

The reality is that we are beginning to see more and more companies running multiple virtualisation technologies. Throw containerisation into the mix and, within the next five years, we will see large companies running three or four virtualisation technologies to support a mix of use-cases; then the real headache of how we manage these will begin.

I know it is slightly insane to even be talking about us having more virtualisation platforms than operating systems, but most large companies are running at least two virtualisation platforms and many are probably already at three (they just don’t realise it). And that ignores those running local desktop virtualisation, by the way.

The battle for dominance is shifting up the stack as the lower layers become ‘good enough’…vendors will need to find new differentiators…


Hype Converges?

In a software-defined data-centre, why are some of the hottest properties hardware platforms? Nutanix and SimpliVity are two examples that spring to mind; highly converged, sometimes described as hyper-converged, servers.

I think it demonstrates what a mess our data-centres have got into that products such as these have any kind of attraction. Have we built in processes so slow and inflexible that a hardware platform resembling nothing so much as a games-console for virtualisation holds an attraction?

Surely the value has to be in the software; so have we got so bad at building out data-centres that it makes sense to pay a premium for a hardware platform? And there is certainly a large premium for some of them.

Now I don’t doubt that deployment times are quicker, but my real concern is how we got into this situation. It seems that the whole infrastructure deployment model has collapsed under its own weight. But is the answer expensive converged hardware platforms?

Perhaps it is time to fix the deployment model and deploy differently, because I have a nasty feeling that many of those who are struggling to deploy their current infrastructure will also struggle to deploy these new hyper-converged servers in a timely manner.

It really doesn’t matter how quickly you can rack, stack and deploy your hypervisor if it takes you weeks to cable it to talk to the outside world, or to give it an IP address or even a name!

And then the questions will be asked…you couldn’t deploy the old infrastructure in a timely manner; you can’t deploy the new infrastructure in a timely manner even at a premium…so perhaps we will give public cloud a go.

Most of the problems in the data-centre at present are not technology; they are people and, mostly, process. And I don’t see any hardware platform fixing those quickly…

Viperidae – not that venomous?

There’s a lot of discussion about what ViPR is and what it isn’t; how much of this confusion is deliberate and how much is simply the normal fog of war which pervades the storage industry is debatable. Having had some more time to think about it, I have some more thoughts and questions.

Firstly, it is a messy announcement; there’s a hotch-potch of products here, utilising IP from acquisitions and from internal EMC initiatives. There’s also an attempt to build a new narrative which doesn’t quite work; perhaps it worked better in the context of an EMC World event, but not so much from the outside.

And quite simply, I don’t see anything breathtaking or awe-inspiring but perhaps I’m just hard to impress these days?

But I think there are some good ideas here.

ViPR as a tool to improve storage management and turn it into something automatable is a pretty good idea. But we’ve had the ability to script much of this for many years; the problem has always been that every vendor has a different way of doing it; syntax and tools differ and are often not even internally consistent.

Building pools of capability and service and calling it a virtual array…that’s a good idea but nothing special. If ViPR can have virtual arrays which federate and span multiple arrays; moving workloads around within the virtual array, maintaining consistency groups and the like across arrays from different vendors; now that’d be something special. But that would almost certainly put you into the data-path, and you’d end up building a more traditional storage virtualisation device.

Taking an approach where the management of the array is abstracted and presented in a consistent manner: this is not storage virtualisation; perhaps it is storage management virtualisation?
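If that sounds abstract, here is a minimal sketch of the pattern in Python; the two vendor dialects below are entirely made up, and the point is the translation, not the commands. The plug-ins EMC talk about are essentially the adapter half of this pattern.

```python
from abc import ABC, abstractmethod


class ArrayAdapter(ABC):
    """Translates one generic request into a vendor's own dialect."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        ...


class VendorA(ArrayAdapter):
    def create_volume(self, name: str, size_gb: int) -> str:
        # Made-up dialect; every vendor has its own syntax.
        return f"acmecli volume add --name {name} --size {size_gb}GB"


class VendorB(ArrayAdapter):
    def create_volume(self, name: str, size_gb: int) -> str:
        # Another made-up dialect; note even the units differ.
        return f"mkvol {name} {size_gb * 1024}m"


def provision(adapter: ArrayAdapter, name: str, size_gb: int) -> str:
    """The caller sees one interface, never the vendor dialect."""
    return adapter.create_volume(name, size_gb)


for adapter in (VendorA(), VendorB()):
    print(provision(adapter, "web-tier-01", 100))
```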

EMC have made a big deal about the API being open and that anyone will be able to implement plug-ins for it; any vendor should be able to produce a plug-in which will allow ViPR to ‘manage’ their array.

I really like the idea that this also presents a consistent API to the ‘user’, allowing the user not to care which storage vendor is at the other end; they just ask for disk from a particular pool and off it goes. This should be possible from an application, a web front-end or anything else which can talk to an API.
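As a sketch of the sort of interaction I have in mind (the endpoint, payload and token handling below are hypothetical, not EMC’s published ViPR API), provisioning could then be as dull as:

```python
import requests

# Entirely hypothetical endpoint and payload; the real ViPR API will
# differ. The point is that the caller names a virtual pool and a size,
# never a vendor or a physical array.
BROKER = "https://vipr.example.com"

resp = requests.post(
    f"{BROKER}/block/volumes",
    json={
        "name": "app01-data",
        "pool": "tier1-fast",  # virtual pool, not a physical array
        "size_gb": 200,
    },
    headers={"X-Auth-Token": "example-token"},  # auth placeholder
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```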

So ViPR becomes basically a translation layer.

Now, I wonder how EMC will react to someone producing their own clean-room implementation of the ViPR API? If someone does a Eucalyptus on them? Will they welcome it? Will they start messing around with the API? I am not talking about plug-ins here; I am talking about a ViPR-compatible service-broker.

On more practical matters, I am also interested in how ViPR will be licensed. A capacity-based model? A service-based model? Number of devices?

What I am not currently seeing is something which looks especially evil! People talk about lock-in? Okay, if you write a lot of ViPR-based automation and provisioning, you are going to be somewhat locked in, but I don’t see anything that stops your arrays working if you take ViPR out. As far as I can see, you could still administer your arrays in the normal fashion.

But that in itself could be a problem: how does ViPR keep itself up to date with the current state of a storage estate? What if your storage guys try to manage things both via ViPR and via the more traditional array management tools?

Do we again end up with the horrible situation where the actual state of an environment is not reflected in the centralised tool?
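As a toy illustration of that drift, with both inventory calls as hypothetical stand-ins for the respective APIs:

```python
# Sketch of the drift problem: compare what the central tool believes it
# manages against what the array actually reports. Both functions are
# invented stand-ins, not real ViPR or array calls.
def broker_inventory() -> set[str]:
    return {"app01-data", "app02-data"}          # what ViPR thinks exists

def array_inventory() -> set[str]:
    return {"app01-data", "app02-data", "tmp9"}  # what the array reports

drift = array_inventory() - broker_inventory()
if drift:
    print(f"volumes created out of band, unknown to the broker: {sorted(drift)}")
```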

I know EMC will not thank me for trying to categorise ViPR as just another storage management tool ‘headache’ and I am sure there is more to it. I’m sure that there will be someone along to brief me soon.

And I am pretty positive about what they are trying to do. I think the vitriol and FUD being thrown at it is out of all proportion but then again, so was the announcement.

Yes, I know I have ignored the Object-on-File or File-on-Object part of the announcement. I’ll get onto that in a later post.


Wellies!

I was watching the iPhone 5 announcement with a sinking feeling; I am at the stage where I am thinking about upgrading my phone and have been considering coming back to Apple, and I really wanted Apple to smash the ball over the pavilion and into the car-park (no baseball metaphors for me). But they didn’t; it’s a perfectly decent upgrade but nothing which has made my mind up for me.

I am now in the situation where I am considering another Android phone, an iPhone or even the Lumia 920, and there’s little to choose between them; I don’t especially want any of them, they’ll all do the job. I just want someone to do something new in the smartphone market, but perhaps there’s nothing new to do.

And so this brings me onto storage; we are in the same place with general-purpose corporate storage. You could choose EMC, NetApp, HDS, HP or even IBM for your general-purpose environment and it’d do the job. Even price-wise, once you have been through the interminable negotiations, there is little between them. TCO? You choose the model which supports your decision; you can make it look as good or as bad as you want. There’s not even a really disruptive entry to the market; yes, Nexenta are getting some traction, but there’s no big market swing.

I don’t get the feeling that there is a big desire for change in this space. The big boys are packaging their boring storage with servers and networking and trying to make it look interesting and revolutionary. It’s not.

And yet there are more storage start-ups than ever before, but they are all focused on some very specific niches, and we are seeing these niches becoming mainstream or gaining mainstream attention.

SSD and flash-accelerated devices aimed at the virtualisation market: there’s a proliferation of these appearing from players large and small. These are generally aimed at VMware environments; once I see them appearing for Hyper-V and other rivals, then I’ll believe that VMware is really being challenged in the virtualisation space.

Scalable bulk storage, be it Object or traditional file protocols: we see more and more players in this space. And there’s no real feeling of a winner or a dominant player; this is especially true in the Object space, where the lack of a standard, or even the perceived lack of one, is hampering adoption by many who would really be the logical customers.

And then there is the real growth, where the exciting stuff is happening: the likes of Dropbox, Evernote and others. It is all about the application and the API access. This is kind of odd; people seem willing to build applications, services and apps around these proprietary protocols in a way that they feel unwilling to do with the Object Storage vendors. Selling an infrastructure product is hard; selling an infrastructure product masquerading as a useful app…maybe that is the way to go.

It is funny that some of the most significant changes in the way we will do infrastructure and related services in the future are being driven from completely non-traditional spaces…but this kind of brings me back round to mobile phones: Nokia didn’t start as a mobile company and, who knows, perhaps it’ll go back to making rubber boots again.

New Lab Box: HP ML110 G7

I keep meaning to do a blog post on the Bod home-office, something like the Lifehacker Workspace posts, but I never quite get round to it. Still, suffice to say, my home workspace is pretty nice; it’s kind of the room I wanted when I was thirteen, but cooler! The heart of the workspace, though, is the tech: I have tech for gaming, working, chilling and generally geeking; desktops of every flavour and a few servers for good measure.

Recently, as you will know, I have been playing with Razor and Puppet, and I found that my little HP MicroServer was struggling a bit with the load I was putting on it, so I started to think about getting something with a bit more oomph. I had decided I was going to put something together based on Intel’s Xeon technology and began to put together a shopping list.

Building PCs is something I kind of enjoy, but then, as luck would have it, an email dropped into my mailbox from Servers Plus in the UK offering £150 cashback on the HP ProLiant ML110 G7 tower server with an E3-1220 quad-core processor; this brought the price down to £240 including VAT. And I was sold…no PC building for me this time.

As well as the aforementioned E3-1220, the G7 comes equipped with a 2GB ECC UDIMM, two Gigabit Ethernet ports, iLO3 sharing the first Ethernet port, a 250GB hard disk, a 350W power supply and generally great build quality (although I reckon I could do a better job with the cable routing).

The motherboard can support up to six SATA devices and there are four non-hotswap caddies for no-screw hard-disk installation, one of which holds the 250GB hard disk. Installing additional drives was a doddle and involved no cursing or hunting for SATA cables. I did not bother to install an optical drive as I intended to network-boot and install from my Razor server.

Maximum supported memory is an odd 16GB; the chipset definitely supports 32GB, but there are very mixed reports of running an ML110 G7 with 32GB. I just purchased a couple of generic 4GB ECC DIMMs for about £50 to bring it up to 10GB for the time being. I’d be interested to hear if anyone has got an ML110 G7 running successfully with 32GB; there’s no technical reason for HP to limit the capability and it does seem strange. The DIMM slots are easily accessible and no contortions are required to install the additional memory.

There are four PCIe slots available: one x16, two x4 and one x1. This should be ample for most home servers, as the box already comes with two onboard Ethernet ports.

After installing the additional memory and hard disks, I powered the box up, let it register with my Razor server, added a policy to install ESXi on it and let it go.
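For those who haven’t played with Razor, the flow is roughly: the node PXE-boots a microkernel, facts are gathered, and tags built from those facts bind the node to a policy. Here is a toy illustration of that matching logic; this is my sketch of the idea, not Razor’s actual code, and the facts and rules are invented:

```python
# Toy illustration of Razor-style tag/policy matching; the fact names
# and policies below are invented for the example.
node_facts = {"vendor": "HP", "productname": "ProLiant ML110 G7",
              "memorysize_gb": 10}

policies = [
    # (policy name, predicate over the node's facts, what to install)
    ("esxi_lab", lambda f: "ML110" in f["productname"], "install ESXi"),
    ("ubuntu_default", lambda f: True, "install Ubuntu"),
]

# First matching policy wins, much as Razor walks its policy table.
for name, matches, action in policies:
    if matches(node_facts):
        print(f"policy {name!r} bound: {action}")
        break
```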

A quick note about the iLO3: it carries the basic licence, which allows you to power up and power off and do some basic health checking and monitoring, but gives no remote console. This is not a huge problem for me as the server is in the same room and I can easily put a monitor on it if required.

The ML110 is pretty damn quiet considering the number of fans in it, but start putting it under load and you will know it’s there; still, it’s no noisier than my desktop when I am gaming and all the fans are spinning. It is certainly noisier than the MicroServer though.

Once ESXi was installed, bringing up the vSphere client let me see that all the components were recognised as expected; all the temperature monitors and fans were also visible. Power management is available and can be set to Low Power if you want that for your home lab.

So I would say that if you want a home lab box with a little more oomph than the HP MicroServer, the ML110 G7, especially with the £150 cashback, takes some beating. If it could be upgraded all the way to 32GB, it would be awesome.


Fashionably Late

Like Royalty, IBM have turned up late to what is arguably their own party with their PureSystems launch today. IBM, the company which invented converged systems in the form of the mainframe, have finally got round to launching their own infrastructure stack product. But have they turned up too late and is everyone already tucking into the buffet and ignoring the late-comer?

For all the bluster and talk about the ability to have Power and x86 in the same frame, and dare I whisper mainframe, this is really an answer to Vblock, FlexPod, Matrix et al. IBM can wrap it and clothe it, but this is a stack, and if pushed they will admit it.

But when I first had the pitch a few months ago, I must admit that, despite the ‘so what’ reaction, I was impressed with what appears to be a lot of thought and detail from an infrastructure engineering point of view. It looks pretty good as slide-ware.

Still, the question is…is it any better than the competition? Well, even if you treat it as a pure x86 infrastructure ‘stack in a rack’, it certainly appears to be more flexible than some of the competitors; you have choices as to which hypervisor it’ll support, for starters. It also appears more polished and less bodged together from a hardware point of view.

But at the end of the day, it is what it is, and what is going to be really important is whether it can really deliver the management efficiencies and improve IT’s effectiveness. And that, as with all its competitors, is still a question without a solid answer.

As a product, it looks at least as good as the rest…as an answer? The workings are still being worked upon.

Solutions and Specialists…

Solutions are great: a vendor turns up and sells a turn-key solution, it’s a marvellous world and everything just works; it’s all certified and lovely. There is a single supplier to kick when trouble-shooting a problem. Or at least that’s the theory…

But what happens when the supplier can’t fix the problem? Who do you turn to then? Funnily enough, that’s the situation I’m in today. A turn-key media solution which we’ve been kept at arm’s length from for years has developed issues, and now who does the customer turn to when they’re not getting good answers from the solutions vendor? That’s right, our little team of specialists…

A couple of hours of investigation, and a meeting with the vendor where we exposed what we see as issues by treating the solution as just another piece of infrastructure, have been enlightening to both our internal customer and the solutions vendor. There will be no arm’s-length relationship going forward; solutions still need specialists.

You need specialists to ensure that the wool is not being pulled over your eyes; you need specialists to ask the right questions and to know when the answers are not good enough. Too often you will find that the solutions vendor themselves has little clue about the underlying infrastructure, focusing on the application whilst using the hardware as a nice little revenue lift. This is fine until you hit problems.

If you are buying integrated solutions and stacks, make sure that they really are integrated and that the solutions provider can actually support the stack. Don’t be afraid to dig into what is being provided as part of the stack/solution, and keep some specialists around.


Two Fat Ladies

We have just seen the release of VMware Workstation 8 and the Developer Preview of Windows 8. Both promise significant changes and improvements in usability.

As a fan of both of them at version 7, I was interested to see how the new versions stack up.

VMware Workstation 8

I have used Workstation since its public beta releases and have upgraded to every version since, paying my own cash; it is fair to say that I like this product. For me, it is VMware’s defining product and it is what made VMware cool.

Workstation does have competitors now: VirtualBox is an excellent product and is free, and even VMware’s own Player has gone beyond being a mere player into a usable desktop virtualisation tool that is probably sufficient for most people.

So why upgrade to Workstation 8? Well, I suspect that in my case it has become habit, but 8 does have some significant improvements.

Firstly, it can take advantage of the power of the new chipsets around: virtual machines can have up to 64GB of RAM and up to eight virtual cores. It can also run 64-bit nested virtual machines, for those of you who are simulating virtualised data-centres on your desktop. And there is a new UI, which is prettier, more usable and feels a bit snappier.
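On the nested virtualisation front, the setting most commonly cited for Workstation 8 is vhv.enable in the VM’s .vmx file. Here is a small sketch that patches it in along with the memory and vCPU keys; treat the exact keys as assumptions, check them against your own install and back the .vmx up first:

```python
from pathlib import Path

# Keys commonly cited for Workstation 8; treat them as assumptions and
# back the .vmx file up before editing it.
SETTINGS = {
    "memsize": "8192",     # guest RAM in MB (WS8 supports up to 64GB)
    "numvcpus": "8",       # up to 8 virtual cores in WS8
    "vhv.enable": "TRUE",  # expose VT-x to the guest for nested 64-bit VMs
}

vmx = Path("lab-esxi.vmx")  # hypothetical VM configuration file
kept = [line for line in vmx.read_text().splitlines()
        if line.split("=", 1)[0].strip() not in SETTINGS]
kept += [f'{key} = "{value}"' for key, value in SETTINGS.items()]
vmx.write_text("\n".join(kept) + "\n")
```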

But the most significant improvement is the way that Workstation now integrates with vSphere and vCenter. From Workstation, you can connect to other VMware ‘servers’ and work with both local and server-based VMs from within Workstation. You can configure and install VMs from the Workstation console; for those of you with home labs, this is really nice.

Workstation can also work as a server, and you can share your VMs with other Workstation users; another Workstation user can talk to your VMs and utilise them. Quicker and easier than copying them around the network. And yes, there are security controls that you can put in place.

You can copy a VM from your workstation to a vSphere server and have it instantiated there. Note that this is not a live migration and it is also one-way; you cannot drag a VM back to Workstation to work on it.

All in all, it’s a pretty solid release and an improvement on what has gone before.

Windows 8 Developer Preview

Microsoft have made available a preview release of Windows 8; in theory aimed at developers, it is available to anyone. It comes with the new touch-focused interface and a raft of apps for you to try. It also, obviously, comes with developer tools.

The new interface will be familiar to anyone who has seen or used a Windows Phone 7 device; it is based on the Metro touch interface, with live tiles which reflect the state of running applications.

The more familiar desktop is accessed as an app, or comes up when you open an application which uses the Windows desktop. A familiar desktop with recycle bin, task bar and ‘Start’ button appears. But beware: the ‘Start’ button reopens the Metro-orientated Start screen; the Start menu has gone.

I didn’t really get any further than a quick play, and I can’t say that I especially like the changes. I am not convinced that an interface designed for touch actually works especially well with a mouse and keyboard. I found the lack of a Start menu drove me mad; I don’t want to be flung back to a full-screen menu when I’m firing up a new app or bringing up Control Panel, etc.

But I’m sure I’ll get used to it and start to find my way around. I don’t like this release as much as I liked Windows 7, but it’s a new interface and a brave reboot by Microsoft.

Two Fat Ladies?

And to tie things all together, I installed the Windows 8 Developer Preview into Workstation 8; there is no Windows 8 guest option yet, but I believe there was one in the Workstation 8 beta, so it cannot be far off. The Windows 7 options work fine as long as you do a manual install; if you let Workstation try to do the install for you, it gets itself into an endless reboot loop.

So there we go; Two fat ladies, Wobbly wobbly, All the eights…88.

It was too good a title not to use, so sorry for any offence.