
Wish I was there?

It’s an unusual month that sees me travel to conferences twice, but October was that month; still, there is a part of me that wishes I was on the road again and off to the OpenStack Summit in Paris. At the moment, it seems that OpenStack has the real momentum and it would have been interesting to compare and contrast it with VMworld.

There does seem to be a huge overlap in the vendors attending and even in the people, but OpenStack feels like the more vibrant community at the moment. And as the OpenStack services continue to extend and expand, it seems to be a community that is only going to grow and embrace all aspects of infrastructure.

But I have a worry in that some people are looking at OpenStack as a cheaper alternative to VMware; it’s not, it’s a long way off that and hopefully it’ll never be that…OpenStack needs to be looked at as a different way of deploying infrastructure and applications, not as a way to virtualise your legacy applications. I am sure that we will at some point get case-studies where someone has virtualised their Exchange infrastructure on it but for every success in virtualising legacy, there are going to be countless failures.

If you want an alternative to VMware for your legacy, Hyper-V is probably it and it may be cheaper in the short term. Hyper-V is still woefully ignored by many; lacking in cool and credibility but it is certainly worth looking at as an alternative; Microsoft have done a good job and, whisper it, I hear good things from people I trust about Azure.

Still, OpenStack has that Linux-type vibe and with every Tom, Dick and Harriet offering their own distribution, it feels very familiar…I wonder which distribution is going to be the Red Hat and which is going to be the Yggdrasil.


Fujitsu Storage – With Tentacles…

So Fujitsu have announced the ETERNUS CD10000; their latest storage product, designed to meet the demands of hyperscale and the explosion in data growth, and it’s based on…Ceph.

It seems that Ceph is quickly becoming the go-to scale-out and unified storage system for those companies who don’t already have an in-house file-system to work on. Red Hat’s acquisition of Inktank has steadied that ship with regards to commercial support.

And it is hard to see why anyone would go to Fujitsu for a Ceph cluster; especially considering some of the caveats that Fujitsu put on its deployment. The CD10000 will scale to 224 nodes; that’s a lot of server to put on the floor just to support storage workloads, and yet Fujitsu were very wary about allowing you to run other workloads on the storage nodes despite the fact that the core operating system is CentOS.

CephFS is an option with the CD10000 but the Ceph website explicitly says that this is not ready for production workloads; even with the latest release, v0.87 ‘Giant’. Yes, you read that right; Ceph has not yet had a v1.0 release and that in itself will scare off a number of potential clients.

It’s a brave decision of Fujitsu to base a major new product on Ceph; these are still very early days for Ceph in the production mainstream. But with large chunks of the IT industry betting on OpenStack, and Ceph’s close (but not core) relationship with OpenStack, it’s kind of understandable.

Personally, I think it’s a bit early and the caveats around the ETERNUS CD10000’s deployment are currently limiting; I’d wait for the next release or so before deploying.

Done

Could VMAX3 possibly be the last incarnation of the Symmetrix that ships?

As an Enterprise Array, it feels done; there is little left to do. Arguably this has been the case for some time, but the missing features for VMAX had always been ease of use and simplicity. The little foibles such as the Rule of 17, Hypers, Metas, BCVs vs Clones all added to the mystique/complexity and led many storage admins to believe that we were some kind of special priesthood.

The latest version of VMAX and the rebranding of Enginuity as HyperMax removes much of this and it finally feels like a modern array…as easy to configure and run as any array from their competitors.

And with this ease of use, it feels like the VMAX is done as an Enterprise Array…there is little more to add. As a block array, it is feature complete.

The new NAS functionality will need building upon but apart from this…it’s done.

So this leaves EMC with VNX and VMAX; two products that are very close in features and functionality; one that is cheap and one that is still expensive. So VMAX’s only key differentiator is its cost…the Stella Artois of the storage world, reassuringly expensive.

I can’t help but feel that VNX should have a relatively short future but perhaps EMC will continue to gouge the market with the eye-watering costs that VMAX still attracts. A few years ago, I thought the Clariion team might win out over the Symm team; now I tend to believe that eventually the Symm will win out.

But as it stands, VMAX3 is the best enterprise array that EMC have shipped; arguably, it should also be the last enterprise array that they ship. The next VMAX version should just be software running on either your own hardware or perhaps a common commodity platform that EMC ship with the option of running the storage personality of your choice. And at that point, it will become increasingly hard to justify the extra costs that the ‘Enterprise’ array attracts.

This model is radically different to the way they sell today…so moving them into a group with the BURA (backup, recovery and archive) folks makes sense; those folks are used to selling software and understand that it is a different model…well, some of them do.

EMC continue to re-shape themselves and are desperately trying to change their image; I can see a lot of pain for them over the next few years, especially as they move out of the Tucci era.

Could they fail?

Absolutely, but we live in a world where it is conceivable that any one of the big IT vendors could fail in the next five years. I don’t think I can remember a time when they all looked so vulnerable; as their traditional products move to a state of ‘doneness’, they are all thrashing around looking for the next thing.

And hopefully they won’t get away with simply rebranding the old as new…but they will continue to try.


Scrapheap Challenge

On the way to ‘Powering the Cloud’ with Greg Ferro and Chris Evans, we got to discussing Greg’s book White Box Networking and whether there could be a whole series of books discussing White Box storage, virtualisation, servers etc and how to build a complete White Box environment.

This led me to thinking about how you would build an entire environment and how cheap it would be if you simply used eBay as your supplier/reseller. If you start looking round eBay, it is crazy how far you can make your money go; dual-processor HP G7s with 24GB for less than £1,000; a 40-port 10GbE switch for £1,500; 10GbE cards down to £60. Throw in a Supermicro 36-drive storage chassis to build a hefty storage device, and you have a substantial environment for less than £10,000 without even trying.
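As a back-of-the-envelope sketch of that bill of materials (the prices are the eBay figures above; the quantities and the chassis price are my own assumptions, and drives are extra):

```python
# Junk-box costing sketch; prices in GBP from the eBay examples above,
# quantities and the chassis price are illustrative assumptions.
parts = {
    "HP G7 dual-processor server, 24GB": (1000, 6),  # compute nodes
    "40-port 10GbE switch":              (1500, 1),
    "10GbE card":                        (60, 8),    # two per server, plus spares
    "Supermicro 36-drive chassis":       (1200, 1),  # assumed price, no drives
}

total = 0
for name, (price, qty) in parts.items():
    print(f"{qty} x {name}: £{price * qty:,}")
    total += price * qty
print(f"Total: £{total:,}")  # £9,180; comfortably under £10,000
```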

I wonder how far you could go in building the necessary infrastructure for a start-up with very few compromises, and whether you could completely avoid going into the cloud at all? The thing that is still going to hurt is the external network connectivity to the rest of the world.

But instead of ‘White Box’…perhaps it’s time for junk-box infrastructure. I don’t think it’d be any worse than quite a few existing corporate infrastructures and would probably be more up-to-date than many.

What could you build?


Take Provisions

Provisioning and utilisation are perennial subjects; there still seems to be a massive problem in large Enterprises, which appear to have far too much storage for the amount of data that they store. There are many reasons for this and there is certainly fault on both the customer and vendor sides.

Firstly, as a customer; you should expect to be able to use all the capacity that you purchase and your internal pricing models should reflect this. Obviously, there is always overhead for data-protection and hot-spares but once this is taken into account, all that capacity should be useable. You need to define what useable capacity means to you.

There should not be a performance hit for utilising all useable capacity. If the vendor states that best practice is only to use 70% of useable capacity, that needs to be reflected in your TCO calculations to give you a true cost of storage.
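To make that concrete, a minimal sketch of the adjustment (the array price and capacity are illustrative assumptions; the 70% figure is the vendor best-practice cap above):

```python
# True cost per useable gigabyte once a vendor best-practice cap is applied.
# The price and capacity are illustrative assumptions, not real figures.
purchase_price = 100_000   # £ for the array
useable_gb = 200_000       # capacity left after data-protection and hot-spares
best_practice_cap = 0.70   # vendor best practice: only fill to 70%

headline_cost = purchase_price / useable_gb
true_cost = purchase_price / (useable_gb * best_practice_cap)

print(f"Headline cost: £{headline_cost:.2f}/GB")  # £0.50/GB
print(f"True cost:     £{true_cost:.2f}/GB")      # £0.71/GB, roughly 43% higher
```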

Secondly, as a customer; you need to ensure that your processes and operational procedures are based around provisioning the right amount of storage and not over-provisioning. Allocating storage for a rainy day is not a great idea; thin provisioning can help with this but it is not a silver bullet.

Thirdly, as a vendor; you need to be up front about the useable capacity; if the customer can only realistically use 70%, you need to factor this into your pricing models and show the customer exactly what they are getting for their money. Be open and honest. If you want to show a price per IOP, a price per gigabyte or some other measure, that is fine. If you want to show a price based on your assumed dedupe ratio, be prepared to put your money where your mouth is.
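On that last point, a hedged sketch of why an assumed dedupe ratio matters (all figures are my own illustrative assumptions): a price quoted per effective gigabyte only holds if the promised ratio is actually achieved.

```python
# Effective pricing under an assumed dedupe ratio; all figures are
# illustrative assumptions, not any vendor's actual pricing.
quoted_price_per_gb = 0.25   # £/GB 'effective', assuming 4:1 dedupe
assumed_ratio = 4.0
achieved_ratio = 2.0         # what your data actually dedupes at

# The real cost per stored gigabyte scales with the shortfall in the ratio.
real_price_per_gb = quoted_price_per_gb * (assumed_ratio / achieved_ratio)
print(f"Quoted: £{quoted_price_per_gb:.2f}/GB")  # £0.25/GB
print(f"Real:   £{real_price_per_gb:.2f}/GB")    # £0.50/GB, double the headline
```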

Fourthly, as a vendor; look at ways of changing how storage is allocated and presented. It is time for us to move away from LUNs and other such archaic notions; provisioning needs to be efficient and simple. And we also need the ability to deallocate as easily as we allocate; this has often been problematic. Obviously this is not just a storage problem; how many companies are spinning up VMs and then not clearing them down properly? But make it easier across the board.
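To illustrate the symmetry I mean, a hypothetical sketch (the interface and names are entirely my own invention, not any vendor’s API) where reclaiming capacity is as first-class as allocating it:

```python
# Hypothetical provisioning interface; deallocation mirrors allocation.
# Names and semantics are illustrative, not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    capacity_gb: int
    volumes: dict = field(default_factory=dict)

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name: str, size_gb: int) -> None:
        if size_gb > self.free_gb:
            raise ValueError("insufficient free capacity")
        self.volumes[name] = size_gb

    def deprovision(self, name: str) -> int:
        # Reclaim is a single call, mirroring provision.
        return self.volumes.pop(name)

pool = StoragePool(capacity_gb=10_000)
pool.provision("app-db", 500)
pool.deprovision("app-db")   # capacity returns to the pool immediately
```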

Fifthly, as vendors and users; we need to look at data-mobility. Too often, the reason that an array is under-utilised is that capacity has been reserved for application growth, because it is simply ‘too hard’ to move an application’s data once it is in place. This is also a reason why many customers are very wary about thin-provisioning; the rainy-day scenario again.

However, large arrays bring their own issues, from change management to refresh cycles. Smaller might be better for many but unless data can be easily moved, there is a tendency to buy large arrays and reserve capacity for growth.

A Ball of Destruction…

I’m not sure that EMC haven’t started an unwelcome trend; I had a road-map discussion with a vendor this week where they started to talk about upcoming changes to their architecture…my question ‘but surely that’s not just a disruptive upgrade but a destructive one?’ was met with an affirmative. Of course, like EMC, the upgrade would not be compulsory but it would probably be advisable.

The interesting thing with this one is that it was not a storage hardware platform but a software-defined storage product. And we tend to be a lot more tolerant of such disruptive and potentially destructive upgrades. Architecturally, as storage becomes software rather than software wrapped in hardware, this is going to be more common and we are going to have to design infrastructure platforms and applications to cope with it.

This almost inevitably means that we will need to purchase more hardware than previously, to allow us to build zones of availability so that upgrades to core systems can be carried out as non-disruptively as possible. And when we start to dig into the nitty-gritty, we may find that this starts to push costs and complexity up…whether these costs go up so much that the whole commodity storage argument starts to fall to pieces is still open to debate.

I think for some businesses it might well do; especially those who don’t really understand the cloud model and start to move traditional applications into the cloud without a great deal of thought and understanding.

Now this doesn’t let EMC off the hook at all but to be honest, EMC have a really ropey track-record on non-disruptive upgrades…more so than most realise. Major Enginuity upgrades have always come with a certain amount of disruption and my experience has not always been good; the levels of planning and certification required have kept many storage contractors gainfully employed. Clariion upgrades have also been scary in the past and even today, Isilon upgrades are nowhere near as clean as they would have you believe.

EMC could of course have got away with the recent debacle if they’d simply released a new hardware platform; everyone would have accepted that this was going to involve data-migration and moving data around.

Still, the scariest upgrade I ever had was an upgrade of an IBM Shark which failed half-way and left us with one node at one level of software and the other at a different level; and IBM scratching their heads. But recently, the smoothest upgrades have been on the V7000…so even elephants can learn to dance.

As storage vendors struggle with a number of issues, including the sun setting on traditional data-protection schemes such as RAID, I would expect the number of destructive and disruptive upgrades to increase…and the marketing spin around them from everyone to reach dizzying heights. As vendors manipulate the data we are storing in more and more complex and clever ways, the potential for disruptive and destructive upgrades is only going to grow.

Architectural mistakes are going to be made; wrong alleys will be followed…great vendors will admit to these and support their customers through the changes. This will be easier for those who are shipping software products wrapped in hardware; it is going to be much harder for the software-only vendors. If a feature is so complex that it seems like magic, you might not want to use it…I’m looking for something that is simple to manage, operate and explain.

An argument for Public Cloud? Maybe, as this takes the onus of arranging it away from you. Caveat emptor though; this may just mean that disruption is imposed upon you and if you’ve not designed your applications to cope with it…ho hum!


Heady Potential Eventually Means Catastrophe?

Amongst the storage cognoscenti today on Twitter, there’s been quite a discussion about EMC and HP possibly merging. Most people seem to be either negative or at best disbelieving that something like this would bring value or even happen.

But from a technology point of view, the whole thing might make a lot of sense. The storage folks like to point at overlap in the portfolios but I am not convinced that this really matters, and the overlap might not be as great as people think. Or at least, the overlap might well finally kill off the weaker products; I’ll let the reader decide which products deserve to die.

EMC are on a massive push to commoditise and move their technology onto a standard platform; software variants of all their storage platforms exist and just need infrastructure to run on. I’ve mentioned before that HP’s SL4500 range is an ideal platform for many of EMC’s software defined products.

But storage aside, the EMC Federation has a lot of value for HP; it is early days for Pivotal but I suspect Meg can see a lot of potential in it. She’ll see a bit of eBay in it; she’ll get the value of some of the stuff that they are trying to do. They are still very much a start-up, albeit a well-funded one.

VMware, I would expect to continue as it is, though a merger might throw up some questions about EVO:RAIL; HP have pointedly not produced an EVO:RAIL certified stack, despite being invited to. But to fold VMware into the main HP would be rash and would upset too many other vendors. But hey, with IBM pulling out of x86 servers and, honestly, who cares about Oracle’s x86 servers; HP might have a decent run at dominating the server marketplace before Lenovo takes a massive bite out of it.

And customers? I’m not sure that they’d be too uncomfortable with an HP/EMC merger; mergers are almost certainly on the agenda and there are less attractive ones on the table.

HP need software to help them build their software-defined data-centre; OpenStack will only take them so far today. EMC need a commodity partner to help them build a hardware platform that would be trusted. An HP/EMC stack would be solid and traditional but with the potential to grow into infrastructure that supports the 3rd platform as customers move that way.

And they both need a way of fending off Amazon and Google; this might be the way for them to do it.

I know I’ve been talking about this more like an HP take-over of EMC when it’d be closer to a true merger; this makes it harder…true mergers always are…but culturally, the companies are less dissimilar than most realise. They both need more rapid cultural change; perhaps a merger might force that on them.

Will it happen? I don’t know…would it be a disaster if it did? I don’t think so. It’d also be good for the industry; lots of hacked-off smart people would leave the new behemoth and build new companies or join some of the pretenders.

A shake up is needed…this might do it. Will the market like it? I’m not especially bothered…I don’t hold shares in either company. I just think it might make more sense than people realise. 


Singing the lowest note…

The problem with many discussions in IT is that they rapidly descend into something that looks and feels like a religious debate, whereas the reality is much more complex; the good IT specialist will develop their own syncretic religion and pinch the bits that work from everywhere.

One of the things that many of us working in Enterprise IT know is that our houses have many rooms and must house many differing belief systems; the one true way is not a reality. And any organisation more than fifteen years old has probably built up a fair number of incompatible dogmas.

For all the pronouncements of the clouderatti, we are simply not in a position to move wholesale to the Cloud in any of its many forms. We have applications that are simply not designed for scale-out; they are certainly not infrastructure-aware and none of them are built for failure. But we also have a developer community who want to push ahead; who use the language du jour and want to utilise cloud-like infrastructure, dev-ops and software-defined everything.

So what do we in the infrastructure teams do? Well, we are going to have to implement multiple infrastructure patterns to cater for the demands of all our communities. But we really don’t want to bespoke everything and we certainly don’t want to lock ourselves into anything.

Many of the hyper-converged plays lock us into one technology or another; hence we are starting to look at building our own rack-converged blocks to give us lowest common denominator infrastructure that can be managed with standard tools.

Vendors with unique features are sent packing; we want to know why you are better at the 90%. Features will not sell; if I can’t source a feature/function from more than one vendor, I probably will not do it. Vendors who do not play nice with other vendors, and vendors who insist on doing it all and make this their lock-in, are not where it’s at.

On top of this infrastructure; we will start to layer on the environment to support the applications. For some applications; this will be cloudy and fluffy. We will allow a lot more developer interaction with the infrastructure; it will feel a lot closer to dev-ops.

For others, where it looks like a more traditional approach is required; think of those environments that need a robustly designed SAN or traditional fail-over clustering; we’ll be a lot more prescriptive about what can be done.

But all of these will sit on a common, reusable infrastructure that will allow us to meet the demands of the business. This infrastructure can be quickly deployed but also quickly removed and moved away from; it will not require us to train our infrastructure teams in depth to take advantage of some unique feature.

Remember to partner well with us but also with your competitors; yes, it sometimes makes for an amusing conversation about how rubbish the other guy is but we’ll also have exactly that same conversation about you.

Don’t just pay lip-service to openness; be prepared to show us evidence.

ESXi Musings…

VMware need to open-source ESXi and move on; by open-sourcing ESXi, they could start to concentrate on becoming the dominant player in the future delivery of the 3rd platform.

If they continue with the current development model for ESXi, their interactions with the OpenStack community and others will always be treated with slight suspicion. And their defensive moves with regard to VIO (VMware Integrated OpenStack) to keep the faithful happy will not stop larger players abandoning them for more open technologies.

A full open-sourcing of ESXi could bring a new burst of innovation to the product; it would allow the integration of new storage modules, for example. Some will suggest that they just need to provide a pluggable architecture but that will inevitably leave people with the feeling that preferential access is given to core partners such as EMC.

The reality is that we are beginning to see more and more companies running multiple virtualisation technologies. If we throw containerisation into the mix, within the next five years we will see large companies running three or four virtualisation technologies to support a mix of use-cases, and the real headache of how we manage these will begin.

I know it is slightly insane to even be talking about having more virtualisation platforms than operating systems, but most large companies are already running at least two virtualisation platforms and many are probably at three (they just don’t realise it). This ignores those running local desktop virtualisation, by the way.

The battle for dominance is shifting up the stack as the lower layers become ‘good enough’…vendors will need to find new differentiators…


Death of the Salesman

Reflecting recently on the changes that I have seen in the Enterprise IT market, and more specifically the Enterprise storage market, I have come to the conclusion that over the past five years or so the changes have not been so much technological as in everything around the technology and its packaging.

There appears to be significantly less selling going on and a lot more marketing. This is not necessarily a good thing; there is more reliance than ever on PowerPoint and fancy marketing routines. More gimmickry than ever, more focus on the big launch and less on understanding what the customer needs.

More webinars and broadcasting of information and a lot less listening than ever from the vendors.

Yet this is hardly surprising; as the margins on Enterprise hardware slowly erode away and the commoditisation continues; it is a lot harder to justify the existence of the shiny suit.

And many sales teams are struggling with this shift; the sales managers setting targets have not yet adjusted to the new rhythms and to how quickly the market can shift.

But there is a requirement for salespeople who understand their customers and understand the market; salespeople who understand that no one solution fits all and that there is a difference between traditional IT and the new web-scale stuff.

However, if the large vendors continue to be very target-focussed; panicking over the next quarter’s figures, setting themselves and their staff unrealistic targets and not realising that the customer now has a lot of choice in how they buy technology and from whom, then they are going to fail.

Customers themselves are struggling with some of the new paradigms and the demands that their businesses are making of them. The answers are not to be found in another webinar or another mega-launch, but perhaps in conversation.

We used to say that ears and mouth should be used in proportion; this has never been more true, nor more ignored.