Storagebod


What Next For HPE Storage?

So I’m sitting here in my hotel room before day 2 of HPE Discover* thinking about some of the discussions that have happened over the previous days and evenings. It seems that even vendors are now coming round to the idea that Enterprise Storage is pretty much dead, or at least dead in its current form.

What do we mean by dead?

Well, we don’t mean that it is going away anytime soon; like the mainframe, it’ll continue to haunt the data-centres of the future. But unlike the coming zombie apocalypse, this zombie will not take over the world.

However, there is little to no growth opportunity for the traditional Enterprise Storage Array; year on year, we probably won’t see a decline in the amount of storage shipped in this form, but its proportion of the total shipped storage will decline massively.

In fact, the vendors have only themselves to blame; modern Enterprise arrays are that much more efficient. Thin provisioning, compression and data-reduction technologies such as deduplication are having such an impact that, to maintain revenues, the vendors are having to ship twice as much storage.

It’s a great time to be a customer of Enterprise Storage; the price will continue to fall and it’s becoming so simple that swapping one vendor for another is no longer a massive deal. Our continued push for simplified interfaces and click-driven provisioning is beginning to drive procurement behaviours that mean that it is no longer a massive RFx process to change vendors; pence per gigabyte or IOP is the only measure of importance.

And it’s a scary time for many vendors who don’t have a story to tell about what comes next and how they will mitigate this change in the market. Enterprise Storage has been a cash-cow, but the massive margins and annual increases in revenues are now pretty much in decline.

So, HPE: what comes next in your world? Wandering round Discover, I can see an awful lot of servers filled with disks, but I don’t see the next storage solution from you guys. Maybe I’ll see it today?

(disclosure: HP have paid for my accommodation and entrance to the event but I’m under no obligation to write anything)

Punish the Pundits!!

A day rarely goes by without someone declaring one technology or another is dead…and rarely a year goes by without someone declaring this is the year of whatever product they happen to be pimping or in favour of.

And yet, you can oft find dead technologies in rude health and rarely does it actually turn out to be the year of the product it is supposed to be the year of.

It turns out that pundits (including me) really have little idea what technology is going to die or fly. And that is what makes the industry fun and interesting.

The storage industry is especially good for this; SAN is dead, DAS lives, NAS is obsolete, Object is the future, iSCSI will never work, Scale Up, Scale Out…

We know nothing…

The only thing we do know is that data volumes will keep getting bigger and we need somewhere to put it all.

In the past three months, I’ve seen technologies in what everyone would have you believe are innovation-free zones that have made me stop and think ‘But I thought that was going to die….’

Yes we have far too many start-ups in some parts of the industry; far too many people have arrived at where they thought the puck was going to be.

A few people seem to be skating round where the puck was.

And there are a few people who have picked up the puck, stuck it in their pocket and hidden it.

So my prediction for the next eighteen months…

‘Bumpy….with the chance of sinkholes!’

My advice…

‘Don’t listen to the pundits, we know nothing….we just love the shinies!!’

Storage People Are Different

An oft-heard comment is that ‘Storage People are weird/odd/strange’; what people really mean is that ‘Storage People are different’; Chuck sums up many of the reasons for this in his blog ‘My Continuing Infatuation with Storage’.

Your Storage Team (and I include the BURA teams) often see themselves as the keepers of the kingdom, for without them and the services that they provide, your businesses would probably fail. They look after that which is most important to any business: its knowledge and its information. The problem is, they know it and most other people forget it; this has left many storage teams and managers with a reputation for being surly, difficult and weird. But if you were carrying the responsibility for your company’s key asset, you’d be a little stressed too; especially if no-one acknowledged it.

The problem is that for many years, companies have been hoarding corporate gold in dusty vaults, looked after by orcs and dragons who won’t let anyone pass or access it; but now people want to get at the gold and make use of it. So the storage team now has to worry not only about ensuring that the information is secure and maintained; people actually want to use it and want ad-hoc, almost on-demand access to it.

The problem is that the infrastructures we have in place today are not architected to allow this to happen, and the storage teams do not have the processes and procedures to allow it either. So today’s ‘Storage People may be different’, but tomorrow’s ‘Storage People will be a different different’. They will need to be a lot more business-focussed and more open; but the asset they’ve been maintaining is growing pretty much exponentially in size and value, so expect them to become even more stressed and maybe even more surly.

That is, unless you work closely with them to invest in and build a storage infrastructure which supports all your business aspirations; unless vendors invest in technologies which are manageable at scale; and unless businesses learn to appreciate value as opposed to sheer cost.

Open, accessible, available and secure; this is the future storage domain; let’s hope that the storage teams to support this also have these qualities.

Virtual Bubble

VMware is hot and VMware with storage seems to be really hot. Just look at the spate of announcements with regard to arrays which are specifically targeted at VMware; announcement after announcement in the past few weeks.

But are we looking at a bubble? We are certainly getting some bizarre announcements: iSCSI flash arrays which allegedly only support VMware. And is this targeting not a huge risk?

As an end-user, I would be loath to purchase something which was so locked into a specific infrastructure stack.

I am looking for devices which allow a certain amount of flexibility in deployment scenarios. And yes, I do have some storage which is specifically targeted for specialist workloads but I am not tied to a specialist workload platform. I can change the application which generates the workload.

Building storage arrays which only target VMware seems pretty much as dumb to me as building arrays which only support Windows. There may be a short term advantage but as a strategic play, I’m not convinced.


Cloud Storage without Cloud….

Another day, another new Cloud Storage Service; today I got an invite for AeroFS, which is a Cloud Storage Service with a difference: it doesn’t necessarily store your data in the Cloud unless you ask it to. What it does do is manage a number of folders (AeroFS calls them libraries) and allow you to sync them between your various machines using a peer-to-peer protocol.

You can share the folders with other people on the service, and you can also decide which of the folders get synced to each of your machines, which gives you a fairly coarse-grained sync. You also decide which of the folders get backed up to the Cloud, so it is possible to back up just those folders that are important.

There is client support for Windows, Mac and Linux at present.

Currently the service is an invite-only alpha and I’ve not had a huge amount of time to play with it, but it looks like a potentially interesting alternative to Dropbox; it will need mobile clients to truly compete, though. I do like the P2P aspects of the service, and I do like that I can sync pretty much unlimited data between the clients. It is certainly one to watch.

AeroFS is here.

Serial Killing

This is a rant, so apologies in advance!

Sometimes vendors make me despair, and I don’t know why I have never learnt my lesson, but still they do. We have a mysterious problem with some Cisco MDS switches; but as many of you probably know, you can’t easily buy MDS switches from Cisco, you generally go through another vendor. In this case it is IBM!

Anyway, one of my guys logs a call with IBM; first of all it gets bounced from the software call-centre to the hardware call-centre, which apparently necessitates a different PMR number. Why is beyond me, but it’s always been that way with IBM.

So he logs the call against what he thinks is the correct serial number, which is what the switch displays as its serial number. We wait many hours and hear nothing; he prods them a few times and still we get nothing. Eventually, we escalate the call to the district support manager to have a rant, and we find that there is a note on the call along the lines that the switches are not under maintenance. Funny that, considering yesterday I had actually approved this quarter’s maintenance payment. That, and they had actually managed to record the wrong contact number and misspell an email address.

But of course the switches are under maintenance; it’s just that IBM stick their own serial number on the back of the switch, and to find it out you need to have it recorded somewhere or go and have a look. Why, oh why, do vendors do this? Or at least, why don’t you have two fields in your maintenance database which tie the OEM serial number and your own together?
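The fix being asked for here really is that simple: a contract record keyed on both serial numbers, so a support call matches whichever one the customer quotes. A minimal sketch of the idea (the serial numbers, contract IDs and field names below are entirely hypothetical, not anything from IBM’s actual systems):

```python
# Hypothetical maintenance-contract records carrying BOTH serial numbers:
# the OEM serial (what the switch itself reports) and the reseller's own
# sticker serial. A call can then be matched against either one.
contracts = [
    {
        "oem_serial": "FOX1234ABCD",    # what 'show version' on the switch reports
        "vendor_serial": "IBM-78-XYZ",  # what the sticker on the back says
        "contract": "MAINT-2012-Q3",
    },
]

def find_contract(serial: str):
    """Return the contract ID matching either serial number, or None."""
    for entry in contracts:
        if serial in (entry["oem_serial"], entry["vendor_serial"]):
            return entry["contract"]
    return None

# Either serial finds the same contract -- no trip to the machine room needed.
print(find_contract("FOX1234ABCD"))  # MAINT-2012-Q3
print(find_contract("IBM-78-XYZ"))   # MAINT-2012-Q3
```

One extra column and one extra comparison; it is hard to see why an OEM’s call-centre system couldn’t do the same.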

And it’s not just Cisco kit with IBM; it’s their rebadged NetApp, rebadged LSI, rebadged DDN etc, etc. I don’t want my engineers to have to look up these sorts of details when they are trying to fix problems; I want them to be able to read the serial number directly off the machine. They are rarely in the secure computer rooms, and gaining access requires raising access requests etc.

[Of course, I would also have expected IBM to tell me at point of first contact that they didn’t recognise the serial number and they believed that there was no maintenance contract.]

I want to have a single serial number to work off; at present, I need two for a single piece of kit, and that is crap. And I do generally need the OEM serial number, because all of the software licensing is tied to that and not to your made-up number.

Of course, don’t get me started on IBM part-numbers!

BTW, we don’t have this problem with EMC; we can give them a Cisco serial number and they can cope with it!

Still, IBM are not the only vendor who does stupid things like this; but please can I suggest that a piece of kit should have a single serial number, and it should be the one that the piece of kit reports as its serial number.

And So(NAS) the Silliness Continues

I had a horrible feeling that it was all going to go this way this year; like boy racers proving their manliness, the vendors have decided to drag-race their various devices around the SPECsfs track.

EMC and IBM are loading up their systems with nitro and blasting down a straight track which has little to do with reality. NetApp stands on the side-lines, pointing out that someone is cheating. Is it IBM, is it EMC? Do we care?

Obviously the vendors do…. I do wonder if it’s not time to come up with a new benchmark; I’m thinking of something like a Top Gear challenge.

1) An arbitrary budget is set; let’s say £100,000

2) A number of challenges are set, such as:

  • how long does it take to rack, stack and configure?
  • how fast can you make it go?
  • how much data can you store now you’ve configured it for performance?
  • how quickly can a non-storage person add shares etc?

You know, realistic things?

3) And then we get a pair of cranes and play conkers by hanging the devices off them and smashing them into one another!

Merry Christmas and all that..

This might be the last post of the year, not sure yet but it is surely the last post before Christmas.

I would just like to thank all my readers, and especially those of you who take the time to comment, sharing your knowledge and opinions…

So all there is left to do is to wish you all a Very Merry Christmas and a Happy New Year…

See you all the other side of the Turkey!

Get Focused….

I was interested to see Chris Mellor's story that NetApp are pulling all their development resources off existing workloads and focusing on getting GX into ONTAP. I used to work for a guy whose maxim was 'get focused or get f****d', and in NetApp's case it looks like get focused or get cluster-f****d. I see lots of vendors touting clustered NAS solutions — HP, Exanet, IBRIX, ONStor, IBM and Isilon, to name just a few. All of them bring up NetApp and GX, all of them are very disparaging about GX, and probably with good reason; but I think NetApp have finally been poked with the stick enough times and are really waking up at last. They can't keep telling us that GX is great but the next release will be the one which really works; it needs to be the one which really works, it needs to offer all of ONTAP's functionality, and it needs to be with us in the next six months.

If NetApp get GX properly integrated into ONTAP, they are going to have a big job countering all the FUD out there, but at least they will have a working solution. I'm waiting to see EMC's play in the Clustered NAS space; I can see an acquisition, but then they're going to have to be careful not to do a NetApp and take 5+ years to integrate it; if they do, they can forget it, or just cede the high-end.

And then there is Microsoft's move into Cloud computing and what that means for the market; Chris is predicting doom and gloom with mass consolidation in the storage market. I think even without Microsoft's move into the cloud, we were going to see consolidation in the industry; there are just too many array vendors at the moment. I think he's actually massively under-estimating the number (and that includes the fact that LSI make arrays for a number of people, and Hitachi's storage is rebranded by HDS, HP and Sun).

The cloud could also spell trouble for Microsoft; it's a move into a business that I don't think they understand. Can you imagine an outage which takes out tens of thousands of businesses? Amazon's outage shows what can and will happen. Perhaps a couple of outages on this sort of scale will scare enough people and save the storage vendors.

Whatever happens, large-scale clustered storage solutions are going to be very important; how these scale-out storage solutions are built and delivered, and on whose platform, will be interesting. Maybe storage virtualisation, enabling anybody's spinning rust to be utilised, will be very transient because there just won't be that much choice? Maybe MAUI will be EMC's silver lining in the cloud? Maybe NetApp will get GX working just in time? And maybe someone will develop some management tools as opposed to administration tools.

That’s Magic

I was chatting to one of the XIV salesmen at Storage Expo and he threw a figure at me which has been bugging me for a few days; he was claiming that an XIV box can do 60,000+ IOPS.

Now I don't have any science to go on, but that seems awfully high for a box with 180 7.2k SATA drives in it; let's be really generous and say that a 7.2k SATA drive can sustain 100 IOPS, which gives me 18,000 IOPS that the disks can handle. XIV's cache algorithms must be fantastic, or the workloads they are testing must be the most cache-friendly in the world.
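The back-of-the-envelope maths above can be pushed one step further: if the disks top out at 18,000 IOPS, you can work out what fraction of the I/O the cache would have to absorb for the claimed figure to hold. A quick sketch (using the same generous assumptions as above — 180 drives at 100 IOPS each — which are mine, not IBM's published numbers):

```python
# Back-of-envelope check of the claimed XIV figure.
def disk_backend_iops(drives: int, iops_per_drive: int) -> int:
    """Raw IOPS the spindles alone can sustain."""
    return drives * iops_per_drive

def required_cache_hit_rate(claimed_iops: int, backend_iops: int) -> float:
    """Fraction of I/Os the cache must serve for the claim to hold."""
    return 1 - (backend_iops / claimed_iops)

backend = disk_backend_iops(180, 100)              # 180 x 100 = 18,000 IOPS
hit_rate = required_cache_hit_rate(60_000, backend)
print(backend)                # 18000
print(round(hit_rate, 2))     # 0.7 -- cache must serve ~70% of all I/Os
```

In other words, to hit 60,000+ IOPS on those spindles, roughly 70% of the I/O would never touch a disk — which is why the workload mix and read/write ratio behind the claim matter so much.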

I didn't get a chance to quiz him more than that, i.e. what type of workload, read/write mix etc, and what the latency goes to.

But I did get to play with the XIV GUI; if you get a chance, go and have a play! I think IBM got their money's worth for the GUI alone!! It's not as good as a VR network-management interface concept that I saw at BT's labs once, but it's still pretty damn nice!