
Fundamental…

I’m a big fan of Etherealmind and his blog; I like that it is a good mix of technical and professional advice. He’s also a good guy to spend an hour or so chatting to; he’s always generous with his time to peers, and even when he knows a lot more than you about a subject, you never really feel patronised or lectured to.

I particularly liked this blog; Greg and I are very much on the same page with regards to work/life balance, but it is this paragraph that stands out…


Why am I focussed on work life? After 25 or so years in technology, I have developed some level of mastery. Working on different products is usually just a few days work to come up to speed on the CLI or GUI. Takes a few more weeks to understand some of the subtle tricks. Say a month to be competent, maybe two months. The harder part is refreshing my knowledge on different technologies – for example, SSL, MPLS, Proxy, HTTP, IPsec, SSL VPN. I often need to refresh my knowledge since it fades from my brain or there is some advancement. IPsec is a good example where DMVPN is a solid advancement but takes a few weeks to update the knowledge to an operational level.

Now although he is talking about networking technologies, what he says is true of storage technologies and indeed pretty much all of IT these days. You should be able to become productive on most technologies in a matter of days provided you have the fundamentals; spend your early days becoming knowledgeable about the underlying principles and avoid vendor-specific traps.

Try not to run a translation layer in your mind; too many storage admins are translating back to the first array that they worked on. They try to turn hypers and metas into aggregates; they worry about fan-outs without understanding why you have to in some architectures and not necessarily in others.

Understanding the underlying principles means that you can evaluate new products that much quicker; you are not working out why product ‘A’ is better than product ‘B’, which often results in biases. You understand why product ‘A’ is a good fit for your requirement, and you also understand when neither product is a good fit.

Instead of ‘iSCSI bad, FC good’…you will develop an idea as to the appropriate use-case for each.

You will become more useful…and you will find that you are less resistant to change; change becomes less stressful and easier to manage. Don’t become an EMC dude, become a Storagebod…don’t become a Linux SysAdmin, become a SysAdmin.

Am I advocating generalism? To a certain extent, yes, but you can become an expert within a domain without being a savant for a specific technology.

And a final bit of advice; follow Etherealmind….he talks sense for a network guy!


A Two-Question RFP….

Is it easy?

Is it cheap?

These are pretty much the only two questions which interest me when talking to a vendor these days; after years of worrying about technology, it has all boiled down to those two. Of course, if I were to produce an RFx document with simply those two questions, I’d probably be out of a job fairly swiftly.

But those two questions are not really that simple to answer for many vendors.

Is it easy? How easily can I get your product to meet my requirements and business need? My business need may be to provide massive capacity; it could be to support many thousands of VMs; it could be to provide sub-millisecond latency. All of this needs to be simple.

It doesn’t matter if you provide me with the richest feature-set, the simplest GUI or backwards compatibility with the ENIAC if it is going to take a cast of thousands to achieve this. Yet still vendors struggle to answer the questions posed, and you often get a response to a question you didn’t ask but which the vendor wanted to answer.

Is it cheap? This question is even more complicated, as the vendor likes to try to hide all kinds of things, but I can tell you this: if you are not upfront with your costs and you start to present me with surprises, that is not good.

Of course, features like deduplication and compression mean that capacity costs are even more opaque, but we are beginning to head towards the idea that capacity is free and performance costs. Yet as capacity becomes cheaper, the real value of primary storage dedupe and compression for your non-active set, the data that sits on SATA and the like, begins to diminish.
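
To put rough numbers on that (the prices and the 3:1 reduction ratio below are invented for illustration, not vendor figures), the absolute saving that dedupe and compression deliver shrinks in line with the raw cost they offset:

```python
# Illustrative only: how the absolute saving from data reduction shrinks
# as raw capacity gets cheaper. All figures are invented.

def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per usable TB after data reduction (e.g. 3.0 means 3:1)."""
    return raw_cost_per_tb / reduction_ratio

for raw_cost in (1000.0, 500.0, 100.0):             # $/raw TB, invented
    plain = effective_cost_per_tb(raw_cost, 1.0)    # no data reduction
    reduced = effective_cost_per_tb(raw_cost, 3.0)  # assumed 3:1 reduction
    print(f"raw ${raw_cost:7.2f}/TB -> effective ${reduced:6.2f}/TB, "
          f"saving ${plain - reduced:6.2f}/TB")
```

The same 3:1 ratio saves you $667 per TB on expensive capacity but only $67 on cheap SATA; the feature matters most exactly where the data is most active and hardest to reduce.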

So just make it easy, just make it cheap and make my costs predictable.

Be honest, be up-front and answer the damn questions….

A Press Release From The Future…

Future-View, CA – March 2018

Evian Storage – storage so pure it’s like a torrent of glacial water – today announced the end of the All-Flash Array with the launch of its StupendoStore 20000, built around the HyperboleHype storage device.

Our research shows that All-Flash Arrays are slowing businesses down in their move to meet the new business paradigms brought about by computing at the quantum scale. Their architectures simply can’t keep up; storage is yet again the bottleneck, and scaling economically also seems to be beyond them. Customers have found themselves locked into an architecture which promised no more fork-lift upgrades but has delivered technology lock-in and all the agility of a dancing hippo. Forget about fork-lifts, we are talking cranes!

Fortunately, our team’s experience in delivering hybrid arrays at companies such as EMC, HDS and NetApp has enabled us to take advantage of the newest technology on the block while also leveraging the economies of flash and indeed the huge capacity and scale of magnetic disk. We know that your data should live in the right place, and although we admit that our arrays might not be as fast as the Purest arrays…I’m sure we’re not the only ones who prefer their rocket fuel with a little mixer…

Yes, this is a dig at the All-Flash players…but it doesn’t matter how great your technology is today; there will always be something newer and faster around the corner. And as a customer, it is worth remembering that the future is always closer than you think: it could be only a single depreciation cycle away, a single tech-refresh away. The challenge for all vendors is delivering a sustainable model and product-set.

And no one product will meet all your needs…no matter what the vendor tells you!

Chop Their Fingers Off!

This is a very good piece on FAST-VP on VMAX; it is well-written, with some good advice in it, but it sums up almost everything that is wrong with VMAX today. VMAX has too many nerd-knobs, and so people think they should fiddle and try to out-do the machine.

And hence they probably make a right old mess; FAST-VP ends up not working quite as well as it should, so people tend to fiddle even more, and the next thing you know, you are trying to manage your VMAX the way you would have managed an old-school Symm.

I think it is time that EMC and their users seriously considered breaking away from the past; the old-school nerd-knob fettling needs to stop. I know that is why storage admins get paid the big bucks, but I do wonder if we might be better off paying them to stop.

I long for the day when we see VMAX managed without worrying about what the internal engines are doing; when we set various performance parameters and let the array sort it out; when we pay for performance and capacity without worrying how the system delivers it.
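
Something like the following, a purely hypothetical provisioning interface sketched to illustrate the point; none of these names correspond to a real EMC/VMAX API:

```python
# Hypothetical sketch: the admin states the service level, the array owns
# the internals. Not a real EMC/VMAX API.
from dataclasses import dataclass

@dataclass
class ServiceLevel:
    name: str
    max_latency_ms: float  # what the business actually cares about
    min_iops: int

@dataclass
class VolumeRequest:
    capacity_gb: int
    service_level: ServiceLevel

GOLD = ServiceLevel("gold", max_latency_ms=2.0, min_iops=20_000)

def provision(req: VolumeRequest) -> str:
    # In this world, tier mix, engine placement and RAID scheme are the
    # array's problem; no hypers, no metas, no hand-tuned FAST-VP policies.
    return (f"{req.capacity_gb} GB at '{req.service_level.name}' "
            f"(<= {req.service_level.max_latency_ms} ms, "
            f">= {req.service_level.min_iops} IOPS)")

print("Provisioned:", provision(VolumeRequest(2048, GOLD)))
```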

There is at least one amusing piece of advice in the article though; although it is well-argued and there appears to be good reason for it, you should still keep the FC tier on RAID-1 mirrored disks…nothing really changes in the world of Symm!


VSANity?

So VSAN is finally here in a released form; on paper, it sure looks impressive but it’s not for me.

I spend an awful lot of time looking at scale-out storage systems, looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think that their product falls some way short of the mark; but then I don’t think that I’m really the target market, and it’s not really ready or appropriate for Media and Entertainment or anyone interested in hyperscale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some of the competing products are not; if I want to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I can. And there might be some excellent reasons to do so; I’d transcode on bare-metal machines, for example, but might present out via VM-ed application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting is better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now, there are plenty of good architectural reasons for doing so, but most of them are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regards to the inter-server communication.
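
The sums are crude but make the point; the per-spindle figures below are ball-park assumptions, not anything VMware, HP or SuperMicro publish:

```python
# Ball-park arithmetic only: 150 IOPS and 4 TB per 7.2k rpm spindle are
# assumed figures for illustration.
IOPS_PER_DISK = 150
TB_PER_DISK = 4

for name, disks in (("35-disk VSAN limit", 35),
                    ("HP ProLiant SL4540", 60),
                    ("72-bay SuperMicro", 72)):
    print(f"{name:20s}: {disks * IOPS_PER_DISK:6d} raw IOPS, "
          f"{disks * TB_PER_DISK:4d} TB raw")
```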

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but a software-converged one, whereas VCE and Nutanix converge onto hardware as well. And yes, VMware is currently the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; but I’m not sure what the impact of unbalanced clusters will be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.


Drowning in Roadmaps…

Roadmap after roadmap at the moment; bring out your roadmaps. Of course, this causes me a problem: I’ve now seen roadmaps going way off into the future, and it is a pain because as soon as I start speculating about the future of storage, people seem to get very worried about breaches of NDA.

But some general themes are beginning to appear:

1) Traditional RAID-5 and RAID-6 data protection schemes are still, in general, the go-to for most of the major vendors…but all acknowledge there are problems and are roadmapping different ways of protecting against data loss in the event of drive failures. XIV were right that you need as many drives as possible taking part in a rebuild; they may have been wrong on the specifics (see the sketch after this list).

2) Every vendor is struggling with the appliance-versus-software model. It is almost painful to watch the thought processes and the conflict. Few are willing to take the leap into a pure software model, and yet they all want to talk about Software Defined Storage. There are some practical considerations, but it is mostly dogma and politics.

3) The discussions about running workloads directly on storage arrays still rage, with little real clue as to what, how and why you would do so. There are some workloads that you might; the use-cases are just not as compelling as you might think.

4) Automated storage tiering appears to be getting better, but it still seems that people do not yet trust it fully and are wasting a huge number of cycles second-guessing the automation. Most vendors are struggling with where to go next.

5) Vendors still seem to be overly focussed on building features into general-purpose arrays to meet corner-cases. VDI and Big Data-related features pepper roadmaps, but with little comprehension of the real demand and requirement.

6) Intel have won the storage market, or at least x86 has. And that is making it increasingly hard for vendors to distinguish between generations of their storage…the current generations of x86 could well power storage arrays way into the future.

7) FCoE still seems to be more discussed than implemented; a tick-box feature that currently has no demand outside some markets. 16-gig Fibre Channel is certainly beginning to appear on the concrete side of the roadmaps; I’ve seen 40GbE on a couple now.

8) Flexibility of packaging and physical deployment options is actually a feature; vendors are more willing to allow you to re-rack their kit to fit your environment and data-centre.

9) The new boys on the block feel a lot like the old boys on the block…mostly because they are.

10) Block and file storage are still very resilient against the putative assaults of object storage.

11) The most compelling feature for many of us at the high-end is a procurement model that moves us to linear pricing. There are still struggles over how to make this happen.
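
On the first point, the arithmetic behind the XIV argument is easy to sketch; the drive size and rebuild rate below are illustrative assumptions rather than any vendor’s published figures:

```python
# Why wide, distributed rebuilds win: a classic RAID rebuild is throttled
# by the write rate of the single replacement drive, while a declustered
# scheme spreads the same work across many drives. Figures are assumed.
DRIVE_TB = 4.0
REBUILD_MB_S = 100.0  # assumed sustainable rebuild rate per drive

def rebuild_hours(data_tb: float, participating_drives: int) -> float:
    rate_mb_s = REBUILD_MB_S * participating_drives
    return data_tb * 1_000_000 / rate_mb_s / 3600

print(f"Classic RAID, one spare drive : {rebuild_hours(DRIVE_TB, 1):5.1f} h")
print(f"Declustered across 50 drives  : {rebuild_hours(DRIVE_TB, 50):5.1f} h")
```

Eleven hours of degraded running versus a quarter of an hour; as drives get bigger, the gap only widens.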

And yet expect big announcements with marketing splashes in May…Expect more marketing than ever!!!

Gherkins

I can only write from my own experience, and your mileage will vary somewhat, but 2014 is already beginning to get interesting from a storage point of view. And it appears to have little to do with technology, or perhaps too little technology.

Perhaps the innovation has stopped? Or perhaps we’re finally beginning to see the impact of Google, Amazon and Azure on the Enterprise market. Pricing models are being thrown out of the window as the big vendors try to work out how to defend themselves against the big Cloud players.

Historically high margins are being sacrificed in order to maintain footprint; vendors are competing against themselves internally. Commodity plays are competing with existing product sets; white-box implementations, once something they all liked to avoid and spread FUD about, are seriously on the agenda.

It won’t be a complete free-for-all, but expect to start seeing server platforms certified as target platforms for all but the highest-value storage. Engineering objections are being worked around as hardware teams transition into software development teams; those who won’t or can’t will become marginalised.

Last year I saw lip-service being paid to this trend; now I’m beginning to see it happen. A change in focus…long overdue.

If you work in the large Enterprise, it seems that you can have it your way….

And yet, I still see a place for the hardware vendor: the vendor that has market-leading support and the engineering smarts that mean support does not cost a fortune to provide or procure.

Reducing call volumes and onsite visits while still ensuring that every call is handled and dealt with by smart people: this is becoming more and more of a differentiator for me. I don’t want okay support, I want great support.

The move to commoditisation is finally beginning…but I wonder if we are going to need new support models to at least maintain, and hopefully improve, the support we get today.


Storage Blues…

January is not even out yet and already we have an interesting technology market; IBM’s withdrawal from the x86 server market leads to a number of questions, both on the future of IBM and on what IBM feel the future of the market is. Could this be another market that they withdraw from only to regret it long-term, as they did when withdrawing from the network market allowed Cisco to dominate?

IBM’s piecemeal withdrawal from the hardware market, a retreat to the highlands of the legacy enterprise hardware business, will lead to questions across the board as to what the future is for any IBM hardware. I am not sure of the market acceptance of their converged compute/network/storage strategy in the form of PureSystems, their me-too ‘Block’ offering, but surely this is a dead duck now; Lenovo may continue to make the x86 components for IBM, but how committed can we feel IBM is to this? IBM appear to have completely ceded this space to their competitors; personally, I’m not convinced by most of the converged offerings and their value, but to completely cede a market seems rash.

But how does this impact IBM storage?

The heart of IBM’s Storwize product set is x86-based servers; SVC especially was ‘just’ an IBM server. IBM were one of the first companies to really leverage the idea of the server as storage; Shark is and was simply a pair of RS/6000 or pSeries boxes, and this allowed them to utilise and share R&D across divisions. That should have been an advantage and enabled them to do some clever stuff; stuff they demonstrated yet never delivered.

Now there is nothing to stop them simply sourcing the servers from others, the same as almost every other storage company in the world, and it moves the Storwize product set firmly into the realms of software (which it was anyway). But will IBM move Storwize to a software-only product?

Part of me really feels that this is inevitable. It may come as a reaction to a move by a competitor; it may come as a move to enable a vV7000 to run as a cloud appliance. It may well end up being the only way that IBM can maintain any kind of foothold in the storage market.

No, I haven’t forgotten XIV or IBM’s flash offerings. XIV is a solid tier-1.5 offering, but it is also a collection of servers; XIV’s issue is really scalability, and simply putting larger drives in just reduces the IOPS density. The flash offering is as good as many, and if you want raw performance without features, it is worth considering.

IBM’s GSS could be built into something which scales and has many of the ‘features’ of XIV. And in a software-only IBM storage strategy, it could develop into a solid product if some of the dependency on specific disk controllers could be relaxed. Yet the question has to be whether IBM has time.

And yet without either a scalable NAS or an object store, IBM have some real problems. None of these are really hardware problems, but moving away from building your base platform probably makes none of them easier to solve.

Or perhaps if they concentrate on software and services….

Already Getting Busy…

I’ve not been away, but a mixture of illness, Christmas and general lethargy has meant that I’ve not bothered with writing for a bit. But 2014 and a new year appear to be upon us, and I do wonder what it is going to bring, especially in the world of IT infrastructure.

As we ended 2013, we saw both winners and losers in the world of flash, for example: Violin crashing as they struggle to increase sales and reduce burn, yet Pure seemingly on a stellar rise and hiring like maniacs. A UK launch is imminent, and they are going to be interesting to watch. All-Flash Arrays are still very much niche, and even companies who need them are holding off on making any big decisions.

I’ve already spoken to a hybrid vendor this year, pushing the line that their hybrid is good enough for most cases, very much tied to the virtualisation use-case. And yes, VDI was all over their PowerPoints as a use-case. 2014, the year when VDI happens!!

I expect that I’ll spend time with more hybrid vendors who are playing some kind of chicken with SSD/disk ratios: how low can they go? However, I’m also seeing more KVM/OpenStack appearing on roadmaps as they begin to realise that VMware might not be the only game in town.
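
The chicken being played is really about working sets: as long as the flash layer catches the hot data, the array feels fast; shave the ratio too far and average latency heads rapidly diskwards. A crude model makes the point; the latency figures are assumptions for illustration, not any vendor’s specification:

```python
# Crude hybrid-array model: average I/O latency as a function of how much
# of the working set the flash layer catches. Latencies are assumed.
FLASH_MS, DISK_MS = 0.5, 8.0

def avg_latency_ms(flash_hit_rate: float) -> float:
    return flash_hit_rate * FLASH_MS + (1 - flash_hit_rate) * DISK_MS

for hit in (0.99, 0.95, 0.90, 0.70):
    print(f"flash hit rate {hit:.0%}: average {avg_latency_ms(hit):.2f} ms")
```

Going from a 99% to a 90% hit rate more than doubles average latency; that is the cliff-edge the ratio-chicken players are betting they can stay away from.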

I’m sure we’ll see more hype around hyper-convergence as attempts continue to build a new mainframe, and I shall continue to struggle to work out why anyone wants one. I like being able to scale my infrastructure in the right place; I don’t want to have to increase my compute to increase my storage and vice versa. Flexibility around compute/storage/network ratios is important.

Yet convergence of storage and compute will continue, and there’s potentially some real challenge to the traditional storage technologies there. If I were building a new infrastructure today, I’d be looking hard at whether I needed a SAN at all. But I wouldn’t be going straight to a hyper-converged infrastructure; there be dragons there, I suspect.

I’ve already had my first vendor conversation where I’ve suggested that they are actually selling a software product and perhaps should drop the hardware part; that, and asking why the hell they were touting their own REST API for cloud-like storage…if industry giants like EMC have struggled against the Amazon juggernaut, what makes them think that they are any different?

And marketing as differentiation will probably continue…especially as the traditional vendors get more defensive around their legacy products. No-one should get rich selling disk any more, but it won’t stop them all trying.


2014 – A Look Forward….

As we come to the end of another year, it is worth looking forward to see what, if anything, is going to change in the storage world next year, because this year has pretty much been a bust as far as innovation and radical new products go.

So what is going to change?

I get the feeling not a huge amount.

Storage growth is going to continue for the end-users but the vendors are going to continue to experience a plateau of revenues. As end-users, we will expect more for our money but it will be mostly more of the same.

More hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins.

Early conversations this year suggest that the vendors really have little idea how to price their products in this space; if software + commodity hardware = cost of enterprise array, what is in it for me? If vendors get their pricing right, this could be very disruptive, but at what cost to their own market position?

We shall see more attempts to integrate storage into whole-stacks, and we’ll see more attempts to converge compute, network and storage at both the hardware and software levels. Most of these will be some kind of Frankenpliance, converged only in the shrink-wrap.

Flash will continue to be hyped as the saviour of the data-centre but we’ll still struggle to find real value in the proposition in many places as will many investors. There is a reckoning coming. I think some of the hybrid manufacturers might do better than the All-Flash challengers.

Hopefully, however, the costs of commodity SSDs will keep coming down, finally allowing everyone to enjoy better performance on their work laptops!

Shingled Magnetic Recording will allow storage densities to increase, and we’ll see larger-capacity drives ship, but don’t expect them to appear in mainstream arrays soon; the vibration issues and the re-write process are going to require some clever software and hardware to fully commercialise. Still, for those of us who are interested in long-term archive disks, this is an area worth watching.
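
The re-write problem is worth a quick illustration: because shingled tracks overlap, a small in-place update forces a read-modify-write of everything from the update point to the end of the band. A toy model (the 256 MiB band size is an assumption; real zone sizes vary by drive):

```python
# Toy SMR model: tracks within a band overlap, so updating data in place
# forces a rewrite of everything from that point to the end of the band.
# The 256 MiB band size is an assumption; real zone sizes vary by drive.
BAND_MIB = 256

def write_amplification(update_kib: float, offset_mib: float) -> float:
    """Data physically rewritten versus data logically written."""
    rewritten_mib = BAND_MIB - offset_mib  # the rest of the band
    return rewritten_mib * 1024 / update_kib

# A 4 KiB update near the start of a band rewrites almost all of it:
print(f"{write_amplification(4, 0):.0f}x write amplification")
# Appending at the tail of the band is nearly free:
print(f"{write_amplification(4, BAND_MIB - 0.004):.1f}x write amplification")
```

Sequential, append-style workloads such as archive are therefore a natural first home for SMR, which is exactly why it suits long-term archive disks rather than general-purpose arrays.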

FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market and perhaps 2014 will be the year that Object storage finally takes off.