
All The Gear

IBM are a great technology company; they truly are great at technology, and so many of the technologies we take for granted can be traced back to them. And many of today’s implementations are still poorer than the originals.

And yet IBM are not the dominant force that they once were; an organisational behemoth riven with politics and fiefdoms doesn’t always lend itself to agility in the market, and it often leads to products that are undercooked and have a bit of a ‘soggy bottom’.

I’ve been researching the GSS offering from IBM, the GPFS Storage Server; as regular readers of this blog will know, I’m a big fan of GPFS and have a fair amount installed. Don’t think that I’m blinkered to some of the complexities around GPFS, but it deserves a fair crack of the whip.

There’s a lot to like about GSS; it builds on the solid foundations of GPFS and brings a couple of excellent new features into play.

GPFS Native RAID, also known as declustered RAID, is a software implementation of micro-RAID: RAID is done at a block level as opposed to a disk level. This generally means that the cost of rebuilds can be reduced and the time to get back to a protected state can be shortened. As disks continue to get larger, conventional RAID implementations struggle, and you can be looking at hours if not days to get back to a protected state.
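
To make that concrete, here’s a back-of-the-envelope sketch in Python of why spreading a rebuild across every surviving spindle shortens the exposure window. The disk size, rebuild rate, array width and throttle figure are all invented for illustration, not measured GSS numbers.

```python
# Back-of-envelope rebuild-time comparison: conventional RAID vs declustered RAID.
# All figures are illustrative assumptions, not measured GSS numbers.

DISK_TB = 4              # capacity of the failed disk (TB)
REBUILD_MBPS = 100       # sustained rebuild write rate of a single disk (MB/s)

def conventional_rebuild_hours(disk_tb=DISK_TB, rate_mbps=REBUILD_MBPS):
    """A conventional RAID rebuild writes the whole disk to one hot spare,
    so the single spare's write rate is the bottleneck."""
    return disk_tb * 1e6 / rate_mbps / 3600

def declustered_rebuild_hours(disk_tb=DISK_TB, rate_mbps=REBUILD_MBPS,
                              disks_in_array=180, rebuild_share=0.2):
    """Declustered RAID spreads the lost blocks across every surviving disk,
    so each disk only rebuilds a small slice, throttled to `rebuild_share`
    of its bandwidth to protect foreground I/O."""
    survivors = disks_in_array - 1
    per_disk_tb = disk_tb / survivors
    return per_disk_tb * 1e6 / (rate_mbps * rebuild_share) / 3600

print(f"conventional: {conventional_rebuild_hours():5.1f} h")   # ~11.1 h
print(f"declustered:  {declustered_rebuild_hours():5.1f} h")    # ~0.3 h
```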

Disk Hospital: by constantly monitoring the health of individual disks and collecting metrics on them, the GSS can detect failing disks very early on. And there is a dirty secret in the storage world: most disk ‘failures’ in a storage array are not really failures at all and can be simply recovered from; a power-cycle or a firmware reflash can be enough to prevent a failure and avoid going into a recovery scenario.

X-IO have been advocating this for a long time; it can reduce maintenance windows and prevent unnecessary rebuilds. It should reduce maintenance costs as well.
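
For illustration, here’s a toy sketch of the kind of triage logic a ‘disk hospital’ embodies: try the cheap, non-destructive remedies before declaring a disk dead. The states, thresholds and actions are all invented; this is not IBM’s or X-IO’s actual implementation.

```python
# A toy 'disk hospital' triage loop: attempt cheap, non-destructive recovery
# steps before declaring a disk dead and triggering a full rebuild.
# States, thresholds and actions are illustrative only.

from dataclasses import dataclass

@dataclass
class DiskHealth:
    media_errors: int        # unrecovered read errors seen recently
    timeouts: int            # commands that timed out recently
    responding: bool         # does the disk answer at all?

def triage(disk_id: str, h: DiskHealth) -> str:
    """Return the action for a suspect disk, cheapest remedy first."""
    if h.responding and h.media_errors == 0 and h.timeouts < 3:
        return "monitor"                 # noisy but healthy: keep watching
    if not h.responding:
        return "power-cycle"             # many 'failures' come back after this
    if h.timeouts >= 3:
        return "firmware-reflash"        # hung firmware, not dead media
    return "evacuate-and-rebuild"        # genuine media failure: last resort

print(triage("disk-042", DiskHealth(media_errors=0, timeouts=5, responding=True)))
# -> firmware-reflash
```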

Both of these technologies are great and very important to a scalable storage environment.

So why aren’t IBM pushing the GSS in general? It’s stuffed full of technology and useful features.

The problem is GPFS…GPFS is currently too complicated for many; it’s never going to be a general-purpose file system. The licensing model alone precludes that, so if you want to utilise it with a whole bunch of clients, you are going to be rolling your own NFS/SMB 3.0 gateway. Been there, done that…still doing that, but it’s not really a sensible option for many.

If IBM really want the GSS to be a success, they need a scalable and supported NAS gateway in front of it, and it needs to be simple to manage. It needs integration with the various virtualisation platforms, and they need to simplify the GPFS licence model…when I say simplify, I mean get rid of the client licence cost.

I want to like the product and not just love the technology.

Until then…IBM have got all the gear and no idea…

Hats and Homes…

As Chad breaks his principles to pimp his product and go negative on the other guy here, he hits on something interesting; well, I think it’s interesting: how a product becomes a feature, in this case ‘sync n’ share’ functionality.

We’ve seen products become features before; deduplication has moved from being a product to becoming a feature of most storage arrays. And I don’t think it’ll be too long before we see it beginning to appear in consumer storage devices either.
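
Part of why dedupe commoditised so quickly is that the core mechanism is simple. Here’s a toy fixed-block, hash-based dedupe in Python; real arrays add fingerprint indexes, variable-length chunking and a great deal more, so treat this purely as an illustration of the idea.

```python
# Toy fixed-block deduplication: store each unique block once, keyed by its
# SHA-256 fingerprint, and keep a list of block references per write.
import hashlib

BLOCK = 4096                      # 4 KiB blocks, a common array choice
store: dict[bytes, bytes] = {}    # fingerprint -> block data

def write(data: bytes) -> list[bytes]:
    """Split data into blocks; store only blocks we haven't seen before."""
    refs = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).digest()
        store.setdefault(fp, block)  # a duplicate block costs one dict lookup
        refs.append(fp)
    return refs

def read(refs: list[bytes]) -> bytes:
    return b"".join(store[fp] for fp in refs)

first = write(b"A" * 8192 + b"B" * 4096)   # three blocks, two unique
second = write(b"A" * 4096)                # fully deduplicated against 'first'
print(len(store), "unique blocks stored")  # -> 2
assert read(second) == b"A" * 4096
```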

But ‘Sync n’ Share’ is of a whole different order; the valuations of some of the companies are quite scary, and Chad is probably right about the general unrealism of them. The ‘Sync n’ Share’ companies are vulnerable to attack via a number of vectors; this is not a criticism of the products…Dropbox, for example, is a great product on many levels; it has great functionality, but maybe some questions about security and privacy.

However, its ease of use and access means that it has been embraced by both the consumer and the business user (yes, I know they are all consumers); this has scared the crap out of the IT department, who find it very hard to compete with ‘free’…try building a business case that competes with free; you can talk until you are tired about security concerns.

Few want to pay for it, certainly at scale; it starts to amount to a frightening figure. [Hmmm, business cases and responsibility for presenting them; that’s a whole different blog.] So what happens? The end-users, even if banned by security policies, will continue to use the services. The services are just too damn useful.

And as mobile/BYOD/desktop/laptop/home-working proliferates, they become necessary. People’s home directories are migrating to these services. Work on a document on your desktop, present it on your tablet…without having to transfer it; this workflow simply works.

What we are going to see is vendors of operating systems and storage systems starting to build this functionality into their products as a feature. If you are a NAS vendor, you are going to provide an app that allows the user to access their home directories from their mobile device or the web. If you are Microsoft or Apple, you are going to build this into the operating system. If you are sensible, you are not going to charge a huge amount to provide this functionality; those business cases become a lot simpler, especially if you are simply layering on top of existing home directories and shares.

And what was once a product…is now simply a feature.

Those valuations are going to plummet; I don’t think that application integration and APIs will save them. If I were Dropbox or Box…I’d be looking to sell myself off to a vendor who wants the feature. 

Comparisons with the fate of Netscape might well be made…


Fundamental…

I’m a big fan of Etherealmind and his blog; I like that it is a good mix of technical and professional advice. He’s also a good guy to spend an hour or so chatting to; he’s always generous with his time to peers, and even when he knows a lot more than you about a subject, you never really feel patronised or lectured to.

I particularly liked this post; Greg and I are really on the same page with regard to work/life balance, but it is this paragraph that stands out…


Why am I focussed on work life ? After 25 or so years in technology, I have developed some level of mastery.  Working on different products is usually just a few days work to come up to speed on the CLI or GUI. Takes a few more weeks to understand some of the subtle tricks. Say a month to be competent, maybe two months. The harder part is refreshing my knowledge on different technologies – for example, SSL, MPLS, Proxy, HTTP, IPsec, SSL VPN. I often need to refresh my knowledge since it fades from my brain or there is some advancement. IPsec is a good example where DMVPN is a solid advancement but takes a few weeks to update the knowledge to an operational level.

Now, although he is talking about networking technologies, what he says is true of storage technologies and, actually, pretty much all of IT these days. You should be able to become productive on most technologies in a matter of days, providing you have the fundamentals; spend your early days becoming knowledgeable about the underlying principles and avoid vendor-specific traps.

Try not to run a translation layer in your mind; too many storage admins are translating back to the first array that they worked on. They try to turn hypers and metas into aggregates; they worry about fan-outs without understanding why you have to in some architectures and not in others.

Understanding the underlying principles means that you can evaluate new products that much quicker; you are not working out why product ‘A’ is better than product ‘B’, which often results in biases. You understand why product ‘A’ is a good fit for your requirement, and you also understand when neither product is a good fit.

Instead of iSCSI bad, FC good…you will develop an idea as to the appropriate use-case for either.

You will become more useful…and you will find that you are less resistant to change; it becomes less stressful and easier to manage. Don’t become an EMC dude, become a Storagebod…Don’t become a Linux SysAdmin, become a SysAdmin.

Am I advocating generalism? To a certain extent, yes; but you can become an expert within a domain without being a savant of a specific technology.

And a final bit of advice; follow Etherealmind….he talks sense for a network guy!


A two-question RFP…

Is it easy?

Is it cheap?

Pretty much the only two questions which interest me when talking to a vendor these days; after years of worrying about technology, it has all boiled down to those two. Of course, if I were to produce an RFx document with simply those two questions, I’d probably be out of a job fairly swiftly.

But those two questions are not really that simple to answer for many vendors.

Is it easy? How simply can I get your product to meet my requirements and business need? My business need may be to provide massive capacity; it could be to support many thousands of VMs; it could be to provide sub-millisecond latency. This all needs to be simple.

It doesn’t matter if you provide me with the richest feature-set, the simplest GUI or backwards compatibility with ENIAC if it is going to take a cast of thousands to do it. Yet still vendors struggle to answer the questions posed, and you often get a response to a question you didn’t ask but which the vendor wanted to answer.

Is it cheap? This question is even more complicated, as the vendor likes to try to hide all kinds of things, but I can tell you: if you are not up-front with your costs and you start to present me with surprises, this is not good.

Of course, features like deduplication and compression mean that capacity costs are even more opaque, but we are beginning to head towards the idea that capacity is free and performance costs. Yet as capacity becomes cheaper, the real value of primary-storage dedupe and compression for the non-active set that sits on SATA and the like begins to diminish.
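
A quick worked example of that opacity, with invented prices and ratios: the quoted price per raw terabyte tells you very little until you know the reduction ratio your data actually achieves.

```python
# Why capacity pricing gets opaque: the price per raw terabyte means little
# until you know the data-reduction ratio your data actually achieves.
# All prices and ratios below are invented for illustration.

def effective_cost_per_tb(price_per_raw_tb: float, reduction_ratio: float) -> float:
    """Cost per terabyte of *effective* (post dedupe/compression) capacity."""
    return price_per_raw_tb / reduction_ratio

quotes = {
    "vendor A (no reduction)":       (300.0, 1.0),
    "vendor B (claims 4:1)":         (900.0, 4.0),
    "vendor B (your data gets 2:1)": (900.0, 2.0),
}
for name, (price, ratio) in quotes.items():
    print(f"{name:32s} ${effective_cost_per_tb(price, ratio):6.2f}/TB effective")
# Vendor B is the cheaper array at its claimed ratio and the dearer one at
# the ratio your data may actually deliver.
```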

So just make it easy, just make it cheap and make my costs predictable.

Be honest, be up-front and answer the damn questions….

A Press Release From The Future…

Future-View, CA – March 2018

Evian Storage – storage so pure it’s like a torrent of glacial water – today announced the end of the All-Flash Array with the launch of its StupendoStore 20000, built around HyperboleHype-based storage devices.

Our research shows that All-Flash Arrays are slowing businesses down in their move to meet the new business paradigms brought about by computing at the quantum scale. Their architectures simply can’t keep up; storage is yet again the bottleneck, and scaling economically also seems to be beyond them. Customers have found themselves locked into an architecture which promised no more fork-lift upgrades but has delivered technology lock-in and all the agility of a dancing hippo. Forget about fork-lifts, we are talking cranes!

Fortunately, our team’s experience in delivering hybrid arrays at such companies as EMC, HDS, NetApp and other vendors has enabled us to take advantage of the newest technology on the block while also leveraging the economies of flash and indeed the huge capacity and scale of magnetic disk; we know that your data should live in the right place, and although we admit that our arrays might not be as fast as the Purest arrays…I’m sure we’re not the only ones who prefer their rocket fuel with a little mixer…

Yes, this is a dig at the All-Flash players…but it doesn’t matter how great your technology is today; there will always be something newer and faster around the corner. And as a customer, it is worth remembering that the future is always closer than you think; it could be only a single depreciation cycle away, a single tech-refresh away. The challenge for all vendors is delivering a sustainable model and product-set.

And no one product will meet all your needs…no matter what the vendor tells you!

Chop Their Fingers Off!

This is a very good piece on FAST-VP on VMAX: well-written, with some good advice in it, but it sums up almost everything that is wrong with VMAX today. VMAX has too many nerd-knobs, and so people think they should fiddle and try to out-do the machine.

And hence they probably make a right old mess; FAST-VP ends up not working quite as well as it should, so people tend to fiddle even more, and the next thing you know, you are managing your VMAX the way you would have managed an old-school Symm.

I think it is time that EMC and their users seriously consider breaking away from the past; the old-school nerd-knob fettling needs to stop. I know that is why storage admins get paid the big bucks, but I do wonder if we might be better off paying them to stop.

I long for the day when we see a VMAX managed without worrying about what the internal engines are doing; when we set various performance parameters and let the array sort it out; when we pay for performance and capacity without worrying how the system gets there.
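
Something like the sketch below is what I mean: a toy promote/demote pass driven purely by per-extent access counts. This is a generic illustration of automated tiering with invented numbers, emphatically not FAST-VP’s actual algorithm.

```python
# A toy automated-tiering pass: promote the hottest extents to flash until
# flash is full, demote the rest. Generic illustration, not FAST-VP itself.

def tier_extents(access_counts: dict[str, int], flash_slots: int) -> dict[str, str]:
    """Place the `flash_slots` most-accessed extents on flash, the rest on SATA."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    for i, extent in enumerate(ranked):
        placement[extent] = "flash" if i < flash_slots else "sata"
    return placement

# Hourly access counts per extent (invented numbers).
counts = {"ext-1": 9500, "ext-2": 120, "ext-3": 4300, "ext-4": 7, "ext-5": 880}
print(tier_extents(counts, flash_slots=2))
# -> {'ext-1': 'flash', 'ext-3': 'flash', 'ext-5': 'sata', 'ext-2': 'sata', 'ext-4': 'sata'}
```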

There is at least one amusing piece of advice in the article though; although it is well-argued and there appears to be good reason for it, you should still keep the FC tier on RAID-1 mirrored disks…nothing really changes in the world of Symm!


VSANity?

So VSAN is finally here in a released form; on paper, it sure looks impressive but it’s not for me.

I spend an awful lot of time looking at scale-out storage systems, looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think that their product falls some way short of the mark; but then I don’t think that I’m really the target market, and it’s not really ready or appropriate for Media and Entertainment or anyone interested in hyperscale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some of the competing products are not tied; if I want to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I can. And there might be some excellent reasons why I would want to do so; I’d transcode on bare-metal machines, for example, but might present out on VM-ed application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting would be better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regards to inter-server communication.
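
The spindle-count arithmetic is simple enough; assuming an illustrative 75 IOPS per 7.2k spindle (a figure I’ve picked for the sake of the example, not a VMware or HP number):

```python
# Rough aggregate-IOPS arithmetic behind the disk-count complaint.
# Per-spindle IOPS is an assumed figure for 7.2k RPM nearline drives.

IOPS_PER_SPINDLE = 75            # illustrative figure for 7.2k RPM disks

for disks_per_node, chassis in [(35, "35-disk limit"),
                                (60, "HP SL4540"),
                                (72, "dense SuperMicro")]:
    raw_iops = disks_per_node * IOPS_PER_SPINDLE
    print(f"{chassis:16s}: {disks_per_node} disks -> ~{raw_iops:,} raw IOPS/node")
# 35 disks -> ~2,625; 60 disks -> ~4,500; 72 disks -> ~5,400
```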

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but it is a software-converged stack; VCE and Nutanix converge onto hardware as well. And yes, VMware is currently at the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; but I’m not sure what the impact of unbalanced clusters will be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.


IT’s choking the life out of me.

I’ve been fairly used to the idea that my PC at home is substantially better than my work one; this has certainly been the case for me for more than a decade. I’m a geek and I spend more than most on my personal technology environment.

However, it is no longer just my home PC; I’ve got better software tools and back-end systems, and my home workflow is so much better than my work workflow; it’s not even close. And the integration with my mobile devices is in a different league altogether. I can edit documents on my iPad, my MBA, my desktop, even my phone, and they’ll all sync up and be in the same place for me. My email is a common experience across all devices. My media is just there.

With the only real exception of games, it doesn’t matter which device I’m using to do stuff.

And what is more; it’s not just me; my daughter has the same for her stuff as does my wife. We’ve not had to do anything clever, there’s no clever scripting involved, we just use consumer-level stuff.

Yet our working experience is so much poorer; if my wife wants to work on her stuff for her job, she’s either got to email it to herself or use ‘GoToMyPC’ provided by her employer.

Let’s be honest, for most of us now…our work environment is quite frankly rubbish. It has fallen so far behind consumer IT, it’s sad.

It’s no longer the technology enthusiast who generally has a better environment…it’s almost everyone who has access to IT. And not only that, we pay a lot less for it than the average business.

Our suppliers hide behind a cloak of complexity; I’m beginning to wonder if IT, as it is traditionally understood by business, is no longer an enabler but just a choke-point.

And yes there are many excuses as to why this is the case; go ahead…make them! I’ve made them myself but I don’t really believe them any more…do you?

Drowning in Roadmaps…

Roadmap after roadmap at the moment; bring out your roadmaps! Of course, this causes me a problem: I’ve now seen roadmaps going way off into the future, and it is a pain, because as soon as I start speculating about the future of storage, people seem to get very worried about breaches of NDAs.

But some general themes are beginning to appear:

1) Traditional RAID5 and RAID6 data-protection schemes are still, in general, the go-to for most of the major vendors…but all acknowledge there are problems and are roadmapping different ways of protecting against data loss in the event of drive failures. XIV were right that you need as many drives as possible taking part in the rebuild; they may just have been wrong on the specifics.

2) Every vendor is struggling with the appliance-versus-software model. It is almost painful to watch the thought processes and the conflict. Few are willing to take the leap into a pure software model, and yet they all want to talk about Software Defined Storage. There are some practical considerations, but it is mostly dogma and politics.

3) The discussions about running workloads directly on storage arrays still rage, with little real clue as to what, how and why you would do so. There are some workloads that you might, but the use-cases are not as compelling as you might think.

4) Automated storage tiering appears to be getting better, but it still seems that people do not yet trust it fully and waste a huge number of cycles second-guessing the automation. Most vendors are struggling with where to go next.

5) Vendors still seem to be overly focussed on building features into general-purpose arrays to meet the corner-cases. VDI- and Big Data-related features pepper roadmaps, but with little comprehension of the real demand and requirements.

6) Intel have won the storage market, or at least x86 has. And it is making it increasingly hard for vendors to distinguish between generations of their storage…the current generations of x86 could well power storage arrays way into the future.

7) FCoE still seems to be more discussed than implemented; a tick-box feature that currently has no demand outside some markets. 16Gb Fibre Channel is certainly beginning to appear on the concrete side of the roadmaps, and I’ve seen 40GbE on a couple now.

8) Flexibility of packaging and physical deployment options is actually a feature; vendors are more willing to allow you to re-rack their kit to fit your environment and data-centre.

9) The new boys on the block feel a lot like the old boys on the block…mostly because they are.

10) Block and file storage are still very resilient against the putative assaults of object storage.

11) The most compelling feature for many of us at the high end is a procurement model that moves us to linear pricing. There are still struggles over how to make that happen.

And yet expect big announcements with marketing splashes in May…Expect more marketing than ever!!!

Disrupt?

So you’ve founded a new storage business; you’ve got a great idea and you want to disrupt the market? Good for you…but you want to maintain the same old margins as the old crew?

So you build it around commodity hardware; the same kit I can buy off the shelf, basically the same disks I can pick up from PC World or order from my preferred enterprise tin-shifter.

You tell me that you are lean and mean? You don’t have huge sales overheads, no huge marketing budget and no legacy code to maintain?

You tell me that it’s all about the software, but you still want to clothe it in hardware.

And then you tell me it’s cheaper than the stuff that I buy from my current vendor? How much cheaper? 20%, 30%, 40%, 50%??

Then I do the calculations; your cost base and your BoM are much lower, and you are actually making more money per terabyte than the big old company that you used to work for?
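
The back-of-envelope sums look something like this; the numbers are invented, but the shape is the point: a 30% discount off the incumbent can still leave the challenger with a fatter margin.

```python
# Back-of-envelope margin check with invented numbers: a 30% discount on the
# incumbent's price can still mean a *fatter* margin if the challenger's
# commodity bill-of-materials is cheap enough.

incumbent_price_tb, incumbent_bom_tb = 1000.0, 400.0
challenger_price_tb, challenger_bom_tb = 700.0, 150.0   # "30% cheaper!"

def margin(price: float, bom: float) -> float:
    """Gross margin as a fraction of selling price."""
    return (price - bom) / price

print(f"incumbent margin:  {margin(incumbent_price_tb, incumbent_bom_tb):.0%}")   # 60%
print(f"challenger margin: {margin(challenger_price_tb, challenger_bom_tb):.0%}") # 79%
```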

But hey, I’m still saving money, so that’s okay….

Of course, then I dig a bit more…what about support? Your support organisation is tiny; when I do my due diligence, can you really hit your response times?

But you’ve got a really great feature? How great? I’ve not seen a single vendor come up with a feature so awesome and so unique that no-one manages to copy it…and few that aren’t already in a lab somewhere.

In a race to the bottom, you are still too greedy. You still believe that customers are stupid and will accept being ripped off.

If you were truly disruptive…you’d work out a way of articulating the value of your software without clothing it in hardware. You’d work with me on getting it onto commodity hardware, and no, I’m not talking about some no-name white-box; you’d work with me on getting it onto my preferred vendor’s kit, be it HP, Dell, Lenovo, Oracle or whoever else…

For hardware issues, I could utilise the economies of scale and the leverage I have with my tin-shifter; you wouldn’t have to set up a maintenance function or sub-contract it to some third party who will inevitably let us both down.

And for software support; well, you could concentrate on that…

You’d help me be truly disruptive…and ultimately we’d both be successful…