
Licensed To Bill

‘*sigh* Another change to a licensing model and you can bet it’s not going to work out any cheaper for me’ was the first thought that flickered through my mind during a presentation about GPFS 4.1 at the GPFS UG meeting in London (if you are a GPFS user in the UK, you should attend next time…probably the best UG meeting I’ve been at for a long time).

This started up another train of thought; in this new world of Software Defined Storage, how should the software be licensed? And how should the value be reflected?

Should we be moving to a capacity based model?

Should I get charged per terabyte of storage being ‘managed’?

Or perhaps per server that has this software defined storage presented to it?

Perhaps per socket? Per core?

But would any of those work well if I’m running at hyperscale?

And if I fully embrace a programmatic provisioning model that dynamically changes the storage configuration…does any model make sense apart from some kind of flat-fee, all-you-can-eat arrangement?
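
To make the problem concrete, here’s a toy model of how the obvious schemes diverge as an estate grows; every price below is invented purely for illustration, it is not anyone’s actual price book:

    # Toy comparison of software-defined storage licensing models.
    # All prices are invented for illustration only.

    def per_terabyte(tb, price_per_tb=100):
        return tb * price_per_tb

    def per_socket(servers, sockets_per_server=2, price_per_socket=2500):
        return servers * sockets_per_server * price_per_socket

    def flat_fee(fee=500000):
        return fee

    # Small estate, mid-size estate, hyperscale estate.
    for tb, servers in [(100, 10), (1000, 100), (10000, 1000)]:
        print(tb, "TB:", per_terabyte(tb), per_socket(servers), flat_fee())

Anything metered runs away from you at hyperscale, which is exactly why the flat fee starts to look like the only sane answer.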

Chatting to a few people, it seems that no-one really has any idea what the licensing model should look like. Funnily enough, it is this sort of thing which could really derail ServerSAN and Software Defined Storage; the challenge is not going to be technical, but if the licensing model gets too complex, too hard to manage and generally too costly, it is going to fail.

Of course, inevitably someone is going to pop up and mention Open-Source…and I will simply point out that Red Hat make quite a lot of money out of Open-Source; you pay for support based on some kind of model. Cost of acquisition is just a part of IT infrastructure spend.

So what is a reasonable price? Anyone?

FUD Returns?

Are we drifting into another round of the storage wars where FUD starts to fly? It always makes for good copy. The little guy complaining about the big guy who is complaining about the little guy; David with his sling versus Goliath with his rocket launcher.

EMC versus 3Par? NetApp versus Pure Storage?

The thing with FUD is that it doesn’t really work unless there is the tiniest grain of truth in there somewhere; you take the smallest and most inconsequential thing and magnify it. And in storage, it is really easy to do…why?

Because it depends!

Storage workloads can be very different and have very different characteristics; the great majority of my workloads are very different to most people’s. And I find myself taking exception to almost every marketing message out there from all vendors.

Tape Is Dead? Not in my workloads; tape is really the only currently economical medium for long-term digital archives.

Disk is cheaper than tape? Only if you can get significant dedupe and compression.

SSD is better than disk? When throughput is king, disk and SSD end up about the same.

SSD the same price as disk? Only if you can get significant dedupe and compression; the quick sums below show why.

Scale-Up versus Scale-Out? I happen to think that Scale-Out is the best architecture and it suits my applications…if you have a large legacy estate, you might find that Scale-Up works better for you.
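
What matters in those dedupe caveats is the effective cost per usable terabyte after data reduction. A sketch with illustrative prices rather than anyone’s real quotes:

    # Effective cost per usable TB after dedupe/compression.
    # Raw prices are illustrative guesses, not quotes.

    def effective_cost(raw_cost_per_tb, reduction_ratio):
        return raw_cost_per_tb / reduction_ratio

    print(effective_cost(30.0, 1.0))   # nearline disk, no reduction -> 30.0
    print(effective_cost(300.0, 1.0))  # flash, no reduction -> 300.0
    print(effective_cost(300.0, 5.0))  # flash at 5:1 reduction -> 60.0

Flash only closes the gap if your data reduces well; long-term archive data is often already compressed and barely reduces at all, which is one reason tape keeps winning there.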

There are so many factors to account for in many workloads; from application design to the nature of the data…

Storage doesn’t really distil down into a nice simple marketing message, but that doesn’t stop the vendors from trying, and hence we get FUD.

It’s sometimes Funny

It’s mostly Useless

It’s written by Drones…

And we’ll get more of it this year than we have had for a few years…

And where’s Marc Farley when you need him…

Too Cheap To Manage…

Five years or so ago, when I started this blog, I spent much time venting my spleen at EMC and especially the abomination that was Control-Center; a product so poor that a peer in the industry once described it as being too expensive even if it was free.

And yet the search for the perfect storage management product continues; there have been contenders along the way, yet they continue to fall short. As the administration tools have got better and easier to use, the actual management tools have still fallen some way short of the mark.

But something else has happened, and it was only a chance conversation today that highlighted it to me: the tenuous business case that many of these products have been purchased on has collapsed. Many storage management products are bought on the promise that they will ultimately save you money by allowing you to right-size your storage estate…that they will maximise the usage of the estate you have on the floor.

Unfortunately, and it surprises me to say this, the price of enterprise storage has collapsed…seriously, although it is still obviously too expensive (I have to say that); the price of storage management products has not declined at the same rate. This means it is doubtful that I can save enough capacity to make it worth trying too hard and putting in a tool to do so; the economics simply don’t stack up.
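
A back-of-envelope version of that collapsed business case; every figure here is a placeholder, so substitute your own estate and quotes:

    # Does a right-sizing tool still pay for itself? Placeholder numbers.
    estate_tb          = 2000      # capacity under management
    reclaimable        = 0.10      # fraction the tool might claw back
    cost_per_tb        = 500.0     # fully-loaded cost of a deployed TB
    tool_cost_per_year = 150000.0  # capacity-based licence plus running it

    saving = estate_tb * reclaimable * cost_per_tb   # -> 100000.0
    print(saving > tool_cost_per_year)               # -> False

Run the same sums a few years back, with enterprise storage at several thousand pounds a terabyte, and the tool looked like a bargain; today it struggles to wash its face.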

So there has to be a whole new business case around risk mitigation, change-planning and improved agility…or the licensing model, which tends to be capacity-based in some form or another, has to be reviewed.

Do we still need good storage management tools? Yes, but they need to be focused on automation and service delivery, not on simply improving the utilisation of the estate.

Thin-provisioning, deduplication, compression and the like are already driving down these costs; they do this in ways that are easier than reclaiming orphaned storage and even under-utilised SAN ports. And as long as I am clever, I can pick up a lot of orphaned storage on refresh.

If ‘Server-SAN’ is a real thing, these tools are going to converge into the general management tools, giving me a whole new topic to vent at…because most of those aren’t especially great either.

p.s. If you want to embarrass EMC and make them sheepish…just mention Control-Center…you’d think it had killed someone…

Buying Any Kind of IT..

Thanks to Sean Horne for reminding me of this…

I think we’ve all been there, and at times the feature lists seem to defy the brightest salesbod to explain. Although this sales audience seems to be rather too interested in what the product they are expected to sell actually does and what value it might bring…

Buying High-End Storage

I was in the process of writing a blog about buying High-End storage…then I remembered this sketch. So in a purely lazy blog entry, I think this sums up the experience of many storage buyers…

I think as we head into a month or so of breathless announcements, bonkers valuations and industry nonsense…it is worth a watch…

But I do have some special audiophile SAN cables which will enhance the quality of your data if you want some!! It may even embiggen it!

Buy Savvy….

Howard has a piece titled ‘Separating Storage Startups From Upstarts’; it actually reads more like a piece on how to be a technology buyer, and a savvy one at that. As someone who on occasion buys a bit of technology, or at least influences buying decisions…here are some of my thoughts.

List price from any vendor is completely meaningless; most vendors only seem to have list prices to comply with various corporate-governance regimes. And of course, having a list price means that the procurement department can feel special when they’ve negotiated it down to some stupidly low percentage of the original quote; in a world where 50%+ discounts are common, list is nonsense.

What is true is that a start-up’s list price will often be lower than the traditional vendor’s; it has to be, even to start a conversation.

In my experience, the biggest mistake an end-user can make is not being willing to put any bid out to competition; dual-supplier arrangements are good but can often lead to complexity in an environment. It helps if you can split your infrastructure into domains: say, for example, you buy all your block storage from one vendor and your file storage from another…or perhaps you have a tiering strategy that allows you to do something similar.

But loyalty can bring rewards as well; ‘partnership’ is a word that gets thrown around a lot, but learning to work with your vendor is important. Knowing which buttons to press and understanding how a vendor’s organisation works is often key to getting the most out of infrastructure procurement.

Howard’s assertion of a three-year life for an array in a data centre? This doesn’t ring true for me and many of my peers; four to five years seems to be the minimum life in general. If it were three years, we would generally be looking at an actual useful life of two years: six months to get on, two years running and six months to get off. Many organisations are struggling with four years, and as arrays get bigger, this is getting longer.

And the pain of going through a technology refresh every three years? We’d be living in a constant sea of moving data whilst trying to do new things as well. So my advice: plan for a five-year refresh cycle…
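
The rough sums behind that advice, assuming six months to migrate on and six months to migrate off (my rule-of-thumb figures, not Howard’s):

    # Useful service life as a fraction of the refresh cycle.
    def useful_fraction(cycle_years, on_months=6, off_months=6):
        months = cycle_years * 12
        return (months - on_months - off_months) / months

    print(useful_fraction(3))  # 0.67 -> two useful years out of three
    print(useful_fraction(5))  # 0.80 -> four useful years out of five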

My advice to any technology buyer is to pay close attention to the ‘UpStarts’ but also pay attention to your existing relationships; know what you want and what you need. Make sure that any vendor or potential vendor can do what they say; understand what they can do when there are problems. Test their commitment and flexibility.

Look very carefully at any new offering from anyone; is it a product or a feature? If it is a feature, is it one that is going to change your world substantially? Violin arguably fell into the trap of being a feature; extreme performance…it’s something that few really need.

And when dealing with a new company; understand where their sales-culture has come from…if you had a bad experience with their previous employer, there’s a fair chance that you might have a similar experience again.

New Service Offering…

I like sales people and marketeers; they are often nice, genuine and good people…mostly!

But..

I’ve got a new service to offer; if you think that you’ve invented a new product sector, a new market, a new concept…email me and we’ll arrange a call.

If you can convince me that you’ve invented a completely new concept, the call is free and I’ll even write a blog on it, but I won’t pimp your product. If I call ‘Bullsh*t’, you buy me something off my Amazon wishlist and I won’t laugh at you in public!

And I’ll give you a starter…if your new concept is Anything Defined Anything…it’s ‘Bullsh*t…total and utter crap…’!

All The Gear

IBM are a great technology company; they truly are great at technology, and so many of the technologies we take for granted can be traced back to them. And many of today’s implementations are still poorer than the originals.

And yet IBM are not the dominant force they once were; an organisational behemoth, riven with politics and fiefdoms, doesn’t always lend itself to agility in the market, and often leads to products that are undercooked and have a bit of a ‘soggy bottom’.

I’ve been researching the GSS offering from IBM, the GPFS Storage Server; as regular readers of this blog will know, I’m a big fan of GPFS and have a fair amount installed. But don’t think that I’m blinkered to some of the complexities around GPFS; still, it deserves a fair crack of the whip.

There’s a lot to like about GSS; it builds on the solid foundations of GPFS and brings a couple of excellent new features into play.

GPFS Native RAID, also known as declustered RAID, is a software implementation of micro-RAID: RAID is done at a block level as opposed to a disk level, which generally means that the cost of rebuilds can be reduced and the time to get back to a protected level can be shortened. As disks continue to get larger, conventional RAID implementations struggle, and you can be looking at hours if not days to get back to a protected state.
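
Crude arithmetic shows why declustering matters; a deliberately simplified sketch (it ignores rebuild throttling and the details of GNR’s actual placement, and the drive size, speed and pool size are just example numbers):

    # Conventional RAID: the whole rebuild funnels through one hot spare.
    def conventional_rebuild_hours(disk_tb, rate_mb_s=100):
        return disk_tb * 1000000 / rate_mb_s / 3600

    # Declustered RAID: rebuild work is spread over the surviving disks.
    def declustered_rebuild_hours(disk_tb, disks_in_pool, rate_mb_s=100):
        return conventional_rebuild_hours(disk_tb, rate_mb_s) / (disks_in_pool - 1)

    print(conventional_rebuild_hours(4))     # ~11 hours for one 4TB drive
    print(declustered_rebuild_hours(4, 58))  # ~12 minutes across 58 disks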

Disk Hospital: by constantly monitoring the health of the individual disks and collecting metrics on them, the GSS can detect failing disks very early on. And here is a dirty secret of the storage world: most disk ‘failures’ in a storage array are not really failures at all and can often be recovered from simply; a power-cycle or a firmware reflash can be enough to head off a failure and avoid going into a recovery scenario.

X-IO have been advocating this approach for a long time; it can reduce maintenance windows and prevent unnecessary rebuilds. It should reduce maintenance costs as well.
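
The idea distils down to something like this; a minimal sketch of the triage logic, with thresholds and actions invented for illustration (this is not GSS’s actual policy):

    # A toy 'disk hospital': try cheap recovery before condemning a drive.
    def triage(metrics):
        if metrics["media_errors"] > 100:
            return "replace"           # genuinely failing media
        if metrics["timeouts"] > 10:
            return "power-cycle"       # often enough to bring a drive back
        if metrics["firmware_hangs"] > 0:
            return "reflash-firmware"  # avoids a needless rebuild
        return "monitor"

    print(triage({"media_errors": 3, "timeouts": 14, "firmware_hangs": 0}))
    # -> power-cycle, not an array rebuild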

Both of these technologies are great and very important to a scalable storage environment.

So why aren’t IBM pushing GSS more generally? It’s stuffed full of technology and useful stuff.

The problem is GPFS…GPFS is currently too complicated for many, and it’s never going to be a general-purpose file system; the licensing model alone precludes that. So if you want to utilise it with a whole bunch of clients, you are going to be rolling your own NFS/SMB 3.0 gateway. Been there, done that…still doing that, but it’s not really a sensible option for many.
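
For the curious, ‘rolling your own’ typically means exporting the GPFS mount through the standard kernel NFS server and Samba on a pair of gateway nodes; a minimal sketch, with hypothetical paths and share names:

    # /etc/exports -- publish a GPFS filesystem over NFS
    /gpfs/fs0  10.0.0.0/24(rw,sync,no_root_squash,fsid=1)

    # smb.conf -- the same filesystem over SMB
    [homes-on-gpfs]
        path = /gpfs/fs0/homes
        read only = no

…and then the clustering, locking and failover of those gateways is all yours to own, which is precisely the sort of thing you would rather buy supported.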

If IBM really want GSS to be a success, they need a scalable and supported NAS gateway in front of it; it needs to be simple to manage. It needs integration with the various virtualisation platforms, and they need to simplify the GPFS licensing model…when I say simplify, I mean get rid of the client licence cost.

I want to like the product and not just love the technology.

Until then…IBM have got all the gear and no idea…

Hats and Homes..

As Chad breaks his principles to pimp his product and go negative on the other guy here, he hits on something interesting; well, I think it’s interesting: how a product becomes a feature, in this case ‘sync n’ share’ functionality.

We’ve seen products become features before; deduplication has moved from being a product to becoming a feature of most storage arrays. And I don’t think it’ll be too long before we see it beginning to appear in consumer storage devices either.

But ‘Sync n’ Share’ is of a whole different order; the valuations of some of the companies are quite scary, and Chad is probably right about how unrealistic they generally are. The ‘Sync n’ Share’ companies are vulnerable to attack via a number of vectors; this is not a criticism of the products…Dropbox, for example, is a great product on many levels; it has great functionality, but maybe some questions about security and privacy.

However, its ease of use and access means that it has been embraced by both the consumer and the business user (yes, I know they are both consumers); this has scared the crap out of the IT department, who find it very hard to compete with ‘free’…try building a business case which competes with free; you can talk until you are tired about security concerns.

Few want to pay for it, and at scale it certainly starts to amount to a frightening figure. [Hmmm, business cases and responsibility for presenting them; that’s a whole different blog.] So what happens? The end-users, even if banned by security policies, will continue to use the services. The services are just too damn useful.

And as mobile/BYOD/desktop/laptop/home-working proliferates; they become necessary. People’s home directories are migrating to these services. Work on a document on your desktop, present it on your tablet…without having to transfer it; this workflow simply works.

What we are going to see is vendors of operating systems and storage systems starting to build this functionality into their products as a feature. If you are a NAS vendor, you are going to provide an app that allows the user to access their home directories from their mobile device or the web. If you are Microsoft or Apple, you are going to build this into the operating system. If you are sensible, you are not going to charge a huge amount for this functionality; those business cases become a lot simpler, especially if you are simply layering on top of existing home directories and shares.

And what was once a product…is now simply a feature.

Those valuations are going to plummet; I don’t think that application integration and APIs will save them. If I were Dropbox or Box…I’d be looking to sell myself off to a vendor who wants the feature. 

Comparisons with the fate of Netscape might well be made…

Fundamental…

I’m a big fan of Etherealmind and his blog; I like that it is a good mix of technical and professional advice. He’s also a good guy to spend an hour or so chatting to; he’s always generous with his time to peers, and even when he knows a lot more than you about a subject, you never really feel patronised or lectured to.

I particularly liked this blog; Greg and I are really on the same page with regards to work/life balance, but it is this paragraph that stands out…

Why am I focussed on work life? After 25 or so years in technology, I have developed some level of mastery.  Working on different products is usually just a few days work to come up to speed on the CLI or GUI. Takes a few more weeks to understand some of the subtle tricks. Say a month to be competent, maybe two months. The harder part is refreshing my knowledge on different technologies – for example, SSL, MPLS, Proxy, HTTP, IPsec, SSL VPN. I often need to refresh my knowledge since it fades from my brain or there is some advancement. IPsec is a good example where DMVPN is a solid advancement but takes a few weeks to update the knowledge to an operational level.

Now, although he is talking about networking technologies, what he says is true of storage technologies and indeed pretty much all of IT these days. You should be able to become productive on most technologies in a matter of days, provided you have the fundamentals; spend your early days becoming knowledgeable about the underlying principles and avoid vendor-specific traps.

Try not to run a translation layer in your mind; too many storage admins are translating back to the first array they worked on. They try to turn hypers and metas into aggregates; they worry about fan-out ratios without understanding why you have to in some architectures and not necessarily in others.

Understanding the underlying principles means that you can evaluate new products that much quicker; you are not working out why product ‘A’ is better than product ‘B’, which often results in biases. Instead, you understand why product ‘A’ is a good fit for your requirement, and you also understand when neither product is a good fit.

Instead of iSCSI bad, FC good…you will develop an idea as to the appropriate use-case for either.

You will become more useful…and you will find that you are less resistant to change; it becomes less stressful and easier to manage. Don’t become an EMC dude, become a Storagebod…Don’t become a Linux SysAdmin, become a SysAdmin.

Am I advocating generalism? To a certain extent, yes; but you can become an expert within a domain without being a savant for a specific technology.

And a final bit of advice; follow Etherealmind….he talks sense for a network guy!