

Too Cheap To Manage…

Five years or so ago, when I started this blog, I spent much time venting my spleen at EMC and especially the abomination that was Control-Center; a product so poor that a peer in the industry once described it as being too expensive even if it were free.

And yet the search for the perfect storage management product continues; there have been contenders along the way, but they continue to fall short. As the administration tools have got better and easier to use, the actual management tools have still fallen some way short of the mark.

But something else has happened, and it was only a chance conversation today that highlighted it to me: the tenuous business case on which many of these products have been purchased has collapsed. Many storage management products are bought on the premise that they will ultimately save you money by allowing you to right-size your storage estate…that they will maximise the usage of the estate you have on the floor.

Unfortunately, and it surprises me to say this, the price of enterprise storage has collapsed…seriously. Although it is still obviously too expensive (I have to say that), the price of storage management products has not declined at the same rate. This means it is doubtful that I can actually save enough capacity to make it worth trying too hard and putting in a tool to do so; the economics simply don’t stack up.

So there has to be whole new business case around risk mitigation, change-planning, improved agility…or the licensing model that tends to be capacity-based in some form or another has to be reviewed.

Do we still need good storage management tools? Yes, but they need to be focused on automation and service delivery; not on simply improving the utilisation of the estate.

Thin-provisioning, deduplication, compression and the like are already driving down these costs; they do this in ways that are easier than reclaiming orphaned storage or even under-utilised SAN ports. And as long as I am clever, I can pick up a lot of orphaned storage on refresh.

If ‘Server-SAN’ is a real thing, these tools are going to converge into the general management tools, giving me a whole new topic to vent at…because most of those aren’t especially great either.

P.S. If you want to embarrass EMC and make them sheepish…just mention Control-Center…you’d think it had killed someone…

Buying Any Kind of IT..

Thanks to Sean Horne for reminding me of this…

I think we’ve all been there, and at times the feature lists seem to defy even the brightest salesbod to explain. Although, the sales audience does seem rather too interested in what the product they are expected to sell actually does and what value it might bring…

Buying High-End Storage

I was in the process of writing a blog about buying High-End storage…then I remembered this sketch. So in a purely lazy blog entry, I think this sums up the experience of many storage buyers…


I think as we head into a month or so of breathless announcements, bonkers valuations and industry hype, it is worth a watch…

But I do have some special audiophile SAN cables which will enhance the quality of your data if you want some!! It may even embiggen it!

Buy Savvy….

Howard has a piece titled ‘Separating Storage Startups From Upstarts’; it actually feels more like a piece on how to be a technology buyer, and a savvy one at that. As someone who on occasion buys a bit of technology, or at least influences buying decisions…here are some of my thoughts.

List price from any vendor is completely meaningless; most vendors only seem to have list prices to comply with various corporate governance regimes. And of course having a list price means that the procurement department can feel special when they’ve negotiated the price down to some stupidly low percentage of the original quote; in a world where 50%+ discounts are common, list is nonsense.
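To make that concrete, here is a toy sketch of why list price tells you nothing: two quotes with very different list prices can land at exactly the same street price once discounts are applied. The figures and the `street_price` helper are invented for the illustration, not real quotes.

```python
# Toy illustration of why list price is meaningless: the street price is all
# that matters. All figures below are invented for the example.

def street_price(list_price, discount_pct):
    """Price actually paid after the negotiated discount."""
    return list_price * (1 - discount_pct / 100)

# A traditional vendor quoting a high list with a deep discount...
big_vendor = street_price(1_000_000, 60)
# ...can land in the same place as a start-up with a much lower list price.
start_up = street_price(500_000, 20)

print(big_vendor, start_up)  # both work out to 400000.0
```

Procurement may feel special about the 60% discount, but the money leaving the building is identical.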

What is true is that a start-up’s list price will often be lower than the traditional vendor’s; it has to be, even to start a conversation.

In my experience, the biggest mistake an end-user can make is not keeping every bid competitive; dual-supplier arrangements are good but can often lead to complexity in an environment. It helps if you can split your infrastructure into domains; say, for example, you buy all your block storage from one vendor and your file storage from another…or perhaps you have a tiering strategy that allows you to do something similar.

But loyalty can bring rewards as well; ‘partnership’ is a word that gets thrown around, but learning to work with your vendor is important. Knowing which buttons to press and learning how a vendor organisation works is often key to getting the most out of infrastructure procurement.

Howard’s assertion of a three-year life for an array in a data centre? This doesn’t ring true for me and many of my peers; four to five years seems to be the minimum life in general. If it were three years, we would really be looking at an actual two-year useful life for an array: six months to get on, two years running and six months to get off. Many organisations are struggling with four years, and as arrays get bigger, this is getting longer.

And the pain of going through a technology refresh every three years? Well, we’d be living in a constant sea of moving data whilst trying to do new things as well. So my advice: plan for a five-year refresh cycle…
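The migration-overhead argument above reduces to simple arithmetic; a quick sketch, using the six-month on-boarding and off-boarding figures from the paragraph above (the `useful_life_years` helper is mine, not anything Howard wrote):

```python
# How migration windows eat into an array's life. The six-month on-boarding
# and off-boarding periods are the assumptions from the text above.

def useful_life_years(total_life, onboard=0.5, offboard=0.5):
    """Years the array spends actually running steady-state production."""
    return total_life - onboard - offboard

for life in (3, 4, 5):
    useful = useful_life_years(life)
    print(f"{life}-year cycle: {useful:.1f} useful years ({useful / life:.0%} of the cycle)")
```

On a three-year cycle, a third of the array’s life is spent moving data on or off it; on a five-year cycle, it is only a fifth.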

My advice to any technology buyer is to pay close attention to the ‘UpStarts’ but also pay attention to your existing relationships; know what you want and what you need. Make sure that any vendor or potential vendor can do what they say; understand what they can do when there are problems. Test their commitment and flexibility.

Look very carefully at any new offering from anyone; is it a product or a feature? If it is a feature, is it one that is going to change your world substantially? Violin arguably fell into the trap of being a feature: extreme performance…something that few really need.

And when dealing with a new company; understand where their sales-culture has come from…if you had a bad experience with their previous employer, there’s a fair chance that you might have a similar experience again.


Hats and Homes..

As Chad breaks his principles to pimp his product and go negative on the other guy here, he hits on something interesting; well, I think it’s interesting. It’s how a product becomes a feature; in this case, ‘sync n’ share’ functionality.

We’ve seen products become features before; deduplication has moved from being a product to becoming a feature of most storage arrays. And I don’t think it’ll be too long before we see it beginning to appear in consumer storage devices either.

But ‘Sync n’ Share’ is of a whole different order; the valuations of some of the companies are quite scary, and Chad is probably right about their general unrealism. The ‘Sync n’ Share’ companies are vulnerable to attack via a number of vectors; this is not a criticism of the products…Dropbox, for example, is a great product on many levels; it has great functionality, but maybe some questions about security and privacy.

However, its ease of use and access means that it has been embraced by both the consumer and the business user (yes, I know they are all consumers); this has scared the crap out of the IT department, who find it very hard to compete with ‘free’…you try building a business case which competes with free; you can talk till you are tired about security concerns.

Few want to pay for it; certainly at scale, it starts to amount to a frightening figure. [Hmmm, business cases and responsibility for presenting them; that's a whole different blog.] So what happens? The end-users, even if banned by security policies, will continue to use the services. The services are just too damn useful.

And as mobile/BYOD/desktop/laptop/home-working proliferates; they become necessary. People’s home directories are migrating to these services. Work on a document on your desktop, present it on your tablet…without having to transfer it; this workflow simply works.

What we are going to see is vendors of operating systems and storage systems starting to build this functionality into their products as a feature. If you are a NAS vendor, you are going to provide an app that allows the user to access their home directories from their mobile device or the web etc…If you are Microsoft or Apple, you are going to build this into the operating system. If you are sensible, you are not going to charge a huge amount to provide this functionality; those business cases become a lot simpler, especially if you are simply layering on top of existing home directories and shares.

And what was once a product is now simply a feature.

Those valuations are going to plummet; I don’t think that application integration and APIs will save them. If I were Dropbox or Box…I’d be looking to sell myself off to a vendor who wants the feature. 

Comparisons with the fate of Netscape might well be made…



I’m a big fan of Etherealmind and his blog; I like that it is a good mix of technical and professional advice. He’s also a good guy to spend an hour or so chatting to; he’s always generous with his time to peers, and even when he knows a lot more than you about a subject, you never really feel patronised or lectured to.

I particularly liked this blog; Greg and I are really on the same page with regards to work/life balance, but it is this paragraph that stands out…


Why am I focussed on work life ? After 25 or so years in technology, I have developed some level of mastery.  Working on different products is usually just a few days work to come up to speed on the CLI or GUI. Takes a few more weeks to understand some of the subtle tricks. Say a month to be competent, maybe two months. The harder part is refreshing my knowledge on different technologies – for example, SSL, MPLS, Proxy, HTTP, IPsec, SSL VPN. I often need to refresh my knowledge since it fades from my brain or there is some advancement. IPsec is a good example where DMVPN is a solid advancement but takes a few weeks to update the knowledge to an operational level.

Now although he is talking about networking technologies, what he says is true of storage technologies and indeed pretty much all of IT these days. You should be able to become productive on most technologies in a matter of days, providing you have the fundamentals; spend your early days becoming knowledgeable about the underlying principles and avoid vendor-specific traps.

Try not to run a translation layer in your mind; too many storage admins are translating back to the first array that they worked on. They try to turn hypers and metas into aggregates; they worry about fan-outs without understanding why you have to in some architectures and not necessarily in others.

Understanding the underlying principles means that you can evaluate new products that much quicker; you are not working out why product ‘A’ is better than product ‘B’, which often results in biases. You understand why product ‘A’ is a good fit for your requirement…or indeed why neither product is a good fit.

Instead of iSCSI bad, FC good…you will develop an idea as to the appropriate use-case for either.

You will become more useful…and you will find that you are less resistant to change; it becomes less stressful and easier to manage. Don’t become an EMC dude, become a Storagebod…Don’t become a Linux SysAdmin, become a SysAdmin.

Am I advocating generalism? To a certain extent, yes but you can become expert within a domain and not a savant for a specific technology.

And a final bit of advice; follow Etherealmind….he talks sense for a network guy!



A Press Release From The Future…

Future-View, CA – March 2018

Evian Storage – Storage so Pure it’s like a torrent of glacial water – announced today the end of the All-Flash-Array with the launch of its StupendoStore 20000, based around the HyperboleHype-based storage device.

Our research shows that All Flash Arrays are slowing down businesses in their move to meet the new business paradigms brought about by computing at the quantum scale. Their architectures simply can’t keep up and storage is yet again the bottle-neck and yet scaling economically also seems to be beyond them.  Customers have found themselves locked into an architecture which promised no more fork-lift upgrades but has delivered technology lock-in and all the agility of a dancing hippo. Forget about fork-lifts, we are talking cranes!

Fortunately, our team’s experience in delivering hybrid arrays at such companies as EMC, HDS, NetApp and other vendors has enabled us to take advantage of the newest technology on the block whilst also leveraging the economies of flash and indeed the huge capacity and scale of magnetic disk; we know that your data should live in the right place, and although we admit that our arrays might not be as fast as the Purest arrays…I’m sure we’re not the only ones who prefer their rocket fuel with a little mixer…

Yes, this is a dig at the All-Flash players…but it doesn’t matter how great your technology is today; there will always be something newer and faster round the corner. And as a customer, it is worth remembering that the future is always closer than you think. It could be only a single depreciation cycle away, a single tech-refresh away. The challenge for all vendors is delivering a sustainable model and product-set.

And no-one product will meet all your needs….no matter what the vendor tells you!

Chop Their Fingers Off!

This is a very good piece on FAST-VP on VMAX, well-written and with some good advice in it, but it sums up almost everything that is wrong with VMAX today. VMAX has too many nerd-knobs, and so people think they should fiddle and try to out-do the machine.

And hence probably make a right old mess; FAST-VP ends up not working quite as well as it should, so people tend to fiddle even more, and the next thing you know, you are trying to manage your VMAX the way you would have managed an old-school Symm.

I think it is time that EMC and their users seriously consider breaking away from the past; the old-school nerd-knob fettling needs to stop. I know that is why storage admins get paid the big bucks, but I do wonder if we might be better off paying them to stop?

I long for the day when we see VMAX managed without worrying about what the internal engines are doing; when we set various performance parameters and let the array sort it out. When we pay for performance and capacity without worrying how the system gets to it.

There is at least one amusing piece of advice in the article though; although it is well-argued and there appears to be good reason to do so, you should still keep the FC tier on RAID-1 mirrored disks…Nothing really changes in the world of Symm!




So VSAN is finally here in a released form; on paper it sure looks impressive, but it’s not for me.

I spend an awful lot of time looking at Scale-Out Storage systems; looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think that their product falls some way short of the mark; but then I don’t think that I’m really the target market. It’s not really ready or appropriate for Media and Entertainment or anyone interested in HyperScale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some of the competing products are not; if I want to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I could. And there might be some excellent reasons why I would want to do so; I’d transcode on bare-metal machines, for example, but might present out on VM-ed application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting would be better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regard to the inter-server communication.
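As a rough back-of-envelope, the spindle-count difference looks like this. The per-disk figures (4TB and ~75 random IOPS, roughly what you would expect from an NL-SAS spindle) and the `node_raw` helper are my assumptions for the sketch, not measured numbers for any particular chassis:

```python
# Back-of-envelope: more spindles per node means more raw capacity and more
# aggregate random IOPS. Per-disk figures are assumptions, not benchmarks.

def node_raw(disks, tb_per_disk=4, iops_per_disk=75):
    """Return (raw TB, aggregate random IOPS) for one storage node."""
    return disks * tb_per_disk, disks * iops_per_disk

for disks in (35, 60, 72):  # VSAN's limit, HP SL4540, dense SuperMicro chassis
    tb, iops = node_raw(disks)
    print(f"{disks} disks/node: {tb} TB raw, ~{iops} random IOPS")
```

Doubling the spindles roughly doubles both figures per node; whether the inter-node network can absorb that is the saturation question above.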

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but it is a software-converged stack; VCE and Nutanix converge onto hardware as well. And yes, VMware is currently the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; but I’m not sure what the impact of having unbalanced clusters would be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.


IT’s choking the life out of me.

I’ve been fairly used to the idea that my PC at home is substantially better than my work one; this has certainly been the case for me for more than a decade. I’m a geek and I spend more than most on my personal technology environment.

However, it is no longer just my home PC; I’ve got better software tools and back-end systems; my home workflow is so much better than my work workflow; it’s not even close. And the integration with my mobile devices, it’s a completely different league altogether. I can edit documents on my iPad, my MBA, my desktop, even my phone and they’ll all sync up and be in the same place for me. My email is a common experience across all devices. My media; it’s just there.

With the only real exception of games; it doesn’t matter which device I’m using to do stuff.

And what is more; it’s not just me; my daughter has the same for her stuff as does my wife. We’ve not had to do anything clever, there’s no clever scripting involved, we just use consumer-level stuff.

Yet our working experience is so much poorer; if my wife wants to work on her stuff for her job, she’s either got to email it to herself or use ‘GoToMyPC’ provided by her employer.

Let’s be honest, for most of us now…our work environment is quite frankly rubbish. It has fallen so far behind consumer IT, it’s sad.

It’s no longer the technology enthusiast who generally has a better environment…it’s almost everyone who has access to IT. And not only that, we pay a lot less for it than the average business.

Our suppliers hide behind a cloak of complexity; I’m beginning to wonder if IT, as it is traditionally understood by business, is no longer an enabler but just a choke-point.

And yes there are many excuses as to why this is the case; go ahead…make them! I’ve made them myself but I don’t really believe them any more…do you?