Storagebod

February, 2012:

Internal Pre-Sales

I’ve long argued that there needs to be more movement between vendors and users; I think that there is immense value, especially for vendors who can sometimes be isolated from the trials and tribulations of the end-user. And at times, as an end-user it is useful to understand the pressures and the reality of the vendor world. I like to say that ‘I may take the piss in some of my requests but I do know of what I ask…’ But of course having worked both sides of the fence, I would say that!

However, as we move to more service-focused IT delivery organisations, perhaps there is real value in having worked for a vendor in a pre-sales capacity; especially if you have learnt to use your ears and mouth in the right proportions. Do we need a new role of ‘Internal Pre-Sales’ and is it really a new role?

Unfortunately, I think that for many organisations the answer is that it is a new role, but it shouldn’t be. Learning to listen to customers should not be a big surprise (although even amongst some vendors, you’d think it was) but it is debatable whether we would be in the situation we are in today if we had all been a bit better at listening to and grokking what we were being told.

Listening to the problems and the desires of our users is what we should be good at and unlike a vendor, we potentially have the whole paintbox to play with; we are not stuck with EMC Blue or IBM Blue or HP Grey etc. We can build our service offerings out of best of breed if we want.

Yet we often carry on like the most arrogant vendor in the world. Why is that? Is it learnt behaviour from our vendors or have they learnt it from us? Hard to say!

Our Design and Architecture teams should be doing this but often they are too busy playing with the latest toy and failing the Business. This is not any individual’s fault but a failing in a culture which is very introspective and often closed. Too often we focus on telling each other how cool a technology is as opposed to listening to the Business about a cool problem that technology might help them with.

But we can learn to sell and market; our business is our Business and we should know our verticals better than anyone. Marketing should not be a dirty word; selling should not be anathema. Funnily enough, I think that if we got better at it, the vendors might concentrate on selling to us rather than to our Businesses. We should have the advantage, and the vendors should want to partner with the most likely winner; it makes more sense that way!

[partially inspired by Chuck’s blog here and riffing on the theme of needing different people for today’s IT world…thanks for the inspiration, Chuck]

Break the Cycle…

It seems the more things change, the more things stay the same…or at least history has a habit of repeating itself and no-one learns from the mistakes of the past. I have read a couple of articles recently which suggest that the role of the CIO is under threat and surprise, surprise, it’s the CFO who has eyes on the kingdom of IT.

Now, in days of yore when mainframes ruled the roost, the IT department often came under the CFO; well, it’s all numbers, isn’t it? It was only when the PC came along that IT became something more and became relevant to everyone; it was this era which really saw the arrival of the CIO.

Yet, as IT becomes ever more personal and ubiquitous, we seem to be moving back in time in terms of organisational structure. We regularly hear nonsensical statements driven by the adoption of Cloud Computing: if we move to the Cloud, do we need an IT department? Do we need a CIO? Does the CIO really need any kind of technical knowledge, and should they not be purely business-focused?

IT departments need to be business-focused; this is very true, but IT is a technical function and you need people with an understanding of what is technically possible and feasible; you need people who understand technology. Even if you move your function entirely to the Cloud, you need people who understand technology to manage and administer it; even if you outsource your entire IT function, you still need people who understand technology to manage your partners and keep them honest.

IT is the oxygen which enables many businesses; the CIO needs to understand the business but they also need to understand technology enablers. The CIO needs to understand the value of their organisation and needs to move away from a purely cost-based model; if the CIO is to differentiate themselves from the CFO, this is an absolutely key area to focus on.

In fact, the CIO needs to come to the fore and lead, championing IT as the enabler for business growth and development. I would argue that we have never needed strong CIOs more than we do today: CIOs with a vision based on a technology investment model which drives innovation for their businesses.


Working in the creative industry sector, we get used to seeing a lot of Apple systems and the problems they bring, but recent discussions with colleagues in the sector lead us to some interesting conclusions. Are we going to see an interesting reversal in the world of desktop IT?

The general feeling amongst many I talk to is that Apple are no longer really that interested in supporting the creative professional; look at the situation with FCP X and the ructions that caused. Yes, Apple have added enhancements to FCP X to better support the professional community, but they did appear grudging and something of an afterthought. Apple would obviously prefer to sell lots of copies to Joe Public as opposed to a few copies to the creative sector; it’s sensible economics.

Companies who were talking about standardising on FCP for their video-editing requirements are no longer progressing that strategy. Adobe has certainly started to pick up the slack, and the more traditional niche media developers who were under pressure from Apple are now feeling a lot more confident about their future. All of this is beginning to drive a shift to Windows, and to a certain extent Linux, for the media professional. Linux certainly has a big foothold in the specialised rendering environment, where it makes particular sense for large deployments.

But on the corporate side of things, Apple has never really been healthier; the BYOD meme driven by iOS devices has put Apple firmly on the corporate IT agenda.

So are we going to see something really rather peculiar, where the operating system of choice for many corporate IT users is Apple but for the media sector, it’s anything but Apple?

Funny old world we live in!

p.s. this was typed on an MBA which sits on a home network with Linux, Windows, Android and iOS devices all attached…so it’s not a Fanboi article either way.

Desktop, Data, Devilry

In the post-PC era, the battle for the desktop has moved on to the battle for your data; Microsoft’s leaked new features for SkyDrive demonstrate this nicely. Joining Dropbox, iCloud, the soon-to-be-announced Google Drive and a myriad of others, where you store your data is becoming more and more of a battle-ground. The Battle of the Desktop has moved from the Battle of the Browser to the Battle for Your Data; throw Social Media such as Facebook, Twitter and sites such as Flickr into the mix and this is heading towards one hell of a mess and one hell of a fight.

Where on earth are you going to store your content? And once it is there, how do you get it out and, more importantly, will this drive stickiness? Apple seem to think so; they are building tighter integration between their operating systems and iCloud; Mountain Lion and iOS 6 will see more features leveraging iCloud natively, and Microsoft will do similar things with Windows 8 and SkyDrive. Yes, you will be able to access your data from other operating systems and devices, but it will not be the experience you will get from the native operating systems.

Native Operating Systems? Will we see even tighter integration with the operating systems? Will we see Cloud-Storage gateways built into the operating system? For example, as broadband gets faster, is there a need for large local storage devices? Could your desktop become a caching device with just local SSD storage, intelligently moving your data in and out of the Cloud? Mobile devices are pretty much there already, but they deal with much smaller storage volumes; is the desktop the next frontier for this?
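To make the idea concrete, here’s a little Python sketch of a desktop acting as a small local cache in front of a slower cloud store. It’s purely illustrative; the class and method names are invented for this post, not any real product’s API, and it uses write-through for simplicity where a real system would want write-back and dirty-block tracking:

```python
from collections import OrderedDict

class CloudCachingDesktop:
    """Illustrative sketch: a small local 'SSD' cache in front of a
    slower cloud store, with least-recently-used (LRU) eviction."""

    def __init__(self, cloud_store, capacity=4):
        self.cloud = cloud_store          # any dict-like remote store
        self.capacity = capacity          # max items held locally
        self.local = OrderedDict()        # path -> data, in LRU order

    def read(self, path):
        if path in self.local:            # cache hit: serve locally
            self.local.move_to_end(path)
            return self.local[path]
        data = self.cloud[path]           # cache miss: fetch from cloud
        self._admit(path, data)
        return data

    def write(self, path, data):
        self._admit(path, data)           # keep a local copy...
        self.cloud[path] = data           # ...and write through to the cloud

    def _admit(self, path, data):
        self.local[path] = data
        self.local.move_to_end(path)
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict least recently used
```

The point of the sketch is that the desktop only ever holds a working set; everything else lives in the cloud and is fetched back on demand.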

But could the battle for your data produce the next big monopoly opportunity for Microsoft and Apple? Building hooks in at the operating-system level would seem to make technical sense, but I can hear the cries from a multitude; service providers, content providers and the like will have a massive amount to say about this.

For example, there are media devices such as PVRs; with content providers and broadcasters increasingly providing non-linear access to their content, why is this not all on demand and why do we need a PVR any more? A smaller device with a local SSD cache would make considerably more sense; it would be greener, and removing the spinning disk would probably reduce failures. But this would mean a pretty much wholesale move to IPTV, something which is a little way off.

But arguably, this is something that Apple are really moving towards: owning your content; your data and your life will be theirs. And where Apple go, expect Microsoft to be not far behind. You think the Desktop is irrelevant? I for one don’t believe it; this story has a long way to run. It’s still about the Desktop, just that the Desktop has changed.

Sticky Servers

I read the announcements from HP around their Gen 8 servers with some interest and increasing amusement. Now, HP are an intrinsically amusing company, but it isn’t that which is amusing me; it’s the whole server industry and an interesting trend.

The Intel server industry was built on the back of the ‘PC Compatible’ desktop, where you could buy a PC from pretty much any vendor, run MS-DOS and run the same applications anywhere. They all looked the same and if you could maintain one, you could maintain any of them.

Along came the PC Server and it was pretty much the same thing; if you could maintain Server Brand X, you could maintain Server Brand Y. And so it pootled along until blade-servers came along and muddied the water a bit, but it wasn’t so hard.

If you wanted to migrate between server vendors, it wasn’t rocket science; if you wanted to move from Compaq to Dell to IBM, it was not a big deal to be honest. Although sometimes the way people carried on, you would have thought you were moving from gas-powered computers to electric computers to computers with their own nuclear reactors in.

And then along came Cisco with UCS, and the Intel server got bells, whistles and fancy pants. All in the name of ‘Ease of Use and Management’; it’s all fancy interfaces and APIs; new things to learn and all slightly non-standard.

And now HP follow along with Gen-8; it’s all going to be slightly non-standard and continue the drift away from the original whitebox server. The rest of the vendors are all moving this way too; it’s all about ‘how do I make sure that my customers remain loyal and sticky?’

It’s all going to get increasingly hard to migrate between server vendors without major rethinks and retraining. Perhaps this is all going to accelerate the journey to the public cloud, because I don’t want to have to care about that!

And as a storage guy, I can’t help but laugh!  Welcome to our world!

Storage People Are Different

An oft-heard comment is that ‘Storage People are weird/odd/strange’; what people really mean is that ‘Storage People are different’; Chuck sums up many of the reasons for this in his blog ‘My Continuing Infatuation with Storage’.

Your Storage Team (and I include the BURA teams) often see themselves as the keepers of the keys to the kingdom, for without them and the services that they provide, your businesses would probably fail. They look after that which is most important to any business: its knowledge and its information. Problem is, they know it and most other people forget it; this has left many storage teams and managers with the reputation of being surly, difficult and weird, but if you were carrying the responsibility for your company’s key asset, you’d be a little stressed too. Especially if no-one acknowledged it.

The problem is that for many years, companies have been hoarding corporate gold in dusty vaults looked after by orcs and dragons who won’t let anyone pass or access it; but now people want to get at the gold and make use of it. So the storage team now has to worry not only about ensuring that the information is secure and maintained; people actually want to use it and want ad-hoc access to it, almost on demand.

Problem is that the infrastructures we have in place today are not architected to allow this to happen, and the storage teams do not have the processes and procedures to allow it either. So today’s ‘Storage People may be different’, but tomorrow’s ‘Storage People will be a different kind of different’. They will need to be a lot more business-focussed and more open; but the asset that they’ve been maintaining is growing pretty much exponentially in size and value, so expect them to become even more stressed and maybe even more surly.

That is unless you work closely with them to invest and build a storage infrastructure which supports all your business aspirations; unless vendors invest in technologies which are manageable at scale and businesses learn to appreciate value as opposed to sheer cost.

Open, accessible, available and secure: this is the future storage domain; let’s hope that the storage teams that support it also have these qualities.

Don’t SNIA at #storagebeers

I have just noticed that the SNIA Data Centre Technology Academy London is this year at The Grange Tower Bridge Hotel, which is on the same road as my favourite posh curry restaurant; also, as is traditional for SNIA, it clashes with EMC-World and is on May 23rd.

Assuming that no-one decides to treat me and send me to EMC-World; I am probably going to organise #storagebeers followed by #storagecurry at Cafe Spice Namaste; I will post more nearer the date. But I know many attendees love a good curry and this is really good curry.

So this is an early warning and a good place to be if you can’t make it to EMC-World; you can console yourself with much better beer and a damn fine curry!!!

Soft Cache

VFCache is really yet more evidence that Storage Vendors are simply becoming software suppliers*; although ostensibly a hardware product, the smarts are down in the software layer and that is where they are going to live. EMC are simply leveraging everything that they have learnt from PowerPath (Free PowerPath, Keep Fighting the Fight!) and building on it to introduce storage functionality on the server.

From the presentations we have seen so far, it seems that DeDupe is going to run on the server and not the card; well, that’s my interpretation. Obviously this is going to have some interesting impacts on CPU and memory utilisation, meaning that EMC are going to have to get this right or they risk undermining the whole reason for putting cache closer to the server. Replication and cache consistency may also be server-level functions.
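For readers who haven’t met dedupe before, here’s a minimal Python sketch of what server-side dedupe could look like: blocks are fingerprinted by content hash, and identical blocks share one cached copy. To be clear, the class and its methods are invented for illustration; this is my reading of the concept, not EMC’s implementation:

```python
import hashlib

class DedupedCache:
    """Illustrative sketch: server-side dedupe by content fingerprint.
    Identical blocks share a single cached copy, at the cost of CPU
    spent hashing every block -- the utilisation trade-off above."""

    def __init__(self):
        self.blocks = {}      # fingerprint -> block data (one copy each)
        self.index = {}       # logical address -> fingerprint

    def put(self, address, block):
        fp = hashlib.sha256(block).hexdigest()  # the CPU cost lives here
        self.blocks.setdefault(fp, block)       # store the content only once
        self.index[address] = fp

    def get(self, address):
        return self.blocks[self.index[address]]

    def unique_bytes(self):
        # Physical footprint: only unique blocks consume cache space.
        return sum(len(b) for b in self.blocks.values())
```

Cache three 4KB blocks where two are identical and only 8KB of cache is actually consumed; that saving is exactly why you might accept burning server CPU on the hashing.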

This does have some interesting implications; although it appears to be a hardware product, how hard would it be for EMC to use any ‘flash’ technology which is installed in the server? Do we have to have EMC hardware, or could we use a Fusion-IO card or even a bog-standard SSD? And what happens to the pricing?

EMC are already talking about accelerating any array, although it’ll be better with their own; but will they take this further and use anyone’s cache hardware? We’ll probably end up with another gargantuan certification matrix, because EMC like those, but it does seem possible. Perhaps a PowerPath/Cache which allows a number of different vendors’ caching products to be used? This way EMC can monetise their software even further; or perhaps PowerPath/Cache comes free with VFCache but you need to pay to use it with third-party cache products?

And what about file-based acceleration? Where do NAS and indeed Object storage fit into this? Do they? One of the biggest complaints oft heard about object storage is that it can be slow, so would it benefit from some kind of caching? Could cache hints be carried in the object metadata?

It might also be interesting to see how this could be integrated into the hypervisor stack; perhaps a PowerPath/VE which supports multiple vendors’ caching products?

Now that EMC have validated PCIe local flash-cache concepts, we can start to move on and see where this takes us. And yes, someone other than EMC could have validated the concept, but they didn’t; so let’s move on from there.

How are the other big vendors going to react? IBM, HP, perhaps NetApp with SPAM (Server Performance Acceleration Module), HDS?

[* I’ve been thinking a lot about my recent experience with storage and whether storage teams may have more in common with application teams than first thought; storage infrastructures tend to be more heterogeneous than other data centre infrastructures; certainly there is more vendor differentiation.]

Complex is the new Simple

As EMC add yet another storage array to their product line-up in the form of Project Thunder, or as I like to call it, The Thunderbox, questions are going to be asked about product confusion and clash. VMAX, VNX, vPlex, Isilon, Atmos and now Thunder form a product line which is formidable but may lack a certain amount of clarity.

But is this the problem that many of us think it is? Maybe we need a different way of thinking. If you look at most workloads, I would say that about 80% of them could be met by a single product; but the problem is which 80%, and can the other 20% be met by another product? This would at least keep it down to two products.

However, my experience is beginning to suggest otherwise; although a great majority of our workload can be met by a single solution, we have some niche requirements which can only be met by specific tools. There is some irony that one of my niche requirements is actually general-purpose IT storage, but that’s the industry I work in; your niche requirements will probably be different, but there is no point trying to hammer those requirements onto devices which will do a slightly less than adequate job at best.

At the moment, we manage over a dozen different storage technologies; granted, some of them do overlap and some of them are only there because of slightly dubious requirements from application vendors, but we don’t stress about a new technology coming in and having to support it. The principles of management are common, and once you can manage one technology, you can pretty much manage them all.

Our job would be a lot harder if we tried to force applications onto a common platform; so despite appearances from the outside looking in, our platform’s complexity has actually ensured that our jobs are simpler.

What vendors could do and some have started to do is to ensure that their own product families have common interfaces with converged look and feel. IBM for example have made great strides in this and it is one thing that EMC could take away from IBM.

But a rich product set does not have to be complex although it does need to be explained so that customers understand the use case.

Cache Splash

It’s funny; I had a brief discussion about my blog with my IT director at work today. I wasn’t aware that he was aware that I blogged, but it seems a couple of people outside of work had outed me, in what appear to be very complimentary terms; he was pretty relaxed about my blog, and one of his comments was that he assumed I discussed new products, and I said I did.

But on the way home, I thought about it and to be quite frank, I used to talk a lot about new products but I don’t really do so these days. So it is ironic that today, I’m going to knock out a quick blog about EMC’s VFCache announcement; they don’t need the publicity but I’m going to talk about it anyway.

VFCache is very much a version 1.0 product from what I can see; EMC appear to have set their bar quite low in what they are trying to achieve with this release. It appears that they’ve targeted Fusion-IO directly and decided to go after them from the get-go: trash them early and don’t let another NetApp happen.

Go for engineering simplicity and don’t fill the product full of features…yet! Keeping it simple means that EMC can accelerate any array, not just an EMC array; but in the future, when new features come along, many of these might well only be available with an EMC back-end array. You’ve bought your flash card; if you really want value, you need to partner it with an EMC array.

And in fact, to really leverage any server-side flash product, you probably do need array-awareness to ensure that you don’t do economically silly things like storing multiple copies of the same information in different caches; how many times do you want to cache the same data?

You need an efficient way of telling the array, ‘Oi, I’ve cached this, you don’t need to’; this would allow you to utilise the array cache for workloads which might not easily support server-side caching currently. Perhaps at some point we’ll see a standard, but standards are rarely fast-moving in storage.
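The ‘Oi, I’ve cached this’ idea can be sketched in a few lines of Python; think of it as a hypothetical exclusivity hint between the server cache and the array cache, so the same block never burns space in both. The class and method names here are entirely made up for illustration; no such protocol exists as standard:

```python
class ArrayCache:
    """Illustrative array-side cache that honours exclusivity hints
    from a server-side cache."""

    def __init__(self):
        self.cache = {}

    def admit(self, key, data):
        self.cache[key] = data

    def hint_cached_elsewhere(self, key):
        # The server says it holds this block: drop our copy so the
        # array cache is freed up for other workloads.
        self.cache.pop(key, None)


class ServerCache:
    """Illustrative server-side (e.g. PCIe flash) cache that tells
    the array what it holds, keeping the two caches exclusive."""

    def __init__(self, array):
        self.array = array
        self.cache = {}

    def admit(self, key, data):
        self.cache[key] = data
        self.array.hint_cached_elsewhere(key)  # 'Oi, I've cached this'
```

Once the server admits a block, the array evicts its copy; the array cache is then free for the workloads that can’t use server-side caching.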

I also expect to see EMC build in some intelligence to leverage the split-card capability; perhaps using PowerPath to flag that you might want to consider splitting the card to gain performance?

I’d also be interested in seeing advanced modelling tools which allow you to identify those servers and workloads which would most benefit from VFCache, and what the impact is on the other workloads in the data-centre. If you accelerate one workload with VFCache and hence free up cache on the shared array, do all workloads benefit? Can I target the deployment at key servers?

Deduplication is coming but it needs to be not at the expense of latency.

And of course there is the whole cluster-awareness and cache-consistency thing to sort out and perhaps this whole thing is a cul-de-sac whilst we move to flash-only-shared-storage-arrays…that’s until the next super-fast technology comes along.

Yes, EMC’s announcement is very product 1.0 and a bit ‘ho-hum’, but the future is more interesting. Storage, snorage? Sometimes, but its impact sometimes wakes you up with a bit of a shudder.

I wonder who is going to announce next or what the next announcement might be. 2012 might be a bit more interesting.