

Stretching…

So EMC have finally productised Nile and given it the wonderful name of ‘Elastic Cloud Storage’; there is much to like about it and much I have been asking for…but before I talk about what I like about it, I’ll point out one thing…

Not Stretchy

It’s not very Elastic; well, not when compared to the Public Cloud offerings, unless there is a very complicated finance model behind it, and even then it might not be that Elastic. One of the things that people really like about Public Cloud Storage is that they pay for what they use; if their consumption goes down…then their costs go down.

Now EMC can probably come up with a monthly charge based on how much you are using; they certainly can do capacity on demand. And they might be able to do something with leasing to allow downscaling at a financial level too, but what they can’t easily do is take storage away on demand. So that 5 petabytes will be on-premises and using space; it will also need maintaining even if it spins down to save power.

Currently EMC are stating a 9%-28% lower TCO than Public Cloud…it needs to be. And that is today; Google and Amazon are fighting a price-war, can EMC play in that space and react quickly enough? They claim that they are cheaper after the last round of price-cutting, but what about after the next?

So it’s not as Elastic as Public Cloud and this might matter…unless they are relying on the fact that storage demands never seem to go away.

Commodity

I can’t remember when I started writing about commodity storage and the convergence between storage and servers; be it roll-your-own, or the vendors starting to do something very similar. ZFS really sparked a movement that looked at storage and asked why we need big vendors like EMC, NetApp, HDS and HP at all.

Yet there was always the thorny issue of support and, for many of us, it was a bridge too far. In fact, it actually started to look more expensive than buying a supported product…and we quite liked sleeping at night.

But there were some interesting chassis out there that really started to catch our eyes and even our traditional server vendors were shipping interesting boxes. It was awfully tempting.

And so I kept nagging the traditional vendors…

Many didn’t want to play or were caught up in their traditional business. Some didn’t realise that this was something that they could do and some still don’t.

Acquisition

The one company with the most to lose from a movement to commodity storage was EMC; really, this could be very bad news. There’s enough ‘hate’ in the market for a commodity movement to get some real traction. So they bought a company that could allow commoditisation of storage at scale; I think at least some of us thought that would be the end of that, or that it would disappear down a rabbit hole to resurface as an overpriced product.

And the initial indications were that it wasn’t going to disappear but it was going to be stupidly expensive.

Also getting EMC to talk sensibly about Scale-IO was a real struggle but the indication is that it was a good but expensive product.

Today

So what EMC have announced at EMC World is kind of surprising, in that it looks like they may well be willing to rip the guts out of their own market. We can argue about the pricing and the TCO model but it looks a good start; street prices and list prices have a very loose relationship. The four-year TCO they are quoting needs to drop by a bit to be really interesting.

But the packaging and the option to deploy on your own hardware (although this is going to be from a carefully controlled catalogue, I guess) is a real change for EMC. You will also notice that EMC have got into the server game; a shot across the bows of the converged players?

And don’t just expect this to be a content dump; Scale-IO can do serious I/O if you deploy SSDs.

Tomorrow

My biggest problem with Scale-IO is that it breaks EMC; breaks them in a good way, but it’s a completely different sales model. For large storage consumers, an Enterprise License Agreement with all you can eat, deployed onto your chosen commodity platform, is going to be very attractive. Now the ELA might be a big sum but as a per-terabyte cost it might not be so big; and the more you use, the cheaper it gets.

And Old EMC might struggle a bit with that. They’ll probably try to sell you a VMAX to sit behind your ViPR nodes.

Competitors?

Red Hat have an opportunity now with Ceph; especially amongst those who hate EMC for being EMC. IBM could do something with GPFS. HP have a variety of products.

There are certainly smaller competitors as well.

And then there’s VMware with VSAN; which I still don’t understand!

There’s an opportunity here for a number of people…they need to grasp it and compete. This isn’t going to go away.


All The Gear

IBM are a great technology company; they truly are great at technology, and so many of the technologies we take for granted can be traced back to them. And many of today’s implementations are still poorer than the originals.

And yet IBM are not the dominant force that they once were; an organisational behemoth, riven with politics and fiefdoms doesn’t always lend itself to agility in the market and often leads to products that are undercooked and have a bit of a ‘soggy bottom’.

I’ve been researching the GSS offering from IBM, the GPFS Storage Server; as regular readers of this blog will know, I’m a big fan of GPFS and have a fair amount of it installed. But don’t think that I’m blinkered to some of the complexities around GPFS; it still deserves a fair crack of the whip.

There’s a lot to like about GSS; it builds on the solid foundations of GPFS and brings a couple of excellent new features into play.

GPFS Native RAID, also known as declustered RAID, is a software implementation of micro-RAID; RAID is done at a block level as opposed to a disk level. This generally means that the cost of rebuilds can be reduced and the time to get back to a protected level shortened. As disks continue to get larger, conventional RAID implementations struggle and you can be looking at hours, if not days, to get back to a protected state.
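The rebuild advantage lends itself to some back-of-envelope arithmetic (the throughput and drive-count figures below are illustrative assumptions, not IBM’s numbers): a conventional rebuild funnels everything through one spare drive, while a declustered rebuild spreads the work across the whole population.

```python
def rebuild_hours(disk_tb: float, write_mb_s: float, participating_drives: int) -> float:
    """Idealised time to restore redundancy, assuming rebuild throughput
    scales linearly with the number of drives sharing the work."""
    total_mb = disk_tb * 1_000_000
    return total_mb / (write_mb_s * participating_drives) / 3600

# Conventional RAID: one spare drive absorbs the entire rebuild.
print(round(rebuild_hours(4, 150, 1), 1))    # roughly 7.4 hours for a 4TB disk
# Declustered RAID: 58 drives (hypothetical) each rebuild a slice.
print(round(rebuild_hours(4, 150, 58), 2))   # minutes rather than hours
```

In practice the speed-up is sub-linear, because rebuild traffic competes with user I/O, but the direction of the gain is the point.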

Disk Hospital: by constantly monitoring the health of individual disks and collecting metrics on them, the GSS can detect failing disks very early on. And there is a dirty secret in the storage world; most disk failures in a storage array are not really failures at all and can be simply recovered from. A simple power-cycle or a firmware reflash can be enough to avoid a failure and a full recovery scenario.

X-IO have been advocating this for a long time; this can reduce maintenance windows and prevent unnecessary rebuilds. It should reduce maintenance costs as well.
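The escalation logic can be sketched like this (entirely hypothetical code, not IBM’s or X-IO’s implementation): try cheap, non-destructive recovery actions before condemning a drive.

```python
class ToyDrive:
    """Stand-in for a flaky drive that recovers after a power-cycle."""
    def __init__(self):
        self._ok = False

    def power_cycle(self):
        self._ok = True          # often all a 'failed' drive actually needs

    def reflash_firmware(self):
        self._ok = True

    def healthy(self) -> bool:
        return self._ok

def admit_to_hospital(drive) -> str:
    """Escalate through recovery actions; only fail the drive if none work."""
    for action in ("power_cycle", "reflash_firmware"):
        getattr(drive, action)()
        if drive.healthy():
            return "returned_to_service"
    return "failed_replace_drive"

print(admit_to_hospital(ToyDrive()))   # returned to service, no rebuild needed
```

Every drive that comes back this way is a rebuild that never happens and a maintenance visit that never gets booked.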

Both of these technologies are great and very important to a scalable storage environment.

So why aren’t IBM pushing GSS in general? It’s stuffed full of technology and useful stuff.

The problem is GPFS…GPFS is currently too complicated for many; it’s never going to be a general-purpose file system, and the licensing model alone precludes that. So if you want to utilise it with a whole bunch of clients, you are going to be rolling your own NFS/SMB 3.0 gateway. Been there, done that…still doing that, but it’s not really a sensible option for many.

If IBM really want the GSS to be a success, they need a scalable and supported NAS gateway in front of it; it needs to be simple to manage. It needs integration with the various virtualisation platforms, and they need to simplify the GPFS licence model…when I say simplify, I mean get rid of the client licence cost.

I want to like the product and not just love the technology.

Until then…IBM have got all the gear and no idea….

VSANity?

So VSAN is finally here in a released form; on paper, it sure looks impressive but it’s not for me.

I spend an awful lot of time looking at Scale-Out storage systems; looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think that their product falls some way short of the mark. But then I don’t think that I’m really the target market; it’s not really ready or appropriate for Media and Entertainment, or for anyone interested in HyperScale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some of the competing products are not; if I wanted to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I could. And there might be some excellent reasons why I would want to do so; I might transcode on bare-metal machines, for example, but present out on virtualised application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting would be better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks and there are SuperMicro chassis that take 72. Increasing the spindle count not only increases the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regards to the inter-server communication.
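The spindle arithmetic is simple enough to sketch (per-disk IOPS and node count below are illustrative assumptions, not VMware figures):

```python
def raw_iops(disks_per_node: int, nodes: int, iops_per_disk: int = 150) -> int:
    """Aggregate raw IOPS, ignoring network and controller saturation."""
    return disks_per_node * nodes * iops_per_disk

# An 8-node cluster at the 35-disk limit versus denser chassis:
print(raw_iops(35, 8))   # 42000
print(raw_iops(60, 8))   # 72000 - an SL4540-sized chassis
print(raw_iops(72, 8))   # 86400 - a dense SuperMicro-sized chassis
```

The point is the gradient, not the absolute numbers: capping the disk count caps the raw I/O headroom of the whole cluster.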

Yet, I do think it is interesting how the converged IT stacks are progressing; the differences in approach; VMware itself is pretty much a converged stack now but it is a software converged stack; VCE and Nutanix converge onto hardware as well. And yes, VMware is currently the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; but I’m not sure what the impact of unbalanced clusters would be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.


IT’s choking the life out of me.

I’ve been fairly used to the idea that my PC at home is substantially better than my work one; this has certainly been the case for me for more than a decade. I’m a geek and I spend more than most on my personal technology environment.

However, it is no longer just my home PC; I’ve got better software tools and back-end systems; my home workflow is so much better than my work workflow; it’s not even close. And the integration with my mobile devices, it’s a completely different league altogether. I can edit documents on my iPad, my MBA, my desktop, even my phone and they’ll all sync up and be in the same place for me. My email is a common experience across all devices. My media; it’s just there.

With the only real exception of games, it doesn’t matter which device I’m using to do stuff.

And what is more; it’s not just me; my daughter has the same for her stuff as does my wife. We’ve not had to do anything clever, there’s no clever scripting involved, we just use consumer-level stuff.

Yet our working experience is so much poorer; if my wife wants to work on her stuff for her job, she’s either got to email it to herself or use ‘GoToMyPC’ provided by her employer.

Let’s be honest, for most of us now…our work environment is quite frankly rubbish. It has fallen so far behind consumer IT, it’s sad.

It’s no longer just the technology enthusiast who has the better environment…it’s almost everyone with access to IT. And not only that, we pay a lot less for it than the average business does.

Our suppliers hide behind a cloak of complexity; I’m beginning to wonder if IT as it is traditionally understood by business is no longer an enabler, it’s just a choke-point.

And yes there are many excuses as to why this is the case; go ahead…make them! I’ve made them myself but I don’t really believe them any more…do you?

Disrupt?

So you’ve founded a new storage business; you’ve got a great idea and you want to disrupt the market? Good for you…but you want to maintain the same-old margins as the old crew?

So you build it around commodity hardware; the same commodity hardware that I can buy off the shelf; basically the same disks that I can pick up from PC World or order from my preferred Enterprise tin-shifter.

You tell me that you are lean and mean? You don’t have huge sales overheads, no huge marketing budget and no legacy code to maintain?

You tell me that it’s all about the software but you still want to clothe it in hardware.

And then you tell me it’s cheaper than the stuff that I buy from my current vendor? How much cheaper? 20%, 30%, 40%, 50%??

Then I do the calculations; your cost base and your BoM are much lower, and you are actually making more money per terabyte than the big old company that you used to work for?

But hey, I’m still saving money, so that’s okay….

Of course, then I dig a bit more…what about support? Your support organisation is tiny; I do my due diligence; can you really hit your response times?

But you’ve got a really great feature? How great? I’ve not seen a single vendor come up with a feature so awesome and so unique that no-one manages to copy it…and there are few which aren’t already in a lab somewhere.

In a race to the bottom; you are still too greedy. You still believe that customers are stupid and will accept being ripped off.

If you were truly disruptive….you’d work out a way of articulating the value of your software without clothing it in hardware. You’d work with me on getting it onto commodity hardware and no I’m not talking about some no-name white-box; you’d work with me on getting it onto my preferred vendor’s kit; be it HP, Dell, Lenovo, Oracle or whoever else…

For hardware issues; I could utilise the economies of scale and the leverage I have with my tin-shifter; you wouldn’t have to set-up a maintenance function or sub-contract it to some third party who will inevitably let us both down.

And as for software support; well, that you could concentrate on…

You’d help me be truly disruptive…and ultimately we’d both be successful…

2014 – A Look Forward….

As we come to the end of another year, it is worth looking forward to see what, if anything, is going to change in the storage world next year; because this year has pretty much been a bust as far as innovation and radical new products go.

So what is going to change?

I get the feeling not a huge amount.

Storage growth is going to continue for the end-users but the vendors are going to continue to experience a plateau of revenues. As end-users, we will expect more for our money but it will be mostly more of the same.

More hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins.

Early conversations this year point to the fact that the vendors really have little idea how to price their products in this space; if your software+commodity-hardware=cost-of-enterprise-array, what is in it for me? If vendors get their pricing right, this could be very disruptive; but at what cost to their own market position?
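That pricing equation can be made concrete (all the per-terabyte figures here are hypothetical): if licence plus commodity tin adds up to the array price, the customer’s saving is zero and there is no reason to move.

```python
def sds_saving_per_tb(licence: float, commodity_hw: float, enterprise_array: float) -> float:
    """Customer saving per TB from an SDS-on-commodity build versus an array."""
    return enterprise_array - (licence + commodity_hw)

# Licence priced so the totals match the array: nothing in it for the customer.
print(sds_saving_per_tb(licence=400, commodity_hw=600, enterprise_array=1000))  # 0
# Licence priced to leave a genuine saving: now it is disruptive.
print(sds_saving_per_tb(licence=200, commodity_hw=300, enterprise_array=1000))  # 500
```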

We shall see more attempts to integrate storage into the whole-stacks and we’ll see more attempts to converge compute, network and storage at hardware and software levels. Most of these will be some kind of Frankenpliance and converged only in shrink-wrap.

Flash will continue to be hyped as the saviour of the data-centre but we’ll still struggle to find real value in the proposition in many places as will many investors. There is a reckoning coming. I think some of the hybrid manufacturers might do better than the All-Flash challengers.

Hopefully however the costs of commodity SSDs will keep coming down and it’ll finally allow everyone to enjoy better performance on their work-laptops!

Shingled Magnetic Recording will allow storage densities to increase and we’ll see larger capacity drives ship but don’t expect them to appear in mainstream arrays soon; the vibration issues and re-write process is going to require some clever software and hardware to fully commercialise these. Still for those of us who are interested in long-term archive disks, this is an area worth watching.

FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market and perhaps 2014 will be the year that Object storage finally takes off.

Five Years On (part 3)

So all the changes referenced in part 2; what do they mean? Are we at an inflection point?

The answer to the latter question is probably yes, but we could be at a number of inflection points; localised vendor inflection points and also industry-wide ones. We’ll probably not know for a couple more years; only with hindsight will we be able to look back and see.

The most dramatic change that we have seen in the past five years is the coming of Flash-based storage devices; this is beginning to change our estates and what we thought was going to become the norm.

Five years ago, we were talking about general-purpose, multi-tier arrays; automated tiering and provisioning, all coming together in a single monolithic device. The multi-protocol filer model was going to become the dominant one; this was going to allow us to break down silos in the data centre and to simplify the estate.

Arrays were getting bigger, as were disks; I/O density was a real problem and generally the slowest part of any system was the back-end storage.

And then SSDs began to happen; I know that flash-based/memory-based arrays have been around for a long time, but they were very much a specialist, niche market. The arrival of the SSD; flash in a familiar form-factor at a slightly less eye-watering price; was a real change-bringer.

EMC and others scrambled to make use of this technology; treating SSDs as a faster disk tier in the existing arrays was the order of the day. Automated Storage Tiering was the must-have technology for many array manufacturers; few customers could afford to run all of their workloads on an entirely SSD-based infrastructure.

Yet if you talk to the early adopters of SSDs in these arrays; you will soon hear some horror stories; the legacy arrays simply were not architected to make best use of the SSDs in them. And arguably still aren’t; yes, they’ll run faster than your 15k spinning rust tier but you are not getting the full value from them.

I think that all the legacy array manufacturers knew that there were going to be bottlenecks and problems; the different approaches that the vendors took almost point to this, as do the different approaches taken within a single vendor…from using flash as a cache to utilising it simply as a faster disk; from using it as an extension of the read cache to using it as both a read and a write cache.

Vendors claiming that they had the one true answer….none of them did.

This has enabled a bunch of start-ups to burgeon; where confusion reigns, there is opportunity for disruption. That, and the open-sourcing of ZFS, has created massive opportunity for smaller start-ups; the cost of entry into the market has dropped. Although if you examine many of the start-ups’ offerings, they are really a familiar architecture, just aimed at a different price point and market from the larger storage vendors.

And we have seen a veritable snow-storm of cash both in the form of VC-money but also acquisition as the traditional vendors realise that they simply cannot innovate quickly enough within their own confines.

Whilst all this was going on, there has been an incredible rise in the amount of data being stored and captured. The more traditional architectures struggle; scale-up has its limits in many cases, and techniques from the HPC marketplace began to go mainstream. Scale-out architectures had begun to appear; first in the HPC market, then in the media space, and now, with the massive data demands of the traditional enterprises, we see them across the board.

Throw SSDs and Scale-Out together with Virtualisation and you have created a perfect opportunity for all in the storage market to come up with new ways of fleecing providing value to their customers.

How do you get these newly siloed data-stores to work in a harmonious and easy-to-manage way? How do we meet the demands of businesses that are growing ever faster? Of course, we invent a new acronym, that’s how…‘SDS’, or ‘Software Defined Storage’.

Funnily enough, the whole SDS movement takes me right back to the beginning; many of my early blogs focused on the terribleness of ECC as a tool to manage storage. Much of that was down to the frustration that it was both truly awful and trying to do too much.

It needed to be simpler; the administration tools were getting better, but the umbrella tools such as ECC just seemed to collapse under their own weight. Getting information out of them was hard work; EMC had teams devoted to writing custom reports for customers because it was so hard to get ECC to report anything useful. There was no real API and it was easier to interrogate the database directly.

But even then it struck me that it should have been simple to code something which sat on top of the various arrays (from all vendors), queried them and pulled back useful information. Most of them already had fully featured CLIs; it should not have been beyond the wit of man to code a layer that sat above those CLIs, took simple operations such as ‘allocate 10x10Gb LUNs to host ‘x’’ and turned them into the appropriate array commands; no matter which array.
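A toy version of that layer might look like this; the vendor names and command strings are invented placeholders, not real array CLIs, but they show the shape of the idea: one verb in, vendor-specific commands out.

```python
# Invented command templates standing in for real vendor CLIs.
VENDOR_TEMPLATES = {
    "vendor_a": "cli-a createlun --count {count} --size {size_gb}g --host {host}",
    "vendor_b": "cli-b lun provision {count}x{size_gb}GB map={host}",
}

def allocate_luns(vendor: str, host: str, count: int, size_gb: int) -> str:
    """Translate 'allocate N x M GB LUNs to host' into one vendor's syntax."""
    return VENDOR_TEMPLATES[vendor].format(count=count, size_gb=size_gb, host=host)

# The same request, rendered for two different arrays:
print(allocate_luns("vendor_a", "hostx", count=10, size_gb=10))
print(allocate_luns("vendor_b", "hostx", count=10, size_gb=10))
```

The hard part was never the translation itself; it was getting every vendor to hold the line on a stable, documented interface underneath it.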

I think this is the promise of SDS. I hope the next five years will see it develop; that we see storage within a data centre becoming more standardised from a programmatic point of view.

I have hopes but I’m sure we’ll see many of the vendors trying to push their standard and we’ll probably still be in a world of storage silos and ponds…not a unified Sea of Storage.


Keep On Syncing…Safely…

Edward Snowden’s revelations about the activities of the various Western security organisations have been no real surprise and yet also a wake-up call to how the landscape of our personal data security has changed. Multiple devices and increased mobility have meant that we have looked for ways to ensure that we have access to our data wherever and whenever; gone are the days when even the average household has a single computing device, and it is increasingly uncommon to find a homogeneous household in terms of manufacturer or operating system. It is now fairly common to find Windows, OSX, Android, iOS and even Linux devices all within a single house; throw in digital cameras and smart TVs and it is no wonder that data-sharing in a secure fashion is more and more complex for the average person.

So file-syncing and sharing products such as Dropbox, Box, SkyDrive and GoogleDrive are pretty much inevitable consequences; and if you are anything like me, you have a selection of these, some free and some charged. But pretty much all of them are insecure; some terribly so.

Of course, it would be nice if the operating-system manufacturers could agree on a standard which included encryption of data in flight and at rest, with a simple and easy-to-use key-sharing mechanism. Even then we would probably not trust it, but it might at least provide an initial level of defence.

I have started to look at ways of adding encryption to the various cloud services I use; in the past, I made fairly heavy use of TrueCrypt, but it is not especially seamless and can be clunky. This is becoming more feasible, however, as apps such as Cryptonite and DiskDecipher appear for mobile devices. Recently I started to play with BoxCryptor and EncFS; BoxCryptor seems nice and easy to use, certainly on the desktop. It supports multiple cloud providers, although the free version only supports one; if you want to encrypt your multiple cloud stores, you will have to pay. There are alternatives such as Cloudfogger, but development for BoxCryptor seems to be ongoing.

And then there is the option of building your own ‘Sync and Share’ service; Transporter recently kickstarted successfully and looks good; Plug is in the process of kickstarting. Synology devices have Cloud Station; QNAP have myQNAPcloud. You can go totally build-your-own and use ownCloud. In the Enterprise, you have a multitude of options as well.

But there is one thing: you do not need to store your stuff in the Cloud in an insecure manner. You have lots of options now, from keeping it local to using a Cloud service provider; encryption is still not as user-friendly as it could be, but it has got easier. You can protect your data; you probably should…

From Servers to Service?

Should Enterprise Vendors consider becoming Service Providers? When Rich Rogers of HDS tweeted this question, my initial response was that they should stick to what they are good at.

This got me thinking: why does everyone think that Enterprise Vendors shouldn’t become Service Providers? Is that a reasonable response, or just a knee-jerk ‘get out of my space and stick to doing what you are ‘good’ at’?

It is often suggested that you should not compete with your customers; if Enterprise Vendors move into the Service Provider space, they compete with some of their largest customers, the Service Providers and potentially all of their customers; the Enterprise IT departments.

But the Service Providers are already beginning to compete with the Enterprise Vendors, more and more of them are looking at moving to a commodity model and not buying everything from the Enterprise Vendors; larger IT departments are thinking the same. Some of this is due to cost but much of it is that they feel that they can do a better job of meeting their business requirements by engineering solutions internally.

If the Enterprise Vendors find themselves squeezed by this; is it really fair that they should stay in their little box and watch their revenues dwindle away? They can compete in different ways, they can compete by moving their own products to more of a commodity model, many are already beginning to do so; they could compete by building a Service Provider model and move into that space.

Many of the Enterprise Vendors have substantial internal IT functions; some have large services organisations; some already play in the hosting/outsourcing space.  So why shouldn’t they move into the Service Provider space? Why not leverage the skills that they already have?

Yes, they would change their business model; they would have to be careful to compete on a level playing field and not use their internal influence on pricing and development to drive an unfair competitive advantage. But if they feel that they can do a better job than the existing Service Providers, driving down costs and improving capability in this space…more power to them.

If an online bookstore can do it; why shouldn’t they? I don’t fear their entry into the market, history suggests that they have made a bit of a hash of it so far…but guys fill your boots.

And potentially, it improves things for us all; as the vendors try to manage their kit at scale, as they try to maintain service availability, as they try to deploy and develop an agile service; we all get to benefit from the improvements…Service Providers, Enterprise Vendors, End-Users…everyone.


The Reptile House

I was fortunate enough to spend an hour or so with Amitabh Srivastava of EMC; Amitabh is responsible for the Advanced Software division in EMC and one of the principal architects behind ViPR. It was an open discussion about the inspiration behind ViPR and where storage needs to go. And we certainly tried to avoid the ‘Software Defined’ meme.

Amitabh is not a storage guy; in fact, his previous role at Microsoft puts him firmly in the compute/server camp, but it was his experience building out the Azure Cloud offering which brought him an appreciation of the problems that storage and data face going forward. He has some pretty funny stories about how the Azure Cloud came about and what a learning experience it was; how he came to realise that this storage stuff was pretty interesting and more complex than just allocating some space.

Building dynamic compute environments is pretty much a solved problem; you have a choice of solutions and fairly mature ones. Dynamic networks are well on the way to being solved.

But building a dynamic and agile storage environment is hard, and it’s not a solved problem yet. Storage, and more importantly the data it holds, has gravity; or as I like to think of it, long-term persistence. Compute resource can be scaled up and down; data rarely scales down and generally hangs around. Data analytics just means that our end-users are going to hug data for longer. So you’ve got this heavy and growing thing…it’s not agile, but there needs to be some way of making it appear more agile.

You can easily move compute workloads and it’s relatively simple to change your network configuration to reflect those movements, but moving large quantities of data around is a non-trivial thing to do…well, at speed anyway.

Large Enterprise storage environments are heterogeneous; dual-supplier strategies are common, sometimes to keep vendors honest, but often there is an acceptance that different arrays have different capabilities and use-cases. Three or four years ago, I thought we were heading towards general-purpose storage arrays; we now have more niche and siloed capabilities than ever before. Driven by developments in all-flash arrays, commodity hardware and new business requirements, the environment is getting more complex, not simpler.

Storage teams need a way of managing these heterogenous environments in a common and converged manner.

And everyone is trying to do things better, cheaper and faster; operational budgets remain pretty flat, headcounts are frozen or shrinking. Anecdotally, talking to my peers; arrays are hanging around longer, refresh cycles have lengthened somewhat.

EMC’s ViPR is an attempt to solve some of these problems.

Can you lay a new access protocol on top of already-existing and persistent data? Can you make it so that you don’t have to migrate many petabytes of data to enable a new protocol? Can you ensure that your existing applications and new applications can use the same data without a massive rewrite? Can you enable your legacy infrastructure to support new technologies?

The access protocol in this case is Object; for some people, Object Storage is religion…all storage should be object, so why the hell would you want some kind of translation layer? But unfortunately, life is never that simple; if you have a lot of legacy applications running and generating useful data, you probably want to protect your investment and continue to run those applications, yet you might want to mine that data using newer applications.

This is heresy to many but reflects today’s reality; if you were starting with a green-field, all your data might live in an object-store but migrating a large existing estate to an object-store is just not realistic as a short term proposition.

ViPR enables your existing file-storage to be accessible as both file and object. Amitabh also mentioned block, but I struggle to see how you would treat a raw block device as an object in any meaningful manner. Perhaps that’s a future conversation.
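To make the translation-layer idea concrete, here is a minimal sketch (hypothetical, not ViPR code) of one datastore serving the same bytes through a file-style path and an object-style bucket/key, without any copy or migration. All class and method names here are my own invention for illustration.

```python
# Hypothetical illustration of dual-protocol access to a single datastore.
# The object interface is a thin mapping over the file namespace; the data
# itself is stored exactly once.

class DualAccessStore:
    """Stores each blob once; serves it via file-path or object-key semantics."""

    def __init__(self):
        self._blobs = {}  # canonical storage: path -> bytes

    # --- file-style access ---------------------------------------------
    def write_file(self, path, data):
        self._blobs[path] = data

    def read_file(self, path):
        return self._blobs[path]

    # --- object-style access: a translation layer over the same data ----
    def get_object(self, bucket, key):
        # Map bucket/key onto the underlying file namespace; no copy made.
        return self._blobs[f"/{bucket}/{key}"]


store = DualAccessStore()
store.write_file("/media/asset42.mxf", b"frame-data")

# The same bytes, reachable by either protocol, without migration:
assert store.read_file("/media/asset42.mxf") == store.get_object("media", "asset42.mxf")
```

The point of the sketch is simply that the object view is metadata-only; legacy applications keep their file paths while new applications address the same data by key.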

But in the world of media and entertainment, I can see this capability being useful; in fact, I can see it making some workflows more efficient: an asset can be acquired and edited in the traditional manner, then ‘moved’ into play-out as an object with rich metadata but without actually moving around the storage environment.

Amitabh also discussed the possibility of presenting your existing storage over HDFS, allowing analytics to be carried out on data in place without moving it. I can see this being appealing, but issues around performance, locking and the like remain challenging.

But ultimately moving to an era where data persists but is accessible in appropriate ways without copying, ingesting and simply buying more and more storage is very appealing. I don’t believe that there will ever be one true protocol; so multi-protocol access to your data is key. And even in a world where everything becomes objects, there will almost certainly be competing APIs and command-sets.

The more real part of ViPR (when I say real, I mean the piece I can see a huge need for today) is the abstraction of the control-plane, making it look and work the same for all the arrays that you manage. Yet after the abomination that is Control Center, can we trust EMC to make Storage Management easy, consistent and scalable? Amitabh has heard all the stories about Control Center, so let’s hope he’s learnt from our pain!

The jury doesn’t yet have any hard evidence to go on, but the vision makes sense.

EMC have committed to openness around ViPR as well; I asked the question…what if someone implements your APIs and makes a better ViPR than ViPR? Amitabh was remarkably relaxed about that; they aren’t going to mess about with the APIs for competitive advantage, and if someone does a better job than them, then that someone deserves to win. They obviously believe that they are the best; in a pluggable, modular storage architecture where it is easy to drop in replacements without disruption, they had better be the best.

A whole ecosystem could be built around ViPR; EMC believe that if they get it right, it could be the on-ramp for many developers to build tools around it. They are actively looking for developers and start-ups to work with ViPR.

Instead of writing tools to manage a specific array, it should be possible to write tools that manage all of the storage in the data-centre. Obviously this relies on either EMC or other storage vendors implementing the plug-ins that enable ViPR to manage a specific array.
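The plug-in model above can be sketched in a few lines; this is a hypothetical shape for a vendor-neutral control plane, not the actual ViPR API. Management tools call one interface, and each vendor supplies an array-specific driver behind it; every name in the sketch is an assumption.

```python
# Hypothetical vendor-neutral control plane: tools talk to ControlPlane,
# vendors implement ArrayDriver for their own hardware.
from abc import ABC, abstractmethod


class ArrayDriver(ABC):
    """Contract a vendor plug-in must implement."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume on this array; return its identifier."""


class DemoArrayDriver(ArrayDriver):
    """Stand-in driver for a fictional array."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        self.volumes[name] = size_gb
        return f"demo-{name}"


class ControlPlane:
    """Registers drivers; management tools never see vendor specifics."""

    def __init__(self):
        self._drivers = {}

    def register(self, vendor, driver):
        self._drivers[vendor] = driver

    def provision(self, vendor, name, size_gb):
        # Same call regardless of which vendor's array sits underneath.
        return self._drivers[vendor].create_volume(name, size_gb)


plane = ControlPlane()
plane.register("democo", DemoArrayDriver())
vol_id = plane.provision("democo", "db01", 100)
```

A start-up shipping a new array would, in this model, only need to write the driver class; all the surrounding tooling comes for free, which is exactly the fast-track appeal described below.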

Will the other storage vendors enable ViPR to manage their arrays and hence increase its value? Or will it be left to EMC to do it? Well, at launch, NetApp support is already there. I didn’t have time to drill into which versions of ONTAP, however, and this is where life could get tricky; the ViPR control layer will need to keep up with the releases from the various vendors. But as more and more storage vendors look at how their storage integrates with the various virtualisation stacks, consistent and early publication of their control functionality becomes key. EMC can use this as enablement for ViPR.

If I were a start-up, for example, ViPR could enable me to fast-track the management capability of my new device. I could concentrate on the storage functionality and capability of the device and not on the peripheral management functionality.

So it’s all pretty interesting stuff, but it’s certainly not a foregone conclusion that this will succeed, and it relies on other vendors coming to play. It is something that we need; we need tools that will enable us to manage at scale, keeping our operational costs down without having to rip and replace.

How will the other vendors react? I have a horrible suspicion that we’ll just end up with a mess of competing attempts, and it will come down to the vendor who ships the widest range of support for third-party devices. But before you dismiss this as just another attempt by EMC to own your storage infrastructure: if a software vendor had shipped or announced something similar, would you dismiss it quite so quickly? ViPR’s biggest strength and weakness is……EMC!

EMC have to prove their commitment to openness, and that may mean that in the short term they do things that seriously assist their competitors at some cost to their own business. I think they need to treat ViPR almost as they did VMware; at one point, it was almost more common to see a joint VMware and NetApp pitch than one involving EMC.

Oh, they also have to ship a GA product. And probably turn a tanker around. And win hearts and minds, show that they have changed…

Finally, let’s forget about Software Defined Anything; let’s forget about trying to redefine existing terms; it doesn’t have to be called anything…we are just looking for Better Storage Management and Capability. Hang your hat on that…