
More Virtual Discussion

Barry Whyte writes on one of my favourite topics, 'Storage Virtualisation'.

Now, I do actually think that both SVC and the USP-V have a place in the data centre; they shouldn't, but they do! They (and NetApp's V-Series working in block mode) provide a common layer of abstraction over the various storage devices which exist in a data centre. If you have more than one type of storage, or indeed more than one array, and especially if you are attempting to tier data across arrays of many types, you should probably be considering these.

If you are going to attempt a mass data migration, the SVC is something that should certainly be considered due to the relative ease of getting into SVC and getting back out of it. IBM have thought long and hard about the migration use-case for SVC and added the necessary functionality to get back out again. A sensible move, because one of the major objections to most major virtualisation projects is: what do I do if it doesn't work for me? How the hell do I get out?
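To make that concrete, here's a toy sketch in Python (entirely made-up names; this is not the SVC API or any vendor's interface) of why a virtualisation layer makes migration reversible: the host only ever addresses the virtual volume, so extents can be copied between backend arrays and the mapping repointed underneath it, and getting out is the same operation pointed the other way.

```python
# Toy model of transparent migration through a virtualisation layer.
# All names are hypothetical; this is not the SVC (or any vendor's) API.

class Array:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # extent_id -> data

class VirtualVolume:
    """Host-facing volume: virtual extent -> (array, extent_id) mappings."""
    def __init__(self, mappings):
        self.mappings = mappings

    def read(self, n):
        array, extent = self.mappings[n]
        return array.blocks[extent]

def migrate(volume, n, target):
    """Move one virtual extent to another array, invisibly to the host."""
    source, extent = volume.mappings[n]
    target.blocks[extent] = source.blocks.pop(extent)  # copy the data
    volume.mappings[n] = (target, extent)              # repoint the map

old, new = Array("old-array"), Array("new-array")
old.blocks[0] = b"application data"
vol = VirtualVolume({0: (old, 0)})

migrate(vol, 0, new)                       # 'getting out' works the same way
assert vol.read(0) == b"application data"  # the host's view never changes
```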

Also, virtualisation devices allow me to drive up utilisation by enabling pockets of storage dotted across arrays to be pooled and used. In fact, this is often the major selling point of the virtualisation devices; they make the previously unusable usable. However, I would argue that these little pockets of storage are an aberration of legacy architectures: a by-product of traditional arrays and how they are carved up into logical (some would say virtual) disks.
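As a rough illustration of the pooling (hypothetical numbers and names, not any product's interface), the trick is simply that one allocator sees every array's leftover pocket at once:

```python
# Toy sketch of pooling stranded free capacity across arrays.
# Numbers and names are illustrative only.

free_pockets = {
    "array-A": 40,   # GB left over after its LUNs were carved
    "array-B": 15,
    "array-C": 25,
}

def allocate(pool, size_gb):
    """Satisfy one request from whichever pockets have space."""
    placement = {}
    for array, free in pool.items():
        if size_gb == 0:
            break
        take = min(free, size_gb)
        if take:
            placement[array] = take
            pool[array] -= take
            size_gb -= take
    if size_gb:
        raise RuntimeError("pool exhausted")
    return placement

# A 60 GB volume no single array could supply is built from three pockets:
print(allocate(free_pockets, 60))  # {'array-A': 40, 'array-B': 15, 'array-C': 5}
```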

The virtualisation devices can take these little pools of disk and, like a baker with pastry offcuts, roll them back into a ball to be re-used. However, you do have to be careful when collecting your pastry offcuts that you don't end up with a ball which doesn't stick together especially well and has a habit of flaking and breaking when baked.

This could well be your experience when rolling up your little pockets of storage: disks from different arrays and vendors will perform in different ways, and spreading virtual devices across multiple arrays from different vendors may cause some interesting performance challenges. If the array holding the pocket of storage is already fully utilised from an I/O point of view, attempting to use that disk might well have a negative impact on application performance.

Actually, spreading a single application across multiple arrays needs careful planning and consideration. It will complicate your change management, problem determination, configuration management and many other necessary processes. You still need to maintain the underlying arrays, and understanding the impact of an outage to one of them once you have fully virtualised might be interesting; an outage to a single array could take out your whole data-centre environment.

You might find you get more benefit in the long term by considering an array architecture which inherently enables you to use disk in a more flexible and efficient manner. You may also find that using automation and allowing the array to make decisions about where it puts data drives up efficiency.
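As a sketch of what 'letting the array decide' can mean in practice (illustrative only, not any particular product's algorithm), a wide-striping allocator that drops each new extent onto the least-loaded spindle spreads capacity and I/O without an administrator placing anything:

```python
import heapq

# Toy wide-striping placement: each new extent goes to the least-loaded
# spindle, so capacity and I/O even out across the whole back end
# automatically. Purely illustrative; no real product's algorithm.

def place_extents(n_extents, n_spindles):
    load = [(0, s) for s in range(n_spindles)]  # (extents placed, spindle id)
    heapq.heapify(load)
    placement = []
    for _ in range(n_extents):
        used, spindle = heapq.heappop(load)     # pick the emptiest spindle
        placement.append(spindle)
        heapq.heappush(load, (used + 1, spindle))
    return placement

# 10 extents across 4 spindles land evenly; no hotspots to manage by hand:
print(place_extents(10, 4))  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```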

So as we move to more scalable, efficient and automated environments, I wonder if we will look back at things like the USP-V, SVC etc. as a cul-de-sac driven by today's necessity! Or perhaps they truly are the future?


2 Comments

  1. Hi Barry,
    I agree with you: tiered storage and dynamic provisioning are use cases which drive customers to investigate these virtualization offerings. Hitachi Data Systems' growth has been fueled by enterprise customers implementing our Universal Storage Platform V virtualization capabilities to support their data mobility requirements across the data center.
    I am not aligned with your views on the capacity reclaim statement. These capacity reclaim savings are common among our customers. The root cause of this previously wasted capacity has less to do with legacy architecture and more with procurement strategy, budgeting processes and the business impact of acquisitions. It is not uncommon for customers to have a different vendor per storage tier, or even to divide their capacity requirement between vendors to ensure competitive pricing. That being said, dynamic provisioning is often implemented for the performance benefits of wide striping, not for the capacity savings.
    To add to your last question, I do believe that some virtualization offerings are being implemented as a management layer in data center storage architectures; the need to manage heterogeneous storage platforms is not going away for these customers.
    As far as which virtualization offerings will be part of the future and which will end up in a cul-de-sac? That projection depends on enterprise adoption; not all virtualization offerings are implemented the same way. Storage virtualization offerings differ greatly in terms of maturity and features, and not all customers have the same performance, scalability, replication or management requirements. That is why the Universal Storage Platform V is being adopted for data center-wide requirements, while solutions like SVC or Invista are being selected for their lower entry price for smaller requirements, or for niche use cases like migration.

  2. Martin G says:

    Hi Bob! You can call me Martin or Storagebod! I was linking to Barry’s blog…
    Well, in my experience, capacity is often wasted simply due to the complexity of the legacy implementations, which makes it very hard to fully utilise the disk. Performance considerations, such as disk being maxed out from an I/O point of view, also play a very large part in this. Short-stroking to get performance is not uncommon.
    Wide-striping can drive up capacity utilisation simply due to its ease of use and the fact that managing for performance becomes a lot simpler. Wide-striping has both performance and capacity benefits; you can have your cake and eat it in most cases.
    I am not convinced we will be talking about storage virtualisation in five years in the same terms as we are today. Much of the functionality delivered by Storage Virtualisation is moving up the stack.
    The concepts of Data Centre Operating Systems and levels of end-to-end infrastructure integration brought by these may consign the current virtualisation architectures to the annals of history.
