Storagebod


I can only write from my experience and your mileage will vary somewhat, but 2014 is already beginning to get interesting from a storage point of view. And it appears to have little to do with technology… or perhaps with too little technology.

Perhaps the innovation has stopped? Or perhaps we’re finally beginning to see the impact of Google, Amazon and Azure on the Enterprise market. Pricing models are being thrown out of the window as the big vendors try to work out how to defend themselves against the big Cloud players.

Historically high margins are being sacrificed in order to maintain footprint; vendors are competing against themselves internally. Commodity plays are competing with existing product sets; white-box implementations, once something vendors all liked to avoid and FUD, are seriously on the agenda.

It won’t be a complete free-for-all, but expect to start seeing server platforms certified as target platforms for all but the highest-value storage. Engineering objections are being worked around as hardware teams transition into software development teams; those who won’t or can’t will become marginalised.

Last year I saw lip-service being paid to this trend; now I’m beginning to see it happen. A change in focus… long overdue.

If you work in the large Enterprise, it seems that you can have it your way….

And yet, I still see a place for the hardware vendor. I see a place for the vendor that has market-leading support and the engineering smarts that mean that support does not cost a fortune to provide or procure.

Reducing call volumes and onsite visits while still ensuring that every call is handled and dealt with by smart people. This is becoming more and more of a differentiator for me; I don’t want okay support, I want great support.

The move to commoditisation is finally beginning… but I wonder if we are going to need new support models to at least maintain, and hopefully improve, the support we get today.



  1. bartek says:

    There are already many enterprises adopting cloud storage. Due to
    bandwidth/speed limitations as well as security concerns, the better
    deployment is a hybrid structure, which means putting cold data onto
    the cloud and keeping hot data inside the private datacentre. In this
    way the storage cost can usually be reduced significantly. But
    protecting company data from a cloud failure remains the administrators’
    responsibility as IT professionals. The trend of putting enterprise data
    onto the cloud seems inevitable. For example, when the Nirvanix cloud
    storage service shut down in October 2013, its customers didn’t stop
    using cloud storage; they just moved to other vendors.

    Bartłomiej Mytnik
    Business Development Manager EMEA

  2. chris james says:

    I think you are right, Martin, but the transition is being caused by
    innovation rather than a lack of it. The big hardware players are being
    attacked by the new wave of flash technology. These flash vendors are
    small and agile, so they can react quickly to changes in demand –
    however, on their own their products are not enterprise-strength.

    What we are seeing is a change whereby flash and Infrastructure
    Performance Monitoring are being implemented together. The joint
    solution can give a guaranteed performance SLA, so it ticks all the
    boxes regarding enterprise needs and delivers at a much reduced cost.

    Service and support are growing in this sector more rapidly than
    elsewhere. The key word is performance – you need a performance
    guarantee for your applications, not just an availability SLA. Hosting
    companies are starting to provide this, so you know your applications
    are performing end-to-end across a heterogeneous environment, in real
    time and without impacting performance. If you know that everything is
    performing as it should, then you don’t have to worry about what the
    technology supporting you is – and the host can therefore reduce their
    costs accordingly.

    We are also seeing a rise in ‘Critical Infrastructure Audits’. This is
    where the infrastructure supporting the critical applications is
    assessed, as a service, so a baseline or benchmark can be set for
    application performance. This is done by an impartial Infrastructure
    Performance Management platform (not a tool provided by your hardware
    vendor – as, surprise, surprise, they are biased towards their own
    element). Once application performance has been assessed, changes can be
    made to improve it. This must be done across the entire data centre
    infrastructure, as elements impact each other and one change can have a
    negative impact on another area.

    Application performance services will continue to grow throughout 2014
    as organisations realise that IT is once again a source of competitive
    advantage rather than an overhead. Costs can be reduced and systems
    improved at the same time.

    Chris James
    Marketing Director, EMEA, Virtual Instruments

  3. cljsf says:

    Really interesting article. So they can’t hold back the sea any longer? Is this something I might be able to learn more about at the StorageBeers on Wednesday? Or is that more of a social meet-up? Thanks

