
Many Solutions, Many Problems…

Instead of focusing on products, more and more of us are concentrating on solutions; products don't solve business problems, solutions do. I like this entry on Eigenmagic, which looks at the differences between products and solutions.

But concentrating on solutions can be dangerous: before you know it, you have a data centre full of point solutions and a support nightmare. Many vendors like to package their offerings as black-box solutions which require their own OS builds and their own hardware, do not support virtualisation, and do not co-exist well with others.

Software appliances which run on industry-standard hardware may well be the answer, but even these often have stringent requirements. I have come across software appliances which specify that they only support certain types of server underneath; we need to move away from this towards very high-level requirements, i.e. how much memory, how much CPU and how much I/O is required to support the application, not that it requires a specific HP server.
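To make the point concrete, here is a minimal sketch of what "high-level requirements" could look like in practice: the appliance declares abstract minimums (memory, CPU, I/O) and any host that meets them qualifies. The figures, dictionary keys and function names are all hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: express an appliance's requirements as abstract
# resources rather than as a specific server model, then validate any host.

REQUIREMENTS = {
    "memory_gb": 16,   # minimum RAM (illustrative figure)
    "cpu_cores": 4,    # minimum logical cores
    "io_mbps": 400,    # minimum sustained disk throughput
}

def host_meets_requirements(host: dict, reqs: dict = REQUIREMENTS) -> bool:
    """Return True if the host satisfies every declared minimum."""
    return all(host.get(key, 0) >= minimum for key, minimum in reqs.items())

# Any vendor's box qualifies, as long as the numbers add up.
commodity_box = {"memory_gb": 32, "cpu_cores": 8, "io_mbps": 550}
print(host_meets_requirements(commodity_box))  # True
```

The point of the sketch is that the check is over capabilities, not brands: swap the HP box for anything off the shelf and the appliance neither knows nor cares.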

Solutions are great as long as they are not used as a way to sell me expensive OEMed hardware which I could buy off the shelf for half the cost. And you may claim that the TCO will be lower; let me tell you, often the TCO goes up, because I still have to manage the security patching, multiple support contracts, non-standard form-factors, etc.


8 Comments

  1. paulrmc says:

    Storagebod,
Our object storage stuff (@Caringo, full disclosure;) is an integrated, single-deliverable SW appliance that will boot equally on any industry-standard HW, from an Atom-powered EeeBox (I can hold my 4-node home cluster in one hand) to a dual 4-core 48-disk monster. It will even combine these extremes – and everything in between – in a single cluster and leverage their respective fortes.
    Is that what you mean? I know that’s what I wanted. But you’d be surprised how many prospects and customers anxiously enquire about the brand and model of HW they should buy to run it on. Freedom is really hard to get used to I guess. Maybe Mr. Pavlov can tell us more.

  2. Barry Whyte says:

Hmm, depends if you care about SLAs and the like – run it on anything, but don't complain when it doesn't run very well. We often thought about a software-only SVC play, but when it comes to needing to meet guarantees for performance and so on, it's much better to have a tried and tested fixed configuration – with a few options.
    I guess I'm coming from an appliance that needs to guarantee 99.999+% availability with guaranteed performance, where any old off-the-shelf, self-built box is not going to work.

  3. Hi Martin,
    Thanks for the link!
    You’re right to be cautious about buying a bunch of point solutions. You need to look at those higher level requirements and build something that meets those, higher even than CPUs and I/O.
    That’s where it’s all heading with this notion of an ‘internal cloud’. Think of what a hosting company does for people’s websites, and now expand that to the rest of IT.
    The business people shouldn’t even know what storage you use.

  4. paulrmc says:

    Barry W,
I’d be interested to know what would make you change your mind, as any (manually) tested configuration will always lag behind SW releases and existing HW capabilities. It will also drive up costs unnecessarily, to the point that it is far less expensive today to run with a largely over-dimensioned cluster that isn’t certified than with a tight one that is. Don’t ask which one works better 😉
    IMHO, automation is the answer: how about a “self-certification” capability built into the software? Boot up the cluster in self-certification mode and it delivers a report certifying both HW compatibility and performance on node as well as actual cluster level, taking into account the very configuration you are testing on, including SW rev level. Extrapolations to more nodes wouldn’t be too hard either once we have some data.
    Your opinion?

  5. Barry Whyte says:

    Paul,
IMHO the problem would be IPMI. Until you completely understand how box X’s power management interface works, you run the risk of losing the cluster. What temperature does it shut down at? When does it try to suspend, when would it reduce the clock speeds, etc.? The most critical is the shutdown temperature: with SVC we want the software to perform a controlled shutdown a few degrees before the box rips the power away, or you will lose the cache contents.
    Now as I say, maybe with SVC we are in a minority of enterprise class business critical appliances at the heart of the SAN.
However, I would inject a counter-argument that testing would be more complicated, as we’d have to get a variety of hardware in to test from different vendors and would end up chasing down other people’s problems (as we often do today with our interop testing!)

  6. Martin G says:

    Of course, you don’t want to get into the Java model of write once, debug everywhere!
But SVC, IMNSHO, is a product, not a solution! And to be honest, it works at such a low level that I really don't have a problem with it being supplied as it is. There is obviously a balance to be struck! But when business application vendors specify a specific server model when they have no obvious requirement for low-level interaction with the hardware or even the operating system, the balance is not right!
Still, I’d love a software version of SVC, or at least an SVC simulator like the simulators from NetApp, EMC et al!

  7. Barry Whyte says:

    Yeah, I know… I have a software version of SVC on the USB stick attached to my keyring – come mug me at IP Expo next month and you may have your wise 😉

  8. Barry Whyte says:

    wise = wish – damn crappy new keyboard
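
Paul's "self-certification" idea from comment 4 could be sketched roughly as follows: on first boot, each node benchmarks itself against declared thresholds and emits a pass/fail report, instead of relying on a vendor-maintained HW compatibility list. This is purely an illustrative sketch; the threshold figures, the memory-copy probe and all the names are hypothetical, and a real implementation would also exercise disk, network and the whole-cluster level Paul mentions.

```python
# Hypothetical sketch of a "self-certification" boot mode: measure a node,
# compare against declared minimums, emit a machine-readable report.
import time

THRESHOLDS = {"mem_copy_mb_s": 500, "disk_write_mb_s": 100}  # illustrative

def bench_mem_copy(size_mb: int = 64) -> float:
    """Rough memory-bandwidth probe: copy a buffer and time it (MB/s)."""
    buf = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    _ = bytes(buf)                       # forces a full copy of the buffer
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

def self_certify(measurements: dict, thresholds: dict = THRESHOLDS) -> dict:
    """Compare measured rates against thresholds; return a per-test report."""
    return {name: {"measured": measurements.get(name, 0.0),
                   "required": minimum,
                   "pass": measurements.get(name, 0.0) >= minimum}
            for name, minimum in thresholds.items()}

# Example run; the disk figure is stubbed rather than actually measured.
report = self_certify({"mem_copy_mb_s": bench_mem_copy(),
                       "disk_write_mb_s": 180.0})
```

As Paul suggests, a report like this takes into account the very configuration being tested, SW rev level included, rather than a configuration someone certified months earlier.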
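Barry's thermal concern from comment 5 boils down to a simple policy: act a few degrees before the platform's own hard power cut, so the cache can be destaged cleanly. A minimal sketch of that decision logic, with an assumed cutoff temperature and margin (the real figures are platform-specific and exactly the IPMI unknowns Barry is worried about):

```python
# Hypothetical sketch of a thermal-watchdog policy: initiate a controlled
# shutdown a few degrees before the box rips the power away, so the write
# cache can be destaged first. Figures below are assumptions, not SVC values.

HARD_CUTOFF_C = 95       # temperature at which the platform cuts power (assumed)
SAFETY_MARGIN_C = 5      # act this many degrees early (assumed)

def thermal_action(temp_c: float,
                   hard_cutoff: float = HARD_CUTOFF_C,
                   margin: float = SAFETY_MARGIN_C) -> str:
    """Decide what the appliance should do at a given temperature."""
    if temp_c >= hard_cutoff:
        return "power-lost"            # too late: cache contents are gone
    if temp_c >= hard_cutoff - margin:
        return "controlled-shutdown"   # destage cache, then power off cleanly
    return "ok"
```

The logic is trivial; the hard part, as the thread makes clear, is knowing `HARD_CUTOFF_C` and the suspend/throttle behaviour for every box you might boot on, which is precisely why an untested IPMI implementation is risky.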
