
Time to Build?

Any half-way competent storage administrator or systems administrator should be able to build a storage array themselves these days. It has never been easier, and building yourself a dual-head filer that serves both block and network-attached storage should be a doddle for anyone with a bit of knowledge, a bit of time and some reasonable Google-fu. I built a block-storage array using an old PC, a couple of HBAs and Linux about five years ago; it was an interesting little project, and it could present LUNs via FC/iSCSI and share files via SMB and NFS. It couldn't do Object, but if I were building it again today, it would.
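
For anyone curious what the block side of such a build looks like now, here is a minimal sketch using the Linux LIO target and targetcli; the device names and IQNs are made up for the example, and this is just one way of doing it rather than a record of how my old box was actually configured.

    # Illustrative only: export a spare disk as an iSCSI LUN using targetcli (LIO)
    # The device path and IQNs below are invented for the example
    targetcli /backstores/block create name=lun0 dev=/dev/sdb
    targetcli /iscsi create iqn.2015-06.lab.example:array1
    targetcli /iscsi/iqn.2015-06.lab.example:array1/tpg1/luns create /backstores/block/lun0
    targetcli /iscsi/iqn.2015-06.lab.example:array1/tpg1/acls create iqn.2015-06.lab.example:client1
    targetcli saveconfig

The file side is no harder; the same box can export shares over NFS and SMB with the standard nfs-kernel-server and Samba packages.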

It was only a single-head device, but it was good enough to use as a target to play about with FC and generally support my home devices. I only recently switched it off because I'm not running FC at home any more.

But if I could build a storage array five years ago, you can do so today. I am not that good a storage/server guy; I'm a tinkerer and dilettante. You are probably much more competent than me.

Another factor that makes it easier is that FC is slowly going away; progress is slow, but iSCSI is making headway for those who really need block, and 10 GbE is coming down in price. I'm also interested to see whether some of the proposed intermediate Ethernet speeds have an impact in this space; many data-centres are not yet 10 GbE and there is still quite a cost differential, but while 1 GbE is not really good enough for a data-centre storage network, 5 GbE and maybe even 2.5 GbE might be good enough in some cases. And as FC goes away, building your own storage endpoints becomes a lot simpler.

Throw in commodity flash with one of the 'new' file-systems and you have a pretty decent storage array at a cost per terabyte that is very attractive. Your cost of acquisition is pretty low, you'll learn a whole lot, and you'll be nicely positioned for the Infrastructure as Code tsunami.
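
As a sketch of what the 'new' file-system piece might look like, the following assumes ZFS on Linux and a couple of commodity SSDs; the pool, dataset and device names are invented for the example, and the SMB share relies on Samba being installed and configured alongside it.

    # Illustrative only: mirror two commodity SSDs and share a dataset out
    zpool create flashpool mirror /dev/sdb /dev/sdc
    zfs create flashpool/shares
    zfs set compression=lz4 flashpool/shares
    zfs set sharenfs=on flashpool/shares      # NFS export via the ZFS property
    zfs set sharesmb=on flashpool/shares      # SMB export; assumes Samba is set up behind it

Because it is all driven from the command line, the same steps drop straight into whatever configuration-management tooling you already use, which is where the Infrastructure as Code angle comes in.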

If you do a great job, you might even be able to spin yourself out as a new flash start-up. Your technology will be very similar to that of a number of start-ups out there.

So why are you still sitting here? Why are you still raising POs against the three- or four-letter-name vendors?

Imagine never having to speak to them again; what a perfect world.


3 Comments

  1. Greg Ferro says:

    The use of 2.5/5G Ethernet is best suited to the campus, especially when connecting wireless access points to the wired network. So far, there are no NICs that do 2.5/5G, although someone will be foolish enough to want them. Although it could be used for storage, the price point of 10GbE twinax is roughly the same as 2.5/5G, and you would always use that in the data centre.

  2. Martin Glassborow says:

    Thanks Greg…I still see a big difference between the cost of 10 GbE and 1 GbE; I was wondering whether there was any chance that the 2.5/5 GbE space would bridge that differential. A lot of the time, I see storage bandwidth utilisation that doesn't touch the sides of a 10 GbE link but would saturate a 1 GbE link.

  3. Gordon Fraser says:

    An interesting idea, especially when you consider that the big guys (AWS, MS, Google, etc) are already doing this.
    Whereas conventional wisdom says that you shouldn't do this (who is responsible for maintenance, and who owns the support issues when the recovery manager is screaming to 'get the vendor on the call'?), I think this is a new possibility at the lower end of the storage scale.

    For years storage arrays have kept growing as everyone starts to want a slice of what you have available, and we have generally put everything on the same arrays. But maybe this is an option for the lower-end storage asks, leaving your high-end, performant array for the crown jewels?

    But is it all about the extremes of the scale of our operations?
    Big companies can pay for a team to develop and then maintain the code for this. It is interesting that the bigger companies can generally put enough squeeze on vendors to get the best prices, yet they still find it better to grow their own.
    At the low end of the scale, if smaller companies (who generally, by the converse of the above argument, get shafted on prices) can convince their bosses that this is a viable option (and that they can provide the ongoing support, development, monitoring, failed-component swaps and so on), and can live with the recovery options should it all go south late on a Friday afternoon at month end, then maybe this works for them too.

    It's the larger, mid-sized company where this comes unstuck, where so many barriers will be put in your way to stop you doing this (internal, external, even from the incumbent vendor; Greg did a good piece on this recently) that you may wonder if it is all worth the pain.

    But then maybe this is the new way forward for us. The death of the storage admin, the dawn of the storage appliance kings?
    (even if you just settle for the compromise of a VSA solution, a software solution on grey-box kit, etc.)
