
May 2012

Virtually Pragmatic?

So EMC have joined the storage virtualisation party, although they are calling it federation; it is what IBM, HDS and NetApp amongst others call storage virtualisation. So why do this now, after warning of the dire consequences of doing so in the past?

There are probably a number of reasons. There have certainly been commercial pressures: I know of a number of RFPs from large corporates which have mandated this capability; money talks, and in an increasingly competitive market, EMC probably have to tick this feature box.

The speed of change in the spinning-rust market appears to be slowing; certainly the incessant increase in the size of hard disks is slowing, which means there might be less pressure to technically refresh the spindles, and a decoupling of the disk from the controller makes sense. EMC can protect their regular upgrade revenues at the controller level and forgo some of the spinning-rust revenues; they can more than make up for this out of maintenance revenues on the software.

But I wonder if there is a more pressing technological reason and trend that makes this a good time to act: the rapid progress of flash into the data-centre, and how EMC can work to accelerate its adoption. It is conceivable that EMC could be looking at shipping all-flash arrays which allow a customer to continue to enjoy their existing array infrastructure and realise the investment that they have made. It is also conceivable that EMC could use a VMAX-like appliance to integrate their flash-in-server more simply with a third-party infrastructure.

I know nothing for sure, but the size of this about-turn from EMC should not be underestimated; Barry Burke has railed against this approach to storage virtualisation for such a long time that there must be some solid reasoning to justify it in his mind.

Pragmatism or futurism, a bit of both I suspect.

The Last of the Dinosaurs?

Chris ‘The Storage Architect’ Evans and I were having a Twitter conversation during the EMC keynote where they announced the VMAX 40K; Chris was watching the live-stream and I was watching the Chelsea Flower Show. From Chris’ comments, I think that I got the better deal.

But we got to talking about the relevance of the VMAX and the whole bigger is better thing. Every refresh, the VMAX just gets bigger and bigger, more spindles and more capacity. Of course EMC are not the only company guilty of the bigger is better hubris.

VMAX and the like are the ‘Big Iron’ of the storage world; they are the choice of the lazy architect. The infrastructure patterns that they support are incredibly well understood and textbook, but do they really support Cloud-like infrastructures going forward?

Now, there is no doubt in my mind that you could implement something which resembles a cloud, or let’s say a virtual data-centre, based on VMAX and its competitors. Certainly if you were a Service Provider with aspirations to move into the space, it’s an accelerated on-ramp to a new business model.

Yet just because you can, does that mean you should? EMC have done a huge amount of work to make it attractive; an API to enable you to programmatically deploy and manage storage allows portals to be built which encourage a self-service model. Perhaps you believe that this will allow light-touch administration and the end of the storage administrator.
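To make the self-service model concrete, here is a minimal sketch of the sort of call a portal might make against a storage API; the endpoint, payload fields and response field are hypothetical illustrations for this post, not EMC’s actual interface:

    # Hypothetical self-service provisioning request; endpoint and fields invented.
    import requests

    resp = requests.post(
        "https://storage-portal.example.com/api/v1/volumes",
        headers={"Authorization": "Bearer <token>"},  # auth scheme assumed
        json={
            "tenant": "finance",
            "size_gb": 500,
            "tier": "gold",                 # ask for a service class, not spindles
            "protection": "remote-replica",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("Provisioned volume:", resp.json()["volume_id"])  # hypothetical field

The point being that the consumer asks for a service class and the array sorts out the placement; whether that really spells the end of the storage administrator is another matter.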

And then Chris and I started to talk about some of the realities. Change control on a box of this size is going to be horrendous; in your own data-centre, co-ordination is going to be horrible, but as a service provider? Well, that’s going to be some interesting terms and conditions.

And migration: in your own environment, to migrate a petabyte array in a year means migrating roughly 20 terabytes a week. Now, depending on your workload, year-ends, quarter-ends and known peaks, your window for migrations could be quite small. And depending on how you do it, it is not necessarily non-service-impacting; mirroring at the host level means significantly increasing your host workload.
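The back-of-the-envelope arithmetic is worth writing down; a quick sketch, assuming 1 PB = 1024 TB, with the 30-week figure an invented example of a window shrunk by change freezes:

    # Rough migration arithmetic: TB/week needed to empty an array in a window.
    def weekly_migration_rate(capacity_tb: float, usable_weeks: int) -> float:
        return capacity_tb / usable_weeks

    print(weekly_migration_rate(1024, 52))  # a full year: ~19.7 TB/week
    print(weekly_migration_rate(1024, 30))  # change freezes bite: ~34.1 TB/week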

As a service provider, you have to know a lot about workloads that you don’t really influence and don’t necessarily understand; as a service provider customer, you have to have a lot of faith in your service provider. When you are talking about massively-shared pieces of infrastructure, this becomes yet more problematic. You are going to have to reserve capacity and capability to support migration; and if you find yourself overcommitting on performance, i.e. assuming that peaks don’t all happen at once, you also have to understand the workload impact of migration.

I am just not convinced that these massively monolithic arrays are entirely sensible; you can certainly provide secure multi-tenancy, but can you prevent one tenant’s behaviour from impacting the availability and performance of another’s data? And can you do it in all circumstances, such as code-level changes and migrations?

And if you’ve ever seen the back-out plan for a failed Enginuity upgrade; well, the last time I saw one, it was terrifying.

I guess the phrase ‘Eggs and Baskets’ comes to mind; yet we still believe that bigger is better when we talk about arrays.

I think we need some serious discussion about optimum array sizes, how to cope with exceptions and what happens when things go wrong; and then some discussion about the migration conundrum. Currently I’m thinking that a petabyte is as large as I want to go; as for the number of hosts/virtual hosts attached, I’m not sure. It might be better to think about the number of services an array supports and what can co-exist, both performance-wise and availability-window-wise.

No, the role of the Storage Admin is far from dead; it’s just become about administering and managing services as opposed to LUNs. Yet, the long-term future of the Big Iron array is limited for most people.

If you as an architect continue to architect all your solutions around Big Iron storage…you could be limiting your own future and the future of your company.

And you know what? I think EMC know this…but they don’t want to scare the horses!

A New Symm?

So EMC World is here and the breathless hype begins all over again; in amongst the shiny, shiny, shiny booths, the acolytes worship the monolith that is the new Symmetrix. Yet a question teases the doubters: do we need a new Symmetrix?

Okay, enough of the ‘Venus in Furs’ inspired imagery, although it might be strangely appropriate for the Las Vegas setting; there is a question which needs to be asked: do we need a new Symmetrix?

Now, these days I am probably far enough removed to be objective, but not so distant that I can’t have a stab at an answer. And the answer is: no, I don’t believe that we actually needed a new Symmetrix, but EMC needed to develop one anyway.

There are certainly lots of great improvements; a simpler management interface and its integration into the Unisphere world have been long overdue. It seems that many manufacturers are beginning to realise that customers want commonality and that shiny GUIs can help to sell a product.

Improvements to TimeFinder snaps are welcome; we’ve come a long way from BCVs and mirror positions. There’s still a long way to go to get customers to come along with you tho’; many cling onto the complex rules with tenacity.

Certainly the mirroring of FAST-VP, so that in the event of fail-over a Performance Recovery Point of zero is achievable, is very nice; it’s something I’ve blogged about before, and it addresses a weakness in many automated tiering solutions.

eMLC SSDs bring the cost of SSD down whilst maintaining performance; this is another overdue capability, as is the support for 2.5″ SAS drives, which improves density and, I suspect, the performance of spinning rust.

Physical dispersal of cabinets; you probably won’t believe how long this has been discussed and asked for. Long, long overdue but hey, EMC are not the only guilty parties.

And of course, Storage ‘Federation’ of 3rd party arrays; I’m sure HDS and IBM will welcome the vindication of their technology by EMC or at least have a good giggle.

But did we need a new Symmetrix to deliver all this? Or would the old one have done?

Probably, but where’s the fun in that?

I don’t know, but perhaps concentrating on delivery to the business before purchasing a new Big Iron array might be more fitting. I don’t know about you, but in the same way that I look at mainframes with nostalgia and affection, I’m beginning to look at the Symmetrix and its like.

If you need one, you need one but ask yourself…do I really need one?

Flash Changed My Life

All the noise about all-flash arrays and acquisitions set me thinking a bit about SSDs and flash, and how they have changed things for me.

To be honest, the flash discussions haven’t yet really impinged on my reality in my day-to-day job; we do have the odd discussion about moving metadata onto flash, but we don’t need it quite yet. Most of the stuff we do is sequential large I/O, and spinning rust is mostly adequate; streaming rust, i.e. tape, is actually adequate for a great proportion of our workload. But we keep a watching eye on the market and on where the various manufacturers are going with flash.

But flash has made a big difference to the way that I use my personal machines and if I was going to deploy flash in a way that would make the largest material difference to my user-base, I would probably put it in their desktops.

Firstly, I now turn my desktop off; I never used to unless I really had to, but waiting for it to boot or even wake from sleep was at times painful, and sleep had a habit of not sleeping or flipping out on a restart. Turning the damn thing off is much better. This has had the consequence that I now have my desktops on an eco-plug which turns off all the peripherals as well; good for the planet and good for my bills.

Secondly, the fact that the SSD is smaller means that I keep less crap on it and am a bit more sensible about what I install. Much of my data is now stored on the home NAS environment which means I am reducing the number of copies I hold; I find myself storing less data. There is another contributing factor; fast Internet access means that I tend to keep less stuff backed-up and can stream a lot from the Cloud.

Although the SSD is smaller and probably needs a little more disciplined house-keeping, running a full virus check (which I do on occasion) is a damn sight quicker, and there are no more lengthy defrags to worry about.

Thirdly, applications load a lot faster. Although my desktop has lots of ‘chuff’ and can cope with lots of applications open, I am more disciplined about not keeping applications open because their loading times are that much shorter. This helps keep my system running snappily, as does shutting down nightly, I guess.

I often find on my non-SSD work laptop that I have stupid numbers of documents open; some have been open for days and even weeks. This never happens on my desktop.

So all-in-all, I think if you really want bang-for-buck and to put smiles on many of your users’ faces, the first thing you’ll do is flash-enable the stuff that they do every day.

#Storagebeers, SNIA Stuff

I posted a while ago giving early warning about a #storagebeers event in London.

This event will be on Wednesday 23rd May, whilst most of the EMC storage world is partying in Vegas and having a great time (who knows, maybe one day I’ll work out how to get there). But hopefully we’ll be having a good time in London as well.

1) It is Data Centre Technology Academy time during the day; come and heckle Alex McDonald or give him generous support as he works his way through a vendor-neutral presentation, trying not to abuse his competitors. Always marvellous fun.

2) And then it will be London #storagebeers; the Princess of Prussia is my proposed venue as it’s just a little walk from the DCTA venue. Hopefully we’ll then go and find somewhere to have a curry; I’m hoping we can get into Cafe Spice Namaste but, failing that, we should be able to wander up the road and go to the Halal.

I’ve also been doing a little bit of work with SNIA Europe around how we get a bit more community involvement with SNIA and get more than the vendors involved. Please go here for the first SNIA Europe Blogger page/question…

Long Term Review: Synology DS409

Over the past three years, my primary home NAS has been the Synology DS409; in this time I’ve built my own NAS solutions and trialled a number of home-build alternatives, but my core home NAS remains the DS409.

When I bought the DS409, I looked at and considered a number of competing solutions; Drobo and QNAP boxes came highly recommended and there are still plenty of people who swear by them.

The build quality of the DS409 is excellent and it still looks pretty much as good as new, but then again it is not as if I am kicking it across the room on a regular basis. I give it a regular clean-out with compressed air, just to blow the dust out of the fans; it still runs quiet and cool.

It currently has 4x1TB Western Digital drives in a RAID-5 configuration, with an additional eSATA drive attached to provide extra storage. These are carved up to provide NFS, SMB and iSCSI shares.

As well as providing traditional file-sharing capability, it is the print server for the house and also works as a DLNA and AirPlay server. If I didn’t have a separate web-server, VPN server etc., it could do that for me too.

You can integrate it into an Active Directory domain if you so wish, and you have a variety of options for backing up: an rsync-based back-up solution, back-up into the S3 cloud, or simply back-up to a locally attached external disk.
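As an illustration of the rsync route, a minimal sketch pulling a share down to a locally attached disk; the hostname and paths are invented for the example, and Synology’s own back-up tools wrap much the same mechanism for you:

    # Mirror a NAS share to a locally attached back-up disk via rsync.
    # Hostname and paths are hypothetical.
    import subprocess

    SOURCE = "admin@diskstation:/volume1/shared/"  # NAS share
    DEST = "/mnt/backup/diskstation/"              # locally attached disk

    # -a preserves permissions and times, -z compresses in transit,
    # --delete keeps the destination a true mirror of the source.
    subprocess.run(["rsync", "-az", "--delete", SOURCE, DEST], check=True)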

Synology continue to support and update the DS409 with firmware and features; the feature-set is constantly being improved, from Synology Hybrid RAID, which allows mixed-size drives to be used in a similar way to the Drobo, to CloudStation, which enables your Synology device to work as a private Cloud-storage device.

Synology are constantly improving their software, and it is fairly admirable that they continue to update it for products which they no longer sell. The user interface has improved significantly over time; it is simple and intuitive, and if you need to, you can always drop back into the Linux command-line. Having access to the Linux command-line means that there are a number of third-party applications which can also be installed; it is a very hacker-friendly box.

The only thing it really lacks is significant integration with VMware, but most home users and probably most small businesses will not miss this at all.

When the time comes to replace my home NAS, Synology will be top of my list.

Highly recommended.

Death of the Home Directory

Well, when I say that the Home Directory is dying, I mean that it is probably moving, and with it some problems are going to be caused.

As I wander round our offices, I often see a familiar logo in people’s system trays: that of a little blue open box. More and more people are moving their documents into the Cloud; they really don’t care about security, they just want the convenience of having their data wherever they are. As the corporate teams enforce a regime of encryption on USB flash-disks, everyone has moved onto Cloud-based storage. So yes, we are looking at ways that we can build internal offerings which bring the convenience but feel more secure. Are they any more secure? And will people use them?

I suspect that unless there are very proscriptive rules which block access to sites such as Dropbox, Box, Google Drive and the like, this initiative will completely fail. The convenience of having all your data in one place and being able to work on any device will override security concerns. If your internal offering does not support every device that people want to use, you may well be doomed.

And then this brings me onto BYOD; if you go down this route, and evidence suggests that many will, you have yet more problems. Your security perimeter is changing and you are allowing potentially hostile systems onto your network; in fact, you probably always did and hadn’t really thought about it.

I have heard of companies who are trying to police this by endorsing a BYOD policy but insisting that all devices be ‘certified’ prior to being attached to the corporate network. Good luck with that! Even if you manage to certify the multitude of devices that your staff could turn up with as secure and good to go, that certification is only valid at that point in time, or for as long as nothing changes: no new applications installed, no updates installed and probably no use made of the device at all.

Security will need to move to the application, and this could mean all of the applications, even familiar ones such as Word and Excel. Potentially, this could mean locking down data and never allowing it to be stored in a non-encrypted format on a local device.
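As a sketch of what that could look like, here is a minimal example using Python’s cryptography library; the file name is invented, and in reality the hard part is key management, with keys held by the application rather than sitting on the device:

    # Data is encrypted before it ever lands on local storage.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, held and escrowed by the application
    cipher = Fernet(key)

    document = b"Quarterly figures - commercially sensitive"
    with open("report.docx.enc", "wb") as f:
        f.write(cipher.encrypt(document))  # only ciphertext touches the device

    with open("report.docx.enc", "rb") as f:
        assert cipher.decrypt(f.read()) == document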

The responsibility for ensuring your systems are secure is moving; the IT security teams will need to deal with a shifting perimeter and an increasingly complicated threat model. Forget about updating anti-virus and patching operating systems; forget about maintaining your firewall. Well, don’t actually forget, but if you think that is what security is all about, you are in for a horrible shock.

Big Data Values for All?

The jury is probably still out on the real value of ‘Big Data’ and what it will mean to our lives; whether it is a power for good or ill, or even a power for anything at all, is probably still up for debate. But one thing is probably true: ‘Big Data’ will change data-processing for the better.

At present, the prevailing wisdom is that if you have data to store, you should store it in a relational database; but the ‘new’ data-processing techniques which ‘Big Data’ brings to the party change this, or at least seriously question that wisdom.

I know many applications that currently store their data in relational databases and could well benefit from a change of focus; these are often log-oriented applications which use only one or two tables to store their data, and often the indexes needed to enable fast processing are larger than the data stored.
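To illustrate the sort of shift I mean, a toy sketch: instead of loading log records into an indexed table and querying it, you scan the flat files directly; the file name and tab-separated layout are invented for the example:

    # Aggregate log records by scanning flat files: no RDBMS, no indexes.
    # Layout assumed: host <tab> level <tab> message, one record per line.
    import csv
    from collections import Counter

    errors_by_host = Counter()
    with open("app-2012-05.log", newline="") as f:
        for host, level, _message in csv.reader(f, delimiter="\t"):
            if level == "ERROR":
                errors_by_host[host] += 1

    for host, count in errors_by_host.most_common(10):
        print(host, count)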

So even if you have no ‘Big Data’, you may find that you have more candidates than you realise for ‘Big Data’ processing techniques… and I suspect this is what really scares our friends at Oracle. For too long now, serious data processing has required serious relational databases, and that road took us into the realms of Oracle, increasing costs and infrastructure complexity.

The problem is that re-writes show little immediate business value and the investment will take two or three years to pay off; it is this that your RDBMS account manager is counting on. Yet as soon as you start to factor in maintenance, upgrade and recurring costs, this should be an economic no-brainer for the IT Manager with foresight.

No Pain, No Gain?

I always reserve my right to change my mind, and I am almost at the stage where I have changed my mind on blocks/stacks, or whatever you want to call them; and for non-technical, non-TCO-related reasons.

I think that, in general, componentised and commodity-based stacks make huge sense; whether you are building out a private or a public infrastructure, a building-block approach is the only really scalable and sustainable one. I wrote internal design documents detailing this approach eight or nine years ago; I know I’m not the only one, and we didn’t call it cloud… we called it governance and sensible.

But the pre-integrated vendor stacks are where my opinion has shifted; I still think that they are an expensive way of achieving a standardised approach to deploying infrastructure, and from that opinion I have not moved.

However, I now think that this cost may well be the important catalyst for change; if you can convince a CFO/CEO/CIO/CTO etc. that this cost is actually an investment, but that to see a return on it you need to re-organise and change the culture of IT, it might well be worth paying.

If you can convince them that without the cultural change they will fail… you might have done us all a favour. If it doesn’t hurt, it probably won’t work; if it is too easy to write things off when it’s tough, it’ll be too easy to fall back into the rut.

So EMC, VCE, Cisco, IBM, NetApp, HP etc….make it eye-wateringly expensive but very compelling please. Of course, once we’ve made the hard yards, we reserve the right to go and do the infrastructure right and cheap as well.

Archicultural….

It seems that the more I consider the architectural and technical challenges and changes in the Corporate IT world, the more I come back to the cultural issues which exist within many IT departments, and the more strongly I feel that this is where the work really needs to be done.

Unfortunately it is pretty hard to buy a culture from a vendor; although I suspect that if Chuck could work out exactly how to do so, we’d have a product from EMC called V-CLT (or is that VMware?). So building a culture is going to have to be an internal thing, and that means it is going to be tough.

Too often the route into IT Management means either promoting excellent techies into management or promoting people into positions where they can do no more harm, as opposed to moving people into positions which suit them and their personalities. I am sure that we can all think of examples of both; this is especially true in end-user organisations, as the career paths are less varied than in vendor organisations, which have sales, marketing and other avenues for progression as well as the traditional IT paths.

But all IT organisations are suffering from cultures which neither scale nor are sustainable in the long term. There needs to be a long-term shift which ensures that training and development cover more than just technical skills; there needs to be a move away from a hero culture that sees staff at all levels of an organisation regularly halving their hourly rates by working longer than their contracted hours, not taking leave and forgetting that you ‘Work to Live’.

Careers need to be thought of as more than the fastest route to the top, and when people find their natural level, that does not mean they stop being valuable members of an organisation. Work on developing people horizontally (and you with the dirty mind can stop sniggering); I think that there is something relatively unhealthy about managers who have worked their way up through a team and have only ever worked in that one team. Horizontal moves have immense value; I have learnt such a lot in the past couple of years running a test team as well as a storage team.

Horizontal moves will help to break down some of the siloed mentality; even if you do not believe in DevOps, moving people between these two disciplines, even on secondment, must have value.

If you have a graduate scheme in place, the natural roles that most graduates gravitate to are in development; make sure that they have a placement in an Operations/Infrastructure team. They will learn so much.

And if you work in management, you are doing a pretty hard job; make it easier on yourself by standing on the shoulders of giants and actually studying the art of management and leadership. Most get to management by being good at something; being good at that something does not mean you know anything about management.