
iBlock?

What can Apple teach us about Enterprise IT? Apple and Enterprise IT are words which don't really belong in the same sentence, but perhaps we can learn quite a lot about the future of Enterprise IT by looking at Apple and its current strategy. 

Firstly, like many geeks I must admit to having a very uneasy relationship with Apple and its products; I still keep thinking style over substance, overpriced and under-performing kit. So why is my laptop of choice a MacBook, why do I own an iPhone and an iPad? Why am I looking forward to June 7th and Steve's keynote, where he'll certainly announce a new iPhone? 

Like it or not, Apple's stuff just works; my MacBook boots in half the time of my Windows laptop (actually it's even faster since I put an SSD in it) and applications just work; hardware and software work in harmony because they have been designed in concert. I don't measure TCO for my home kit, but the time I save with at least one piece of kit which just works is great; it gives me the time to hack about with Linux, ESX and Windows. And of course, hidden under the covers, there beats the heart of the ultimate geek operating system, Unix! 

And then there's the iPhone and the iPad; Apple have taken control-freakery to extremes, even telling you what languages you can develop in and then controlling the method of distribution; if Steve doesn't like it, it isn't getting in. But the app-store is so unbelievably convenient; installation of applications is just a tap away and, despite the fact that Steve's control-freakery is simply wrong, I still happily use the devices and ignore that nagging voice in the back of my mind.

Sure, Apple's stuff is more expensive but it just works; it's a fairly sad indictment that to get stuff that just works we are willing to pay more, but that appears to be where we are at the moment. Apple have developed the iBlock, or various iBlocks; perhaps quietly and subconsciously, various strategists in the Enterprise industry have been influenced by this seductive idea that things should just work? 

People are getting used to the idea that there's an app for everything and that it's simply a tap away. Our users are getting used to this on their iPhones and now their iPads; we can expect them to ask why they can't get the same service first for their desktops and eventually for their enterprise servers. And they'll just expect everything to work and work *now*. 

But a word of caution, and take this from the voice of experience; Apple's TCO in a heterogeneous environment soars and it is painful to get it to work with anything else. It wants to do everything its own way and plays very begrudgingly with others. If you need to do something slightly out of the ordinary, you will struggle to do so. 

Apple is great as long as what you do is what Apple wants you to do, in the way it wants; which is why it will always struggle in the Enterprise. Let's hope that the various Enterprise stack vendors learn the positive lessons from Apple but also take account of the downsides.

New Device or Future Array?

Okay, some more riffing on the subject of VPLEX; just idle speculation and I expect either no comment or complete denial from EMC. 

In VPLEX, do we see the future of EMC storage? Or more accurately, do we see the future of Symmetrix? Is it the beginning of the end for Symmetrix and, more importantly, Enginuity? The final break from the past?

It's certainly an interesting thought; firstly, despite regular denials from EMC, bushfires regularly break out between the Clariion and the Symmetrix camps. Should all the development effort be focussed on Flare or on Enginuity? Some people might say that at times there have been more than bushfires. This is to be expected; when development teams are competing for finite resource, this sort of thing happens. But it never really gets resolved, especially when you have two successful products like Symm and Clariion. 

Perhaps what is needed is a break from the past? Could VPLEX be this break? Well certainly from what has been announced so far, probably not. But if we look at the various EMC blogs where it is suggested that they might add things like snaps, clones etc; this seems a possible way forward. 

VPLEX as an array controller may well be more interesting long-term than this 'new category of storage device'; a device which has a much looser coupling with its back-end disk than the current range of storage arrays. There are reasons why EMC might want to do this, or at least customers might be interested in such a device. 

I posit that array controllers are actually changing more quickly than the back-end disk these days; sure, disks are getting bigger, but this is not entirely beneficial, especially to the Enterprise storage market. Yet if I want to take advantage of the latest and greatest features of EMC's latest and greatest array, I have to rip out both the array controllers and the back-end disk. 

What if I didn't have to do that any more? What if I could upgrade the array controller completely separately from the back-end disk? What if this was a completely non-disruptive upgrade? What if in order to do a migration, I didn't have to temporarily have twice as much disk on the floor as I do in normal operation? 

Perhaps at that point EMC would actually have built a truly modular storage array? And what if EMC could finally head towards a unified code-base for block storage? Yes, they might have to live with three code-bases for a period of time, but it might actually be a worthwhile investment for them; or perhaps they are happy to continue with a multitude of code-bases.

Just thinking what I might do in their situation…

V(per)PLEXed?

So we have VPLEX and, despite some scratching of heads as to what it is, it is really quite simple: 

'storage access is further decoupled from storage physicality'

And this really is nothing especially new; decoupling the storage access from storage physicality has been going on for some time. Servers are getting further and further away from their physical disk. We have been adding abstraction layers to storage access for some time, the big question is whether we need another abstraction layer? 

Actually, I think that the additional layer is useful; the ability to present 'real LUNs' from 'storage arrays' as a single 'plexed LUN' and keep these LUNs in sync might actually be useful in a number of use-cases. I can see it simplifying and transforming DR, for example; I can see it making migration a lot easier and I can see EMC heavily leveraging their VMWare investments. I've said it before and I'll say it again; ever since EMC spun VMWare off, the two have acted more in concert than when VMWare was wholly owned. 
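To make the 'plexed LUN' idea a little more concrete, here is a minimal sketch in Python. It is purely my own illustration of the general principle, nothing to do with how VPLEX is actually implemented, and all the class and method names are invented for the example.

```python
# Illustrative sketch only: a virtual 'plexed' volume that keeps two
# backend LUNs in sync by mirroring every write and serving reads from
# either leg. Names and structure are hypothetical, not any vendor's API.

class BackendLUN:
    def __init__(self, name):
        self.name = name
        self.blocks = {}              # block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)


class PlexedLUN:
    """Presents a single LUN to the host while keeping two backend legs in sync."""

    def __init__(self, leg_a, leg_b):
        self.legs = [leg_a, leg_b]

    def write(self, block, data):
        # A write only completes once both legs have it; this is what
        # keeps the two physical copies consistent.
        for leg in self.legs:
            leg.write(block, data)

    def read(self, block, preferred=0):
        # Reads can be serviced from the 'local' leg, e.g. the array
        # sitting in the same data centre as the host.
        return self.legs[preferred].read(block)


# Two arrays, possibly in different sites, presented as one volume.
site_a = BackendLUN("array-A:lun7")
site_b = BackendLUN("array-B:lun3")
vol = PlexedLUN(site_a, site_b)
vol.write(42, b"payload")
assert vol.read(42, preferred=1) == b"payload"    # either leg has the data
```

The appeal for DR and migration falls out of the structure: either leg can be dropped or replaced while the host carries on talking to the same virtual volume.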

Is it useful enough to warrant EMC's claim to have invented a new class of storage device?

I think I'll let the vendor bloggers rip themselves to shreds over that.  

I also think it is interesting that they have at long last decided to support third-party arrays pretty whole-heartedly; if anything, this makes it an interesting announcement for EMC. Will they sell any? Well, it's going to be an uncomfortable experience for your run-of-the-mill account manager when faced with a Storage Manager who says 'Well, you've just re-invented SVC/USP-V etc…you told me that they were rubbish, so why is yours any good?'

I think the heavy-hitters in EMC are going to be very busy supporting their account-teams.

Why I Don’t Ignore NAS?

Stephen's blog entry about NAS strikes so many chords with me that I find it hard to disagree with very much of what he writes, but I am going to disagree with the central premise/question; you should not ignore NAS if you work in a Storage Team. You may hate it but you should not ignore it.

If you, the storage team, ignore NAS, you will find your users going very much their own way and you will find that you have half a dozen little NAS deployments scattered across your corporate network. This causes headaches for everyone; security, network teams and back-up teams all suffer when this happens. And when it all goes wrong, who is going to get called in to rescue the situation? That would be the storage team; and who will still get the blame? That will also be the storage team.  

Stephen talks about primary data-centre applications but what does this mean? I think that we all have our own prejudices about what constitutes a primary data-centre application but I am going to argue that this might not be a useful definition any more. If the data has value, it needs to be in a secure, supported environment. 

However, it is worth looking at the problem that NAS is often used to solve; NAS is often seen as the simpler, more flexible and agile solution. In smaller SME environments where you do not have a dedicated team looking after storage and you only have a few systems; this can very much be the case. But as soon as your environment grows and increases in complexity, often interoperability issues start to raise their ugly heads. 

Mostly this tends to be in the CIFS/SMB area in a mixed environment. If there is one thing that you can do right now to avoid unwarranted complexity in your environment, it is this: do not deploy SMB in a mixed environment. And even if some vendors allow you to do so, do not deploy NAS shares using both SMB and NFS to share out the same volume; the security models will eventually cause you to cry.
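As a crude illustration of where the two security models diverge (my own sketch, not any filer's actual mapping logic): a Windows-style ACL can carry several named access-control entries, while the classic POSIX/NFSv3 view of the same file only has owner, group and other permission bits, so any mapping between the two has to throw information away.

```python
# Hypothetical sketch: squeezing a Windows-style ACL into POSIX mode bits.
# Real dual-protocol filers have far more sophisticated mapping logic; the
# point is simply that information is lost in one direction or the other.

from collections import namedtuple

ACE = namedtuple("ACE", "principal perms")   # e.g. ("alice", "rw")

def acl_to_posix(owner, group, acl):
    """Collapse a multi-ACE ACL into owner/group/other permission strings."""
    owner_perms = group_perms = other_perms = ""
    for ace in acl:
        if ace.principal == owner:
            owner_perms = ace.perms
        elif ace.principal == group:
            group_perms = ace.perms
        else:
            # Every other named principal gets lumped into 'other' -
            # this is where the two security models stop agreeing.
            other_perms = max(other_perms, ace.perms, key=len)
    return owner_perms, group_perms, other_perms

acl = [ACE("alice", "rw"), ACE("finance", "r"), ACE("bob", "rw"), ACE("carol", "r")]
print(acl_to_posix("alice", "finance", acl))
# ('rw', 'r', 'rw')  <- bob's and carol's distinct rights have been flattened
```

The real products do much cleverer things than this, but the underlying impedance mismatch is exactly why mixed SMB/NFS shares tend to end in tears.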

A pure NFS environment generally works well; a pure Microsoft SMB environment generally works well. But they need looking after, and the storage team needs to work closely with the server teams; the storage team needs to understand more than pure storage, it needs to understand the operating systems that it is talking to. I think this is why storage teams often try to avoid NAS; they really want to focus on spinning rust, not on what the users want to use it for. 

For the storage team, NAS is not simpler; it is more complex. So is it more flexible and agile?

Flexibility can probably be summed up as shared access to common areas. This is harder to achieve with traditional block storage and can bring you into the world of clustered file systems; these have traditionally been fairly complex and arcane to set up and manage. They are getting better; GPFS is the one that I am most familiar with and it has come on massively in terms of usability over the past decade. The only problem with most of the clustered file systems is that they are not available for all operating systems, but with the growth of Linux and Windows in the data centre, this is becoming less of an issue.

Agility? It is probably quicker to throw up a NAS share than to present traditional block storage. It can use the IP network infrastructure you already have in place; you do not need a traditional SAN and you do not need to put extra cards in your servers. You can just put a NAS device on your network and start sharing. iSCSI, however, could also be deployed in the same way, just as quickly and arguably more simply, as you do not need to worry about integrating networked file-systems into your existing security domains.

Obviously this is rather fraught with danger but many NAS deployments start this way; this agility leads to a long term fragility in the NAS deployment. A NAS deployment should actually be designed with as much care and attention as a traditional SAN.

It is also possibly fair to say that the NAS vendors have paid more attention to their management UIs than the traditional block storage vendors; they have worked to make their products simple to configure and administer. However, the block vendors are now catching up in the simplicity stakes. 

So if block storage can be less complex, as flexible and as agile as NAS deployments; perhaps we can ignore NAS? But if we had ignored the challenges that NAS had brought to the traditional block model then we probably would not have seen the improvements which have reduced complexity, increased flexibility and agility in the world of block storage. 

And with convergence at the network layer coming in the form of DCB, neither NAS nor SAN storage can rest on its laurels. I wouldn't ignore either as technologies in my data centre, but I wouldn't ignore RESTful storage or clustered file-systems either. 

Taking the Proverbial

What is the point of an 'IT Project'? 

Is it to deliver infrastructure?

Is it to deliver applications?

Or is it to deliver a service to the business? 

Actually, most 'IT projects' shouldn't be called 'IT projects'; perhaps they should simply be called 'Service Delivery Projects'. Let's forget about the delivery of applications and infrastructure; we should simply focus on the delivery of service to the Business in a sustainable, cost-effective model as quickly as possible. Then let's look at how we deliver such services. 

This requires IT to work as a single department, not as warring factions under the titular head of the CIO; without applications to run, infrastructure has no value, and without infrastructure to run on, applications are useless.

Some services should simply be categorised as 'Oxygen'; these are the services that almost every Business needs to survive; email, desktop services, collaboration tools, backup, archiving; all of these can be delivered in an off-the-shelf manner. These are the services which may most easily lend themselves to an outsourced or cloud delivery model. 

Some services are those which require a certain amount of development, probably customisation of off-the-shelf applications; in many cases these are such things as CRM, Workflow management tools, web services etc. Every company probably needs them but every company will use them in different ways and have different requirements. The delivery model of such services is often the most contentious; in-house, out-sourced, cloud? All of these have potential.

And then there are those services which make your Business special; you might have a lot of these, or you might have none. You might not actually need IT to produce your core product; an Artisan Baker probably doesn't need a lot of specialised IT to bake bread, whereas a Satellite broadcaster needs a lot of specialised IT kit to put television programmes out. These could be bespoke applications required to deliver your service which, for whatever reason, need a non-standard stack. These are the services which you will most likely decide to keep in house; these are your core.

Then you need to review these services and ask what has the most impact on the speed of delivery. Is it the provision of IT infrastructure? Is it the development of applications to deliver the service? Is it the process of building business models to support the decision to progress with a project? Does it take longer to develop the ROI model for a service than to deliver the service? Is it the procurement process? What is actually the biggest cost in delivering the new service? 

I'm not actually convinced that the delivery of IT Infrastructure is the biggest implementation cost for many services, or that it has the most impact on delivery timescales; I think it is often just perceived as the largest cost and the biggest drag on delivery. The former is often because we tend to buy IT infrastructure in large chunks, and storage has been especially vulnerable to this.

(I suspect the costs of storage get focussed on so much simply because the CIO and the CEO often have to sign off on these large purchases; buying a server here and server there hides the costs. Utilisation of most storage arrays is certainly better than the utilisation of most non-virtualised servers and often better than that of virtualised servers. This might be key to the reason that many storage vendors are trying to push a 'pay as you use' model. It might well help to hide storage costs) 

And delivery of Infrastructure is often the last thing to be done on a project; any delay further up the chain means that delivery of Infrastructure is rushed and gets blamed for all of the other woes of the project. In a well-run and well-defined delivery, the development of the application and the delivery of the Infrastructure are carried out in parallel and, almost without exception, the Infrastructure sits idle waiting for the deployment of the new application.  

But still we focus on speeding up the delivery of infrastructure, because it is actually one of the easiest things to speed up. It should be a 'crank the handle' process and we do need to get better at this, but I am not sure, at the end of the day, that speeding up the delivery of infrastructure is going to speed up service delivery dramatically. 

Of course as an Infrastructure guy, I would say that but I have run a development team in the past and very rarely was I held up by the delivery of Infrastructure. And if I was, it was often due to us being unclear as to what we required from the Infrastructure. 

Now, what might dramatically change the cost-base of an ongoing service is a more dynamic infrastructure which is easier to manage and can scale up and down easily, together with applications which can dynamically request and release resource as required. But will it change the speed of delivery of service? 

The most significant change to the speed of delivery of most services would actually be the five P's! 

Planning Prevents Piss Poor Performance. 

Nothing I have written above actually says that the various block delivery models are without merit but don't expect miracles from them either. Examine the whole service delivery model, not just that of Infrastructure; Infrastructure is a tiny piece of the delivery model, let's not forget this.

BFI

BFI is an acronym which gets thrown around a bit and could stand for many things.

Brute Force and Ignorance is one…but I've come up with what is hopefully a new one which goes along with it: Big F**king Infrastructure. And this is my problem with Cloud at present; there seems to be a trend at the moment that the point of Cloud is to build Big F**king Infrastructure. 

Now as an infrastructure bod, I can appreciate this and indeed, the part of me which likes looking at big tanks, fighter jets, aircraft carriers etc; finds BFI cool! Who wouldn't want to build the biggest, baddest data centre in the world? 

But is it really the point of Cloud? And this is what concerns me! Cloud should not just be about building infrastructures, it certainly should not be about turning data centres into Building Blocks. Cloud needs to be more than that.

It needs to be about something more; it needs to be about changing development methodologies and tools. If we just use it to simply replicate at scale what we do today, I think that we have failed. It certainly needs to be more than packaging and provisioning. It needs to be about elegance and innovation. 

I really don't want Cloud to turn into something like Java; what do I mean? Don't get me wrong, Java is great (and the JVM is greater), but how much of the Java written is simply C written in Java? Lots, believe me! I don't believe that Java has changed development paradigms nearly as much as some people like to believe. A large amount of C++ code is also simply C written using some of the features of C++, without the fundamental structural changes brought by C++. And so it goes on.

Cloud brings elasticity to infrastructure, applications need to be designed with this elasticity in mind. A database needs to be able to scale up on demand and then gracefully shrink back down again; perhaps it needs to be able to start additional instances of itself on different machines to meet a peak and then when the load falls away, it should remove those instances whilst maintaining transactional consistency and integrity. 

Developers need to be able to design applications which wax and wane with demand. Yes, we can fix a lot of these sorts of issues at an infrastructure level, but is that actually the right place to do it? We can fix a huge number of problems with BFI, but are we bringing sledgehammers to bear? 
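As a toy example of what designing for this elasticity might look like at the application layer, here is a sketch of an elastic worker pool; it is entirely my own illustration, the thresholds are made up, and it stands in for the much harder problem of doing the same with a stateful database.

```python
# Illustrative only: an elastic worker pool that grows and shrinks with load.
# Thresholds, names and the Worker abstraction are invented for the sketch.

import queue
import threading
import time

class Worker(threading.Thread):
    def __init__(self, tasks):
        super().__init__(daemon=True)
        self.tasks = tasks
        self.stopping = False

    def run(self):
        while not self.stopping:
            try:
                job = self.tasks.get(timeout=0.5)
            except queue.Empty:
                continue
            job()                      # do the work
            self.tasks.task_done()

    def drain(self):
        # Graceful shrink: stop taking new work, finish what's in hand.
        self.stopping = True


class ElasticPool:
    def __init__(self, min_workers=1, max_workers=8):
        self.tasks = queue.Queue()
        self.min_workers, self.max_workers = min_workers, max_workers
        self.workers = [self._spawn() for _ in range(min_workers)]

    def _spawn(self):
        w = Worker(self.tasks)
        w.start()
        return w

    def submit(self, job):
        self.tasks.put(job)

    def rebalance(self):
        backlog = self.tasks.qsize()
        if backlog > 2 * len(self.workers) and len(self.workers) < self.max_workers:
            self.workers.append(self._spawn())      # scale up to meet the peak
        elif backlog == 0 and len(self.workers) > self.min_workers:
            self.workers.pop().drain()              # scale back down gracefully


pool = ElasticPool()
for _ in range(20):
    pool.submit(lambda: time.sleep(0.1))
pool.rebalance()    # in practice this would run on a timer or be event-driven
```

The interesting part is the shrink path: a worker is asked to drain rather than being killed, which is the application-level analogue of removing a database instance while maintaining transactional consistency and integrity.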

So Cloud needs to be more than BFI! And that is why I was glad to see this story here about VMware and Redis; like Zilla writes, I also know >.< NoSQL apart from a couple of presentations at Cloud Camp and what I've read on the Net. After sitting in presentations by VMWare employees where they seemed to be equating Virtualisation with Cloud; it is great to see that they are looking beyond that. Let's hope it continues.

Autonomic for the People

Autonomic computing was a phrase coined by IBM in 2001; arguably the frameworks which were defined by IBM as part of this initiative could form much of what is considered Cloud Computing today.

And now 3Par have taken the term Autonomic and applied it to storage tiering. This is really a subset of the Autonomic Computing vision, but nonetheless it is one which has recently gained a lot of mind-share in the Infrastructure world, especially if you were to replace the word Autonomic with the word Automatic, leaving you with Automatic Storage Tiering. But I think autonomic has rather more to it than mere automation; autonomic implies some kind of self-management.

An autonomic system should be
  • Self Configuring
  • Self Healing & Protecting
  • Self Optimising 
IBM themselves defined five levels of evolution on the path to autonomic computing
  1. Basic 
  2. Managed 
  3. Predictive
  4. Adaptive
  5. Autonomic
Here I shall crib from the IBM press release dated 21st October 2002:
"The basic level represents the starting point where a significant number of IT systems are today. Each element of the system is managed independently by systems administrators who set it up, monitor it, and enhance it as needed.

At the managed level, systems management technologies are used to collect information from disparate systems into one, consolidated view, reducing the time it takes for the administrator to collect and synthesize information.

At the predictive level, new technologies are introduced that provide correlation among several elements of the system. The system itself can begin to recognize patterns, predict the optimal configuration and provide advice on what course of action the administrator should take. As these technologies improve, people will become more comfortable with the advice and predictive power of the system.

The adaptive level is reached when systems can not only provide advice on actions, but can automatically take the right actions based on the information that is available to them on what is happening in the system.

Finally, the full autonomic level would be attained when the system operation is governed by business policies and objectives. Users interact with the system to monitor the business processes, and/or alter the objectives."
As press-releases go, it's really rather good and has applicability to much of what we are trying to achieve with dynamic infrastructures. It would behove many vendors to look honestly at their products and examine where they are on this scale. IBM never really managed to deliver on their vision, but has any vendor come close yet? 

I wonder if 3Par are really at level five of the evolutionary process; in fact, they actually talk about Adaptive Optimisation as well as Autonomic Storage Tiering; a subconscious admission that they are not quite there yet?
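To illustrate the gap, here is a deliberately naive tiering loop, entirely my own sketch and nothing to do with 3Par's actual implementation; it promotes the hottest extents to SSD and demotes the coldest, based purely on recent I/O counts. It reacts automatically, but it has no notion of business policy or objectives, which is roughly the distance between the adaptive and autonomic levels described above.

```python
# A naive 'automatic' tiering loop, for illustration only: promote hot
# extents to the fast tier, demote cold ones, based purely on I/O counts.
# A genuinely autonomic system would be driven by business policy instead.

def retier(extents, io_counts, ssd_capacity, hot_threshold=1000, cold_threshold=10):
    """extents: dict of extent_id -> current tier ('ssd' or 'sata')."""
    # Demote cold extents first to free up space on the fast tier.
    for ext, tier in list(extents.items()):
        if tier == "ssd" and io_counts.get(ext, 0) < cold_threshold:
            extents[ext] = "sata"

    # Promote the hottest extents while there is room on SSD.
    ssd_used = sum(1 for t in extents.values() if t == "ssd")
    candidates = sorted(
        (e for e, t in extents.items() if t == "sata"),
        key=lambda e: io_counts.get(e, 0),
        reverse=True,
    )
    for ext in candidates:
        if ssd_used >= ssd_capacity or io_counts.get(ext, 0) < hot_threshold:
            break
        extents[ext] = "ssd"
        ssd_used += 1
    return extents


layout = {"e1": "sata", "e2": "sata", "e3": "ssd"}
counts = {"e1": 5000, "e2": 3, "e3": 2}
print(retier(layout, counts, ssd_capacity=1))
# {'e1': 'ssd', 'e2': 'sata', 'e3': 'sata'}  - e3 went cold, e1 is now hot
```

A genuinely autonomic version would take something like 'the month-end batch must finish by 6am' as its input and work the data placement out for itself.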

Autonomic Computing Infrastructures are something that all vendors and customers should be aspiring to, though. Of course, there is the long-term issue of how we get the whole infrastructure to manage itself as an autonomic entity, and how we do this within a heterogeneous environment is surely a challenge. Still, surely it is the hard things which are worth doing?

What is Dynamic?

There's a lot of talk about Dynamic Data Centres and Dynamic Infrastructures; mostly in a cloudy context and mostly as some over-arching, vendor-focused architectural vision. At times, I wonder whether, when a vendor talks about a 'Dynamic Infrastructure', they actually mean: you can use as much of OUR infrastructure as you like? You can flex up and down on OUR infrastructure.

This is rather limiting from an end-user IT consumer's point of view, because you still find yourself locked into a vendor or a group of vendors. So it's only dynamic with constraints; actually, I think Amazon got it right in their naming: it's Elastic but not truly Dynamic.

So as a good architect/designer/bodge-it-and-scarper-type person, you should be asking this question every time; if I do this, can I get out? What is my exit plan? Can I change any key component of the stack without major process/capability impact? Is the lock-in which comes with any unique feature worth it? 

And when I say any component, I mean all the way up to the application. So as part of the non-functional requirements of any application, there should be

1) Data Export/Import

2) Archival

standards defined and actually implemented. This goes for any off-the-shelf application as well. 

For Cloud to truly change the way IT is done and delivered, this has to be done… otherwise the only way forward is vertically integrated stacks, which ultimately lead to long-term lock-in. There are still mainframes in existence, not only because they are the right platform for some workloads but also because people are struggling to unpick the complex interdependencies which exist.

More Vendor Bashing!

What a hornet's nest I stirred up with my blog; firstly, it was good to see a lot of the NetApp guys coming out swinging in defence of their company, you should have passion for the company you work for but…..and there's always a but, it was not surprising to see that most missed the point! I was not attacking the Filer product; it is actually a great product for most people. However, it is a single great product on which you have built a business, but it is still just a single product. 

I think there comes a time, once a company gets to a certain size (and I think that NetApp are at that size), when a company needs to start to diversify. NetApp's performance on acquisition has quite frankly been terrible; perhaps Georgens can turn this round and they can acquire and integrate well. NetApp have for too long traded on being 'not EMC'; I am not convinced that this is any longer a credible strategy, which brings me on nicely to EMC…you didn't actually think I was going to let EMC off the hook?

EMC have exactly the opposite problem to NetApp; they actually have too many products, and the 'Cloud' strategy sums them up! Their strategy is made of cloud: it's large, all-encompassing and when you try to get hold of it; well….Put it like this, the average salesman cannot articulate it; they don't even get to the arm-waving bit, there's a blank look and then they try to sell you some storage. But at least you have Chuck's Blog!

So it's about time EMC started to make their sales guys' lives a bit easier and shrunk their product catalogue. Clariion and Celerra need to become the same product line; yep, you need to copy NetApp and have a truly unified storage platform. You've got some bright guys who understand file-systems, containers etc; just admit that NetApp were right in the mid-range space and launch the Celariion. If you were feeling really brave, you could keep the gateway product and virtualise other vendors' disk. 

Next, have a look at the CMA area; what exactly does Documentum do for you? And when you start to drill down into the Documentum product set, there's some real cruft in there. Does anyone actually use your Digital Asset Management tools, for example? The whole CMA area needs looking at and streamlining. 

Ionix? A rebranding exercise at the moment. The whole product set needs integrating, and you need to sit down with the people who use this stuff on a day-to-day basis; you could streamline and much improve this product set. And as your friends at NetApp seem to be asleep at the wheel with SanScreen, you could actually catch up and go past them. 

Like Ionix, your BURA product set needs integrating and streamlining; the Avamar/Data Domain story is confusing customers everywhere, it makes us go cross-eyed at times. For example, we were looking at deduplication last year, prior to your Data Domain acquisition, and you were trying to sell us Avamar against Data Domain; now you want to sell us Data Domain. Confusing, and we aren't the only ones!

EMC Consulting would be a good idea; not the EMC paid-for PreSales which it unfortunately often turns into at the moment. You do have some good guys, but stop brainwashing them and allow them independence of thought. I won't rant about EMC-UK, but it's broken; if you want more information, contact me directly. 

I think that, like NetApp, you are in no-man's land as an organisation, and there's a wonderful British expression, 'eyes too big for your belly', which sums you up nicely at the moment! And like NetApp, you have some interesting challenges ahead and some interesting challengers, but quietly and privately you appear to acknowledge that. 

I do want EMC, NetApp and all the other storage companies to succeed and grow; in IT infrastructure, storage is the only place where there's any kind of product differentiation. The server market is, quite frankly, boring and the network market suffers from big-kid-in-the-playground syndrome.

‘Meh…it’s only a Billion Dollars…’

NetApp worry me as a company; despite their record revenues this quarter, they strike me as a company in trouble. And as an end-user who wants and needs a competitive storage market, this is a little concerning.

Now you are obviously thinking that 'Bod has gone mad, so I had better explain my reasoning.

Over the past year or so, NetApp have been quietly dropping products under the guise of focussing more on their core, but if you look at things, their core is actually very narrow. Dropping unprofitable lines is generally good business, but I don't see these lines being replaced with anything. Their product range is narrowing; these are not the actions of a company confident in its ability to provide solutions to a market-place which will become increasingly solution-orientated. This is a company which is willing to be a little cog in the grander scheme of things!

The struggle to get OnTap 8 out of the door has, in my opinion, meant that the company has not really focussed on providing innovative new products. NetApp are currently not innovating, and the rest of the market is catching up; some could rocket past them.

Some of the comments I've heard from people who have looked in more detail at OnTap 8 are concerning as well. If you are running OnTap 7, it appears that there is little for you if you merely upgrade. It sounds like a full re-implementation is required to take advantage of features like 64-bit aggregates.

When EMC announced Atmos, NetApp dropped big hints that they had a RESTful, object-oriented storage product in the works. This has yet to surface and I've not heard anything more than 'watch this space' mutterings; there's no product shipping.

Another reason that I am concerned about NetApp is that they are a single-product company; if OnTap 8 struggles to gain acceptance, there is little for the company to fall back on. There seems to be little appetite at the moment to broaden the NetApp product range and, as I said earlier, they are actually shrinking their portfolio.

The failure to take over Data Domain, and losing that battle to EMC, I suspect damaged the company's confidence internally, and I wonder whether they currently have the appetite to embark on an acquisition campaign; but surely that is what is needed if they are to grow quickly enough to survive as a company long term?

The clustered NAS vendors could cause them pain going forward; the Isilons of this world are looking to do to NetApp what NetApp did to EMC. In fact, NetApp remind me a lot of the EMC of four or five years ago, which is ironic, as much of NetApp's appeal was that they are not EMC!

And what's worse, there are some big players who could do immense damage to NetApp; I'm thinking IBM, Oracle and HP. IBM with SONAS could hurt them at the high-end, and Oracle with the 7000 series could really hurt them in what has traditionally been their heartland: medium-sized NAS environments. HP could revitalise their storage business under Dave D, but that is probably a longer-term turn-around as opposed to an immediate threat.

Yes, NetApp have spent time building some strong partnerships, but even this is a bit 'meh'; not a huge amount of organisational innovation here. Nothing which made us sit up and think, 'blimey, that was a clever move!'

In fact, at the moment 'meh' pretty much sums up NetApp as a company. Lots of companies go through a 'meh' period; HDS have been sitting in theirs for some time and need to come out of it pretty soon. EMC went through their 'meh' moment…IBM and HP have managed to have 'meh' decades in the past! Can NetApp come out of their 'meh' moment fighting and innovating? Let's hope so!

P.S. I've labelled my own post as FUD…because if a vendor had written this, I might have accused them of writing FUD!