
Live Forever

No, not my favourite Oasis track*; it's something I've been thinking about and a subject I've touched on a few times: once you've deployed a technology, how do you get out of it? Do you ever get out of it? And what impact does virtualisation have on this?

How many of us are running, or are aware of, business-critical applications running on hardware which is no longer supported by the vendor? And the reasons? Often it is that the application only runs on a particular operating system which is itself no longer supported and will not run on current generations of hardware.

Virtualisation may well allow you to refresh the hardware; for example, if you insist on running applications under DOS, pretty much any of the x86 hypervisors will run DOS. This will allow you to run that business-critical application on DOS for the foreseeable future and to continually refresh the hardware underneath it.

Well, it will until the hypervisor vendor calls time and decides that they no longer want to certify the hypervisor with DOS x.xx; oh well, what do you do? Obviously you shrug your shoulders and now run a back-level hypervisor with an unsupported operating system, running an application that, by now, no-one has a clue how it works!

Oh, you've migrated the application into a Public Cloud? Well, it didn't need much in the way of resources and suited the Cloud perfectly. And now your Cloud provider has said that they are no longer supporting, or even allowing, DOS instances to run; oh heck, and now you can't get the hardware or software to run your application locally.

So although virtualisation will allow you to get away with running legacy apps for a long time, don't assume that this means they can 'Live Forever'! Virtualisation is not an excuse for failing to carry out essential maintenance and keep your estate up-to-date.

*that's 'Bring It On Down', just in case you were interested!


6 Comments

  1. John Dias says:

    Many of these situations are created by using niche solutions for a particular line of business. For example, there was a DOS app I had to deal with for years because it was THE only way that a certain gov’t-backed secondary mortgage entity would accept data transfer. The software provider didn’t care about our problems, and that’s often the case with small dev shops. I’m not sure why, other than the cost to re-write the app, but that doesn’t make sense either, because they typically have the corner on that particular market and could easily pass the dev costs along to the customer.
    Have you heard any chatter about hypervisor providers dropping support for legacy OS?

  2. Martin G says:

    No, but it will happen…it is inevitable; you can’t keep everything supported forever. Things will slip into ‘well, it should work’ territory, and eventually they will stop working.

  3. Ianhf says:

    Good post; a couple of blocking areas I see:
    1) A lot of the ‘legacy’ applications or environments have technology profiles that make them unsuitable for cloud or virtualisation: things such as specific hardware interfaces, chipsets, CPU/RAM configurations (think vertical scaling etc.), or OS requirements (HP-UX, Tru64 etc.). Similarly, the apps were often designed using principles or assumptions that don’t always lend themselves towards virtualisation or, in particular in this case, cloud: things such as synchronous service communication, fixed addressing, legacy protocols etc.
    2) The other main issue I see is knowledge of the legacy application, environment and business logic. Often, when the technical elements are causing issues, we have found that there are more critical ‘wetware’ issues in the awareness, capture & retention of knowledge as to what the application does, where/how it interfaces, and what logic and processes it uses. Sometimes it’s just less risk / cost to kill something than to ignore (or pretend away) the real underlying risk…
    A fear for me is that virtualisation / cloud will actually make it easier to ‘tolerate’ legacy apps and environments, and as such mask the real issue and the requirement for a continual improvement process to avoid such legacy, actually creating more legacy deployments over time!

  4. Martin G says:

    Ian, that is very much my worry too. Virtualisation could well be used to prolong the life of legacy apps, causing issues 5-10 years hence.
    BTW, at least one major bank discovered during their Y2K audit that their Standing Order code still operated in pounds, shillings and pence; there was a decimalisation conversion routine which hid this fact. Do you think that anyone understood the core code?
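    For anyone who has never seen £sd: there were 12 pence to the shilling and 20 shillings to the pound, so a shim along these lines (a purely hypothetical sketch, not the bank’s actual code) is all it would take to hide the old units:

        # Hypothetical sketch of a decimalisation shim: pre-decimal sterling
        # had 12 pence to the shilling and 20 shillings to the pound, i.e.
        # 240 old pence per pound; decimal pounds have 100 new pence.
        def lsd_to_decimal(pounds, shillings, pence):
            old_pence = pounds * 240 + shillings * 12 + pence
            return old_pence / 240  # value in decimal pounds

        print(lsd_to_decimal(1, 10, 6))  # £1 10s 6d -> 1.525 decimal pounds

    Wrap every read and write in something like that, and the core code never needs to learn about decimal currency.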

  5. Maarten says:

    One of the things I’m always looking for when deploying new apps is a way to do an application-independent export of all relevant data; whether it’s XML or some other format doesn’t really matter, as long as there is some way for the customer to get the data out of it.
    The reason for this is simple: if the app ever becomes unsupported, you can hire someone to import your data into another one. It doesn’t have to be automatic; if you have to hire someone to write the import logic, that’s all right. This avoids finding yourself with an application holding your vital data while the supplier has been out of business for years.
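    To make that concrete, here is a minimal sketch of the idea; it assumes, purely for illustration, that the app keeps its data in a SQLite file and that dumping every table to neutral JSON is good enough:

        import json
        import sqlite3

        # Hypothetical example: dump every table of an app's SQLite database
        # to a neutral JSON file, so the data can outlive the application.
        def export_all(db_path, out_path):
            conn = sqlite3.connect(db_path)
            conn.row_factory = sqlite3.Row  # rows become dict-like
            tables = [r["name"] for r in conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'")]
            dump = {}
            for table in tables:
                # table names come from the database itself, not user input
                rows = conn.execute("SELECT * FROM %s" % table)
                dump[table] = [dict(row) for row in rows]
            conn.close()
            with open(out_path, "w") as f:
                json.dump(dump, f, indent=2, default=str)

        export_all("legacy_app.db", "legacy_app_export.json")

    The format matters far less than the fact that a future developer can read it without the original application.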

  6. Chris Fricke says:

    We have a few of those “live for as long as the universe will let it” type apps. The software media and knowledge are long gone but the app is still in use. The hardware was old and dying, so it was virtualized – years ago – and there it will stay until it suffers enough software corruption that snapshots can’t recover it, or VMware stops supporting Windows. At least in the physical server world we had tangible hardware reasons to engage in app lifecycle management. Now we have to come up with -gulp- purely business-related reasons to drive change.
