
June 2012

Meltdown

The recent RBS systems meltdown and the rumoured reasons for it are a salutary reminder of just how reliant we all are on the continued availability of core IT systems; these systems are pretty much essential to modern life. Yet arguably the corporations that run them have become cavalier and negligent about their maintenance and long-term sustainability; even in supposedly heavily regulated sectors such as banking, the situation is woeful.

There is an ‘It Ain’t Broke, So Don’t Fix It’ mentality that has led to systems that are unbelievably complex and tightly coupled; this is especially true of the early adopters of IT such as the banking sector.

I spent my early IT years working for a retail bank in the UK, and even twenty years ago this mentality was prevalent and dangerous; code that no-one understood sat at the core of systems, and the wrappers written to hide that ancient code meant you needed to be half coder, half historian to stand a chance of working out exactly what it did.

Add another twenty years to this, twenty years of rapid change that have seen the rise of the Internet, 24-hour access to information and services, mobile computing and a financial collapse, and you have almost a perfect storm. Rapidly changing technology coupled with intense pressure on costs has led to under-investment in core infrastructure whilst the Business chases the new. Experience has oft been replaced with expedience.

There is simply no easy Business Case that justifies rewriting and redeveloping your core legacy applications, even if you still understand them; well, there wasn’t until last week. If you don’t do this, and if you don’t start to understand your core infrastructure and applications, you might well find yourself in the same position as the guys at RBS.

Systems that have become too complex and have been hacked together to do things they were never supposed to do; systems which, if I’m being generous, were developed in the 80s but more likely the 70s, trying to cope with the demands of the 24-hour generation; systems which are carrying out more and more processing in real time and yet are, at their heart, batch systems.

If we continue down this route, there will be more failures and yet more questions to be answered. Dealing with legacy should no longer be ‘It Ain’t Broke, So Don’t Fix It’ but ‘It Probably Is Broke, You Just Don’t Know It…Yet!’ Look at your Business: if it has changed out of all recognition, if your processes and products no longer resemble those of twenty years ago, it is unlikely that IT systems designed twenty years ago are fit for purpose. And if you’ve stuck twenty years’ worth of sticking plaster on them to try and make them fit for purpose, it’s going to hurt when you try to remove the sticking plaster.

This is not a religious argument about Cloud, Distributed Systems or Mainframe, but one about understanding the importance of IT to your Business and investing in it appropriately.

IT may not be your Business but IT makes your Business…you probably wouldn’t leave your offices to fall into disrepair, patching over the cracks until the building falls down…so don’t do the same to your IT.

New Lab Box: HP ML110 G7

I keep meaning to do a blog post on the Bod home-office, something like the Lifehacker Workspace posts but I never quite get round to it. Still, suffice to say, my home workspace is pretty nice; it’s kind of the room I wanted when I was thirteen but cooler! The heart of the workspace though is the tech, I have tech for gaming, working, chilling and generally geeking; desktops of every flavour and a few servers for good measure.

Recently, as you will know, I have been playing with Razor and Puppet, and I found that my little HP Microserver was struggling a bit with the load I was putting on it, so I started to think about getting something with a bit more oomph. I had decided I was going to put something together based on Intel’s Xeon technology and began to put together a shopping list.

Building PCs is something I kind of enjoy, but as luck would have it, an email dropped into my mailbox from Servers Plus in the UK offering £150 cashback on the HP ProLiant ML110 G7 Tower Server with an E3-1220 Quad Core processor; this brought the price down to £240 including VAT. And I was sold…no PC building for me this time.

As well as the aforementioned E3-1220, the G7 comes equipped with a 2GB ECC UDIMM, 2x Gigabit Ethernet ports, iLO3 sharing the first Ethernet port, a 250GB hard disk, a 350W power supply and generally great build quality (although I reckon I could do a better job with the cable routing).

The motherboard supports up to six SATA devices and there are four non-hotswap caddies for screwless hard-disk installation, one of which holds the 250GB disk. Installing additional drives was a doddle and involved no cursing or hunting for SATA cables. I did not bother to install an optical drive as I intended to network boot and install from my Razor server.

Maximum supported memory is an odd 16GB; the chipset definitely supports 32GB but there are very mixed reports of running an ML110 G7 with 32GB. I just purchased a couple of generic 4GB ECC DIMMs for about £50 to bring it up to 10GB for the time being; I’d be interested in hearing if anyone has got an ML110 G7 running successfully with 32GB. There’s no technical reason for HP to limit the capability and it does seem strange. The DIMM slots are easily accessible and no contortions are required to install the additional memory.

There are four PCIe card slots available: 1×16, 2×4 and 1×1; this should be ample for most home servers as the box already comes with two onboard Ethernet ports.

After installing the additional memory and hard-disks, I powered the box up and let it register with my Razor server; added a policy to install ESXi on it and let it go.

A quick note about the iLO3: it comes with the basic license, which allows you to power the box up and down and do some basic health checking and monitoring, but there is no remote console; this is not a huge problem for me as the server is in the same room and I can easily put a monitor on it if required.

The ML110 is pretty damn quiet considering the number of fans in it, but start putting it under load and you will know it’s there; that said, it’s no noisier than my desktop when I am gaming and all the fans are spinning. It is certainly noisier than the Microserver though.

Once ESXi was installed, bringing up the vSphere client let me see that all the components were recognised as expected; all the temperature and fan sensors were also being seen. Power management is available and can be set to Low Power if you want for your home lab.

So I would say that if you want a home lab box with a little more oomph than the HP Microserver, the ML110 G7, especially with the £150 cashback, takes some beating. If it could be upgraded all the way to 32GB, then it would be awesome.

Razor – An Idiot’s Guide to Getting Started!

My role at work does not really allow me to play with tech very often, and unless I have a real target in mind I’m not as much of an inveterate hacker as I used to be; I generally find a problem, fix the problem and then leave it alone until the solution breaks. So I don’t tend to go into any depth on anything, but every now and then I see something and think: that’s cool, can I get it to work?

When I saw the new tool from Nicholas Weaver and EMC, built on Puppet, to configure ‘bare-metal’ machines, I decided it was something cool enough to have a play with. But there were a few problems: I knew what Puppet was but I’d never used it, and I really didn’t have a clue what I was doing.

Still, I followed the links to the documentation and started to hack; after a couple of failed attempts due to missing prerequisites, I decided to be a bit more methodical and document what I was doing.

So this is what I did…more or less. And hopefully this might be helpful to someone else. I am assuming a certain amount of capability tho’! So more an Idiot Savant than just an Idiot.

I have a number of core services already running at home; I have my own internal DNS and DHCP server running on Ubuntu, I have a couple of NAS servers and a couple of machines running ESX5i.

All the documentation for Razor is based around Ubuntu Precise Pangolin 12.04, so the first thing to do was to build a Precise VM on one of the ESX5i boxes. This was provisioned with 2 gigs of memory and 2 virtual cores.

1) Install Ubuntu 12.04 Server; I always select the OpenSSH Server option at installation and leave everything else until after I’ve done a base install.

2) I added a static assignment for the server in my DHCP box and created a DNS entry for it.

3) After the server was installed, I did my normal ‘break all the security’ and used sudo to set a root password and allow myself to log directly on as root; there’s a rough sketch of this just after the list. I’m at home and can’t be bothered to use sudo for everything.

4) I started installing packages. I’m not sure whether the order matters, but this is the order in which I did things, and all of it was done as root.
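For completeness, this is roughly what the ‘break all the security’ step from 3) looks like on a stock 12.04 install; run it as your normal sudo-capable user, and note that depending on your sshd_config you may also need PermitRootLogin set to yes before you can SSH in directly as root:

sudo passwd root                # give root a password
sudo vi /etc/ssh/sshd_config    # check/set: PermitRootLogin yes
sudo service ssh restart        # pick up any sshd change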

EDIT: According to Nick, and he should know, the Razor module installs Node and MongoDB automagically…I had some problems the first couple of times and decided to do it myself; this is possibly because I’m an extremely clever idiot who breaks idiot-proof processes.

Node.JS

apt-get install nodejs npm
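A quick sanity check that both landed; on some Ubuntu releases the interpreter is installed as nodejs rather than node, so use whichever binary is present:

node -v     # or: nodejs -v
npm -v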

MongoDB

I didn’t use the standard packaged version and pulled down the package from mongodb.org

apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
vi /etc/apt/sources.list
# add this line to sources.list:
deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen
apt-get update
apt-get install mongodb-10gen
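The mongodb-10gen package starts mongod via Upstart; before moving on it is worth a quick check that the service is actually up (service name as per the stock 10gen packaging):

service mongodb status
mongo --eval 'db.version()'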

Puppet

Yet again I didn’t use the standard packaged version and pulled down from puppetlabs.com

wget http://apt.puppetlabs.com/puppetlabs-release_1.0-3_all.deb
dpkg -i puppetlabs-release_1.0-3_all.deb
apt-get update
apt-get install puppetmaster
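Before carrying on, a quick check that Puppet is the version you expect and that the master is running:

puppet --version
service puppetmaster status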

Ruby 1.9.3

Note that the above install does install Ruby but appears to bring down Ruby 1.8; Razor wants a later version.

apt-get install ruby1.9.3

This seems to do what you want!
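One caveat: even with 1.9.3 installed, /usr/bin/ruby on Precise can still point at 1.8. If Razor complains about the Ruby version, check which interpreter is the default and, if necessary, switch it with update-alternatives (this assumes the ruby alternative is registered, as it is with the stock Precise packages):

ruby --version
update-alternatives --config ruby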

At this point you should be in the position to start installing Razor.

Razor

This is very much cribbed from the Puppet Labs page here

puppet module install puppetlabs-razor

chown -R puppet:puppet /etc/puppet/modules

puppet apply /etc/puppet/modules/razor/tests/init.pp --verbose

This should run cleanly and at this stage you should have some confidence that Razor will probably work.

/opt/razor/bin/razor

This shows that it does work.

/opt/razor/bin/razor_daemon.rb start

/opt/razor/bin/razor_daemon.rb status

This starts the razor daemon and confirms it is running. Our friends at Puppet Labs forgot to tell you to start the daemon; it’s kind of obvious, I guess, but it made this idiot pause for a few minutes.

Configuring Your DHCP Server

I run my own DHCP server, and this needed to be configured to point network-booting servers at the Puppet/Razor server.

I amended the /etc/dhcp/dhcpd.conf file and added the following

filename "pxelinux.0";
next-server ${tftp_server_ipaddress};

in the subnet declaration.
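For context, this is roughly what the relevant bit of my dhcpd.conf ends up looking like; the addresses are made up for illustration, and next-server should be the IP address of the box running Razor. A static reservation like the one mentioned in step 2) earlier sits alongside it as a host block:

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;

  # point PXE clients at the Razor/Puppet server
  filename "pxelinux.0";
  next-server 192.168.1.20;
}

# static assignment for the Razor server itself (as per step 2)
host razor {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.20;
}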

At this point, you should be ready to really start motoring and it should be plain sailing, I hope. Certainly this idiot managed to follow the rest of the instructions for Example Usage on the Puppet Labs site.

Of course, now I’ve got lots of reading to do around Puppet and the like, but at the moment it does appear to work.

So great work Nick and everyone at EMC.