Storagebod’s Precipice

Inspired by Preston De Guise's blog entry on the perils of deduplication, I began hypothesising whether there is a constant for the maximum proportion of a storage array's physical capacity that can be safely utilised; I have decided to call this figure 'Storagebod's Precipice'. If you haven't read Preston's blog entry, can I humbly suggest that you go and read it and then come back.

The decoupling of logical storage utilisation from physical utilisation, which allows a logical capacity far in excess of the physical capacity, is both awfully attractive and terribly dangerous.

It is tempting to sit upon one's laurels and exclaim 'What a clever boy am I!' and, in one's exuberance, forget that one still has to manage physical capacity. The removal of the 1:1 mapping between physical capacity and logical capacity needs careful management and arguably reduces the maximum physical capacity that one can safely allocate.

Many storage management best practices are no more than rules of thumb and should be treated with extreme caution; these rules may no longer apply in the future.

1) It is assumed that, on average, data has a known volatility; this affects any calculation of the amount of space that needs to be reserved for snapshots. If the data is more volatile than expected, snapshot capacity can be consumed a lot faster than planned. In fact, one can imagine an upgrade scenario which changes almost every block of data, completely blowing the snapshot capacity and destroying your ability to quickly and easily return to a known state, let alone your ability to maintain the number of snapshots agreed in the business SLA (a rough worked example follows this list).

2) Deduplication ratios when dealing with virtual machines can be huge. As Preston points out, reclaiming space may not be immediate, or indeed simple. For example, a common reaction to capacity issues is to move a server from one array to another, something which VMware makes relatively simple; but because the moved machine's blocks may be shared with many others, this might not buy you anything, and moving hundreds of machines might not be very effective either. Understand your data, and understand which data can be moved with maximum impact on capacity. Deduplicated data is not always your friend!

3) Automated tiering, active archives and the like all potentially allow a small amount of fast storage to act as a much larger logical space, but certain behaviours could cause this to be depleted very quickly, leading to the array thrashing as it tries to manage the space and move data about.

4) Thin provisioning and over-commitment ratios work on the assumption that users ask for more storage than they really need and that average file-system utilisation is much lower than the provisioned capacity. Be prepared for the day that this assumption makes an 'ass out of u & me' (see the sketch after this list).
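
To make the arithmetic behind points 1 and 4 a little more concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it (dataset size, change rates, capacities) is made up purely for illustration; plug in your own numbers rather than treating these as recommendations.

```python
# A back-of-the-envelope sketch, not a recommendation: all figures below are
# hypothetical and exist only to illustrate points 1 and 4 above.

# --- Point 1: snapshot reserve vs. data volatility -------------------------
dataset_tb = 50.0            # size of the protected dataset
snapshot_reserve_tb = 10.0   # space set aside for snapshot deltas
expected_change = 0.02       # assume 2% of blocks change per day
actual_change = 0.10         # reality: 10% per day (e.g. a big upgrade)

days_at_expected = snapshot_reserve_tb / (dataset_tb * expected_change)
days_at_actual = snapshot_reserve_tb / (dataset_tb * actual_change)
print(f"Snapshot reserve lasts {days_at_expected:.0f} days as planned, "
      f"but only {days_at_actual:.0f} days at the higher change rate")

# --- Point 4: thin provisioning and over-commitment ------------------------
physical_capacity_tb = 100.0     # usable physical capacity in the array
logical_provisioned_tb = 250.0   # capacity promised to hosts
physical_used_tb = 70.0          # blocks actually written today

overcommit_ratio = logical_provisioned_tb / physical_capacity_tb
physical_headroom_tb = physical_capacity_tb - physical_used_tb
unbacked_promise_tb = logical_provisioned_tb - physical_used_tb
print(f"Over-commitment ratio {overcommit_ratio:.1f}:1, "
      f"physical headroom {physical_headroom_tb:.0f} TB, "
      f"promised but unbacked {unbacked_promise_tb:.0f} TB")
```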

All of these technologies mean that one has to be vigilant and rely greatly on good storage management tools; they also rely on processes that are agile enough to cope with an environment that can ebb and flow. To be honest, I suspect that the maximum safe physical utilisation of capacity is at most 80%, and these technologies may actually reduce that figure. It is ironic that logical efficiencies may well impact the physical efficiency that we have so long striven for!
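
To illustrate why I suspect the precipice sits somewhere around 80%, here is a small and purely illustrative calculation: a modest fall in deduplication ratio can push physical utilisation over that line even though nothing has changed logically. Again, the capacities and ratios are invented for the example.

```python
# Illustrative only: hypothetical figures showing how a fall in deduplication
# ratio can carry physical utilisation over a notional 80% ceiling.

physical_capacity_tb = 100.0
logical_stored_tb = 300.0     # logical data the array holds
safe_ceiling = 0.80           # the 'precipice' discussed above

for dedupe_ratio in (4.0, 3.5, 3.0):   # logical:physical
    physical_used_tb = logical_stored_tb / dedupe_ratio
    utilisation = physical_used_tb / physical_capacity_tb
    status = "over the precipice" if utilisation > safe_ceiling else "ok"
    print(f"dedupe {dedupe_ratio:.1f}:1 -> {utilisation:.0%} physical ({status})")
```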

