ROI calculators won’t save us … but data classification will.

I had the pleasure of presenting yesterday at the HDS Information Forum in London. Having flown with Moshe Yanai in his plane when I was in Israel, it was an honour to share the stage with another storage industry luminary in Hu Yoshida … and it's always great to share a stage with one of my favourite Scots in Steve Murphy. Now, if I can just figure out a way to share the stage with Dave Hitz, Paul Maritz, and Pat Gelsinger at some stage in my career, I'll have the full deck!

It was only as I was approaching the venue that I realised this was the very hotel … indeed the very ballroom … where Mrs. PL and I got married seven years ago, although this time, instead of giving the groom's speech, I presented 'Cloud Isn't A Four Letter Word: The Practicalities of VDC and the Federated Service Provider Model'.

The central premise of my 25-minute presentation was that cloud needn't be a four letter word; however, I believe that the nomenclature 'cloud' is confusing at best, as it doesn't accurately describe the consumption and usage model. Put simply, cloud as a marketing term is nothing more than the movement of an asset from within a customer environment to an external one … effectively trying to 'move' CapEx without necessarily doing anything about the OpEx side of the house.

And this is where I have a real challenge with 'cloud' … at its present course and speed it just looks too much like an economic shell game. But it needn't be this way.

Rather, I seek to make the case for addressing the entirety of the solution equation, including total cost of ownership [TCO], understanding that the acquisition cost an ROI calculator measures represents only 30% of TCO.

In other words, acquisition, or CapEx, represents only 30% of the TCO equation, whereas OpEx represents the other 70% … and I believe that the practical application of the virtual datacentre, together with the federated service provider model, can absolutely solve this equation.
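To put numbers on that 30/70 split, here's a minimal sketch … with purely illustrative figures, not anyone's actual pricing … of what an ROI calculator leaves out:

```python
def tco_from_capex(capex, capex_share=0.30):
    """Back out total cost of ownership from acquisition cost, assuming
    CapEx is ~30% of TCO and OpEx the remaining ~70% (illustrative split)."""
    total = capex / capex_share
    opex = total - capex          # the 70% the ROI calculator never shows you
    return total, opex

total, opex = tco_from_capex(1_000_000)
print(f"£1.0m of CapEx implies ~£{opex:,.0f} of OpEx and ~£{total:,.0f} TCO")
```

Shaving 20% off the purchase price moves the needle on less than a third of the real cost; attacking the operational side is where the equation is actually solved.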

What’s this got to do with Data Storage and Protection?

Data creation, and the storage devices we buy to hold the data we're creating, is in my opinion both the root cause of, and the solution to, ever-increasing OpEx costs.

1. Data creation is rising inexorably and has now reached the point where having humans physically ‘manage’ storage provisioning, the data lifecycle, and the like no longer makes economic sense.

A remarkable statistic is just how quickly the industry-standard Compound Annual Growth Rate [CAGR] for data creation is rising for customers. Just 12 to 18 months ago the standard was 40% … in other words, your data roughly doubled every two years. Now the industry standard is 60%, and in the three years I have been with Computacenter not a single storage assessment we have run for customers has come in as low as 60% … the vast majority have shown 100%, rising to as high as 212% for one retail customer.
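For a feel of what those growth rates mean in practice, a quick back-of-the-envelope calculation of doubling times:

```python
import math

def doubling_time(cagr):
    """Years for a data estate to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

for cagr in (0.40, 0.60, 1.00, 2.12):
    print(f"{cagr:.0%} CAGR -> data doubles every {doubling_time(cagr):.1f} years")
```

At 40% you double every two years; at 212% you double roughly every seven months. No provisioning team on earth keeps up with that by hand.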

2. It’s all virtual, baby.

Or at least it should all be virtual, baby. Virtualise what? Put simply … everything. We've been working on and deploying virtual datacentres in Computacenter for 18 months, and what we have learned is that, first and foremost, VDC is a business solution underpinned by technical components. Yes, you can manage a storage array manually if you so desire … but if we introduce a grid-based storage array, coupled with universal compute and networking components, we can automate provisioning of not only the data storage to support business users but indeed the whole workload. Why would we want highly skilled technologists plumbing away at kit when it would be of higher value for them to be working with the business, solving the alignment of data to business value? But in the absence of a virtualised, grid-based pool of storage … and server compute, and network … automating this becomes virtually [no pun intended] impossible. The more you automate, the higher the utilisation … and the lower the OpEx.
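As a purely illustrative sketch … not any particular vendor's API … of why pooled, automated provisioning beats manual plumbing:

```python
# One shared grid of resources instead of per-silo arrays and servers.
pool = {"storage_gb": 500_000, "vcpus": 4_096, "memory_gb": 32_768}

def provision(workload):
    """Hypothetical automated provisioning: carve a workload's resources
    out of the shared pool with no human in the loop."""
    if any(pool[k] < v for k, v in workload.items()):
        raise RuntimeError("pool exhausted - grow the grid, not a new silo")
    for k, v in workload.items():
        pool[k] -= v
    return "provisioned"

provision({"storage_gb": 2_048, "vcpus": 16, "memory_gb": 128})
```

The point isn't the code; it's that once everything is one pool, provisioning becomes a function call rather than a change ticket.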

Equally, we can drive the OpEx down further, as next-generation storage arrays are becoming largely self-tuning with automated storage tiering, and much more efficient with the advent of storage compression, thin provisioning, and data deduplication.
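For the curious, the idea behind block-level deduplication boils down to something like this toy sketch … store each unique block once and reference duplicates by their hash [real arrays do this inline, in firmware, far more cleverly]:

```python
import hashlib

def dedupe_blocks(data, block_size=4096):
    """Toy deduplication: keep one copy of each unique block, and represent
    the original data as an ordered list of block references."""
    store, layout = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # first copy wins; duplicates cost nothing
        layout.append(digest)
    return store, layout

data = (b"A" * 4096) * 100                # 100 identical 4 KiB blocks
store, layout = dedupe_blocks(data)
print(f"{len(layout)} logical blocks stored as {len(store)} unique block(s)")
```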

3. Virtualise. Containerise. Mobilise.

Once we've virtualised the lot, we can containerise the workloads. Containerisation is what allows us to provide this workload automation. Rather than attempting levitation by harmonising data storage, servers, virtualisation, and networking separately, we can define the workload as a container with storage, network, and compute attributes.

But whereas we would be limited by workload type [e.g. database, email, web application] in a traditional ‘siloed’ solution, the VDC allows us to run multiple container workloads simultaneously.
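A hypothetical container descriptor might look something like this … the attribute names are mine, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class WorkloadContainer:
    """A workload defined by its resource attributes rather than by
    dedicated, siloed hardware (illustrative fields only)."""
    name: str
    vcpus: int            # compute
    memory_gb: int
    storage_gb: int       # storage
    storage_tier: str
    vlan: int             # network
    bandwidth_mbps: int

# Dissimilar workloads running side by side on the same virtualised pool:
containers = [
    WorkloadContainer("erp-db",  16, 128, 2048, "tier-1", 101, 10_000),
    WorkloadContainer("email",    8,  64, 4096, "tier-2", 102,  1_000),
    WorkloadContainer("web-app",  4,  16,  256, "tier-3", 103,  1_000),
]
```

Because each container carries its own definition, the database and the email platform no longer need their own silos; the VDC schedules them onto one fabric.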

To wit, we have been testing a VDC in our Solution Centre which can support 6,000 virtual machines from a footprint of effectively one datacentre row, or six 40U racks. When you consider that it could take something like four to five rows of equipment to support the same number of virtual machines in a traditional datacentre, the OpEx optimisation begins to become much more apparent.
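The back-of-the-envelope arithmetic, assuming six racks to a row [my assumption, for the sake of the comparison]:

```python
vms = 6_000
vdc_racks = 6                               # one row of six 40U racks
traditional_rows = 4.5                      # midpoint of "four to five rows"
traditional_racks = traditional_rows * 6    # assumed six racks per row

print(f"VDC density:         {vms / vdc_racks:,.0f} VMs per rack")
print(f"Traditional density: {vms / traditional_racks:,.0f} VMs per rack")
print(f"Footprint reduction: {traditional_racks / vdc_racks:.1f}x fewer racks")
```

Every rack you don't deploy is power, cooling, floor space, and support contracts you don't pay for … which is exactly where the OpEx line bends.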

What VDCs really allow us to do is optimise all three dimensions of a customer datacentre simultaneously, such that we can deliver immediate cost benefit … perhaps as high as 30% to 50% CapEx avoidance and OpEx reduction … as well as provide a bridge to what some are calling private/public cloud federation but we prefer to call the federated service provider model.

4. Data classification will determine what data is kept inside the corporate VDC and what data gets shipped out to an external service provider.

When I'm discussing the federated service provider model … or private/public cloud hybrid, if you must … one of the inevitable questions relates to data security. Now, not to be flippant, but why should we ever have to store sensitive data anywhere other than inside a customer VDC?

In a typical environment we see that 20% of the data is 'business meaningful' or structured data … ERP systems, customer databases, etc. That leaves the other 80% as unstructured … email, PPTs, and duplicate and dormant data which hasn't been accessed in some time.

Why wouldn't we connect the customer VDC to an external service provider such as Computacenter and allow the unstructured data to be 'shipped out', where it can be stored more efficiently and at lower cost than internally?
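A toy sketch of the classification logic … real classification would also weigh sensitivity, regulation, and business value, not just access time, and the path and threshold here are purely hypothetical:

```python
import os
import time

DORMANT_DAYS = 180  # illustrative threshold for "hasn't been accessed in some time"

def classify_tree(root):
    """Split capacity into keep-in-VDC vs ship-out-to-provider, based purely
    on each file's last access time."""
    now, tally = time.time(), {"keep-in-vdc": 0, "ship-out": 0}
    for dirpath, _, files in os.walk(root):
        for name in files:
            st = os.stat(os.path.join(dirpath, name))
            dormant = (now - st.st_atime) / 86_400 > DORMANT_DAYS
            tally["ship-out" if dormant else "keep-in-vdc"] += st.st_size
    return tally

print(classify_tree("/srv/fileshare"))  # hypothetical corporate share
```

Run something like that across a typical estate and the 80% candidate pile for the external service provider identifies itself.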

Some are calling these the 'stack wars', but I don't believe they are … rather, this is the API war, as it is the APIs which will make this model possible.

But that’s a story …or presentation? …for another day.

Until then, have a great weekend.
