Archive for the ‘Cloud’ Category

Four reasons why the sky isn’t falling, but storage quadrants are collapsing.


Sixty Second Synopsis:

The average customer achieves 40% or less utilisation from their data storage infrastructure.  Increasing efficiency increases utilisation which, in turn, decreases costs.  Next generation storage systems such as grid/scale out and unified represent the natural evolution of the four storage quadrants, aimed at reducing costs both in the immediate term and for the future.

People love a good story, and humans have been telling stories in one form or another for hundreds, if not thousands, of years.  Indeed, there has been much talk in business consultancy and related journals regarding the ‘art of storytelling’ to help disseminate what can be difficult and often technically complicated topics and ideas to a wide audience.

One of the most well known and widely used methods of storytelling is that of film.  I love a good film …the story and music, the escapism of two hours in a darkened theatre, hot buttered popcorn …. mmm popcorn! …. and I’m sure I’m not alone in having a top ten list of my favourite all time films.

Depending upon what day of the week you ask me, taking the number one slot in my top ten will either be Zulu or Lawrence of Arabia.  Okay, perhaps I have a top eleven list but let’s not let facts get in the way of a good story!  Pun intended, but I digress.

Near the head of my top ten is a film called Searching For Bobby Fischer, released in the UK as Innocent Moves.  The film tells the story of a young chess prodigy and the lengths to which he and his family go in trying to understand one another, all whilst being supportive of and developing his unique gift.  I won’t give the plot or ending away, and would seriously recommend you spend the time to watch the film as it is, in my opinion, an excellent object lesson in how we raise and prepare our children for the world.  I will, however, share with you my favourite line from the film … ‘The game is already over. You just don’t see it yet.’

What’s this got to do with Data Storage & Protection?

In my career I have never seen the data storage industry consolidate nor move more quickly than it has in the past 24 months.  Indeed, what were the four storage quadrants …Enterprise, Modular, NAS, and Archive …have rapidly converged and consolidated to leave us with what are effectively two categories …Grid/Scale Out and Unified.

But why?  At a high level, the Four Quadrants of storage developed and evolved as they sought to solve different customer issues; however, none of the quadrants represents a ‘perfect’ solution and all suffer from a serious reduction in utilisation as they attempt to scale.  Given we are creating as much data every three days as we did in all of 2003, it isn’t difficult to see why customers need efficient data storage systems which can easily scale to solve utilisation and cost challenges.

I’m not going to go into the Four Quadrants in any detail as I’ve developed masterclasses which cover this from soup to nuts in two hours.  I have also developed separate two hour masterclasses for Grid Storage and Unified Storage.  Please contact me if you’d like me to run a private masterclass session for you and/or your organisation.

1. Customers want …and need … efficient arrays.  What were once products are now features.

Start-up companies such as Data Domain [dedupe] acquired by EMC, Diligent [dedupe] acquired by IBM, Storwize [data compression] acquired by IBM, Compellent [automated storage tiering], XIV [grid] acquired by IBM …the list goes on and on …have either been acquired or seen their ‘unique’ products rolled into existing vendor products, as is the case with FAST [Fully Automated Storage Tiering] in EMC products.

2.  The four storage quadrants are collapsing to leave us with two primary solution variants; grid/scale out and unified.

The simple equation is that increased utilisation equals decreased costs.  We’ve seen Grid/Scale Out storage [e.g. EMC VMAX, IBM XIV] evolve from the Enterprise and Modular quadrants to address the ability to scale at cost, and Unified storage [e.g. NetApp] evolve from the Modular/NAS/Archive quadrants for customers who don’t necessarily require massive scale out capabilities but would like a ‘Swiss Army knife’ approach with iSCSI, fibre channel, NAS, dedupe, compression, et al all included in the same storage product.  This is also an effort by the vendors to reduce their R&D costs by delivering fewer but much more efficient storage products.

3. The storage technologies which underpin VDC and the federated service provider model … or private/public cloud hybrid, if you prefer …are waypoints and not the final destination.

At the moment we treat data as either ‘block’ …I’ll whack the data on some block storage systems like EMC VMAX or IBM XIV …or ‘file’ …I’ll whack the data on some file based storage like NetApp or EMC Celerra …but we’re rapidly heading towards data being treated as an object.  In other words, a true convergence of block and file based storage where we treat the data as an individual object as opposed to ‘speeds and feeds’.  However, we need ways in which to optimise our datacentre and storage environments today which reduce costs as well as provide a bridge to the future.  VDC and the federated service provider model is absolutely that bridge.

4.  The infrastructure to support data will continue to evolve, but data as a construct will not.

Many IT departments truly believe that the barbarians are at the gates, with their users seeking to access data whenever and however they wish with ‘unapproved’ mobile devices such as Apple iPads, Google Android phones, et al.  Egads and heavens to Murgatroyd!  I understand the reasons for IT to try and provide high levels of support by restricting usage, but putting up walls by device to restrict data access is a very 1990s method of physical access control and, frankly, a fool’s game.  Nature …and users …always find a way around such barriers if they feel they can be more productive by acquiring and using their own access devices.  But you can protect the data, and that is where the security and restrictions should be placed in my opinion.  Indeed, we will see geotagging of data, with access and geolocation restrictions based on the data objects …for example, you can view the data in the UK but not when you leave our sunny clime …but I’ll save this for another blog post.  Equally, there may even be a case here for organisations to move what were CapEx costs …laptops, PCs, mobile phones …off the books by allowing employees to acquire and use their own devices.
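To make the ‘protect the data, not the device’ idea concrete, here’s a minimal sketch of geolocation-aware, object-level access control …every name, field, and rule below is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: restrict access per data object by geolocation,
# rather than blocking devices at the perimeter. All names and rules
# are invented for illustration.

def can_access(data_object, request):
    """Allow access only from regions listed against the object itself."""
    allowed = data_object.get("allowed_regions", [])
    return request.get("region") in allowed

# A geotagged object: viewable in the UK, but not once you leave our sunny clime.
report = {"name": "q3_forecast.xlsx", "allowed_regions": ["UK"]}

print(can_access(report, {"region": "UK", "device": "iPad"}))    # True
print(can_access(report, {"region": "FR", "device": "laptop"}))  # False
```

Note that the device field is ignored entirely …the policy follows the data, not the device.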


Click here to contact me.

ROI calculators won’t save us … but data classification will.


I had the pleasure to present yesterday at the HDS Information Forum in London.  Having flown with Moshe Yanai in his plane when I was in Israel, it was an honour to share the stage with another storage industry luminary in Hu Yoshida … and it’s always great to share a stage with one of my favourite Scots in Steve Murphy.  Now, if I can just figure out a way to share the stage with Dave Hitz, Paul Maritz, and Pat Gelsinger at some stage in my career I’ll have the full deck!

It was only as I was approaching where the event was being held that I realised this was the very hotel … indeed the very ballroom … where Mrs. PL and I got married seven years ago although this time instead of giving the groom’s speech I presented ‘Cloud Isn’t A Four Letter Word: The Practicalities of VDC and the Federated Service Provider Model’.

The central premise of my 25 minute presentation was that cloud needn’t be a four letter word, however I believe that the nomenclature ‘cloud’ is confusing at best as it doesn’t accurately describe the consumption and usage model.  Put simply, cloud as a marketing term is nothing more than the movement of an asset from within a customer environment to an external one … effectively trying to ‘move’ CapEx but not necessarily doing anything about the OpEx side of the house.

And this is where I have a real challenge with ‘cloud’ … at present course and speed it just looks too much like an economic shell game.  But it needn’t be this way.

Rather, I seek to make the case for addressing the entirety of the solution equation, including total cost of ownership, understanding that the acquisition costs which ROI calculators measure represent only 30% of TCO.

In other words, acquisition or CapEx only represents 30% of the TCO equation whereas OpEx represents 70% of the equation and I believe that the practical applications of the virtual datacentre as well as the federated service provider model can absolutely solve this equation.
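As a back-of-envelope illustration of that 30/70 split …the purchase figure below is invented, the arithmetic is the point:

```python
# If acquisition (CapEx) is only 30% of total cost of ownership, then a
# £300k purchase implies roughly £700k of OpEx over the same period.
# The £300k figure is made up for illustration.

def tco_from_capex(capex, capex_share=0.30):
    total = capex / capex_share
    return total, total - capex

total, opex = tco_from_capex(300_000)
print(f"TCO: £{total:,.0f}, of which OpEx: £{opex:,.0f}")
```

In other words, an ROI calculator which stops at the £300k has ignored more than twice that figure in operational cost.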

What’s this got to do with Data Storage and Protection?

Data creation, and the storage devices on which we store the data we’re creating, holds both the root cause and the solution to ever increasing OpEx costs in my opinion.

1. Data creation is rising inexorably and has now reached the point where having humans physically ‘manage’ storage provisioning, the data lifecycle, and the like no longer makes economic sense.

A remarkable statistic is just how quickly the industry standard Compound Annual Growth Rate for data creation is rising for customers.  Just 12 to 18 months ago the standard was 40% …in other words, every three years you had created well over twice as much data.  Now the industry standard is 60%, and in the three years I have been with Computacenter not a single storage assessment we have run for customers has shown even a 60% CAGR … the vast majority have shown 100%, ranging as high as 212% for one retail customer.
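For the arithmetically inclined, a quick sanity check on how those growth rates compound …the rates are the ones quoted above, the code is merely illustrative:

```python
import math

# Compound growth: how much data you have after n years, and how long
# until it doubles, at a given CAGR.

def growth_multiple(cagr, years):
    return (1 + cagr) ** years

def doubling_time(cagr):
    return math.log(2) / math.log(1 + cagr)

print(round(growth_multiple(0.40, 3), 2))  # 2.74 -> nearly triple in 3 years at 40%
print(round(doubling_time(0.40), 2))       # 2.06 -> doubling roughly every 2 years
print(round(doubling_time(0.60), 2))       # 1.47 -> at 60%, doubling in under 18 months
```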

2. It’s all virtual, baby.

Or at least it should all be virtual, baby.  Virtualise what?  Put simply … everything.  We’ve been working on and deploying virtual datacentres in Computacenter for 18 months and what we have learned is that first and foremost, VDC is a business solution underpinned by technical components.  Yes, you can manage a storage array manually if you so desire …but if we introduce a grid based storage array, coupled with universal compute and networking components, we can automate provisioning of not only the data storage to support business users but indeed the whole workload.  Why would we want highly skilled technologists plumbing away kit when it would be of higher value for them to be working with the business solving the alignment of data to business value?  But in the absence of a virtualised grid based pool of storage …and server compute, and network …automating this becomes virtually [no pun intended] impossible.  The more you automate, the higher the utilisation …and the lower the OpEx.

Equally, we can drive the OpEx down further as next generation storage arrays are becoming largely self tuning with automated storage tiering and much more efficient with the advent of storage compression, thin provisioning, and data deduplication.

3. Virtualise. Containerise. Mobilise.

Once we’ve virtualised the lot, now we can containerise the workloads.  Containerisation is what allows us to provide this workload automation.  Rather than attempting levitation by harmonising data storage, servers, virtualisation, and networking separately, we can define the workload as a container with storage, network, and compute attributes.

But whereas we would be limited by workload type [e.g. database, email, web application] in a traditional ‘siloed’ solution, the VDC allows us to run multiple container workloads simultaneously.

To wit, we have been testing a VDC in our Solution Centre which can support 6,000 virtual machines, all from a footprint of one datacentre row …effectively six 40U racks.  When you consider that it could take something like four to five rows of equipment to support the same number of virtual machines in a traditional datacentre, the OpEx optimisation begins to become much more apparent.
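A rough sanity check on that density claim …note the racks-per-row figure is my assumption for illustration, not a number from our Solution Centre testing:

```python
# Compare VM density: 6,000 VMs in six 40U racks versus an assumed
# traditional estate of ~4.5 rows. Racks per row (10) is an assumption.

vdc_vms, vdc_racks = 6000, 6
traditional_racks = 4.5 * 10                       # ~4.5 rows at an assumed 10 racks/row

vdc_density = vdc_vms / vdc_racks                  # 1,000 VMs per rack
traditional_density = vdc_vms / traditional_racks  # ~133 VMs per rack

print(round(vdc_density / traditional_density, 1))  # 7.5 -> times denser
```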

What VDCs really allow us to do is optimise all three dimensions of a customer datacentre simultaneously such that we can deliver immediate cost benefit …perhaps as high as 30% to 50% CapEx avoidance and OpEx reduction …as well as provide a bridge to what some are calling private/public cloud federation but we prefer to call the federated service provider model.

4. Data classification will determine what data is kept inside the corporate VDC and what data gets shipped out to an external service provider.

When I’m discussing the federated service provider model …or private/public cloud hybrid, if you must …one of the inevitable questions relates to data security.  Now, not to be flippant, but why should we ever have to store sensitive data anywhere other than inside a customer VDC?

In a typical environment we see that 20% of the data is ‘business meaningful’ or structured data … ERP systems, customer databases, etc.  This leaves better than 80% as unstructured …email, PPTs, duplicate and dormant data which hasn’t been accessed in some time.

Why wouldn’t we connect the customer VDC to an external service provider such as Computacenter and allow for the unstructured data to be ‘shipped out’ where it can be stored more efficiently and at lower cost than internally?
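A toy sketch of that placement decision …the categories and datasets below are invented, but the 20/80 logic is as described above:

```python
# Keep 'business meaningful' structured data inside the corporate VDC;
# ship unstructured data out to an external service provider.
# Types and datasets below are invented for illustration.

STRUCTURED = {"erp", "customer_db", "transactions"}

def placement(dataset):
    return "corporate VDC" if dataset["type"] in STRUCTURED else "external provider"

datasets = [
    {"name": "SAP ERP",       "type": "erp"},
    {"name": "email archive", "type": "email"},
    {"name": "dormant PPTs",  "type": "office_docs"},
]

for d in datasets:
    print(d["name"], "->", placement(d))
```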

Some are calling these the ‘stack wars’ but I don’t believe they are …rather, this is the API war as it will be the APIs which make this model possible.

But that’s a story …or presentation? …for another day.

Until then, have a great weekend.

Boil the kettle, data rationalisation and reduction could take a while.


UPDATE Mon 26 July: My interview with The BBC World Service ‘The World Today’ programme covering this topic aired this morning.  Click here for the full thirty minute podcast, or here for just my interview excerpt.

I thought perhaps I would begin this Weekly View with a quick experiment …now, you’ll need a kettle for this exercise …and in this context we need this to be a kettle of the electric variety … so if you don’t have one, or are reading this blog somewhere ‘lectric kettles may be a foreign concept, here’s a picture of one which will suffice for now.

Okay, ready?  Great.  Now, I’d like you to go and boil your kettle seventeen-odd times.  It’s okay, I know it takes a bit for each boil.  I’ll wait.  See you in a few minutes …or if you’re feeling generous, mine’s a PG Tips with milk and two Splendas.

Right …all done?  Great!  You’ve just expended as many greenhouse gases as you would by sending an email with a 4.7 megabyte attachment.

That’s right, campers … boiling your kettle 17.4 times consumes as many resources (electricity, water, and the like) and produces as much greenhouse gas as sending an email with a 4.7MB attachment in a traditional IT environment.

Source: Life-cycle analysis carried out by Mark Mills of Digital Power Group, translated into kettle boilings with help from the Energy Savings Trust [UK].
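If you want to play with the arithmetic yourself, the quoted figure converts to roughly 3.7 boilings per megabyte:

```python
# Convert attachment size to kettle boilings using the figure quoted
# above: 17.4 boilings per 4.7MB attachment.

BOILS_PER_MB = 17.4 / 4.7  # ~3.7 boilings per megabyte

def kettle_boils(attachment_mb):
    return attachment_mb * BOILS_PER_MB

print(round(kettle_boils(4.7), 1))   # 17.4 -> the original statistic
print(round(kettle_boils(10.0), 1))  # 37.0 -> a 10MB attachment
```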

Now, I know what you’re thinking as I was thinking the same thing when I first read that statistic …what?  How can this be?!

Without getting overly geeky on the topic, the short answer is that traditional IT environments tend not to be efficient at scale.  We’ve known for quite some time that the traditional IT infrastructure …server plus storage plus network plus operating system plus application …tends to be siloed, with each individual component connected physically to the others and efficiency lost both between these connections and within the physical devices themselves.

And, to be fair, traditional datacentres don’t fare much better …indeed, the datacentre industry as a whole has reached parity with the airline industry in CO2 production, with 2% of all man-made CO2 coming from computers and communications technology.

Source: Green IT: A New Industry Shock Wave by Simon Mingay [Gartner]

What’s this got to do with Data Storage and Protection?

I suppose that there is the obvious ‘think before you send emails with 4.7 meg attachments’.  I’m somewhat bemused …well, saddened really …that with the green revolution of the past ten years or so I now get emails with the tagline ‘Think before you print!’ with pretty green trees from just about everyone these days.  But what about having a tagline which gently asks the user …’Do you really need to send this, and, if so …please consider an alternative method rather than sending the attachment.’  Or, ‘Think before you send!’ for short.

Email has been the bane of many a storage administrator’s life as it has morphed from a purely text based system …remember SNDMSG, CICS, and OS/400? …to the rich text and inefficient file distribution model we find today.  Why do people attach large files to email and send them to a large distribution list?  I suppose the short answer is …it’s easy and they would argue they’ve more important things to be getting on with.  Fair enough.

But this isn’t a blog post having a whack at email and email vendors …and we should consider that the ubiquity of smart phones, high definition cameras, et al means we’ll continue to create ever larger files.  Indeed, we’re uploading 24 hours worth of video to YouTube every minute, up from 12 hours a minute just two years ago.  So how do we reduce the resources we’re consuming …electricity, datacentre space, and people to run the show cost dosh you know! …and the CO2 we’re creating when we need to share files with others?

I think there are five answers which are worth considering.

1. Introduce proactive limits for users.

Let’s face facts, reactive limits with users tend not to work and/or are quickly circumvented to keep the business moving.  Telling users ‘your email mailbox cannot grow beyond 20MB or we’ll cut you off so you can’t send/receive email’ rarely works in my experience.  Rather, we need to evolve this theory to be proactive.

For example, I use a great cloud based application called Evernote.  I could write a whole post on just how great this app is …it allows me to take notes anywhere I am on my MacBook Air, iPod, Blackberry and keeps the notes and related notebooks in sync so that where ever I am, all of my notes are up to date without me having to do anything.  Brilliant.

But here’s where it gets even better …it’s free.  Provided I don’t exceed a 20MB monthly limit, of course …and therein lies the true genius in my mind.  Evernote resets my 20MB limit at the beginning of each month so, providing I don’t exceed the 20MB in a month …sorted!  This is the type of proactive limit I’m thinking of for users …we give you a limit and then count down monthly to zero.  Stay within your limits, you’re good to go …exceed them, we charge you more on a graduated basis.
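Here’s how that proactive, Evernote-style limit might look in code …the 20MB allowance matches the example above, but the overage rate and the mechanics are purely my invention:

```python
from datetime import date

# A monthly allowance that quietly resets, with graduated charging only
# on the overage. Limit and rate are illustrative, not a real tariff.

class MonthlyQuota:
    def __init__(self, limit_mb=20, rate_per_mb=0.10):
        self.limit_mb = limit_mb
        self.rate_per_mb = rate_per_mb
        self.month = None
        self.used_mb = 0

    def record(self, when, size_mb):
        key = (when.year, when.month)
        if key != self.month:          # new month: the counter resets to zero
            self.month, self.used_mb = key, 0
        self.used_mb += size_mb

    def charge(self):
        overage = max(0, self.used_mb - self.limit_mb)
        return overage * self.rate_per_mb

q = MonthlyQuota()
q.record(date(2010, 7, 1), 15)
print(q.charge())                 # 0.0 -> within the free allowance
q.record(date(2010, 7, 20), 10)   # 25MB this month
print(q.charge())                 # 0.5 -> charged only on the 5MB overage
```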

2. Rethink service levels for workloads.

So what are Evernote doing with the 20MB that I created last month?  It doesn’t get deleted from my synchronised notes as they remain available to me, so what gives?  To be honest, I’m not quite sure …my guess would be they move the data previously created to a lower tier of storage, such as dense 2TB SATA drives, or even archive.

To be fair, I don’t much care.  I don’t notice any performance degradation and I get to carry on consuming the service for free.

Perhaps this is the key to the answer with our users …we’ll keep your data in a highly performant architecture for one month, demote it to a lower, less performant tier thereafter, and reset your limit.  And we won’t ‘charge’ you unless you exceed your limit.

3. Introduce datacentre technology optimisation in the form of virtualised datacentres [VDCs].

I’ve talked a lot about VDCs in previous posts starting with this one and many more since, so there’s no reason for me to labour the point more here other than to say that what VDCs deliver is optimisation by removing wastage as well as increasing business agility.

How much optimisation?  Chad Sakac, Vice President of VMware Strategic Alliance and the general in charge of ‘Chad’s Army’ of VCE vBlock gurus, blogged in 2009 about the potential benefits of deploying a vBlock against deploying technology silos.  An excerpt follows below:

  • 30% increase in server utilization (through pushing vSphere 4 further, and denser memory configurations)
  • 80% faster dynamic provisioning of storage and server infrastructure (through EMC Ionix UIM, coupled with template-oriented provisioning models with Cisco, VMware, and EMC)
  • 40% cost reduction in cabling (fibre / patch cords etc.) and associated labor (through extensive use of 10GbE)
  • 50% increase in server density (through everything we do together – so much it’s too long to list)
  • 200% increase in VM density (through end-to-end design elements)
  • Day to day task automation (through vCenter, UCS Manager and EMC Ionix UIM)
  • 30% less power consumption (through everything we do together)
  • Minimum of 72 VMs per KW (note that this is a very high VM/power density)

Now, I say potential benefits as, at present, these numbers have been derived from product datasheets and lab work by EMC …however we at Computacenter are looking to put more substantive quantitative analysis behind these benefits (and those of other VDC variants such as NTAP SMT, HDS UCP, ‘open VDC’) as we deploy VDCs with our customers locally in the UK.  Watch this space.

4.  Use alternative distribution tools for large attachment distribution and filesharing.

I really try not to use email as a file distribution system these days, preferring instead to use cloud applications such as Dropbox to share large files with internal colleagues, customers, and our vendor partners.  Now, this isn’t perfect as a) in the absence of my using encryption I wouldn’t wish to use this for sensitive corporate data, and b) it does have a ‘hard stop’ limit where I can only store 2.5GB for free with no reset limit like we have with Evernote.

But using tools such as Dropbox, uploading personal photos to Facebook instead of emailing them, and …if I must send an attachment …shrinking it by converting to PDF or similar …every little helps!

That said, I accept that I’m a geek by trade and we need to find ‘easy’ ways for everyday users which replace email as a distribution system without increasing complexity.

After I’ve done that I’m planning to sort world peace, famine, and poverty.

5. Rethinking how we create data.

Only about 20% of the data we create today is ‘text’, with rich media [high def videos, pictures, etc.] representing well over 50% of the new data being created.

Equally, the text data we are creating is rarely just text …by the time we save it in MS Word or similar we have increased the file size with the formatting and related metadata, and many users choose to use PPT to communicate ideas such as business plans and so on …a large file type if ever there was one …and that’s without even adding any pictures, charts, or videos to the PPT.

Again, I’m not having a go at the use of PPT or MS Word …but I do believe we are going to have to begin to think about how we create data so that the data ‘footprint’ is smaller in addition to the optimisation and alternative distribution models we’ve discussed above.

Which has me thinking …it’s time for a nice cuppa before Mrs. PL needs my help setting the table for dinner with her and PL Junior …the highlight of my week!

Have a great weekend and remember your kettle the next time you send an attachment.


Five reasons why cloud isn’t a four letter word.


This is the second post in the Federated Service Provider series, click here to go to the first post.

Growing up my parents had two aspirations for me …one, to become a doctor and two, to play the piano.

Now, I studied neuroscience in university but didn’t go to medical school …long story, with the jury still out as to whether I get partial credit or ½ a point for that aspiration …and as for playing the piano, well I can play anything brass and was … okay, still kinda am …a band geek having been in marching band, concert band, and jazz/Big Band band.  Heck, I could probably even manage film classics on the vuvuzela.  But piano?  Erm, no …I would most certainly lose a contest to Chuck Hollis no matter how many drinks you gave me.

Why?  Two primary reasons, really …first, I never could get the coordination required with feet, hands, pedals blah blah …and second, I never practised enough.

Now, before you go castigating me for not practising enough …pianos were expensive to own when I was a kid and keyboards hadn’t really hit the mainstream.  Yeah, okay, there were these things called ‘fingerboards’ with which you could ‘practise’ playing piano but really?  Cardboard fingerboard on the dining room table?  Exactly.  No self-respecting Atari 2600 playing kid is going to pretend to play piano …sorry, ‘practise’ …when there was Tron to be played in the absence of a real piano to practise with.

I hadn’t thought of any of this very much until very recently when I saw amazing piano tutor apps for the Apple iPad.  Would having had an iPad …admittedly much cheaper than a piano and also capable of doing much more than being just a piano …have changed things for me such that I would have become a concert pianist?

This blog post is by no means about the iPad …and gosh knows the iPad does evoke interesting responses from people …I’ve heard everything from the fan boy addict ‘I’ll stand  in queue overnight to buy one’ to people expressing hatred tantamount to ‘Steve Jobs is Hitler and Chairman Mao’s love child’ …and everything in between.

Look …the iPad, just like any technology, is important only inasmuch as what you are going to do with it.  And some are taking the view that it is perhaps better to test in the field and form a quantitative view as opposed to an emotive, qualitative one.  Indeed, Eversheds have embarked on just such a quantitative analysis with Computacenter.

What’s this got to do with Data Storage and Protection?

Coming back to the topic of this series, the Federated Service Provider model, customers have begun to take steps into understanding how cloud offerings might be able to help transform their environments by executing limited trials …the Daily Telegraph trial of Google Apps is a good example of this.  And yet the market has been somewhat ‘surprised’ that customers haven’t adopted cloud technologies further.

Frankly, I’m not that surprised, as cloud offerings present a different consumption model …more analogous to, say, electricity or water delivery, where you tell me exactly how much this will cost me by metering my consumption …versus more traditional IT delivery models where the individual components are purchased, integrated, and consumed.

But cloud isn’t a Federated Service Provider model, so what is?

1. Defining and implementing a service catalogue is the key to unlocking the internal service provider.

We have been hearing since the 1970s that we need to align data to business value and that IT should be a service provider to the business.  The challenge is that as we have distributed systems away from centralised systems such as mainframes, it has become more and more difficult to accomplish this as we have developed ‘silos’ of technology …servers, storage, network, desktop …all operating more or less independently although interconnected.  The business says ‘give me reliable data storage, and make it as inexpensive as possible!’ and so we replied ‘Tier One is 300GB 15K drives!’ …we replied with a technical purchasing strategy that the business could understand and we could implement, but it doesn’t align data to business value.

Another way to accomplish this is the definition of a service catalogue; codifying the required throughput by bands [e.g. 30K to 50K IOPS], the Recovery Time Objective, Recovery Point Objective, retention period, deduplication, and so on such that you can then give this package a name …we’ll call it SLA1 to keep things simple …and then give the business a flat price per user per annum.  We then keep going …SLA2, SLA3, SLA4 …and so on until we have enough packages to satisfy the business requirements.

The business gets SLA1 at £15 per user per annum, and IT becomes an internal service provider by ‘stocking’ against the requests made by the business per SLA package.
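One way to codify those packages …the bands, attributes, and prices below are illustrative only, not a real catalogue:

```python
# A service catalogue as data: each SLA package bundles throughput band,
# RTO/RPO, retention, dedupe, and a flat per-user price. Figures invented.

SERVICE_CATALOGUE = {
    "SLA1": {"iops_band": (30_000, 50_000), "rto_hours": 1,  "rpo_minutes": 15,
             "retention_years": 7, "dedupe": True,  "price_per_user": 15.00},
    "SLA2": {"iops_band": (10_000, 30_000), "rto_hours": 4,  "rpo_minutes": 60,
             "retention_years": 3, "dedupe": True,  "price_per_user": 9.00},
    "SLA3": {"iops_band": (0, 10_000),      "rto_hours": 24, "rpo_minutes": 240,
             "retention_years": 1, "dedupe": False, "price_per_user": 4.50},
}

def annual_cost(package, users):
    """The business buys SLA packages, not spindles."""
    return SERVICE_CATALOGUE[package]["price_per_user"] * users

print(annual_cost("SLA1", 1000))  # 15000.0 -> £15 per user per annum for 1,000 users
```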

2. Virtualised datacentres allow you to automatically provision and deploy resources from the service catalogue, and introduce a fixed cost but fungible and elastic infrastructure.

I’m a geek, in case you haven’t noticed …be kind! …and I both need and want to know how data deduplication, thin provisioning, grid storage, fully automated storage tiering and the like work.  But to be fair, the real advantage of a virtualised datacentre, or VDC, is not the technical whizzy bits but, rather, that wastage is eliminated at each IT level …from hypervisor through server, storage, and network …such that workloads can be automatically deployed and …perhaps most importantly …increased and decreased elastically without necessarily requiring any manual intervention.  Watch this space as we’re developing some demo videos to help describe this process more fully.

What the VDC does deliver, however, is a fungible resource …I don’t necessarily need to know what future workloads will look like, but I know they’ll be £1.35 per user per annum to deploy …capable of expanding and contracting without losing structural integrity.

3. I’m not moving my holy of holies into any cloud!

If we accept that only 20% of data stored is truly business meaningful …some call this structured data; ERP, customer databases, transaction processing, and the like …it is unlikely that any customer will ever go ‘all in’ with cloud storage, preferring instead to house the ‘holy of holies’ locally within their own datacentre.  And why not?  Storing this locally would help to mitigate many of the concerns regarding data security.

But how to stem the tide and control the bleeding of the data being created?

4. Help me connect my internal service provider to external service providers seamlessly.

Controlling cost as data grows inexorably is what VDCs will ultimately provide, in my opinion …through bridges to support the federation of internal and external service providers.

The corporate IT department will continue to be the internal service provider; looking after the corporate service catalogue, managing the corporate structured data housed in the VDC, and evaluating external service providers such as Computacenter Shared Service, Amazon EC2, Google, and the like.

The onsite corporate VDC will enable a customer to virtualise, containerise, and mobilise …effectively mobilising unstructured data created by users to external service providers where it can be housed less expensively than it can internally.  Ah, but ‘aren’t we just shifting the problem elsewhere and creating a tap which can never be closed’ I hear you ask?  Not necessarily, but I’ll save the possible answers to this query to next week’s post.

5.  These aren’t the stack wars, they’re the stack battles …the real war will be one of APIs.

Some analysts have been watching the emergence of vendor VDCs such as VCE vBlock, NetApp SMT, HDS Unified Compute, IBM Dynamic Infrastructure, and HP Converged Infrastructure and stated that we are ‘in the midst of the stack wars’ as each rushes to gain dominance in the market for their particular ‘stack’.

I’m not convinced.  Rather, I believe the real ‘war’ will be one of APIs.

The founder of cloud provider Enomaly, Reuven Cohen, recently asked in a blog post ‘Do Customers Really Care About Cloud APIs?’  I believe that they absolutely do …and will …but perhaps not in the way that Reuven was thinking.

If I now have a corporate VDC, I will need something to ‘federate’ my internal and external service providers.  Something, in essence, to move the data seamlessly from my corporate VDC to my external provider …based on defined business rules …without disruption to my production business.  Why wouldn’t this be APIs coupled with policy based engines?  In other words, software which is smart enough to know how to connect to an external service provider …read ‘cloud’ …and move data from the corporate VDC to the cloud seamlessly through the use of a policy based engine.
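A toy policy engine along those lines …the rule set and fields are invented, and a real implementation would of course call the external service provider’s API rather than simply print a list:

```python
from datetime import date, timedelta

# Business rules decide which objects federate out from the corporate VDC
# to an external service provider. All fields and rules are illustrative.

def should_federate(obj, today):
    """Ship objects that are unstructured, non-sensitive, and dormant."""
    dormant = (today - obj["last_accessed"]) > timedelta(days=90)
    return obj["class"] == "unstructured" and not obj["sensitive"] and dormant

today = date(2010, 9, 1)
objects = [
    {"name": "customer_db", "class": "structured",   "sensitive": True,
     "last_accessed": date(2010, 8, 30)},
    {"name": "2008_ppts",   "class": "unstructured", "sensitive": False,
     "last_accessed": date(2009, 1, 10)},
]

to_ship = [o["name"] for o in objects if should_federate(o, today)]
print(to_ship)  # ['2008_ppts']
```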

But how do we keep this from becoming just a shift of the real issue to another source?

And, if this works and we start to move significant amounts of data …how do we move a petabyte of data from the external service provider back into the corporate VDC or to another service provider should we so desire?

Mrs. PL is giving me that look which means it’s time for dinner and we’ll need to save those for next week’s post but, until then, enjoy the sunshine and have a great weekend.

Accents and the language of cloud.


I have always had a fascination with language, have studied several, and am always fascinated by the regional dialect or accent of places I visit.  But accents, and language in general, can be a funny thing.

Language continues to evolve, as do accents … I’m as amused to read the use of ‘bespoke‘ to describe bagels and ‘fortnight‘ to denote a period of time in American periodicals such as the New York Times, given these words weren’t widely used by Americans as little as ten years ago …as I am to listen to my seventeen year old niece and her friends, born and raised in Edgware/Mill Hill, both northwest London suburbs, use ‘like‘ as every other word in a sentence and end each sentence with a rising inflection, such that virtually every sentence is no longer a declarative statement but, rather, sounds much like a question.  Like, ya know?

I meet frequently with vendor partners, customers, potential vendor partners, analysts, et al. and, this week, I met with a potential vendor partner where I was asked one of my three favourite ‘frequently asked questions’ when someone is meeting me for the first time.

After introducing myself … ‘Hello, I’m Matthew Yeager, Practice Leader for Data Storage & Protection with Computacenter. How are you?’ I was met with …

‘But …I thought you were American?’

‘Yes, I was born in the United States and have lived here for over ten years … I’m married to a woman born and raised in north London, my son was born here, and of my friends, colleagues, and contacts not one of them is from the USA. The United Kingdom is my adopted home.’

Now, this isn’t by design, nor was it a conscious decision; it is just how things evolved over the past ten years.  During this evolution I will admit that my accent has flattened and my sympathetic ear has caused my speech to evolve such that I am sometimes asked, ‘But aren’t you [insert expected/anticipated nationality …American/Canadian/Irish/Other]?’

When my father came from his home in Dallas, Texas to London for Mrs. PL’s and my wedding, he remarked, ‘I’m fascinated, your accent seems to be shifting towards London.’  I replied that I didn’t think I had an accent … I was just pronouncing all of the syllables and saying ‘herbs’ not ‘erbs, which is what the good people here tell me one should do.

Thankfully he saw the humour and we had a good laugh.

For the record, I’m not trying to sound like anyone other than me … and accents shift all the time, I’m afraid.  Although the mind boggles at just how Steve McClaren shifted first to Dutch when manager at FC Twente and, now that he is manager of German side Wolfsburg …has he shifted again to German?

But I digress.

I have friends who emigrated from America to live in Israel and, when speaking English, now have a distinctly ‘Israeli’ accent.  I also have friends, born and bred in the UK, who after having moved to the USA and after only a few years have begun developing broad mid-Atlantic accents and using words like ‘cell phone’ and *gasp* ‘soccer’.

If you listen to the Queen, Her Majesty doesn’t sound today anything like she did thirty or so years ago.  Indeed, when was the last time you heard a BBC reporter or news reader use RP?

What’s this got to do with Data Storage and Protection?

Our customers and indeed the technology market have demanded that we evolve our language, to speak more of the business benefits of technology and how our solutions will help them align data to business value.

Equally, Mrs. PL often tells me that there is a stark difference between listening to someone and hearing them …this is usually just prior to my taking the hint and closing the laptop lid whilst we discuss the day’s goings on and PL Junior’s day at school.

It was with these points firmly in mind that I began reading a very interesting press release on Tuesday last, 29 June 2010, which reported that EMC was closing down its Atmos Online storage service.  What?  The world’s largest storage vendor is closing down its cloud storage offering …but weren’t we led to believe the great saviour of all things technical …the vaunted cloud …was the natural evolution of storage?!

Not so fast.  I’m joking a bit as it always amuses me how quickly hyperbole enters any conversation regarding technology, but there may be some clues as to what is really going on here in the actual statement placed on the Atmos website.

“We are no longer planning to support production usage of Atmos Online.  Going forward, Atmos Online will remain available strictly as a development environment to foster adoption of Atmos technology and Atmos cloud services offered by our continuously expanding range of Service Provider partners who offer production services [emphasis mine].  We will no longer be offering paid subscription or support for Atmos Online services.  Any existing production accounts will not be billed either for past or future usage.   We will also no longer provide any SLA or other availability commitment.  As a result, we strongly encourage that you migrate any critical data or production workloads currently served via Atmos Online to one of our partners offering Atmos based services.”

The emergence of the federated service provider model.

I think that this is a perfect example of what is happening today within technology generally and data storage offerings specifically; our customers encourage/insist/demand that we evolve offerings and the way in which data storage is consumed to help them align data to business value, so providers emerge with the advent of the whole new language of ‘cloud’.

But the adoption of cloud has been much slower than the cloud providers would like, with some offerings such as Atmos Online shutting up shop altogether.  And being a cloud provider and being a service provider are two very different things.

Why?  I think there are a few reasons, but possibly the largest is that in actuality, one could argue …and I often do …that customers weren’t looking for a new language but, rather the evolution of language …a different ‘accent’, if you will.

Indeed, I would pose that what we are witnessing is not so much the emergence of cloud storage or ‘everything as a service’ …we are witnessing the emergence of a federated service provider model.

I’ll talk more about what I think a federated service provider model might look like next week but, for now, here are a few reasons why cloud storage adoption has been slower than cloud providers anticipated.

1. What is the cost benefit of placing data in the cloud?

This is probably the biggest reason that cloud adoption for storage has been slow, in my opinion.  Search just about any cloud storage provider’s website and I challenge you to find any statements, calculators, or baseline data which show what the average cost of storing data onsite would be versus storing the data in the cloud.  Yes, I’m fully aware that ‘your mileage may vary’ and each customer will be different …but we as an industry need to give our customers real CBA [Cost Benefit Analysis] and TCO [Total Cost of Ownership] data, both from our labs and from the field, to help them efficiently calculate what the cost savings would be over a five year period when transitioning data from locally held to cloud.  Given the analyst statistics purporting that only roughly 25% of technology assets are likely to be owned by customers outright by 2030, one would think the prize would be great in helping customers understand these calculations.  Watch this space, as we are actively developing these very tools within Computacenter.
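A five year comparison of this sort reduces to simple compound arithmetic.  The sketch below is a minimal illustration of the shape such a calculator might take …every figure in it (the capacity, the growth rate, and the per terabyte costs) is an invented assumption of mine, not real CBA or TCO data:

```python
def five_year_cost(initial_tb, annual_growth, cost_per_tb_year, years=5):
    """Cumulative cost of storing a growing data estate over `years`."""
    total, capacity = 0.0, float(initial_tb)
    for _ in range(years):
        total += capacity * cost_per_tb_year  # cost of this year's capacity
        capacity *= 1 + annual_growth         # estate grows for next year
    return total

# Illustrative figures only: a 100 TB estate growing 40% a year.
onsite = five_year_cost(100, 0.40, 1200)  # fully loaded internal cost per TB/year
cloud = five_year_cost(100, 0.40, 900)    # assumed provider subscription per TB/year
saving = onsite - cloud
```

Even a back of the envelope model like this makes the conversation concrete: with the numbers assumed above, the same growth curve priced two ways yields a six figure difference over five years …which is precisely the kind of output our customers are asking for.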

2. We’re not really placing data in the cloud are we …isn’t it more like workloads?

This can get somewhat complicated so I’ll go a bit deeper on this in the next Weekly View but, put simply, we can’t just say we’re putting data in the cloud.  We must understand that this data is part of an existing customer workload …which includes compute, network connectivity, application, possibly a hypervisor, recovery time objective/recovery point objective, cost, and so on.  In other words, attributes which must be taken into account prior to any consideration of placing the workload in the cloud …or anywhere else for that matter.
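Put another way, a workload can be thought of as a record of exactly those attributes.  The sketch below is purely illustrative …the field names and the example values are my own assumptions rather than any standard schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    name: str
    data_tb: float            # how much data the workload owns
    compute_cores: int        # compute attached to that data
    network_mbps: int         # connectivity it depends upon
    application: str          # the application the data serves
    hypervisor: Optional[str] # None if running on bare metal
    rto_hours: float          # recovery time objective
    rpo_hours: float          # recovery point objective
    monthly_cost: float       # what it costs the business today

# An invented example: a virtualised CRM system.
crm = Workload("crm", data_tb=12.0, compute_cores=16, network_mbps=1000,
               application="CRM suite", hypervisor="ESX",
               rto_hours=4, rpo_hours=1, monthly_cost=8500.0)
```

The value of writing it down this way is that ‘move the data to the cloud’ becomes ‘move this whole record to the cloud’ …and suddenly the hypervisor, the RTO/RPO, and the cost all have to be answered for.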

3. Help me understand which workloads are applicable for cloud adoption.

Not all workloads are created equal and each will have different characteristics, as I’ve just described, as well as a different cost or value to the customer business.  Whilst some of the cloud storage providers have become a bit better at publishing how, exactly, their offerings can be consumed by customers, I still feel that we could do better as an industry in providing customers with tools and/or collateral which help them define what their workloads look like today …and determine which workloads could be migrated to the cloud for lower cost without a degradation in performance or service.  This is, in my humble opinion, the role of a service provider and exactly where service and solutions providers such as Computacenter fit in.
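A first pass screening tool of the kind described might be as simple as filtering workloads on a couple of those characteristics …say, tolerance of a longer recovery time and modest I/O demand.  The thresholds and example workloads below are invented for illustration only:

```python
def cloud_candidates(workloads, max_iops=2000, min_rto_hours=4):
    """Return names of workloads tolerant enough (loose RTO, modest I/O)
    to be considered for cloud migration first."""
    return [w["name"] for w in workloads
            if w["iops"] <= max_iops and w["rto_hours"] >= min_rto_hours]

# An invented estate of three workloads.
estate = [
    {"name": "file-archive", "iops": 150, "rto_hours": 24},
    {"name": "oltp-db", "iops": 12000, "rto_hours": 1},      # too hot to move
    {"name": "backup-staging", "iops": 500, "rto_hours": 48},
]
candidates = cloud_candidates(estate)
```

In this invented estate the archive and backup staging workloads pass the screen while the transactional database does not …which matches the intuition that cold, tolerant data moves first.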

4. What about security?

Admittedly, government mandates and related legalities meant to govern data protection and security remain a bit confusing …I say a bit; learning a new language altogether is often less confusing and time consuming …but security is not an attribute which can be ignored.  But just what is meant by data security?  Does this mean where the data is held, the physical security attributes of the service provider datacentre, data encryption …and is that encryption at the host, from host to provider, throughout the entire stream …all of the above?  This can get very complicated quite quickly, but my belief is that understanding how to solve this issue starts with workload definition as discussed in point three above.
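One way to make this tractable is to treat security, like everything else, as a set of attributes in the workload definition and compare it against what a given provider offers.  The control names below are my own illustrative assumptions, not any regulatory checklist:

```python
# Controls the workload definition demands (illustrative names only).
REQUIRED_CONTROLS = {
    "encryption-at-rest",
    "encryption-in-transit",
    "datacentre-physical-security",
    "data-residency-uk",
}

def security_gaps(provider_controls):
    """Controls the workload requires but the provider does not offer."""
    return REQUIRED_CONTROLS - set(provider_controls)

# A hypothetical provider offering only two of the four controls.
gaps = security_gaps({"encryption-in-transit", "datacentre-physical-security"})
```

If the gap set is empty, the security conversation for that workload and that provider is at least well framed; if not, you know precisely which questions to ask before a single byte moves.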

5. How would I use cloud based storage with what I have today?

Too often I have seen cloud storage offerings and ‘anything as a service’ offerings pitched as all-or-nothing propositions.  But it needn’t be so, and this is the core of my belief in a federated service provider model …helping the customer to understand how and when to consume external service provider offerings such as cloud, and how to use these offerings in conjunction with their existing IT.

But we’ll talk more about that next week.

Until then, have a great week and if we can’t truly enjoy the World Cup now that England are out …perhaps we can enjoy the glorious sunshine we’ve been gifted instead!

Click here to contact me.