Archive for October, 2010

Four reasons why the sky isn’t falling, but storage quadrants are collapsing.

29/10/2010

Sixty Second Synopsis:

The average customer achieves 40% or less utilisation from their data storage infrastructure. Increasing efficiencies increases utilisation which, in turn, decreases costs. Next generation storage systems such as grid/scale out and unified represent the natural evolution of the four storage quadrants, aimed at reducing costs both immediately and for the future.

People love a good story, and humans have been telling stories in one form or another for hundreds, if not thousands, of years.  Indeed, there has been much talk in business consultancy and related journals regarding the ‘art of storytelling’ to help disseminate what can be difficult and often technically complicated topics and ideas to a wide audience.

One of the most well known and widely used methods of storytelling is that of film.  I love a good film …the story and music, the escapism of two hours in a darkened theatre, hot buttered popcorn …. mmm popcorn! …. and I’m sure I’m not alone in having a top ten list of my favourite all time films.

Depending upon what day of the week you ask me, taking the number one slot in my top ten will either be Zulu or Lawrence of Arabia.  Okay, perhaps I have a top eleven list but let’s not let facts get in the way of a good story!  Pun intended, but I digress.

Near the head of my top ten is a film called Searching For Bobby Fischer, released in the UK as Innocent Moves.  The film tells the story of a young chess prodigy and the lengths he and his family go to in trying to understand one another whilst supporting and developing his unique gift.  I won’t give the plot or ending away, and would seriously recommend spending the time to watch the film as it is, in my opinion, an excellent object lesson in how we raise and prepare our children for the world.  I will, however, share with you my favourite line from the film … ‘The game is already over. You just don’t see it yet.’

What’s this got to do with Data Storage & Protection?

In my career I have never seen the data storage industry consolidate nor move more quickly than it has in the past 24 months.  Indeed, what were the four storage quadrants …Enterprise, Modular, NAS, and Archive …have rapidly converged and consolidated to leave us with what are effectively two categories …Grid/Scale Out and Unified.

But why?  At a high level, the Four Quadrants of storage developed and evolved as they sought to solve different customer issues; however, none of the quadrants represents a ‘perfect’ solution and all suffer from a serious reduction in utilisation as they attempt to scale.  Given we now create as much data every three days as we did in all of 2003, it isn’t difficult to see why customers need efficient data storage systems which can easily scale to solve utilisation and cost challenges.

I’m not going to go into the Four Quadrants in any detail as I’ve developed masterclasses which cover this from soup to nuts in two hours.  I have also developed separate two hour masterclasses for Grid Storage and Unified Storage.  Please contact me if you’d like me to run a private masterclass session for you and/or your organisation.

1. Customers want …and need … efficient arrays.  What were once products are now features.

Start-up companies such as Data Domain [dedupe] acquired by EMC, Diligent [dedupe] acquired by IBM, Storwize [data compression] acquired by IBM, Compellent [automated storage tiering], XIV [grid] acquired by IBM …the list goes on and on …have either been acquired or seen their ‘unique’ products rolled into existing vendor products, as is the case with FAST [Fully Automated Storage Tiering] in EMC products.

2.  The four storage quadrants are collapsing to leave us with two primary solution variants: grid/scale out and unified.

The simple equation is that increased utilisation equals decreased costs.  We’ve seen Grid/Scale Out storage [e.g. EMC VMAX, IBM XIV] evolve from the Enterprise and Modular quadrants to address the ability to scale at cost, and Unified storage [e.g. NetApp] evolve from the Modular/NAS/Archive quadrants where customers don’t necessarily require massive scale out capabilities but would like a ‘Swiss Army knife’ approach with iSCSI, fibre channel, NAS, dedupe, compression, and the rest all included in the same storage product.  This is also an effort by the vendors to reduce their R&D costs by delivering fewer but much more efficient storage products.

3. The storage technologies which underpin VDC and the federated service provider model … or private/public cloud hybrid, if you prefer …are waypoints and not the final destination.

At the moment we treat data as either ‘block’ …I’ll whack the data on some block storage system like EMC VMAX or IBM XIV …or ‘file’ …I’ll whack the data on some file based storage like NetApp or EMC Celerra …but we’re rapidly heading towards data being treated as an object.  In other words, a true convergence of block and file based storage where we treat the data as an individual object as opposed to ‘speeds and feeds’.  However, we need ways in which to optimise our datacentre and storage environments today which reduce costs as well as provide a bridge to the future.  VDC and the federated service provider model is absolutely that bridge.
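To make the block versus object distinction concrete, here is a minimal sketch …hypothetical names, not any particular vendor’s API …of the difference between addressing data as blocks on a device and treating it as a self-describing object:

```python
import hashlib
from dataclasses import dataclass, field

# Block storage: the array only sees device addresses, never what the data means.
@dataclass
class BlockWrite:
    lun: int            # logical unit on the array
    lba: int            # logical block address
    payload: bytes      # raw bytes; no context travels with them

# Object storage: data, metadata, and a unique ID travel together, so policy
# (tiering, retention, access) can be applied per individual object.
@dataclass
class DataObject:
    payload: bytes
    metadata: dict = field(default_factory=dict)   # owner, retention, geography, etc.

    @property
    def object_id(self) -> str:
        # Content-derived ID; identical objects hash identically, which also
        # gives deduplication for free.
        return hashlib.sha256(self.payload).hexdigest()

doc = DataObject(b"Q3 billing run", {"owner": "finance", "retention_years": 7})
print(doc.object_id[:12], doc.metadata)
```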

4.  The infrastructure to support data will continue to evolve, but data as a construct will not.

Many IT departments truly believe that the barbarians are at the gates, with users seeking to access data whenever and however they wish on ‘unapproved’ mobile devices such as Apple iPads and Google Android phones.   Egads and heavens to Murgatroyd!  I understand the reasons for IT trying to provide high levels of support by restricting usage, but putting up walls by device to restrict data access is a very 1990s method of physical access control and, frankly, a fool’s game.  Nature …and users …always find a way around such barriers if they feel they can be more productive by acquiring their own access devices.  But you can protect the data, and that is where the security and restrictions should be placed in my opinion.  Indeed, we will see geotagging of data, with access and geolocation restrictions applied to the data objects themselves …for example, you can view the data in the UK but not when you leave our sunny clime …but I’ll save this for another blog post.  Equally, there may even be a case here for organisations to move what were CapEx costs …laptops, PCs, mobile phones …off the books by allowing employees to acquire and use their own devices.
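As a sketch only …function and field names are hypothetical, not a shipping product …object-level protection of the kind described above might look something like this:

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    classification: str
    allowed_countries: set      # where this data object may be viewed

def may_access(policy: DataPolicy, user_country: str) -> bool:
    # The restriction travels with the data object, not with the device:
    # any device may request the data, but geolocation gates the answer.
    return user_country in policy.allowed_countries

payroll = DataPolicy(classification="sensitive", allowed_countries={"UK"})
print(may_access(payroll, "UK"))   # True  - viewable in our sunny clime
print(may_access(payroll, "FR"))   # False - denied once you leave it
```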

-Matthew

Click here to contact me.


ROI calculators won’t save us … but data classification will.

15/10/2010

I had the pleasure to present yesterday at the HDS Information Forum in London.  Having flown with Moshe Yanai in his plane when I was in Israel, it was an honour to share the stage with another storage industry luminary in Hu Yoshida … and it’s always great to share a stage with one of my favourite Scots in Steve Murphy.  Now, if I can just figure out a way to share the stage with Dave Hitz, Paul Maritz, and Pat Gelsinger at some stage in my career I’ll have the full deck!

It was only as I was approaching where the event was being held that I realised this was the very hotel … indeed the very ballroom … where Mrs. PL and I got married seven years ago, although this time, instead of giving the groom’s speech, I presented ‘Cloud Isn’t A Four Letter Word: The Practicalities of VDC and the Federated Service Provider Model’.

The central premise of my 25 minute presentation was that cloud needn’t be a four letter word; however, I believe that the nomenclature ‘cloud’ is confusing at best as it doesn’t accurately describe the consumption and usage model.  Put simply, cloud as a marketing term is nothing more than the movement of an asset from within a customer environment to an external one … effectively trying to ‘move’ CapEx but not necessarily doing anything about the OpEx side of the house.

And this is where I have a real challenge with ‘cloud’ … at present course and speed it just looks too much like an economic shell game.  But it needn’t be this way.

Rather, I seek to make the case for addressing the entirety of the solution equation, including total cost of ownership, understanding that the acquisition cost an ROI calculation focuses on represents only 30% of TCO.

In other words, acquisition or CapEx represents only 30% of the TCO equation whereas OpEx represents the remaining 70%, and I believe that the practical application of the virtual datacentre as well as the federated service provider model can absolutely solve this equation.
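A quick worked example of that 30/70 split …the figures are illustrative only:

```python
def lifetime_tco(capex: float, capex_share: float = 0.30) -> tuple:
    """Estimate total cost of ownership when acquisition is ~30% of TCO."""
    tco = capex / capex_share
    opex = tco - capex
    return tco, opex

tco, opex = lifetime_tco(300_000)    # a £300k storage acquisition...
print(f"TCO = £{tco:,.0f}, of which OpEx = £{opex:,.0f}")
# TCO = £1,000,000, of which OpEx = £700,000 - the ROI calculator saw only the £300k.
```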

What’s this got to do with Data Storage & Protection?

Data creation, and the storage devices to store the data we’re creating, hold both the root cause of and the solution to ever increasing OpEx costs in my opinion.

1. Data creation is rising inexorably and has now reached the point where having humans physically ‘manage’ storage provisioning, the data lifecycle, and the like no longer makes economic sense.

A remarkable statistic is just how quickly the industry standard Compound Annual Growth Rate for data creation is rising for customers.  Just 12 to 18 months ago the standard was 40% …in other words, every two years you had roughly doubled the amount of data you held.  Now the industry standard is 60%, and in the three years I have been with Computacenter not a single storage assessment we have run for customers has shown even a 60% CAGR … the vast majority have shown 100% to as high as 212% for one retail customer.
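To put those growth rates in perspective, a quick bit of compound arithmetic …illustrative numbers, not customer data:

```python
import math

def multiple_after(cagr: float, years: int) -> float:
    """How many times over the data has grown after n years at a given CAGR."""
    return (1 + cagr) ** years

def doubling_time(cagr: float) -> float:
    """Years until the data volume doubles at a given CAGR."""
    return math.log(2) / math.log(1 + cagr)

for cagr in (0.40, 0.60, 1.00):
    print(f"{cagr:>4.0%} CAGR: x{multiple_after(cagr, 3):.1f} after 3 years, "
          f"doubles every {doubling_time(cagr):.1f} years")
#  40% CAGR: x2.7 after 3 years, doubles every 2.1 years
#  60% CAGR: x4.1 after 3 years, doubles every 1.5 years
# 100% CAGR: x8.0 after 3 years, doubles every 1.0 years
```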

2. It’s all virtual, baby.

Or at least it should all be virtual, baby.  Virtualise what?  Put simply … everything.  We’ve been working on and deploying virtual datacentres in Computacenter for 18 months and what we have learned, first and foremost, is that VDC is a business solution underpinned by technical components.  Yes, you can manage a storage array manually if you so desire …but if we introduce a grid based storage array, coupled with universal compute and networking components, we can automate provisioning of not only the data storage to support business users but indeed the whole workload.  Why would we want highly skilled technologists plumbing kit together when it would be of higher value for them to be working with the business, solving the alignment of data to business value?  But in the absence of a virtualised grid based pool of storage …and server compute, and network …automating this becomes virtually [no pun intended] impossible.  The more you automate, the higher the utilisation …and the lower the OpEx.

Equally, we can drive the OpEx down further as next generation storage arrays are becoming largely self tuning with automated storage tiering and much more efficient with the advent of storage compression, thin provisioning, and data deduplication.

3. Virtualise. Containerise. Mobilise.

Once we’ve virtualised the lot, we can containerise the workloads.  Containerisation is what allows us to provide this workload automation.  Rather than attempting levitation by harmonising data storage, servers, virtualisation, and networking separately, we can define the workload as a container with storage, network, and compute attributes.

But whereas we would be limited by workload type [e.g. database, email, web application] in a traditional ‘siloed’ solution, the VDC allows us to run multiple container workloads simultaneously.
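As a sketch of the idea …the attribute names here are hypothetical, not Computacenter’s actual schema …a containerised workload definition might look like:

```python
from dataclasses import dataclass

@dataclass
class WorkloadContainer:
    """One unit of automated provisioning: the whole workload, not one silo."""
    name: str
    vcpus: int              # compute attributes
    memory_gb: int
    storage_gb: int         # storage attributes
    storage_tier: str       # e.g. "ssd", "fc", "sata"
    vlan: int               # network attributes
    bandwidth_mbps: int

def provision(c: WorkloadContainer) -> None:
    # In a VDC, one request drives storage, compute, and network together.
    print(f"carving {c.storage_gb}GB of {c.storage_tier} from the grid pool")
    print(f"allocating {c.vcpus} vCPUs / {c.memory_gb}GB RAM")
    print(f"attaching VLAN {c.vlan} at {c.bandwidth_mbps}Mbps")

provision(WorkloadContainer("email", 8, 32, 2048, "fc", 120, 1000))
```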

To wit, we have been testing a VDC in our Solution Centre which can support 6,000 virtual machines, all from a footprint of effectively one datacentre row, or six 40U racks.  When you consider that it could take something like four to five rows of equipment to support the same number of virtual machines in a traditional datacentre, the OpEx optimisation begins to become much more apparent.

What VDC really allows us to do is optimise all three dimensions of a customer datacentre simultaneously such that we can deliver immediate cost benefit …perhaps as high as 30% to 50% CapEx avoidance and OpEx reduction …as well as provide a bridge to what some are calling private/public cloud federation but we prefer to call the federated service provider model.

4. Data classification will determine what data is kept inside the corporate VDC and what data gets shipped out to an external service provider.

When I’m discussing the federated service provider model …or private/public cloud hybrid, if you must …one of the inevitable questions relates to data security.  Now, not to be flippant, but why should we ever have to store sensitive data anywhere other than inside a customer VDC?

In a typical environment we see that 20% of the data is ‘business meaningful’ or structured data … ERP systems, customer databases, etc.  This leaves the remaining 80% as unstructured …email, PPTs, duplicate and dormant data which hasn’t been accessed in some time.

Why wouldn’t we connect the customer VDC to an external service provider such as Computacenter and allow for the unstructured data to be ‘shipped out’ where it can be stored more efficiently and at lower cost than internally?

Some are calling these the ‘stack wars’ but I don’t believe they are …rather, this is the API war as it will be the APIs which make this model possible.

But that’s a story …or presentation? …for another day.

Until then, have a great weekend.

We need business solutions to business problems.

09/10/2010

To a man with a beaker, everything is a solution.

Or so goes the sage advice of one of my university professors whose name I sadly seem to have forgotten, relegated to the mists of time and memory.

I am often reminded of this and, perhaps a more popular and well known version of the same sentiment, Abraham Maslow’s famous quote … ‘To a man with a hammer, everything looks like a nail.’ … when I’m meeting with customers as the conversation inevitably winds its way round to three common queries.

1. What do you think about storage vendor [x] versus vendor [y] versus vendor [z]?

2. We’re paying too much for storage specifically and/or IT generally.  We thought IT was meant to be a commodity which supported the business, yet costs feel wildly unpredictable. What would you recommend, and where should we start?

3. What made you think that that tie went with that shirt?

Oh dear.  Where to begin?

Number three is the easiest to deal with as I just need to ensure that I turn the lights on when I dress in the morning or, better still, let Mrs. PL choose my ensemble.

As for numbers one and two, well … they can be slightly more challenging.

When tackling number one, I am as candid as I can be about what we know from experience in deployment and testing, both in the field and in the Solution Centre, as well as my own opinions based on personal research and visits to our vendor partners’ development facilities … understanding that the vendors frequently FUD one another, which is to be expected and always somewhat suspect.  Equally, it is worth bearing in mind that there are no silver bullets nor are there perfect solutions … at least I haven’t come across any in the 15+ years I’ve worked in technology and the 30+ years I’ve been around IT.  Indeed, when I used to go to work with my father it wasn’t called IT but ‘data processing’.

As for the second, the conversation will almost inevitably involve ‘Should I virtualise my servers?’, ‘Should I virtualise my storage?’, ‘Should I thin provision, deduplicate data, archive data, deploy grid storage, consider federated storage … ‘.  Unfortunately the answer is always …yes.  I do recognise it can be frustrating to hear … and I’m trying desperately to ensure that it doesn’t come across as flippant … when I know full well that what many folks want is a direct answer and an order to follow to solve what is arguably a three dimensional issue.

Ultimately what this all boils down to is that technology has largely become a three dimensional challenge, as I discussed last week.  What our customers are asking us for is not technical jargon, nor do they want to watch us throw technology at a business issue; rather, they want us to proffer business solutions to business problems.

What’s this got to do with Data Storage & Protection?

I’m sometimes criticised for not getting to the point quickly enough, or for circular speech.  Fair enough.  But, in my defence, when faced with three dimensional business issues …if I recommend grid storage with a single storage media type but don’t take into account your future VDI and data mobility aspirations, for example …simply throwing a two dimensional solution at the problem is not going to get us where we need to be, no matter how pretty the PowerPoint slides.  These things need to be thought through and discussed, and that takes time …and frequently a glass of wine or two.

So what to do?

1. Magic quadrants are good, but working equations are better.

We do use a consultancy equation …ROI + CBA + DPB = CSS …which attempts to answer the ‘what to do next’ and ‘what’s the best solution’ questions for three dimensional business issues.  The composite score then points us towards the storage and technology most appropriate to underpin said solution.
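Purely as an illustration …the actual definitions and weightings behind ROI + CBA + DPB = CSS are Computacenter’s, and the ones below are invented for the sketch …a composite solution score might be computed along these lines, assuming each input has been normalised to a 0–100 scale:

```python
def composite_solution_score(roi: float, cba: float, dpb: float,
                             weights=(0.40, 0.35, 0.25)) -> float:
    """ROI + CBA + DPB -> CSS, as a weighted sum of normalised (0-100) inputs.
    The weights are hypothetical; the real equation's weightings aren't published."""
    return weights[0] * roi + weights[1] * cba + weights[2] * dpb

# Score two candidate solutions and compare their composites:
grid_storage = composite_solution_score(roi=70, cba=80, dpb=60)
unified      = composite_solution_score(roi=85, cba=65, dpb=75)
print(f"grid: {grid_storage:.0f}, unified: {unified:.0f}")
```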

2. The Computacenter Virtual Datacentre [VDC] solution is a three dimensional business solution.

VDC seeks to solve business issues by increasing business agility, automating highly repeatable tasks, and optimising all aspects of a datacentre to reduce CapEx/OpEx costs by 30% to 50% … and we’ve been working on and have experience with VDC for over 18 months, long before others were even thinking about such solutions.  Don’t believe me?  Have a look at the date stamp on the Automated Storage Provisioning demo video … it reads 25 March 2009 if you don’t feel like clicking the link.

3. Vendors are rushing to create silver bullets as quickly as they can.

VCE vBlock, Oracle Exadata/Exalogic, NTAP IVA, HDS UCP, IBM Dynamic Infrastructure, HP Converged Infrastructure … it doesn’t really matter what marketing decides to call it, at no point in my technology career have I seen vendors spend this amount of effort trying to create complete datacentre silver bullets for customer business issues.  I’m not saying this is good or bad as it is still too early to tell, but the concept does seem to be resonating with customers.

4.  If you don’t want to go the VDC route just yet, introduce a 3D storage solution.  HDS is trying to create just such a solution: one which scales out, up, and deep.

HDS announced their Virtual Storage Platform this week, effectively replacing the USP V.

HDS VSP page level tiering allows a customer to create a pool of storage which in turn enables 3D tiering: scale up, scale out, scale deep …each described below, with a sketch of the placement logic to follow.

Scale up: pooled storage media [FC, SATA, SAS, SSD] allows the VSP to locate the data on the most appropriate tier based upon business needs [e.g. my workload needs faster response during our corporate end of quarter billing] in an automated fashion such that workloads remain performant with zero or minimal administrative action as well as zero intrusion to the users.

Scale out: expand workloads automatically to accommodate greater storage requirements [e.g. users are creating more data so we need to expand the workload container], again in an automated fashion with zero or minimal administrative action as well as zero intrusion to the users.

Scale deep: demote/move data to denser storage for long term retention at lower cost when long term retention has become more important than performance [e.g. move workload to dense but less performant SATA], again with zero or minimal administrative action as well as zero intrusion to the users.
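Here is a toy sketch of that placement logic …my illustration of the concept, emphatically not HDS’s actual page-tiering algorithm:

```python
def place_page(accesses_per_day: int, business_priority: bool,
               retention_driven: bool) -> str:
    """Decide which tier a page of data should live on (illustrative rules only)."""
    if retention_driven:
        return "sata"        # scale deep: retention now outweighs performance
    if business_priority or accesses_per_day > 1000:
        return "ssd"         # scale up: hot pages promoted, e.g. quarter-end billing
    if accesses_per_day > 10:
        return "fc/sas"      # warm working set stays on mid-tier media
    return "sata"            # cold pages demoted, no administrative action needed

print(place_page(5000, business_priority=True, retention_driven=False))  # ssd
print(place_page(2, business_priority=False, retention_driven=True))     # sata
```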

Does the HDS VSP work and is this the 3D answer to data storage?  Is page level tiering better than block level automated tiering?

The page level tiering allows a customer to leverage existing storage arrays and their previous storage investments, so there is a valid business case for page tiering.  However, to be honest we haven’t received our test unit into the Solution Centre yet, so I don’t want to offer an opinion until Bill McGloin and the CP Data team have finished their evaluation putting the VSP through its paces.

But watch this space as I think 2011 is going to be very interesting indeed as 3D solutions such as VDC begin to find their way into corporate datacentres.

Have a great weekend,

-Matthew

Click here to contact me.

The future is 3D and the future is now.

03/10/2010

It feels like quite a long time since I blogged just prior to going on holiday with Mrs. PL and PL Junior, so please do forgive this Weekly View being a) somewhat late and b) out of sequence.  I am back and into the full swing of things, with the Weekly View commencing each Friday again from this week.

Many interesting things have happened since I went on holiday; here are just some that caught my attention … PL Junior started reception, which left me wondering where the time goes; Lloyds made the front page of the weekend FT by piloting iPads; Paul Maritz [CEO, VMware] stated that ‘in 2009 organisations deployed more virtual machines [VMs] than physical machines for the first time’; and let’s not forget the $2.4b tussle between HP and Dell over 3PAR … with HP winning the tug of war.

Now, I have to make a confession here … I hate packing and unpacking possibly more than anything I can think of and will do just about anything to not have to do it … to wit, when Mrs. PL and I got married I suggested we throw our honeymoon clothes into suitcases dirty and then have housekeeping launder them when we got there.  Made sense to me … no hassle packing, unpacking, and we’d have clean and neatly pressed clothes delivered to our room!  Mrs. PL was less enamoured with this idea and offered some suggestions of her own for my packing which I’m still not sure are anatomically possible.

What’s this got to do with Data Storage & Protection?

I don’t like packing for two primary reasons; 1) I’m rubbish at packing and can never seem to get things packed properly, and 2) what looks like a large suitcase never seems to hold what it should.

Enter Mrs. PL who has a PhD in packing, bringing order to chaos and filling every cubic centimetre of our suitcases.  Which got me thinking … packing is a three dimensional problem and what Mrs. PL does so exceptionally well is to bring three dimensional solutions to this three dimensional problem.

I believe the exact same thing … 3D solutions for 3D issues … is happening with technology generally and with data storage specifically.

1.  Customers tend to reap only 40% utilisation from their storage infrastructures.

Customers want to get 100% utilisation from their storage infrastructures every bit as much as Mrs. PL wants to ensure she has used every cubic centimetre of a suitcase wisely.  When it comes to storage inefficiencies, there are numerous reasons: fat provisioning, orphaned volumes, duplicate data, dormant data which hasn’t been accessed in days/weeks/months/years … even inefficiencies within the storage arrays themselves.  The past two years have seen quite a consolidation of technologies such that the vast majority of tier one storage vendors have worked to introduce thin provisioning, data deduplication/data compression, and automated storage provisioning/tiering into their storage arrays and offerings to increase utilisation.  In essence, introducing what were once products in their own right as features of mainstream data storage.  Why?  Put simply, increasing utilisation decreases costs … and as customers continue to store more and more data they require the highest utilisation possible to avoid excessive storage costs.
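The arithmetic behind ‘increasing utilisation decreases costs’ is straightforward …the prices below are illustrative only:

```python
def cost_per_usable_tb(price: float, raw_tb: float, utilisation: float) -> float:
    """What a terabyte actually costs once wasted capacity is counted."""
    return price / (raw_tb * utilisation)

price, raw_tb = 100_000.0, 100.0    # £100k for 100TB raw (illustrative)
for util in (0.40, 0.80):
    print(f"at {util:.0%} utilisation: "
          f"£{cost_per_usable_tb(price, raw_tb, util):,.0f} per usable TB")
# at 40% utilisation: £2,500 per usable TB
# at 80% utilisation: £1,250 per usable TB - the same kit at half the effective cost
```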

2.  Three dimensional problems require three dimensional solutions.

Virtualising a server, introducing dedupe into backups, thin provisioning a few volumes … each on their own is a two dimensional solution.  Whilst two dimensional solutions will reduce costs somewhat, it is only when they are coupled holistically into three dimensional solutions that true cost reductions, both immediately and for future growth, can be achieved.  The Computacenter Virtualised Datacentre [VDC] solution is a three dimensional solution, seeking to holistically optimise the network, platform, hypervisor, storage, and automation such that OpEx and CapEx costs can be reduced by as much as 30% to 50% or more, both immediately and for future workload creation and data retention.

3.  3D solutions, whether at the datacentre level or storage level, are business solutions and not technical solutions.

It is true that VDC is made up of technical components such as hypervisors, universal compute, virtualised 10Gb Ethernet, grid storage, and automation … however VDC isn’t a technical solution.  It is a business solution which seeks to reduce costs by optimising and reducing wastage between components and automating highly repeatable tasks such as server and storage provisioning, all with a view to aligning technology [IT] to business value.  Why is this important?  Put succinctly, when calculating Total Cost of Ownership [TCO], only 30% of TCO is represented in acquisition cost … the remaining 70% is OpEx.  Many tools and technologies have focussed heavily on immediate return [e.g. TCO calculators, data dedupe]; however, the real long term cost savings remain in OpEx reduction.  Helping an IT department optimise OpEx should return significant long term value and cost reductions, and we’ve spent a lot of time putting science behind this such that we can underwrite the business benefits as opposed to peddling the marketing hype.  I’ve written at length about how and why VDCs help internal IT departments transition/evolve into internal service providers, so I won’t rehash that now, but click here and here if you’d like to revisit these posts.

4.  Using commoditised hardware in data storage and treating data storage as a commodity are not the same thing.

Grid or ‘scale out’ systems such as IBM XIV and EMC VMAX form the basis of the Virtualised Datacentre … and cloud computing.  The secret to scale out systems is that they use commoditised hardware … Intel chips; SATA, SAS, and SSD drives … and use software to manage data placement and automated tiering.  However, this isn’t the same thing as treating data storage as a commodity and buying ‘good enough’ storage at the lowest price to store yet more data, as this strategy is, generally speaking, what leads to low storage utilisation and high IT costs.  These next generation arrays represent the introduction of true business intelligence into the storage fabric, seeking to store data as efficiently as possible from creation throughout its lifecycle.  Indeed, without scale out storage, VDI, service catalogues, automated provisioning, cloud computing, et al wouldn’t be possible at costs low enough to help organisations overcome the inertia of how storage has traditionally been purchased and allocated.

5.  If three dimensional solutions are the future of technology and the key to significant cost reductions, why not introduce them into data storage directly?

A very good question, and certainly one that HDS have put to the market and customers with their recent Virtual Storage Platform [VSP] announcement.  Whereas the arrays mentioned in item 4 are more intelligent than traditional storage arrays, it could be argued that one must connect them to other components to achieve a 3D solution.  HDS would argue that their new VSP introduces 3D storage … scale out, scale up, scale deep … into the market for the first time.

I’ll be tackling the HDS VSP announcement in my next blog post, giving you my thoughts about how I think it stacks up to competitive technologies and solutions.