Archive for the ‘Virtualised Datacentre’ Category

ROI calculators won’t save us … but data classification will.

15/10/2010

I had the pleasure to present yesterday at the HDS Information Forum in London.  Having flown with Moshe Yanai in his plane when I was in Israel, it was an honour to share the stage with another storage industry luminary in Hu Yoshida … and it’s always great to share a stage with one of my favourite Scots in Steve Murphy.  Now, if I can just figure out a way to share the stage with Dave Hitz, Paul Maritz, and Pat Gelsinger at some stage in my career I’ll have the full deck!

It was only as I was approaching where the event was being held that I realised this was the very hotel … indeed the very ballroom … where Mrs. PL and I got married seven years ago, although this time, instead of giving the groom’s speech, I presented ‘Cloud Isn’t A Four Letter Word: The Practicalities of VDC and the Federated Service Provider Model’.

The central premise of my 25 minute presentation was that cloud needn’t be a four letter word; however, I believe that the nomenclature ‘cloud’ is confusing at best as it doesn’t accurately describe the consumption and usage model.  Put simply, cloud as a marketing term is nothing more than the movement of an asset from within a customer environment to an external one … effectively trying to ‘move’ CapEx but not necessarily doing anything about the OpEx side of the house.

And this is where I have a real challenge with ‘cloud’ … at present course and speed it just looks too much like an economic shell game.  But it needn’t be this way.

Rather, I seek to make the case for addressing the entirety of the solution equation, including total cost of ownership, understanding that the acquisition cost which ROI calculators address represents only 30% of TCO.

In other words, acquisition or CapEx represents only 30% of the TCO equation whereas OpEx represents the remaining 70% … and I believe that the practical application of the virtual datacentre, together with the federated service provider model, can absolutely solve this equation.
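For the geeks amongst you, a quick back-of-the-envelope shows why attacking CapEx alone can’t win.  The 30/70 split is from above; the £1m TCO figure is purely an assumption for illustration.

```python
# Why moving CapEx without touching OpEx barely dents the total cost of
# ownership. The 30/70 split is from the post; the £1m TCO is an assumption.
tco = 1_000_000
capex = 0.30 * tco   # acquisition
opex = 0.70 * tco    # operations: people, power, space, management

# Even if 'cloud' halved the acquisition cost but left operations untouched:
new_tco = 0.5 * capex + opex
print(f"TCO reduction: {1 - new_tco / tco:.0%}")  # -> 15% ... hardly transformative
```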

What’s this got to do with Data Storage and Protection?

Data creation, and the storage devices we buy to store the data we’re creating, hold both the root cause of, and the solution to, ever increasing OpEx costs in my opinion.

1. Data creation is rising inexorably and has now reached the point where having humans physically ‘manage’ storage provisioning, the data lifecycle, and the like no longer makes economic sense.

A remarkable statistic is just how quickly the industry standard Compound Annual Growth Rate for data creation is rising for customers.  Just 12 to 18 months ago the standard was 40% … in other words, your data doubled roughly every two years.  Now the industry standard is 60%, and in the three years I have been with Computacenter not a single storage assessment we have run for customers has shown even a 60% CAGR … the vast majority have shown 100%, rising as high as 212% for one retail customer.
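If you fancy checking the arithmetic, here’s a minimal sketch of what those growth rates mean in practice.  The CAGRs are the ones quoted above; the compound growth maths is standard.

```python
# What a data-creation CAGR translates to: total growth over three years and
# the doubling time. The rates are those quoted in the post.
import math

def growth_multiple(cagr: float, years: float) -> float:
    """Total growth multiple after `years` at a given CAGR (0.40 = 40%)."""
    return (1 + cagr) ** years

def doubling_time(cagr: float) -> float:
    """Years until the data volume doubles at a given CAGR."""
    return math.log(2) / math.log(1 + cagr)

for cagr in (0.40, 0.60, 1.00, 2.12):
    print(f"{cagr:>5.0%} CAGR: x{growth_multiple(cagr, 3):4.1f} in 3 years, "
          f"doubles every {doubling_time(cagr):.1f} years")
# 40% doubles roughly every two years; at 212% you double about every seven months.
```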

2. It’s all virtual, baby.

Or at least it should all be virtual, baby.  Virtualise what?  Put simply … everything.  We’ve been working on and deploying virtual datacentres in Computacenter for 18 months and what we have learned is that, first and foremost, VDC is a business solution underpinned by technical components.  Yes, you can manage a storage array manually if you so desire … but if we introduce a grid based storage array, coupled with universal compute and networking components, we can automate provisioning of not only the data storage to support business users but indeed the whole workload.  Why would we want highly skilled technologists plumbing in kit when they would be of higher value working with the business, solving the alignment of data to business value?  But in the absence of a virtualised grid based pool of storage … and server compute, and network … automating this becomes virtually [no pun intended] impossible.  The more you automate, the higher the utilisation … and the lower the OpEx.

Equally, we can drive the OpEx down further as next generation storage arrays are becoming largely self tuning with automated storage tiering and much more efficient with the advent of storage compression, thin provisioning, and data deduplication.

3. Virtualise. Containerise. Mobilise.

Once we’ve virtualised the lot, now we can containerise the workloads.  Containerisation is what allows us to provide this workload automation.  Rather than attempting levitation by harmonising data storage, servers, virtualisation, and networking separately, we can define the workload as a container with storage, network, and compute attributes.

But whereas we would be limited by workload type [e.g. database, email, web application] in a traditional ‘siloed’ solution, the VDC allows us to run multiple container workloads simultaneously.
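To make the container idea concrete, here’s a hedged sketch.  The attribute names are illustrative assumptions of mine, not any vendor’s actual schema.

```python
# A workload described once as a 'container' with storage, network, and
# compute attributes, which the VDC automation then provisions as a unit.
# Field names are illustrative assumptions, not a vendor schema.
from dataclasses import dataclass

@dataclass
class WorkloadContainer:
    name: str            # e.g. "order-entry-db"
    workload_type: str   # database, email, web application ...
    vcpus: int           # compute attributes
    memory_gb: int
    storage_gb: int      # storage attributes
    storage_tier: str    # e.g. "grid-ssd", "sata-archive"
    vlan: int            # network attributes
    bandwidth_mbps: int

def provision(c: WorkloadContainer) -> None:
    """Stand-in for the automation layer: a real VDC would drive the storage
    grid, network, and compute APIs together from this one definition."""
    print(f"Provisioning {c.name}: {c.vcpus} vCPUs, {c.memory_gb}GB RAM, "
          f"{c.storage_gb}GB on {c.storage_tier}, VLAN {c.vlan}")

# Multiple container workloads deployed side by side on the same VDC.
for c in (
    WorkloadContainer("order-entry-db", "database", 8, 64, 2048, "grid-ssd", 101, 1000),
    WorkloadContainer("corporate-mail", "email", 4, 32, 4096, "grid-sas", 102, 500),
):
    provision(c)
```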

To wit, we have been testing a VDC in our Solution Centre which can support 6,000 virtual machines from a footprint of effectively one datacentre row, or six 40U racks.  When you consider that it could take four to five rows of equipment to support the same number of virtual machines in a traditional datacentre, the OpEx optimisation begins to become much more apparent.

What VDCs really allow us to do is optimise all three dimensions of a customer datacentre simultaneously such that we can deliver immediate cost benefit … perhaps as high as 30% to 50% CapEx avoidance and OpEx reduction … as well as provide a bridge to what some are calling private/public cloud federation but we prefer to call the federated service provider model.

4. Data classification will determine what data is kept inside the corporate VDC and what data gets shipped out to an external service provider.

When I’m discussing the federated service provider model …or private/public cloud hybrid, if you must …one of the inevitable questions relates to data security.  Now, not to be flippant, but why should we ever have to store sensitive data anywhere other than inside a customer VDC?

In a typical environment we see that 20% of the data is ‘business meaningful’ or structured data … ERP systems, customer databases, etc.  This leaves the remaining 80% as unstructured … email, PPTs, and duplicate and dormant data which hasn’t been accessed in some time.

Why wouldn’t we connect the customer VDC to an external service provider such as Computacenter and allow for the unstructured data to be ‘shipped out’ where it can be stored more efficiently and at lower cost than internally?
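As a worked example of that split … the 20/80 proportions are from above; the estate size is purely an assumption.

```python
# The 20/80 classification split applied to an assumed 500TB estate:
# structured data stays inside the corporate VDC, unstructured data is a
# candidate to be shipped to an external service provider.
total_tb = 500
structured_tb = 0.20 * total_tb    # ERP, customer databases, etc.
unstructured_tb = 0.80 * total_tb  # email, PPTs, duplicates, dormant data

print(f"Keep inside the corporate VDC: {structured_tb:.0f}TB")
print(f"Candidate to ship out: {unstructured_tb:.0f}TB")
```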

Some are calling these the ‘stack wars’ but I don’t believe they are …rather, this is the API war as it will be the APIs which make this model possible.

But that’s a story …or presentation? …for another day.

Until then, have a great weekend.

When did we start thinking that anything less than 100% utilisation was okay?

20/08/2010

This will be my last Weekly View for a fortnight as it is that time of year again …holiday!

My fellow Practice Leaders, Paul Casey and Kevin Ebbs, will be enlightening, educating, and entertaining you …all in equal measure, no doubt … with the next few Weekly Views as I recharge my batteries with Mrs. PL and PL Junior.

Before I sign off and help finish the packing, there were a few interesting things that happened over the past few weeks that I thought I would share with you.

The first is that we’ve launched our Optimised Environment VDC application on the iTunes store as a free of charge download.  I won’t go into this too much as I’ve spoken about this app a few times and even made a demo video, but I did want to thank Nikki Clift and Anna Rosenberg for their help and perseverance in making this app a reality.  It isn’t designed to win any technology or business ‘arguments’ but, rather, to serve as a tool whereby one can easily navigate the great collateral we have for Optimising Environments, Virtualised Datacentre, and the like.  I, for one, believe this helps to set us distinctly apart from our competition in the services & solutions provider space as we’re articulating that it isn’t our IP that makes us unique but our ability to execute efficiently and delight our customers with market leading solutions such as VDC and cost benefit underwriting.

The second was that Mrs. PL and I attended a ceremony which celebrates an engagement, called a vort.  Now, this is a ceremony that has been going on for hundreds of years … possibly longer … but is just as relevant today as it was when it was first thought up, and is one of my favourites.  There are many interpretations and much symbolism in the ceremony, but put simply you are firstly giving your word to marry … vort literally means ‘word’ in Yiddish … but it also serves to remind both of the MiLs [mums-in-law] that they must be careful with the words they use when their son/daughter comes to seek counsel/complain about their significant other after they are married.  How is this message achieved?  During the vort a plate is broken, with each of the MiLs holding opposite ends of the plate … signifying that, once broken, it is impossible to put the plate back together again fully, and that each MiL has a responsibility for the structural integrity of the plate and for the counsel they give.

Yes, you can glue it back together … but there will always be the little shards you couldn’t find and it will never look quite the same.  And who wants to eat dinner off a plate glued back together, anyway?!  And so it is with marriage.

What’s this got to with Data Storage & Protection?

I’ve been a geek for possibly as long as I’ve been alive … Mrs. PL jokes that I had corduroy patches on my pyjamas … and I recall going with my father to work on the weekends where he retensioned the reel-to-reel backup tapes whilst I played Star Trek on the mainframe.  I was about seven.  Flash forward to my first IT project … a three tier client server project to expand an order entry system from a closed mainframe to an ‘open’ system with CICS, OS/2 Warp 4 servers, and Win95 workstations.

Told you I was a geek, not that I thought any of you doubted me.  But I’ve also spent a fair amount of my life around mainframes.

I’ve always had a soft spot for mainframes and never thought they really got the kudos they deserved, but as with anything in technology …if you wait long enough, the pendulum will swing in the opposite direction and you’ll see the same ideas come back again to solve business issues.

As we democratised technology away from the mainframe and into ‘open’ systems, our job as technologists was to make the pot and allow others to determine how to fill it.  Would Facebook, Twitter, Google, or Wikipedia have happened if the inventors had to submit the jobs on punch cards and provide business justification for the output?  But as data volumes have increased and corporate IT departments have had to scale to meet demand we’ve begun to see how much waste there can be in ‘open’ and distributed technology infrastructures.

Just as we would find it difficult to glue the plate back together if we broke it, so too have we found that it has become costly at scale once we broke the mainframe apart into separate network, storage, computing, and software components.

Now, this isn’t a ‘let’s go back to the mainframe’ rant.  There are many technical advantages to open and distributed infrastructures, but what we’re seeing now with the virtualised datacentre is a return to a mainframe-like device capable of supporting tens of thousands of users from a very dense and efficient footprint.

When did we start thinking that anything less than 100% utilisation on a platform was okay?

I suppose our having had PCs in our homes and on our desktops for almost the past twenty or so years has conditioned us to believe that if we approach 100% CPU, disk, or memory utilisation … bad things happen.  PC slowdowns, mouse won’t move, you know the score.  But mainframes typically operate at 100% utilisation … that’s what they are designed to do!  Run multiple workloads on the same infrastructure efficiently.  Yet we know that the average server utilises less than 15% of its total CPU and that storage infrastructures operate at best at 40% utilisation.  Put simply, VDC isn’t necessarily a new idea but, rather, a revisiting of the mainframe: by virtualising the servers, network, and storage simultaneously we can run an infrastructure supporting 60,000 or more users with just five 40U racks of equipment versus the five or more rows of equipment this would take in a traditional distributed infrastructure.
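A quick back-of-the-envelope using those utilisation figures shows why this matters.  The 85% target is my assumption of a well run virtualised platform, not a quoted figure.

```python
# Why 15% average server CPU utilisation is so costly: consolidating onto a
# platform assumed to run at 85% absorbs several servers' work per host.
avg_server_cpu_util = 0.15   # figure quoted in the post
target_util = 0.85           # assumption for a virtualised platform

consolidation_ratio = target_util / avg_server_cpu_util
print(f"Roughly {consolidation_ratio:.1f} servers' worth of work per host")
# -> about 5-6x, before counting the storage and network efficiencies.
```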

Equally, whilst I recognise that there are some that view VDCs as being ‘cutting edge’ technology, they are by no means bleeding edge … indeed, mainframes have been containerising workloads and operating at or near 100% utilisation for the past forty-five years.  And the mainframe space is about to get even more interesting as you can run Linux and related workloads on an IBM z196 mainframe.

But I don’t want to steal Martin Boake’s thunder, so if you’ve questions about that or are interested in a mainframe for workloads contact Martin.  He and the team we acquired from Thesaurus are a great bunch of guys and have probably forgotten more about mainframes than most people …myself included …will ever know.

Right …I think that does it for right now and remember, the next time you pick up a plate or consider breaking a workload out …there’s probably a much more efficient way to do that without you having to go and pick up the pieces!

Have a great weekend and see you in a few weeks.

-Matthew
Click here to contact me.

Boil the kettle, data rationalisation and reduction could take a while.

23/07/2010

UPDATE Mon 26 July: My interview with The BBC World Service ‘The World Today’ programme covering this topic aired this morning.  Click here for the full thirty minute podcast, or here for just my interview excerpt.

I thought perhaps I would begin this Weekly View with a quick experiment … now, you’ll need a kettle for this exercise … and in this context we need this to be a kettle of the electric variety … so if you don’t have one, or are reading this blog somewhere where ‘lectric kettles may be a foreign concept, here’s a picture of one which will suffice for now.

Okay, ready?  Great.  Now, I’d like you to go and boil your kettle seventeen and a half times.  It’s okay, I know it takes a bit for each boil.  I’ll wait.  See you in a few minutes …or if you’re feeling generous, mine’s a PG Tips with milk and two Splendas.

Right … all done?  Great!  You’ve just produced as much greenhouse gas as you would by sending an email with a 4.7 megabyte attachment.

That’s right, campers … boiling your kettle 17.4 times consumes as many resources (electricity, water, and the like) and produces as much greenhouse gas as sending an email with a 4.7MB attachment in a traditional IT environment.

Source: Life-cycle analysis carried out by Mark Mills of Digital Power Group, translated into kettle boilings with help from the Energy Saving Trust [UK].
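Before we get to the why, it’s worth seeing what that equivalence implies for everyday attachments.  A throwaway converter follows, assuming the 17.4 boils per 4.7MB figure scales linearly … a simplification of the underlying life-cycle study … and the per-recipient scaling is my assumption, not the study’s.

```python
# Kettle-boil equivalent of emailing an attachment, scaling the post's
# 17.4-boils-per-4.7MB figure linearly (a simplifying assumption), and
# per recipient (my assumption, not the study's).
BOILS_PER_MB = 17.4 / 4.7   # ~3.7 boils per megabyte sent

def attachment_to_boils(size_mb: float, recipients: int = 1) -> float:
    return size_mb * BOILS_PER_MB * recipients

print(f"{attachment_to_boils(4.7):.1f} boils")               # the post's example: 17.4
print(f"{attachment_to_boils(10, recipients=25):.0f} boils") # 10MB to a 25-person list
```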

Now, I know what you’re thinking as I was thinking the same thing when I first read that statistic …what?  How can this be?!

Without getting overly geeky on the topic, the short answer is that traditional IT environments tend not to be terribly efficient at scale.  We’ve known for quite some time that the traditional IT infrastructure … server plus storage plus network plus operating system plus application … tends to be siloed, with each individual component connected physically to the others and with wastage both between these connections and within the physical devices themselves.

And, to be fair, traditional datacentres don’t fare much better … indeed, the datacentre industry as a whole has reached parity with the airline industry in CO2 production, with 2% of all man-made CO2 coming from computers and communications technology.

Source: Green IT: A New Industry Shock Wave by Simon Mingay [Gartner]

What’s this got to do with Data Storage and Protection?

I suppose that there is the obvious ‘think before you send emails with 4.7 meg attachments’.  I’m somewhat bemused …well, saddened really …that with the green revolution of the past ten years or so I now get emails with the tagline ‘Think before you print!’ with pretty green trees from just about everyone these days.  But what about having a tagline which gently asks the user …’Do you really need to send this, and, if so …please consider an alternative method rather than sending the attachment.’  Or, ‘Think before you send!’ for short.

Email has been the bane of many a storage administrator’s life as it has morphed from a purely text based system …remember SNDMSG, CICS, and OS/400? …to the rich text and inefficient file distribution model we find today.  Why do people attach large files to email and send them to a large distribution list?  I suppose the short answer is …it’s easy and they would argue they’ve more important things to be getting on with.  Fair enough.

But this isn’t a blog post having a whack at email and email vendors … and we should consider the fact that the ubiquity of smart phones, high definition cameras, et al means we’ll continue to create ever larger files.  Indeed, we’re now uploading 24 hours’ worth of video to YouTube every minute, up from 12 hours a minute just two years ago.  So how do we reduce the amount of resources we’re consuming … electricity, datacentre space, and the people to run the show all cost dosh, you know! … and the CO2 we’re creating when we need to share files with others?

I think there are five answers which are worth considering.

1. Introduce proactive limits for users.

Let’s face facts, reactive limits with users tend not to work and/or are quickly circumvented to keep the business moving.  Telling users ‘your email mailbox cannot grow beyond 20MB or we’ll cut you off so you can’t send/receive email’ rarely works in my experience.  Rather, we need to evolve this theory to be proactive.

For example, I use a great cloud based application called Evernote.  I could write a whole post on just how great this app is … it allows me to take notes anywhere I am on my MacBook Air, iPod, or Blackberry and keeps the notes and related notebooks in sync so that wherever I am, all of my notes are up to date without me having to do anything.  Brilliant.

But here’s where it gets even better … it’s free.  Provided I don’t exceed a 20MB monthly limit, of course … and therein lies the true genius in my mind.  Evernote resets my 20MB limit at the beginning of each month so, providing I don’t exceed the 20MB in a month … sorted!  This is the type of proactive limit I’m thinking of for users … we give you a limit and then count down monthly to zero.  Stay within your limits and you’re good to go … exceed them and we charge you more on a graduated basis.
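For the technically minded, a minimal sketch of such a proactive limit follows.  The 20MB allowance and the graduated rate are illustrative assumptions.

```python
# An Evernote-style proactive limit: a monthly allowance that resets, with a
# graduated charge only on the portion above the limit. Values are illustrative.
class MonthlyQuota:
    def __init__(self, limit_mb: float = 20.0, rate_per_mb: float = 0.10):
        self.limit_mb = limit_mb
        self.rate_per_mb = rate_per_mb
        self.used_mb = 0.0

    def upload(self, size_mb: float) -> float:
        """Record an upload; return the graduated charge, if any."""
        over_before = max(0.0, self.used_mb - self.limit_mb)
        self.used_mb += size_mb
        over_after = max(0.0, self.used_mb - self.limit_mb)
        return (over_after - over_before) * self.rate_per_mb

    def monthly_reset(self) -> None:
        """Start of each month: the countdown begins again."""
        self.used_mb = 0.0

q = MonthlyQuota()
print(q.upload(15))   # 0.0 -> within the limit
print(q.upload(10))   # 0.5 -> 5MB over, charged at £0.10/MB
q.monthly_reset()     # new month, clean slate
```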

2. Rethink service levels for workloads.

So what are Evernote doing with the 20MB that I created last month?  It doesn’t get deleted from my synchronised notes as they remain available to me, so what gives?  To be honest, I’m not quite sure … my guess would be they move the data previously created to a lower tier of storage, such as dense 2TB SATA drives, or even archive.

To be fair, I don’t much care.  I don’t notice any performance degradation and I get to carry on consuming the service for free.

Perhaps this is the key to the answer with our users … we’ll keep your data in a highly performant architecture for one month, demote it to a lower, less performant tier thereafter, and reset your limit.  And we won’t ‘charge’ you unless you exceed your limit.
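A hedged sketch of what that demote-and-reset service level might look like … the tier names and windows are my assumptions, and I genuinely don’t know what Evernote actually do.

```python
# Age-based tier demotion: this month's data stays on the fast tier; older
# data drops to dense SATA or archive. Tiers and windows are assumptions.
from datetime import datetime, timedelta

def assign_tier(created: datetime, now: datetime) -> str:
    age = now - created
    if age <= timedelta(days=30):
        return "tier1-performant"
    if age <= timedelta(days=365):
        return "tier2-sata"       # e.g. dense 2TB SATA drives
    return "archive"

now = datetime.now()
print(assign_tier(now - timedelta(days=3), now))    # tier1-performant
print(assign_tier(now - timedelta(days=90), now))   # tier2-sata
print(assign_tier(now - timedelta(days=400), now))  # archive
```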

3. Introduce datacentre technology optimisation in the form of virtualised datacentres [VDCs].

I’ve talked a lot about VDCs in previous posts starting with this one and many more since, so there’s no reason for me to labour the point more here other than to say that what VDCs deliver is optimisation by removing wastage as well as increasing business agility.

How much optimisation?  Chad Sakac, Vice President of VMware Strategic Alliance and the general in charge of ‘Chad’s Army’ of VCE vBlock gurus, blogged in 2009 about the potential benefits of deploying a vBlock against deploying technology silos.  An excerpt follows below:

  • 30% increase in server utilization (through pushing vSphere 4 further, and denser memory configurations)
  • 80% faster dynamic provisioning of storage and server infrastructure (through EMC Ionix UIM, coupled with template-oriented provisioning models with Cisco, VMware, and EMC)
  • 40% cost reduction in cabling (fibre / patch cords etc.) and associated labor (through extensive use of 10GbE)
  • 50% increase in server density (through everything we do together – so much it’s too long to list)
  • 200% increase in VM density (through end-to-end design elements)
  • Day to day task automation (through vCenter, UCS Manager and EMC Ionix UIM)
  • 30% less power consumption (through everything we do together)
  • Minimum of 72 VMs per KW (note that this is a very high VM/power density)

Now, I say potential benefits as, at present, these numbers have been derived from product datasheets and lab work by EMC …however we at Computacenter are looking to put more substantive quantitative analysis behind these benefits (and those of other VDC variants such as NTAP SMT, HDS UCP, ‘open VDC’) as we deploy VDCs with our customers locally in the UK.  Watch this space.

4.  Use alternative tools for large attachment distribution and filesharing.

I really try not to use email as a file distribution system these days, preferring instead to use cloud applications such as Dropbox to share large files with others … internal colleagues, customers, and our vendor partners.  Now, this isn’t perfect as a) in the absence of my using encryption I wouldn’t wish to use this for sensitive corporate data, and b) it does have a ‘hard stop’ limit where I can only store 2.5GB for free with no reset limit like we have with Evernote.

But using tools such as Dropbox, uploading personal photos to Facebook instead of emailing them, and … if I must send an attachment … trying to shrink it by converting to PDF or similar … every little helps!

That said, I accept that I’m a geek by trade and we need to find ‘easy’ ways for everyday users which replace email as a distribution system without increasing complexity.

After I’ve done that I’m planning to sort world peace, famine, and poverty.

5. Rethink how we create data.

Only about 20% of the data we create today is ‘text’, with rich media [high def videos, pictures, etc.] representing well over 50% of the new data being created.

Equally, the text data we are creating is rarely just text …by the time we save it in MS Word or similar we have increased the file size with the formatting and related metadata, and many users choose to use PPT to communicate ideas such as business plans and so on …a large file type if ever there was one …and that’s without even adding any pictures, charts, or videos to the PPT.

Again, I’m not having a go at the use of PPT or MS Word …but I do believe we are going to have to begin to think about how we create data so that the data ‘footprint’ is smaller in addition to the optimisation and alternative distribution models we’ve discussed above.

Which has me thinking … it’s time for a nice cuppa before Mrs. PL needs my help setting the table for dinner with her and PL Junior … the highlight of my week!

Have a great weekend and remember your kettle the next time you send an attachment.

-Matthew

Five reasons why cloud isn’t a four letter word.

16/07/2010

This is the second post in the Federated Service Provider series, click here to go to the first post.

Growing up, my parents had two aspirations for me … one, to become a doctor and two, to play the piano.

Now, I studied neuroscience in university but didn’t go to medical school …long story, with the jury still out as to whether I get partial credit or ½ a point for that aspiration …and as for playing the piano, well I can play anything brass and was … okay, still kinda am …a band geek having been in marching band, concert band, and jazz/Big Band band.  Heck, I could probably even manage film classics on the vuvuzela.  But piano?  Erm, no …I would most certainly lose a contest to Chuck Hollis no matter how many drinks you gave me.

Why?  Two primary reasons, really … first, I never could get the coordination required with feet, hands, pedals, blah blah … and second, I never practised enough.

Now, before you go castigating me for not practising enough … pianos were expensive to own when I was a kid and keyboards hadn’t really hit the mainstream.  Yeah, okay, there were these things called ‘fingerboards’ with which you could ‘practise’ playing piano but really?  Cardboard fingerboard on the dining room table?  Exactly.  No self-respecting Atari 2600 playing kid is going to pretend to play piano … sorry, ‘practise’ … when there was Tron to be played in the absence of a real piano.

I hadn’t thought of any of this very much until very recently when I saw amazing piano tutor apps for the Apple iPad.  Would having had an iPad …admittedly much cheaper than a piano and also capable of doing much more than being just a piano …have changed things for me such that I would have become a concert pianist?

This blog post is by no means about the iPad …and gosh knows the iPad does evoke interesting responses from people …I’ve heard everything from the fan boy addict ‘I’ll stand  in queue overnight to buy one’ to people expressing hatred tantamount to ‘Steve Jobs is Hitler and Chairman Mao’s love child’ …and everything in between.

Look … the iPad, just like any technology, is important only inasmuch as what you are going to do with it.  And some are taking the view that it is perhaps better to test in the field and form a quantitative view as opposed to an emotive, qualitative one.  Indeed, Eversheds have embarked on just such a quantitative analysis with Computacenter.

What’s this got to do with Data Storage and Protection?

Coming back to the topic of this series, the Federated Service Provider model, customers have begun to take steps into understanding how cloud offerings might be able to help transform their environments by executing limited trials …the Daily Telegraph trial of Google Apps is a good example of this.  And yet the market has been somewhat ‘surprised’ that customers haven’t adopted cloud technologies further.

Frankly, I’m not that surprised as cloud offerings present a different consumption model, more analogous to say electricity or water delivery … tell me exactly how much this will cost me by metering my consumption … versus more traditional IT delivery models where the individual components are purchased, integrated, and consumed.

But cloud isn’t a Federated Service Provider model, so what is?

1. Defining and implementing a service catalogue is the key to unlocking the internal service provider.

We have been hearing since the 1970s that we need to align data to business value and that IT should be a service provider to the business.  The challenge is that as we have distributed systems away from centralised systems such as mainframes, it has become more and more difficult to accomplish this as we have developed ‘silos’ of technology … servers, storage, network, desktop … all operating more or less independently although interconnected.  The business says ‘give me reliable data storage, and make it as inexpensive as possible!’ and so we reply ‘Tier One is 300GB 15K drives!’ … a technical purchasing strategy that the business can understand and we can implement, but one which doesn’t align data to business value.

Another way to accomplish this is the definition of a service catalogue; codifying what throughput is required by bands [e.g. 30K–50K IOPS], the Recovery Time Objective, Recovery Point Objective, retention period, deduplication, and so on such that you can then give this package a name … we’ll call it SLA1 to keep things simple … and then give the business a flat price per user per annum.  We then keep going … SLA2, SLA3, SLA4 … and so on until we have enough packages to satisfy the business requirements.

The business gets SLA1 at £15 per user per annum, and IT becomes an internal service provider by ‘stocking’ against the requests made by the business per SLA package.
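To make this concrete, here’s a minimal sketch of such a catalogue.  Every value is illustrative rather than a real catalogue entry.

```python
# A service catalogue entry: technical attributes codified into a named
# package with a flat price per user per annum. All values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SLAPackage:
    name: str
    iops_band: tuple          # e.g. (30_000, 50_000)
    rto_hours: float          # Recovery Time Objective
    rpo_hours: float          # Recovery Point Objective
    retention_years: int
    deduplicated: bool
    price_per_user_pa: float  # the only number the business needs to see

CATALOGUE = [
    SLAPackage("SLA1", (30_000, 50_000), 1.0, 0.25, 7, True, 15.00),
    SLAPackage("SLA2", (10_000, 30_000), 4.0, 1.0, 5, True, 9.50),
    SLAPackage("SLA3", (0, 10_000), 24.0, 24.0, 3, True, 4.00),
]

def annual_cost(package_name: str, users: int) -> float:
    """The business requests a package by name; IT 'stocks' against it."""
    pkg = next(p for p in CATALOGUE if p.name == package_name)
    return pkg.price_per_user_pa * users

print(f"£{annual_cost('SLA1', 500):,.2f}")  # 500 users on SLA1 -> £7,500.00
```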

2. Virtualised datacentres allow you to automatically provision and deploy resources from the service catalogue, and introduce a fixed cost but fungible and elastic infrastructure.

I’m a geek, in case you haven’t noticed … be kind! … and I both need and want to know how data deduplication, thin provisioning, grid storage, fully automated storage tiering, and the like work.  But to be fair, the real advantage to a virtualised datacentre, or VDC, is not the technical whizzy bits but, rather, that wastage is eliminated at each IT level from hypervisor through server, storage, and network such that workloads can be automatically deployed and … perhaps most importantly … increased and decreased elastically without necessarily requiring any manual intervention.  Watch this space as we’re developing some demo videos to help describe this process more fully.

What the VDC does deliver, however, is a fungible resource …I don’t necessarily need to know what future workloads will look like, but I know they’ll be £1.35 per user per annum to deploy …capable of expanding and contracting without losing structural integrity.

3. I’m not moving my holy of holies into any cloud!

If we accept that only 20% of data stored is truly business meaningful …some call this structured data; ERP, customer databases, transaction processing, and the like …it is unlikely that any customer will ever go ‘all in’ with cloud storage, preferring instead to house the ‘holy of holies’ locally within their own datacentre.  And why not?  Storing this locally would help to mitigate many of the concerns regarding data security.

But how to stem the tide and control the bleeding of the data being created?

4. Help me connect my internal service provider to external service providers seamlessly.

Control of cost as data grows inexorably is what VDCs will ultimately provide, in my opinion … through bridges to support the federation of internal and external service providers.

The corporate IT department will continue to be the internal service provider; looking after the corporate service catalogue, managing the corporate structured data housed in the VDC, and evaluating external service providers such as Computacenter Shared Service, Amazon EC2, Google, and the like.

The onsite corporate VDC will enable a customer to virtualise, containerise, and mobilise … effectively mobilising unstructured data created by users to external service providers where it can be housed less expensively than it can be internally.  Ah, but ‘aren’t we just shifting the problem elsewhere and creating a tap which can never be closed?’ I hear you ask.  Not necessarily, but I’ll save the possible answers to this query for next week’s post.

5. These aren’t the stack wars, they’re the stack battles … the real war will be one of APIs.

Some analysts have been watching the emergence of vendor VDCs such as VCE vBlock, NetApp SMT, HDS Unified Compute, IBM Dynamic Infrastructure, and HP Converged Infrastructure and stated that we are ‘in the midst of the stack wars’ as each rushes to gain dominance in the market for their particular ‘stack’.

I’m not convinced.  Rather, I believe the real ‘war’ will be one of APIs.

The founder of cloud provider Enomaly, Reuven Cohen, recently asked in a blog post ‘Do Customers Really Care About Cloud APIs?’  I believe that they absolutely do …and will …but perhaps not in the way that Reuven was thinking.

If I now have a corporate VDC, I will need something to ‘federate’ my internal and external service providers.  Something, in essence, to move the data seamlessly from my corporate VDC to my external provider …based on defined business rules …without disruption to my production business.  Why wouldn’t this be APIs coupled with policy based engines?  In other words, software which is smart enough to know how to connect to an external service provider …read ‘cloud’ …and move data from the corporate VDC to the cloud seamlessly through the use of a policy based engine.
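A hedged sketch of such a policy based engine follows.  The rules, thresholds, and provider ‘API’ are all stand-ins of mine; real federation would use each provider’s actual APIs.

```python
# A policy engine that decides, per dataset, whether data stays in the
# corporate VDC or is shipped to an external service provider. Rules and
# thresholds are illustrative; the 'shipping' is a stand-in for a real API.
from datetime import datetime, timedelta

POLICIES = [
    # (rule, destination), evaluated in order; first match wins.
    (lambda d: d["classification"] == "structured", "corporate-vdc"),
    (lambda d: d["last_accessed"] < datetime.now() - timedelta(days=90),
     "external-provider"),
    (lambda d: True, "corporate-vdc"),  # default: keep it in-house
]

def federate(dataset: dict) -> str:
    destination = next(dest for rule, dest in POLICIES if rule(dataset))
    if destination == "external-provider":
        # stand-in for a call to the provider's storage API
        print(f"Shipping {dataset['name']} to {destination}")
    return destination

federate({"name": "q3-board-pack.ppt", "classification": "unstructured",
          "last_accessed": datetime.now() - timedelta(days=200)})
```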

But how do we keep this from becoming just a shift of the real issue to another source?

And, if this works and we start to move significant amounts of data …how do we move a petabyte of data from the external service provider back into the corporate VDC or to another service provider should we so desire?

Mrs. PL is giving me that look which means it’s time for dinner and we’ll need to save those for next week’s post but, until then, enjoy the sunshine and have a great weekend.

#awesomesauce isn’t cricket, Part One.

07/06/2010

Before I get started with this week’s Weekly View, quite a few people have asked me how the VDC: VCE vBlock 2 launch went on Friday 21 May.  I’ve thought a lot about the answer as I don’t wish to appear flippant, so I’d say we should be pleased … but not satisfied.  There were many positives; indeed, it isn’t every day that we get Simon Walsh and a vendor partner MD in Adrian McDonald of EMC to host an exclusive customer launch for a market leading solution where we continue to hold first mover advantage.  Of the forty customers who had confirmed to attend, only three didn’t show up to an event held on a Friday from 13:00 – 15:30.  In Hatfield.  When it was 24C without a cloud in the sky.

And here I was convinced 50% of attendees wouldn’t show as they were firing up their barbecues!  My sincere thanks to the customers who came out and worked with us on the day.

Not a single customer dropped out during the presentation and demos, and what was fascinating to me was that after we had presented the solution and during the panel Q&A with Simon, Adrian, Paul Casey, a customer from a legal firm, Chris Vance of Cisco, and yours truly … not a single customer asked why deploy a virtualised datacentre but, rather, how and when.  There were some great customer queries and the Q&A could have easily gone on for another hour … needless to say, we learned as much from the questions being asked as perhaps attendees did from the answers being given.  Heck, we even impressed an analyst from IDC who said in an email after the event, “It was an excellent event. To see the vBlock system in operation, plus the evident commitment and expertise of CC, was a very compelling proposition.”

We also launched the eagerly anticipated Optimised Environment app, based on the original VDC cube.  Click here to view a brief video of the Optimised Environment app in action!

Watch out for more VDC customer events as we continue to build VDC solutions in the Solution Centre and, as always, feel free to bring your customers to Hatfield for presentations and demonstrations of our VDC solutions.

And so it was perhaps the 24C weather which got me thinking about cricket and my getting engaged to Mrs. PL in what seems like a world away.

I met the future Mrs. PL, born and raised in London, when I was working in Ireland [long story, but I promise to tell it in the Weekly View someday if you like] and I’ll never forget a brief conversation I overheard between herself and a family friend during our engagement party.

‘So tell me, are you excited to be getting married?’

‘Yes, absolutely …and I’m delighted that he was born in the United States.’

‘Really, how so?’

‘Well, I’ve always wanted to live in Manhattan and I’d be eligible for a passport …and then there’s the fact that I won’t be a football, rugby, or cricket widow!’

Hope may have been permanently dashed after Mrs. PL and I had a long chat about how living in NYC is nothing like ‘Sex and the City’ and I much prefer London.  Couple this with the fact that I woke up at 05:00 on our honeymoon in St. Lucia so I could watch a French TV transmission from Martinique of the 2003 Rugby World Cup final, that I have a season ticket to Watford FC, and that I have got into more than one ‘robust discussion’ with Mrs. PL when I refused to go out due to a key cricket test, and you can see why Mrs. PL is somewhat peeved at having thought marriage to an immigrant meant freedom from sporting widow status.  Best not to discuss living arrangements for the forthcoming World Cup.

What’s this got to do with Data Storage and Protection?

It wouldn’t be hyperbole to say that I love cricket.  Indeed, I can think of no better way to spend a day than watching the cricket with cucumber sandwiches and jugs of Pimms and, whilst I promise not to turn this Weekly View into a cricket column, how can you not love a sport which takes lunch and tea intervals?!

But when I say cricket, I mean cricket of the test variety.  Try as I might, I just can’t get into the Twenty20 nor One Day International varieties.  I suppose this is because I am fascinated by the strategy and skill required to win a test and a test series which could be papered over or prove not relevant in the shorter Twenty20 and ODIs.  The preparation to win an Ashes series, from the strategy to team selection to batting order and the captain’s field plan, can often begin years in advance.

Having been born in the United States I do have an appreciation for cricket’s distant cousin in American baseball, but there is one critical difference between the two … with baseball a team can struggle for eight and a half innings and then, by a stroke of luck [no pun intended], a solid hit here and there loads the bases.  Send in your designated batting lumper and hey, presto … you’ve won the game.  With cricket, it’s all about consistency.  You have to have a batting order comprised of batsmen who can stand at the crease and bat for hours against all types of deliveries and delivery speeds.  There really is no lucky ‘home run’ concept to be found in cricket.

One of my favourite cricket books which has much to teach not only about cricket but about life and business is The Art of Captaincy by Mike Brearley where many of the concepts I’ve described above are discussed in great detail.  My other favourite cricket book is Penguins Stopped Play by Harry Thompson as I suppose I hold out hope that as PL Junior gets older I’ll have the good fortune to live a vicarious cricketing life through him or even get the chance to play with other cricket fanatics who are more than likely a bit hopeless when it comes to actually playing.

But I digress … I see many parallels between the skills and strategy required to win at cricket and those required for the design and execution of data storage specifically and virtualised datacentres generally.

I’ll discuss one of them now with a view to completing this series in next week’s Weekly View.

1. Without a plan, it becomes difficult to direct resources efficiently and effectively …particularly when events change ‘on the ground’.

No great test series, from the great Ashes Bodyline Tour of 1932-1933 to the closely fought Ashes of 2005 which saw the Ashes returned to England for the first time in 18 years, was won by individual heroics alone.  Rather, the planning began years in advance of the series with different scenarios played out well beforehand to ensure consistency if indeed events changed on the ground.

This is equally, if not more, true with data storage.  As I’ve discussed in recent posts, nothing is designed to deliver miracles … especially data storage.  Whilst others in our industry may choose to think differently, my view is that thin provisioning, data deduplication, automated storage provisioning, automated storage tiering such as FAST, and zero page reclamation are technology enablers and not saviours.

How you use these technologies is probably of far greater importance than what they actually are.

Having a storage strategy which looks at what the current business issues are, what they might be in the future, budget considerations, and so on is of prime importance.  I’ve discussed how we look at these variables and design solutions in other posts and we also have a Storage Assessment & Strategy Service including a customer free to use Storage Resource Analysis portal.

In keeping with the theme of cricket and test series, we’ll complete this series by reviewing four more parallels between cricket and technology in next week’s Weekly View.

-Matthew

Click here to contact me.

Five reasons why catalogue p*rn may be useful.

03/05/2010

As I started to write this blog entry I noticed that, whilst I knew I hadn’t blogged in a few weeks, the inertia of life has well and truly taken over recently as it has been almost five weeks since my last entry.  Along with my normal day to day responsibilities, the last five weeks have seen an unfortunate death in the family, Mrs. PL embarking on the conversion of our garage into a proper playroom for PL Junior … she is a formidable project manager! … the deployment of our first virtualised datacentre [VDC], and trying to enjoy the three days of English summer … you get the point!  I’m just now getting back on top of things, more or less, so normal service will resume from this week and I apologise in advance as this week may see a ‘double’ entry as I try to get back to once a week entries each Friday at 17:00.

As we were clearing out the garage in preparation for the conversion I realised that Mrs. PL and I have a strange habit which could get us into trouble some day.  Hold on, bear with me before you delete this more quickly than usual or I get invited for a brief ‘chat’ with HR!  As we were clearing I was struck by the number of old magazines and catalogues we had amassed in the garage and the habit which led to this detritus.

Mrs. PL and I jokingly refer to our nasty habit as ‘catalogue porn’.

It is a nasty habit, possibly worse than biting my nails or picking my nose at traffic lights, which I was convinced only I was afflicted with before I met Mrs. PL.  Yet one Sunday morning shortly after we got married, as I was filleting the papers and preparing to throw [or hide] the useless catalogues of rubbish we’ll never need or even know how to use … we didn’t have a garden at the time, but I still enjoyed perusing the odd ‘garden scarifier’ or two … Mrs. PL looked at me and said ‘Don’t throw those away!  I want to look at those’.

It was in this moment, as I looked at her lovingly, I realised that amongst all that we already shared we also shared a perverse desire to look at catalogue upon catalogue of useless items.  I could come out of the closet, free at last to indulge my secret habit in full view of the world safe in the knowledge I was no longer alone.

But that was quite a few years ago and things have moved on a bit for Mrs. PL and me in the strange world of catalogue porn.  We have our favourites, such as Pedlars … otherwise known as ‘I saw you coming dot com’ in our house … and even have a bit of a game challenging each other to find the most outrageous claims and/or marketing hyperbole.  At present I’m in the lead with ‘Nutrileum’ … what in the name of my giddy aunt is that?  What, washing my hair with a substance rumoured to require heavy water and centrifuges to produce? … but Mrs. PL is coming up fast with skin cream ‘pentapeptides’ and their related nonsense.

What’s this got to do with Data Storage and Protection?

Whilst there is the obvious space that our catalogues ‘o crap take up prior to our getting round to throwing them away, the real danger posed by our habit is the very real possibility that …sooner or later …we’re going to buy something from one of these ‘I saw you coming dot com’ merchants.

Why is that a danger?  Well, not to put too fine a point on it, but let’s face facts … the scientific ‘fact’ used to sell this rubbish, when ‘facts’ are even bothered with at all, is a bit shady.  I say shady … statistically meaningless would be a more accurate description.  One advert actually states ‘of 84 people polled’ … 84 people?!  Ideally a poll should have a sample size of 3,000 to 10,000 to be statistically meaningful … and don’t get me started on randomised distribution!  Suffice it to say, most adverts with their ‘polls’ are about as meaningful as me running in to my local and yelling ‘oi, who likes booze?!’.
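For the curious, the standard margin of error calculation shows just how shaky an 84 person poll is.  This assumes honest random sampling … which adverts rarely have … and uses the usual 95% confidence approximation.

```python
# Approximate 95% margin of error for a polled proportion, worst case p=0.5.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (84, 1_000, 3_000, 10_000):
    print(f"n={n:>6}: ±{margin_of_error(n):.1%}")
# n=84 gives roughly ±11% ... so '7 out of 10 agreed' tells you very little.
```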

But I digress.

We find something similar in technology generally and in storage specifically where I think that sometimes we focus too much on the ‘razzle dazzle’ as opposed to the business benefits.  And yes, I admit I sometimes suffer from this as well!  No comments re my obsession with productivity apps and iPads, thanks, I am all too familiar with my own shortcomings.

1.  Make it statistically meaningful for the customer.

I don’t think that there’s anything wrong with talking about the potential positive effects or business benefits of a technology, so long as you either ‘show your work’ or state that the data may not be available as yet.  For instance, if you’re going to talk about data deduplication and the transformative effects it can have on a customer’s environment by reducing costs … and that you have 522 implementations around the world … don’t you think it would be useful to state ‘we have reduced our customers’ costs by xy% on average’ and back that up with field data?  Yeah, me too.  I’ll continue to challenge our vendor partners on this point and we’re also keen to collect this ROI and TCO data ourselves moving forward.

2. Leave the awesomesauce at home, what’s the benefit?

I fully recognise that I’m a geek and, whilst I often think I’m speaking proper English … albeit slightly accented, admittedly … I may not be speaking in a language easily understood by others.  I’m paid to stay in front of technology and understand not just today’s movements but the technologies of tomorrow most likely to benefit our customers.  But.  There’s always a but, isn’t there?  This must be translated into business benefit for our customers, and I’ve noticed that there are folks in technology who would prefer to focus on the ‘I can move a bit faster than my competitors’, ‘this one goes to eleven’ style of value description.  Some of our customers call this ‘awesomesauce’.  Sunshine is the best antiseptic, so why not be clear about business benefit?  Geeks like me need to understand the ins and outs of the technology, but we shouldn’t lose sight of aligning technology to business value.

3. One size fits all?

I’m sure there are men in the world who can carry off skin tight spangly leggings … I’m not one of them, as you’ll likely have gathered.  Equally, I am not aware of a single ‘one size fits all’ storage technology.  IBM XIV might be appropriate for a customer given certain criteria, where NetApp might be more appropriate for another, or EMC vMax for yet another.  This is where true consultancy comes into play and, if I’m honest, we need to work with a customer on the immediate and future challenges in an interactive way that RFPs don’t often satisfy.  Early engagement, in my humble opinion, is key.

4.  The only constant in a customer environment is inertia.

Nothing wrong with inertia, but just because we’ve done a good job of engaging with the customer and articulating how we believe we can help, we shouldn’t underestimate the power of how things have been done before or are done presently.  This is where the Solution Centre in Hatfield, our £10m investment in the technologies we’re discussing, comes into its own.  A Virtualised Datacentre [VDC] might very well be the right answer, but there will surely be the need to test workloads and proposed solutions; it most likely won’t be the technology but, rather, people and processes that will catch you out if you haven’t tested properly.

5. Show me the Nutrileum.

As I’m keen to discuss in future posts, I fundamentally believe that VDCs will help to solve many more customer issues than they may potentially introduce.  At a bare minimum I believe that they can reduce IT costs by as much as 30% – 50% or more.  But here’s the rub … we just don’t have the field referenceable statistics to support these claims as yet.  You can rest assured we will collect these stats as we deploy VDCs for our customers, but in the interim I think it is possible to build a business case based upon field referenceable statistics available for the components of a VDC.  We should be able to add £xx saved from server virtualisation plus £yy saved from grid storage plus £zz saved from network convergence and so on to show a cost benefit over a five year period post VDC deployment, and I see nothing wrong with being honest and up front about where these statistics come from.
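A minimal sketch of that additive business case follows.  The £ figures are placeholders standing in for the £xx/£yy/£zz above … it’s the method, not the numbers, being illustrated.

```python
# Component-based business case: sum field-referenceable annual savings per
# VDC component over five years. All £ figures are placeholders.
component_savings_pa = {
    "server virtualisation": 120_000,
    "grid storage": 80_000,
    "network convergence": 45_000,
}

years = 5
total = sum(component_savings_pa.values()) * years
print(f"Modelled {years}-year saving post VDC deployment: £{total:,}")
for name, saving in component_savings_pa.items():
    print(f"  {name}: £{saving * years:,}")
```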

If you’ve stayed with me to the end, thanks for reading and I hope that this has been useful as, truth be told …that’s the only reason I blog, hoping that what I’ve learned may be of interest and use to others.

Right, Mrs. PL and I have some catalogues to be getting on with so until next time dear reader.

Click here to contact me.

Drinking rubbish wine and deploying silos will both kill you in the end.

19/03/2010

Life is too short to drink crap wine.

In saying that, it is important to read that sentence carefully … I didn’t say ‘Life is too short to drink cheap wine’, or ‘Life is too short to drink any wine under £20 a bottle’.

Nope.

I said, ‘Life is too short to drink crap wine.’

I’ve discussed my love of wine, wine production, pretty well all things wine at some point or another in this blog and please let me reiterate …this is not about to become a ‘wine snob’ post nor a ‘wine snob’ blog.

Frankly, wine snobs bore me and, to be fair, I have had bottles of wine which retail for *cough* erm well …a lot …that I thought were crap every bit as much as I have had £4 bottles which I loved.  Equally, I’ve got plenty of other topics I could choose from to bore you without having to resort to wine snobbery …such as listening to live air traffic control from around the world on my iPod with headphones when Mrs. PL and PL Junior are asleep.  And that’s for starters.  Don’t act all surprised …you knew I was a geek of the highest order of technowwenism when you met me.

But back to wine.  See, there’s an awful lot of thought and talent which goes into making a bottle of wine …where to grow to capture the ‘terroir’, what kind of grapes to grow, when to plant, when to harvest …how much of the harvest to use …age in stainless steel or wooden barrels before bottling, if wood what kind of wood, when to bottle …you get the point.

Now …do you care?  As it happens, I do care and am happy to read/research, taste, visit wineries, blah blah.  But does the average punter give a badger’s backside?  I’m guessing a resounding no.

I think the average person …if they’re in the mood for wine …wants to buy a quality bottle of wine at the best cost they can.  The rest is, as they say, commentary.

To that end there are sommeliers in just about every restaurant you go to, ready to help you select a wine for your meal, and Masters of Wine who go through some seriously intensive training to achieve accreditation and then work in the wine industry … many MoWs are employed as winemakers to make all of the difficult decisions above such that you get a quality bottle of wine at a decent price.  Just walk in (or log on), buy a bottle or two you fancy, open, enjoy!

What’s this got to do with Data Storage and Protection?

I am convinced that, as with wine, people like simplicity and transparency when making decisions and would prefer to consume a finished product as opposed to being forced to cobble things together.

Fancy planting vines, hand plucking the harvest, and stomping grapes just to have a nice drop with dinner tonight?  Yeah, me neither.

Since my posts about the virtualised datacentre and the impact VDCs will likely have on corporate technology, I’ve had quite a few queries and comments with many querying ‘just what is a VDC and how is it different to the way we consume IT today’ so I thought I would have a go at explaining some of this whilst I’m waiting for our bottle of claret to breathe for tonight’s dinner.

1.   By 2012, 20% of organisations/businesses will own no IT assets. This is a recent statistic from Gartner and it certainly has the ‘wow’ factor as it would seem to indicate that business consumption of cloud based technology will accelerate.  I generally agree with that statement, but I feel that hype, security concerns, and a lack of demonstrable cost benefit over an extended period of time have all played a part in customers taking a ‘wait and see’ approach to moving IT functions into the cloud.  Virtualised datacentres [VDCs] give us a ‘bridge’ to the private/public/federated cloud of the not too distant tomorrow whilst giving us demonstrable cost savings today and in perpetuity.

2.  Virtualised Datacentres [VDCs] are integrated solution stacks. VDCs come in many flavours … VCE vBlock, VCN NetApp Virtualise Anything, IBM Dynamic Infrastructure, HP Converged Infrastructure or indeed some other ‘open flavour’ comprised of multiple vendors … and are designed to work at a greatly reduced cost over traditional siloed architectures, much in the same way as the bottle of wine we discussed earlier.  A VDC has server compute, server virtualisation, network and network virtualisation, storage, and automation all as a single contiguous unit which works as such.  In the past we would have sought to find best of breed server, best of breed hypervisor, best of breed network, best of breed storage … all at the ‘right’ price … coupled with the complexity of all of the decisions required for each component selection … and whacked them together and hoped they worked.  Is it any wonder we get such low utilisation and such high support and total cost of ownership challenges from traditional architectures?

3.  VDCs are largely comprised of technologies which we use and deploy today. Much as our competitors may try to convince customers otherwise, VDCs are not made of special plutonium 239 isotopes sold in exclusive lead lined boxes.  Rather, they are blueprint reference architecture designs made up largely of existing components [e.g. Cisco UCS blades, Cisco MDS network fabric, Cisco Nexus, VMware, IONIX, and EMC vMax] which are compiled by a service provider such as Computacenter to work as an integrated unit.  The key difference is that the VDC is designed to allow for consumption as a unit as opposed to each component being consumed separately.  For example, the unit would automatically deploy everything required for a project manager to run a new application … server, network, storage, etc.

4.  Virtualisation on its own is not a workload, it is an enabler and component technology of a VDC. But I’ve already got VMware or Microsoft Hyper-V, so I guess I don’t need a VDC?  Not quite.  VMware and server/desktop virtualisation is certainly a key component of a VDC …but so is network virtualisation, universal server compute, storage and storage virtualisation, automation …you get the picture.  Equally, the question shouldn’t be ‘how much VMware can I run on a VDC’ as VMware isn’t a workload …an application is.  So the question should be, ‘how many applications, users, at what response speed, over what distance, and at what cost per user’ can said VDC support.

5.  Why take two small steps forward which are unlikely to cover the chasm you seek to cross when a large leap has a higher probability of success? It may be that you don’t have the budget to implement or migrate to a VDC today, but working with service providers such as Computacenter we can show you just how you would demonstrably reduce costs both immediately and in perpetuity.  Armed with this information, why not ‘draw a line in the sand’ to make the leap which will slash costs and plan implementation/migration of VDC components today with a view to full VDC integration sooner rather than later?  Isolated small steps such as server virtualisation or storage virtualisation could be far more useful as part of a greater ‘march to VDC’ strategy.

As always, please contact me directly if you would like a VDC demonstration or help in taking the VDC journey.

Have a great weekend,

-Matthew

Click here to contact me.

First, execute with urgency. The rest is commentary.

05/03/2010

‘Real artists ship.’ – Steve Jobs

‘Just do it.’ – Nike

‘To be is to do.’ – Socrates

‘To do is to be.’ – Sartre

‘Do be do be do.’ – Sinatra

I’ve talked before about my desire to lose weight and the ‘eureka’ moment when I realised that the answer was in front of me the whole while, but I must admit … even with the best of intentions I haven’t lost as much weight as I would like.  Yeah, okay … a few pounds here and there … which always prompts Mrs. PL to offer to sew up the hole in my trousers from whence I’ve lost three pounds … har de har har.  The brutal truth is that I need to increase my outgoings and reduce my incomings to lose weight, and that requires me to take control of my diary and make time for taking exercise.

I suppose you could say my challenge is execution over intent …in other words, to just get on with it and do.

You could describe me as a ‘doer’ in many other aspects of my life, but I’ve always had a problem with ‘doing’ when it comes to things that don’t interest me terribly.  And let’s face it, exercise isn’t overly exciting.  But I’ve always had a deep respect for anyone who is a ‘doer’ no matter the circumstances, and one such doer whom I’ve come to respect deeply is one of our account managers …Rob MacAlister.

For reasons that entirely escape me, Mr. MacAlister has decided he would like to visit the North Pole.  Now, that in and of itself would normally be interesting …but Rob has decided to race to the North Pole.

Yes, you read that right.

And he’s spending the next year in training … you know, dragging a 75kg tyre around Richmond Park, eating freeze dried Pot Noodles, spending hours on end in walk-in freezers to acclimatise his body to extreme cold, that sort of thing.  I know, I can think of about a million other hobbies I would prefer to have, but hey … it makes him happy and I applaud anyone with this level of dedication.

Rob is a doer, and rather than talk the talk (how many of us have made pint sozzled plans to rule the universe with mates in a pub?!) he is going to walk the walk.  Applied for and received a sabbatical.  Train for a year.  Diet for a year … no boozin‘ or ciggies (prolly for the best in any case)!  Six weeks away from his wife and young family racing 350+ nautical miles in -40C.

Rob, I take my ten gallon yarmulke off to you my friend and Godspeed.

If you want to follow Rob, his blog can be found here http://robmacalister.blogspot.com/

What does this have to do with Data Storage and Protection?

One of the storage customer blogs I read regularly is Grumpy Storage written by the inimitable ianhf.  Ian always writes things as he sees them and rarely sugar coats anything.  He wrote a post …actually, it’s damn near a manifesto …towards the end of 2009 called ‘Show Me the Money! (Information)’ and what Ian wrote in this post has been swirling round in my head since the first time I read it.  If you’re in technology, this is essential reading on how to engage and actively work with your customers in a positive fashion …actually, thinking on that, Ian’s commentary is applicable for just about any business you might be in!

I wanted to pick up on point 13, however, where Ian writes … ”I need electronic copies of any & all materials discussed or presented – no exceptions, without this I can’t use it as reference material in my internal strategy planning. If you hide behind “it’s beyond NDA”, or “NDA prohibits” then I’ll interpret that as “you don’t trust me personally or respect me professionally” and the relationship will be difficult from then on.”

I personally struggled with this one a bit when I first read it as I meet with all of our vendor partners frequently under NDA, and I wouldn’t ever wish to betray their confidence …but I could empathise with where Ian was coming from.

Then it occurred to me …what is an NDA really but protecting intent over execution.

Time was, not so long ago, within data storage specifically and technology generally, that if you came upon an idea such as thin provisioning, data deduplication, automated storage tiering, or data compression you could pretty well rest assured that you had a good 18 months before anyone could bring anything similar to market.

So you used an NDA to protect yourself from competition long enough so you could launch your product, as well as protection from someone launching something that seemed similar when you did but in actuality wasn’t as good.  Customers hate esoteric FUD based arguments …see Ian’s point 12 if you don’t believe me …so you prolly don’t want to be having a ‘but mine goes to eleven’ type argument with customers.

What’s the answer?  For me I think the answer lies in reversing the intent over execution equation with customers.

  1. Show me, don’t tell me. I hate slides and slideware presentations generally, but I recognise their use as a medium for delivering information.  But that shouldn’t be the whole presentation, in fact I would reserve slides for ‘handouts’ post meeting.  Why not show someone what you’re talking about with a videoed demo?  I could give you slides to articulate automated storage provisioning, or I could just show you.  Watch this space as we’ll make similar videos for containerisation, container mobility, and secure multitenancy for virtualised datacentres.  Just don’t expect me to schlep a 75kg bit of vulcanised rubber around the Solution Centre like Rob will be in Richmond Park.
  2. Candy is dandy, but liquor is quicker. ROI/TCO calculators are great and very useful, but in the end they are nothing more than numbers which have most likely started in a lab and may or may not have field referenceability.  Firstly, I think we as an industry should stand by cost reductions by underwriting them and, if we’re wrong …we’ll write you a cheque.  Secondly, the TCO data should be field referenceable …and the iPod/iPhone app we’re developing for virtualised datacentres will have the ability to reference the most up to date field TCO data every time you use it.
  3. Let customers consume information when they wish, how they wish. Part of the reason we’re developing an iPod/iPhone app is to allow customers to interact with TCO modelling, virtualised datacentre demonstrations, virtualised datacentre components, application containers moving between our virtualised datacentre and another in the USA … all without me hovering over you!  Seriously, though … I believe customers want to consume information in ways that are useful to them, and this requires us to present information and data in ways that allow this.

Execution drives true value, intention tells me you might (or might not) get there one day.

I hope you’ll join me in execution and perhaps someday NDAs will become a thing of the past.

Have a great weekend,

-Matthew

Click here to contact me.