Sixty Second Snapshot
Data is at the very heart of the evolution of corporate IT and the revolution of consumer technology. The coming year will see many customers evolve from a storage strategy …how they store what they create …to a data strategy …how and where they access data …by classifying the data being created. This classification will give rise to segmenting data into that retained permanently in-house [structured] and that streamed to off-site retention [unstructured]. The biggest data question of 2011 will likely be ‘what do I want to own the storage for outright, versus what can I hire access to?’
Firstly, a very Happy New Year if I haven’t seen you or had the opportunity to say HNY thus far …and this officially marks the last time I think we can get away with wishing each other a Happy New Year before we cross the Rubicon hurtling well into 2011 and beyond!
Secondly …I’d like you to go and find a newspaper article or favourite magazine, clip an article that you find meaningful or interesting, photocopy it, and post it to 20 close friends and family. It’s okay, I’ll wait.
Right …done? Good! Now, how long did it take to do that? Or, on balance, did you not bother because you reckoned the effort and time were more than you were prepared to expend for little old me to illustrate a point? Not to worry, I wouldn’t have done it either!
Over the past Christmas and New Year break I was thinking that, even a few short years ago, sharing information widely was far from easy. Yet, flash forward to now and we probably don’t even think twice about sharing something on Facebook, in an email, or even on Twitter with tens, hundreds, or even thousands of people …all in the blink of an eye, with the technology which enables this remaining largely invisible to you as a customer or consumer.
Equally, the worldwide distribution of sensitive information via something like WikiLeaks has highlighted a profound change towards almost effortless information distribution. We could certainly argue the legality or even the sense of releasing secure diplomatic cables …and wouldn’t it be delightfully ironic if someone leaked Julian Assange’s book before it was published. Interestingly, though, WikiLeaks would probably not have been possible even a few short years ago. I must admit, I remain unconvinced regarding the execution of WikiLeaks’ stated purpose, and it would appear that perhaps some of Assange’s key staffers do as well, given they are abandoning the Good Ship Julian and starting OpenLeaks.
What’s this got to do with data storage and protection?
As a data guy I would say that data has changed the world forever. Data is being set free at an incredible pace, whether through mediums such as WikiLeaks or ‘anytime/anywhere’ access from mobile devices …and data is being created at an even more alarming rate. How much data? We now create as much information in two days as we did from the dawn of man through 2003. And it is this data …or, more importantly, strategies for data …which gives us some clues as to what 2011 holds.
1. Do you have a data strategy or a storage strategy?
Up to 2011 we have advised, both as an industry and within Computacenter, that our customers have a storage strategy, with these strategies often driven by categorisation such as ‘what RPO/RTO is required’ and ‘what throughput is needed’, and often defined solely by the speed and type of disk drives [e.g. Tier One = 300GB/15K fibre channel drives]. Whilst throughput and traditional categorisation remain important, we will see the storage strategy evolve into a data strategy, with key questions such as ‘what is the data?’ and ‘should the data reside internally or externally to the organisation?’ joining the more traditional speeds and feeds. It is this data strategy which will seek to reduce OpEx and CapEx costs in both the immediate and long term whilst also increasing corporate agility by enabling secure ‘anytime/anywhere’ access for corporate users.
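To make the distinction concrete, here is a minimal sketch of what such a classification-driven placement decision might look like. All of the names here [`Dataset`, `place`, the thresholds] are hypothetical illustrations of the idea, not any vendor’s product logic: structured data stays in-house, whilst unstructured data with relaxed recovery objectives becomes a candidate for hired, off-site capacity.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    structured: bool   # e.g. a customer database vs. an email archive
    rpo_minutes: int   # recovery point objective
    rto_minutes: int   # recovery time objective

def place(ds: Dataset) -> str:
    """Recommend a placement for a dataset.

    Structured data, or anything needing rapid recovery, is kept
    in-house; everything else is a candidate for off-site retention.
    The one-hour RTO cut-off is an illustrative assumption.
    """
    if ds.structured or ds.rto_minutes <= 60:
        return "in-house"
    return "off-site"

for ds in [
    Dataset("customer-db", structured=True, rpo_minutes=5, rto_minutes=15),
    Dataset("email-archive", structured=False, rpo_minutes=1440, rto_minutes=2880),
]:
    print(ds.name, "->", place(ds))
```

The point is not the specific thresholds but that the deciding inputs are now properties of the data itself, rather than the speed of the disks underneath it.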
2. Virtualisation … and optimisation …of the entire datacentre continues at pace.
As data strategies begin to take shape, it will become more and more apparent that catering to data through intelligent and efficient storage devices alone will not be enough to fully realise those OpEx, CapEx, and agility goals. As such, expect to see virtualisation of the datacentre continue to accelerate at considerable pace as the optimisation and cost reduction benefits are realised and advertised by early adopters of productised VDCs such as VCE vBlock, NetApp FlexPod, and Oracle Exadata. Equally, 2011 will be about the realisation that this isn’t a zero-sum game where one is forced to select a single ‘stack’, as some environments and workloads will be suited to one or more of them …or a mixture of them all. What is important is aligning the business to the data, which will determine how, when, and why to select one or more optimised datacentre stacks. Besides, the ‘stack wars’ …if there is such a thing …may not matter much, as in the not too distant future there may be one chip to rule them all.
3. Intel Romley will have a profound impact on storage …and servers, hypervisors, networking …pretty much everything.
As we continue to optimise the datacentre, the offload of tasks to the chipset will gather pace. Emulex showed the way forward, and the possibilities, with their OneConnect converged adapter product, and I think we’ll see functions like RAID …formerly the sole domain of specialised data storage hardware controllers …move down to the Intel chipset. But I don’t think it will stop there: within the next few years we’ll see the ability to run data storage, server compute, networking [routers, switches], and even hypervisors directly from Intel chips, with software automating the self-tuning/self-healing required to distribute the load and ‘tell’ the chip which operation to perform …i.e. behave like a data storage controller today, but tomorrow perform like a server to help service the data access requirements.
4. Customers will demand their own internal corporate app stores which will ensure that their users remain productive anytime, anywhere.
The concept of the app store began in the consumer space with mobile devices such as the iPod, iPhone and iPad. The iPhone and iPod Touch gained 85m users in 11 quarters …that’s 11 times faster than AOL added users in 1994 …and the App Store is about to hit 10 billion downloads. Add to that the iPad selling 1m units in 28 days and on target for 10m sold in 2011, not to mention the Google Android devices and smartphones …users will want access to their data anytime, anywhere for a truly mobile experience. In fact, mobile as a distinct concept will probably cease to exist.
But on 11 January 2011 the app store entered the corporate space with the launch of the Mac App Store. Not only does this point in the direction of a convergence of desktop and mobile operating systems in the not too distant future, it also points to a new and perhaps more efficient way for organisations to distribute and maintain corporate software as apps, through a secure corporate app store. The IT landscape is littered with instance upon instance of consumer technology being demanded by users at work to drive business agility and change, but it won’t be the device that enables this change for organisations …it will be the ability for organisations to federate their structured data housed internally [e.g. customer databases, ERP, and billing systems] with unstructured data housed externally with service providers [e.g. email] to provide a unified app space for their users. Put another way, the device will become far less important than the apps you can run, and data federation, geotagging, and data container security will enable the corporate app store. Don’t believe me? It would seem companies like Apperian are trying to steal a march on the competition with their Enterprise App Services Environment [EASE] product.
5. Federation of data will be what customers will require to keep costs down permanently, but will dip their toes in the water with selected workloads.
Fantastic apps such as Evernote for note taking, Dropbox for data, and Kindle for reading make it possible to take notes, save data, and even read books on any device you happen to own …all whilst keeping any modifications synced up automatically, with generally no additional effort on your part. How do they make this ‘magic’ work? The full answer can be somewhat complicated, but the short answer is that data federation is at the heart of each of these solutions. Customers will seek similar automated syncing and federation from their internal service provider [read: the IT department] as well as external service providers such as Computacenter. How significant a change will this be? Well, let’s put it this way …I’m not sure I would want to be a USB flash drive manufacturer moving forward.
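The core trick behind that ‘magic’ can be sketched in a few lines: each side keeps a fingerprint [a content hash] of every file, and only files whose fingerprints differ need to move. This is a deliberately simplified illustration of the principle …the `sync` and `digest` names are my own, and real services like Dropbox add chunking, conflict handling, and delta transfers on top.

```python
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint a file's content with SHA-256."""
    return hashlib.sha256(data).hexdigest()

def sync(local: dict, remote: dict) -> list:
    """One-way sync sketch: compare local content against the remote
    fingerprint index and return the names of files that changed.

    `local` maps filename -> bytes; `remote` maps filename -> digest.
    """
    changed = []
    for name, data in local.items():
        if remote.get(name) != digest(data):
            changed.append(name)
            remote[name] = digest(data)  # 'upload' by recording the new fingerprint
    return changed
```

Because unchanged files produce identical fingerprints, a second sync pass moves nothing at all …which is exactly why these services feel effortless, and why they generalise naturally to federating corporate data across internal and external providers.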
6. Data storage arrays will be able to be run as a virtual ‘instance’ on a virtual machine [VM].
The data storage advances of the past ten years …deduplication, compression, automated tiering, the list goes on and on …are really software at their core, not hardware solutions. Put simply …storage is really just software, after all. Given this, I expect one of the data storage vendors to ‘break cover’ and allow users to run their data storage array software as a virtual instance …a VMware VM, if you will. Indeed, NetApp have had an ONTAP filer simulator for quite a few years, so it perhaps doesn’t take a huge leap of imagination to see that going one step further and allowing users to run a data storage array as a virtual machine may not be far away.
7. If we’re evolving our storage strategies into data strategies, the data storage hardware tiering recommendation that remains will be ‘flash and trash’: solid state drives [SSD] for performance-intensive workloads and dense SATA/SAS drives for everything else.
Notice that you don’t see fibre channel [FC] drives in that equation? No, I haven’t made a mistake …I think this is the year that FC drives drop out of the architecture. Whilst they served an important purpose once upon a time, they have outlived their usefulness in the data storage architecture moving forward. Automated storage tiering, such as FAST from EMC, means that we can now move data automatically at the block level from highly performant SSD to SATA/SAS as required, with no need for administrative intervention, whilst remaining transparent and seamless to the users.
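Conceptually, the policy engine behind such automated tiering is simple: count how often each block is touched, keep the hottest blocks on the limited SSD tier, and demote the rest to dense SATA/SAS. The sketch below is my own toy illustration of that idea …the `retier` function and its inputs are hypothetical, not how FAST or any vendor implementation actually works.

```python
from collections import Counter

def retier(access_counts: Counter, ssd_slots: int) -> dict:
    """Toy 'flash and trash' placement: the ssd_slots most frequently
    accessed blocks go to SSD, everything else lands on SATA/SAS."""
    hot = {blk for blk, _ in access_counts.most_common(ssd_slots)}
    return {blk: ("ssd" if blk in hot else "sata")
            for blk in access_counts}

# Simulated block access counts over some sampling window
counts = Counter({"blk-a": 50, "blk-b": 2, "blk-c": 40, "blk-d": 1})
print(retier(counts, ssd_slots=2))
```

Run periodically against fresh access statistics, a loop like this is what lets data migrate between tiers with no administrative intervention …and it is precisely why a dedicated FC tier in the middle no longer earns its keep.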
I am convinced that 2011 is going to be an extremely interesting and perhaps even watershed year for data specifically and the datacentre generally. I would expect ‘virtualise, containerise, mobilise’ to be joined by ‘monitor, deploy, automate’ as we seek to reduce storage and IT costs whilst increasing business agility.
I’ll be blogging in much more detail about all seven of the predictions I’ve made above, and tune in next week, as EMC will be making a major product announcement which could help prove point number seven …as well as possibly a few more!