Ultra-efficient Datacenters And Servers, But In The Future Will We Need Them?

Sometimes the solution to a problem requires thinking completely outside the box. Case in point: the rapidly increasing quantity of data in existence. Recent studies suggest that by 2017 there will be 3.6 billion people online, 19 billion global machine-to-machine network connections, and 1.4 zettabytes of information being generated online. At the same time, Jonathan Koomey, a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, recently told the “How Green is the Internet?” symposium held by Google that the internet uses around 10% of the world’s electricity (up 25% in a little over a decade). Put this ever-increasing deluge of information together with the corresponding increase in energy consumption and you have a real cause for concern for those who operate data centers at scale – indeed, Gartner research recently identified skyrocketing energy demands and costs as the #1 concern for data center operators globally.

So what is the solution? The traditional approach has been to make data centers as efficient as possible. Data center operators have invested heavily in creating highly efficient facilities – I had the pleasure of visiting one of the best examples of ultra-efficiency last year at the SuperNAP facility in Las Vegas. The other approach has been at the server level – in recent years there has been a plethora of low-power server developments; just the other week Servergy released a new class of servers boasting the highest performance-per-watt available. According to the company, their new server line saves up to 16X in server energy and space costs over traditional systems.

That’s all well and good, but what happens when you think outside the square? When you have the ability to look at a problem space conceptually rather than through commercial imperatives? When you’re not tied to a corporate entity but are immersed in the pure research atmosphere of a university? Researchers at Cambridge University had that opportunity and came up with something fascinating: a future internet infrastructure that doesn’t need servers to function.

As covered over on GigaOm, researchers have found a way to deliver content without the need for servers to house and deliver the data. This move away from centralized computing is part of a project funded by the European Union called Pursuit. The so-called Pursuit internet is similar to the approach used by BitTorrent or Skype – information is shared directly between individual users rather than through a third-party server. The concept replicates data in multiple locations to increase efficiency and make the network as a whole more robust.
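To make the label-versus-server distinction concrete, here is a toy sketch in the spirit of that model: content is published under a flat label derived from the data itself, replicated across peers, and a consumer asks for the label rather than contacting any particular server. The class and method names here are hypothetical illustrations, not Pursuit’s actual API.

```python
# Toy sketch of label-based content lookup with replicas (illustrative only).
import hashlib
import random

class LabelResolver:
    def __init__(self):
        # label -> set of peers currently holding a replica of that content
        self.replicas = {}

    def publish(self, data: bytes, peer: str) -> str:
        # A self-certifying label: derived from the content, not a server name
        label = hashlib.sha256(data).hexdigest()
        self.replicas.setdefault(label, set()).add(peer)
        return label

    def subscribe(self, label: str) -> str:
        # Any replica will do; picking one at random spreads the load
        return random.choice(sorted(self.replicas[label]))

resolver = LabelResolver()
label = resolver.publish(b"hello world", peer="alice")
resolver.publish(b"hello world", peer="bob")  # same content, second replica
peer = resolver.subscribe(label)              # returns either "alice" or "bob"
```

Because the label is computed from the content, both publishers end up under the same entry automatically – which is what makes replication cheap and the network robust to any single peer disappearing.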

The research team have already created a proof of concept of Pursuit – given current concerns about NSA access to centralized data, the ideas raised by Pursuit obviously resonate with those who keep a watching eye on privacy. Of course, Pursuit doesn’t answer the concerns of those who suspect there is widespread capturing of data on the public internet. The technical manager for Pursuit, Dirk Trossen, responded to this concern, saying that:

Similar to today, if you designed the deployment appropriately, censorship and surveillance would become very difficult (using encryption, ‘hiding’ behind labels without using meaningful names, or changing the name-to-label association rapidly). However, censorship and surveillance can also become easy by centralising the main components. All this, however, is similar to today’s internet. The surveillance unearthed by Snowden was enabled at large by the centralisation of main components of today’s internet (in U.S. jurisdiction). There are certain architectural measures one can take to circumvent that, but it’s hard nonetheless. I don’t think that it would be much different in a Pursuit world, if you don’t have the societal push for reduced surveillance. In short: censorship and surveillance is a policy/society problem.

Pursuit aims to replace the protocols used today on the public internet. This is in contrast to a project out of PARC, the “content centric network,” which would run alongside TCP/IP. For those who raise questions about the sheer volume of data needing to be stored and served, Trossen points out that the massive increase in the number of connected devices, combined with Pursuit’s ability to handle files split into multiple “chunks,” means that this isn’t a major barrier to adoption.
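The chunking idea is simple enough to sketch: a file is cut into pieces, each of which gets its own content-derived label, so different chunks can be stored on, and fetched from, different devices and later reassembled in order. The chunk size and naming scheme below are assumptions for the example, not Pursuit’s real parameters.

```python
# Illustrative sketch of splitting a file into independently addressable chunks.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use much larger chunks

def chunk(data: bytes, size: int = CHUNK_SIZE):
    """Yield (label, piece) pairs so each piece can be fetched on its own."""
    for i in range(0, len(data), size):
        piece = data[i:i + size]
        yield hashlib.sha256(piece).hexdigest(), piece

pieces = list(chunk(b"hello world"))   # 11 bytes -> 3 chunks of 4, 4, 3 bytes
# Reassembly just concatenates the chunks in their original order
reassembled = b"".join(p for _, p in pieces)
assert reassembled == b"hello world"
```

Each labeled chunk can live on any device that cares to replicate it, which is why a swelling population of connected devices is an asset rather than a storage problem in this model.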

Of course there are plenty of other barriers to Pursuit gaining a toehold, in particular the fact that some of the biggest sources of stored data – Google, Facebook and even Twitter – make money specifically out of storing, analyzing and acting upon user data. Any move to distributed storage outside of their control would likely be met with resistance.

Still, as a pure research project, Pursuit is very interesting. I don’t think the data center operators (or the efficient server vendors, for that matter) have anything to worry about just yet, though.



Posted by plates55 - December 16, 2013 at 9:32 am

Categories: Trends

