Just published on 451 Research (subscribers only)
Environmental group Greenpeace recently released its latest take on the ecological impact of large datacenter operators. In Clicking Clean: A Guide to Building the Green Internet, the organization asserts that while there continue to be some laggards among the pack of large datacenter operators, many have taken decisive steps to improve their environmental stewardship.
In a clear switch of emphasis, Greenpeace reserves the bulk of its criticism for what it describes as a group of monopolistic utilities. The group says these utilities are still not investing sufficiently in renewable energy generation, or creating the kinds of tariffs to incentivize datacenters to use more renewables. Greenpeace devotes less attention to the fact that the relationship between datacenters and utilities is shifting, in some cases dramatically, with implications for renewable energy generation and use. (We examined some of these changes in a recent report).
Just published on 451 Research (for subscribers only)
Colocation datacenter operator Green Mountain Data Centre has two facilities in Norway. Its first facility, DC1, close to Stavanger, received Tier III Certification of Constructed Facility from Uptime Institute (a division of The 451 Group) in Q2 2015. The site has been operational since 2013, and its customers include Norway’s largest financial services company. The facility is notable for being built inside a former NATO munitions store. Other innovations include the use of seawater cooling and hypoxic fire suppression.
Early Adopter Snapshot
Many new datacenter builds attempt to accommodate the often competing ambitions of energy efficiency and resiliency. The balance for colocation facilities is even harder to achieve, with customers often demanding high levels of resiliency while mandating PUEs and operational costs commensurate with extreme energy efficiency. Green Mountain appears to have managed what few others have, thanks to the site’s history as a NATO munitions store and the availability of cheap, reliable and low-carbon hydropower. The addition of seawater cooling helps to further reinforce the facility’s green credentials.
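For context, PUE (Power Usage Effectiveness) is simply the ratio of total facility power to IT equipment power, so every watt spent on cooling and power distribution pushes it above the ideal of 1.0. A minimal sketch of the metric; the figures below are hypothetical, purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to IT equipment; real
    facilities run above that because of cooling, UPS and distribution
    losses. Lower is better.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: 1,200kW total facility draw against a 1,000kW IT load
print(round(pue(1200, 1000), 2))  # 1.2
```

The tension described above comes from the fact that resiliency features (redundant UPS, chillers and distribution paths) all add to the numerator without adding IT load.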
HP thinks so. The company has no specific product plans yet, but it talked me through how its existing Apollo 8000 high-performance computing server system could be adapted to cool networking equipment, and potentially, eventually, storage.
Here’s an excerpt from my report.
“Nearly a year after the launch of its liquid-cooled Apollo 8000 server, HP reports strong interest in the system from HPC facilities, as well as some service providers. The company declined to provide specific shipment numbers but notes that it has a healthy pipeline for the 8000. However, HP believes there needs to be a significant readjustment in the budgeting process for datacenter projects – to reflect the capital and energy-efficiency gains from using direct liquid cooling (DLC) – before the technology becomes more widely adopted.
The company is also considering how to apply its take on DLC technology to its networking and storage systems, as well as those of its partners. This would help overcome one of the roadblocks to greater DLC uptake: although DLC technology has the potential to eliminate the need for mechanical air-based cooling for servers, facilities still require some perimeter air cooling for networking and storage systems.”
451 Research clients can get the full report here:
Am looking forward to attending my first Open Compute Summit in San Jose, California next week.
Aside from escaping the European winter for a week, the line-up of speakers looks great, and the event seems to be morphing into one of the must-attend events (along with Green Grid US, Datacenter Dynamics London and Uptime Symposium US) in the datacenter calendar.
I am there principally on a fact-finding, and bridge-building, mission representing the European Commission-funded CoolEmAll project. The project is entering its final phase and we are keen to establish tighter links with industry initiatives such as Open Compute so that the research and technology developed in the project can be exploited by others, as well as adhere to industry standards.
I will also be interested to hear the panel discussion on the use of renewable energy in the datacenter, which fits with another EU project – RenewIT – in which 451 is involved. It’s still early days for RenewIT, as the project only kicked off last November, but it’s good to see the use of renewable energy being debated at events such as Open Compute.
I will be writing up a number of reports for 451 Research on the event and will provide summaries and links to those in the coming weeks.
Iceotope is a UK-based company I have been tracking since it first emerged in 2009, and then disappeared, before resurfacing in 2012 backed by Peter Hopton (of VeryPC fame). I had been aware of the concept of cooling datacenters with liquid rather than air – the technology dates back to the mainframe era – but it has largely remained a niche approach found only in high-performance computing and supercomputing systems. It’s fair to say that Iceotope probably did more than any other company to turn me on to the idea that this could be a disruptive technology in enterprise datacenters too (although there are plenty of reasons why it might not be). So it was good to see this week that others have bought into its approach – to the tune of $10m – including datacenter giant Schneider Electric. I will be following up with Iceotope later this week, and am also working on a Long Format Report for 451 Research on the 15-plus companies developing direct liquid cooling technology.
I spoke with European datacenter start-up Eco4Cloud a couple of weeks back and the report based on that conversation has just been published (for 451 subscribers) on 451 Research.com. Eco4Cloud (E4C) is a spinoff from the Institute for High Performance Computing and Networking (ICAR) of Italy’s National Research Council (CNR) and the University of Calabria. The company has developed software designed to deal with virtual machine (VM) sprawl and low server-utilization rates. E4C’s software effectively automates the real-time consolidation of VMs onto the minimum number of physical servers. The remaining servers, with a low number of VMs or none at all, can be power-managed dynamically based on workload variations, or even retired. E4C has received early stage funding from two external investors, and is looking to attract new investment and partners in 2014.
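Conceptually, the consolidation E4C describes is a bin-packing problem: squeeze the VM loads onto as few physical servers as possible so the rest can be powered down. A minimal first-fit-decreasing sketch of the idea (a generic heuristic, not E4C’s actual proprietary algorithm; all names are illustrative):

```python
def consolidate(vm_loads, server_capacity):
    """First-fit-decreasing placement of VM loads onto servers.

    Returns a list of servers, each a list of the VM loads placed on it.
    Servers left empty after consolidation are candidates for dynamic
    power management or retirement. Generic illustration only - not
    E4C's actual algorithm, which works in real time against live
    workload data.
    """
    servers = []  # each entry: [remaining_capacity, [placed_loads]]
    for load in sorted(vm_loads, reverse=True):
        for server in servers:
            if server[0] >= load:          # first server with room wins
                server[0] -= load
                server[1].append(load)
                break
        else:                              # no server had room: start a new one
            servers.append([server_capacity - load, [load]])
    return [placed for _, placed in servers]

# Ten lightly loaded VMs (20% of a host each) pack onto two hosts, not ten
placement = consolidate([0.2] * 10, 1.0)
print(len(placement))  # 2
```

The pay-off is the difference between the original server count and `len(placement)`: the freed machines are the ones that can be power-managed or retired.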
I took part in this webcast with TSO Logic in December 2013. The webcast looked at the importance of IT energy efficiency to lowering datacenter operating and capital costs. 451 Research gave an overview of some of the main themes and trends in this area before TSO Logic gave a detailed account of how its software can be used to identify and eliminate underutilized servers, and power-manage IT equipment.
Here’s an excerpt from my latest report for the Datacenter Technologies Group at 451 Research:
Earlier this year, Microsoft announced that it would become carbon neutral by FY 2013. The company says it will achieve this goal through a three-pronged strategy: be lean, be green and be accountable. Practically, this translates into ongoing efforts such as improving datacenter energy efficiency, reducing air travel, improving the energy efficiency of company facilities, and investing in renewable energy and offset projects.
The company has also set an internal price for carbon and a chargeback mechanism for individual departments. The aim is to identify which departments create the most carbon and provide incentives to curb emissions. The scheme will be applied to a range of departments, including datacenter operations. It could have repercussions on the future siting of facilities. Rather than selecting areas with the cheapest energy (and tax incentives), Microsoft may also have to consider the carbon intensity of the utilities serving that area (something environmental groups such as Greenpeace have campaigned for).
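The mechanics of such a chargeback are straightforward: a department’s energy use, multiplied by the carbon intensity of the grid serving it, gives its emissions, which are then billed at the internal carbon price. A minimal sketch; all of the figures below are hypothetical, since Microsoft has not published its internal carbon price or formula:

```python
def carbon_chargeback(energy_mwh: float,
                      grid_intensity_t_per_mwh: float,
                      carbon_price_usd_per_tonne: float) -> float:
    """Internal carbon fee for a department's energy use.

    energy * grid carbon intensity = tonnes of CO2; tonnes * internal
    price = the department's chargeback. All inputs here are
    hypothetical illustrations, not Microsoft's actual numbers.
    """
    tonnes_co2 = energy_mwh * grid_intensity_t_per_mwh
    return tonnes_co2 * carbon_price_usd_per_tonne

# A datacenter drawing 10,000 MWh/yr on a 0.5 tCO2/MWh grid at $10/tonne
print(carbon_chargeback(10_000, 0.5, 10.0))  # 50000.0
```

Note that the grid-intensity term is exactly why siting matters: the same IT load on a low-carbon (e.g. hydro-heavy) grid generates a far smaller internal bill.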
For more go to 451 Research: Microsoft imposes a carbon price on its datacenters and wider business
Datacenter modularity is one of the hot-topics for us at 451 Research’s Datacenter Technologies team. I have just completed a couple of reports back to back on what HP and Dell offer in this area. There are certain similarities but also big differences. Dell is going down a mainly services route while HP sees containerized datacenters as a natural extension of the server, rack and row.
However, some critics have written off containers as a dead-end technology. We think containers are selling, and will continue to do so for the immediate future, but prefabricated IT, power and cooling modules (similar to Dell’s offering or HP’s Butterfly product) are more likely replacements for traditional bricks-and-mortar builds.
Check out these two reports for more (451 Research subscribers only, I’m afraid):
451 Research report: Dell eschews containers in favor of modular datacenter services
Dell’s competitors sell a range of modular datacenter products, such as Hewlett-Packard’s Performance Optimized Datacenter and IBM’s Portable Modular Data Center. However, Dell’s Modular Data Center group (part of Dell Data Center Solutions, or DCS) prefers to provide products and services with an emphasis on customization and best fit for the customer. (MORE)
451 Research report: HP talks up Performance Optimized Datacenters, but should it be chasing the Butterfly?
Hewlett-Packard recently asserted some ambitious potential market-size data for its container-based Performance Optimized Datacenters (PODs). The supplier believes PODs could be a relevant option for up to 45% of new total capital expenditure on datacenters over the next few years (up to $13bn in 2012). (MORE)
To subscribe or learn more about the Datacenter Technologies practice, apply for trial access here.
Our latest report for The 451 Group on energy-management startup JouleX looks at how the company might benefit from the US federal government’s datacenter consolidation plans. Consolidation is obviously about shutting down surplus resources – from individual devices to entire datacenters – but before you can start that process, and realize the savings, you need to know what you have to begin with.
Unfortunately, individual agencies appear to be struggling with that initial work. Asset management tools exist to help with this process, but these systems need to be filled with information before they can be of any use, and that takes time and resources. JouleX, and its partner Triton, believe their technology can help with this initial asset-discovery process, as well as with long-term energy-efficiency monitoring and control.
However, they are far from the only game in town, and all the other major IT consultants will be hoping to grab a slice of the ongoing consolidation work.
JouleX alliance aims to benefit from federal datacenter squeeze
Energy management startup JouleX and sustainability consultant Triton Federal Solutions announced a partnership in the third quarter of 2011. Triton is acting as a reseller for the JouleX Energy Manager (JEM) products, and the partnership could help JouleX generate sales of its technology to US government agencies. The government plans to consolidate up to 800 of its 2,000 datacenters and improve the efficiency of the remainder. Savings could be up to $23.71bn per year. However, new efficient technologies will be required to identify assets for consolidation and optimize the remaining sites. Securing a slice of this effort could be lucrative for JouleX and Triton.
The 451 Group subscribers can access the report @ The 451 Group.com. Non-subscribers can apply for access here.