
Back To The Future

At a recent data centre cooling seminar the discussion was all about where the industry would go next once current air-cooling technologies and techniques have exhausted the efficiency gains available over today’s standard DX units.  The consensus seemed to be that liquid cooling, particularly liquid immersion cooling, would inevitably reign supreme, as it has two major advantages over other cooling techniques:

  1. Certain liquids can be up to 4,000 times more effective at removing high heat loads than air
  2. The liquid in which a server is submerged can be delivered as hot as the equipment’s maximum operating temperatures allow, reducing the cost of cooling (sometimes eliminating it altogether) and, in addition, providing a good heat source that can be used with thermocouples or other engineering solutions to generate electricity (see: http://phys.org/news/2013-05-green-conversion-electricity.html)
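
For a rough sense of scale on that heat-reuse idea, here is a minimal sketch of the thermodynamic upper bound on conversion; every number in it is an assumption for illustration, not a figure from the seminar:

```python
# Rough upper bound on electricity recoverable from warm coolant, using the
# Carnot limit. Real thermoelectric devices achieve far less than this bound.
# All figures below are illustrative assumptions.

T_HOT = 50 + 273.15   # assumed coolant temperature leaving the servers (K)
T_COLD = 20 + 273.15  # assumed ambient heat-sink temperature (K)

carnot_efficiency = 1 - T_COLD / T_HOT   # ~0.093, i.e. under 10%

heat_rejected_kw = 200.0                 # assumed IT heat load available (kW)
max_electric_kw = heat_rejected_kw * carnot_efficiency

print(f"Carnot limit: {carnot_efficiency:.1%}")
print(f"Upper bound on recoverable power: {max_electric_kw:.1f} kW")
```

Even at this generous upper bound the recoverable electricity is modest, which is why direct heat re-use (heating offices, for instance) is often the more practical option.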

So who’s doing this?  Well, it appears that this market is beginning to expand quite rapidly as high-performance computing (HPC) becomes more ubiquitous in industry.  Data Center Knowledge reports (http://www.datacenterknowledge.com/archives/2013/07/01/the-immersion-data-center/) that CGG have just installed a futuristic-looking data centre in Houston, Texas, that wouldn’t look out of place in a sci-fi movie.  Computer circuits are immersed in mineral oil ‘baths’, making the data centre eerily quiet.   A small British start-up, Iceotope (http://www.iceotope.com/), has been winning awards for its innovative ‘data centre in a rack’ design, which is now beginning to gain traction in HPC environments.

If you think you’ve seen all this before, well, you’re right: Cray were putting their supercomputers into liquid in the 1980s and IBM have also been doing it for decades.  It largely fell out of favour as data centre managers became averse to the risk of water in their data centres.  Today’s technologies are well proven in terms of leak prevention, and I’ve yet to hear of anyone who has experienced a leak, though I’m sure there are some examples out there.

Something to think about anyway when you design your next data centre…where will you put the pipes, and how will you re-use the heat?

University of Hertfordshire wins award for sustainability excellence

7 November 2011: The University of Hertfordshire has won a prestigious Green Gown Award 2011 for its pioneering data centre refurbishment project. This outstanding achievement was announced on 3 November at the national awards ceremony held at the Grand Connaught Rooms in London.

Run by the Environmental Association for Universities and Colleges (EAUC), the Green Gown Awards (GGA) recognise exceptional environmental initiatives undertaken by universities, colleges and the learning and skills sector across the UK. With 240 applications this year, a rise of 25% on 2010, the GGA are firmly established as a prestigious recognition of sustainability excellence in the further and higher education sectors.

The Green Gown Awards cover 13 award categories and the University has won the coveted Green ICT category. This category recognises the growing environmental importance of ICT within the sector; it encompasses a variety of actions that help minimise energy consumption, carbon emissions, waste generation and other environmental impacts associated with ICT use.

After months of scrutiny the judges said the UH entry gave “Impressive examples of best-practice features which could easily be adapted by others.” At the Awards Ceremony a delighted Steve Bowes-Phipps, UH Data Centre Manager, was presented with the impressive GGA trophy which will be proudly displayed in the College Lane Learning Resources Centre.

Steve said: “Once again, the Data Centre Refurbishment project has been recognised as a beacon of good practice both in the industry and in the HE/FE sector.” He praised his project team colleagues from across the University saying the award was fully deserved: “This has been an important 3-year development for the University which has delivered not only a world-class green data centre but also an operationally efficient one.”

Visit the Green Gown Awards website

More on Data Centre Best Practices II

Regular readers of this blog will know that while we have a sector-leading green and efficient data centre on one of our campuses, the other data centre is somewhat backward in that regard, and I’ve spent a lot of effort trying out various ways of improving its efficiency. I’ve resorted to a fancy new type of floor tile and fitted grommets under the racks to block the holes the cables poke through, helping to sustain static pressure in the floor plenum.

We’ve been making these changes blindly, though, as we had no meters with which to measure power usage, and calculating the PUE is next to impossible because the building meters are not granular enough.

The good news is that we are finally starting to make some progress! Last Friday, we had power meters installed. We needed four: (1) Main Facility Supply, (2) PDB A, (3) PDB B and (4) Utility Board. Meter 4 captures the usage of the lighting, but also of an external comms room that takes its power from our UPS – a legacy piece of infrastructure that could have been architected differently had I been there when it was designed.

Unfortunately I don’t have a network connection to Meter 1 as yet, but I do have the other three meters connected up and recording. The meters we are using are the same as we’ve used elsewhere: Cube IP/400s. Does anyone know a way of capturing data from these devices automatically, without manual cut and paste? If you do, please let me know – one possible approach is sketched below. They store about 2.5 months of raw data and a year of totals for trending purposes. They can also calculate cost in monetary terms as well as in carbon.
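
For what it’s worth, here is one possible shape of an automated capture script. It assumes, purely hypothetically, that the meter’s web interface serves its log as CSV at a known URL – the Cube IP/400’s actual interface may differ, so treat this as a sketch rather than a recipe:

```python
# Sketch of automated meter-log capture. The URL and CSV format below are
# hypothetical assumptions, NOT the documented Cube IP/400 interface.
import csv
import io

import requests


METER_URL = "http://meter-pdb-a.example.local/log.csv"  # hypothetical endpoint


def fetch_readings(url: str) -> list[dict]:
    """Download the meter's logged readings and parse them as CSV rows."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return list(csv.DictReader(io.StringIO(response.text)))


if __name__ == "__main__":
    # Poll once; a cron job or scheduler would make this periodic.
    for row in fetch_readings(METER_URL):
        print(row)
```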

Now we can calculate our PUE and really know what impact our efficiency improvements are making (a minimal example of the arithmetic is below)…more to follow…
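
To make that arithmetic concrete, here is a minimal sketch of the calculation using the four meters described above. All the kW figures are placeholders, and subtracting the external comms room from the facility total is one defensible treatment, not the only one:

```python
# Minimal PUE arithmetic from the four meters described above.
# All values are placeholder assumptions; meter names follow the post.

main_facility_kw = 180.0   # Meter 1: total power entering the facility (assumed)
pdb_a_kw = 70.0            # Meter 2: IT load on PDB A (assumed)
pdb_b_kw = 65.0            # Meter 3: IT load on PDB B (assumed)
utility_kw = 12.0          # Meter 4: lighting + external comms room (assumed)
external_comms_kw = 5.0    # portion of Meter 4 feeding the comms room (assumed)

it_load_kw = pdb_a_kw + pdb_b_kw

# The external comms room is not part of this data centre's load, so one
# defensible treatment is to exclude it from the facility total:
facility_kw = main_facility_kw - external_comms_kw

pue = facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")
```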

Simplifying the CRC Energy Efficiency Scheme: Next Steps

(with thanks to EAUC for providing this news item)

The Department of Energy & Climate Change (DECC) has recently published its proposals for the simplification of the Carbon Reduction Commitment (CRC) Energy Efficiency Scheme.

The scheme has been criticised by many for its complexity, and as a result, the Government committed itself to simplifying the CRC and published a number of discussion papers earlier this year. DECC has now summarised the proposals for a simplified scheme, intended to be applied from Phase 2 (2013) onwards.

The most significant proposals include:

  • Making the qualification process easier: Under the original scheme, qualification was based on two criteria: (i) the presence of one or more settled half-hourly meters; and (ii) a total electricity consumption of at least 6,000 MWh measured through such meters. Under the simplified scheme, participants will just have to show they use a certain amount of electricity through the qualifying meter. Whether this will differ in practice from the original rule is currently unknown.
  • Reducing the number of fuels covered by the scheme: Currently participants are required to report on their energy supplies from a list of 29 fuels. DECC now proposes to reduce this number to four: electricity, gas, kerosene and diesel (the latter two only where used for heating purposes).
  • Moving to fixed-price allowance sales: The initial scheme provided for an allowance auction from Phase 2 onwards. The number of allowances would have been capped following the auction, with an option to purchase additional allowances on a secondary market. The current proposals, however, would establish two sales per year at a fixed price per allowance. This removes the need for businesses to devise auctioning strategies, although it is unclear whether there will still be room for a secondary market and how that market would evolve. (A rough illustration of the allowance arithmetic follows this list.)
  • Simplifying organisational rules: Previously, participation was based on the highest parent company. This caused problems for many business structures, particularly private equity and other investment funds, as it did not reflect the natural structure or processes of these organisations. Under the simplified scheme, although qualification would still be determined at the highest parent company, organisations will be permitted to participate as “natural business units”. What counts as a natural business unit is not defined in the proposals.
  • Removing overlaps between the CRC scheme and other schemes: Any organisations or sites covered by a Climate Change Agreement or the EU Emissions Trading Scheme will be automatically exempt from the CRC.
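
To put rough numbers on the fixed-price sale item: a back-of-envelope sketch in which every input is an assumption for illustration (the actual allowance price and emission factors are set by DECC, not by this blog):

```python
# Back-of-envelope CRC allowance cost. All inputs are illustrative assumptions.

electricity_mwh = 6000               # annual use at the qualification threshold
kg_co2_per_kwh = 0.5                 # assumed grid emission factor
allowance_price_gbp_per_tonne = 12   # assumed fixed allowance price (GBP/tCO2)

tonnes_co2 = electricity_mwh * 1000 * kg_co2_per_kwh / 1000  # kWh -> kg -> t
cost_gbp = tonnes_co2 * allowance_price_gbp_per_tonne

print(f"{tonnes_co2:,.0f} tCO2 -> GBP {cost_gbp:,.0f} in allowances per year")
```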

Despite numerous calls from stakeholders, DECC has decided against changing the rules governing the landlord and tenant relationship under the scheme.

Following this review, the Government now intends to publish draft legislative proposals in early 2012 for formal public consultation.

To view the full proposals, click here.

Data Centre Shared Services

The university sector has been trying for some time now to find cost-effective ways of sharing ICT provision and services across institutions.  Most of these attempts have failed due to intransigence from Her Majesty’s Revenue & Customs (HMRC), who wish to impose a double-VAT burden on institutions availing themselves of such services, where no VAT may have been payable originally.  Consider the example of a shared-service data centre between two universities:

  • Currently both universities separately and routinely incur a liability for VAT on the purchase of services, subject to a partial exemption calculation.
  • If shared services are provided by a third party (in the case of a separate legal entity, or of one university providing the services to the other), there is the potential for an additional VAT cost to be created, since the provider (if a university) would only be able to recover input tax in line with its partial exemption method and would need to pass the irrecoverable VAT on to the other university as part of the recharge.
  • As this would be a taxable supply, VAT would be charged on the total recharged amount, including the irrecoverable VAT.
  • If the organisation set up to deliver the shared service employed the systems administrators transferred across from the institution(s) no longer requiring their services in-house, then the VAT burden goes from zero to the double VAT described above.

Not being an accountant, I can’t tell you exactly what this additional levy would amount to, but I can assure you that, at a VAT rate of 20%, it could prove uneconomic and detrimental to the business case for a shared-service initiative (a worked example follows below).  Is it any wonder that no university has done this yet!
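
Here is a worked example of the double-VAT effect described above, with all figures assumed purely for illustration:

```python
# Worked example of the double-VAT cascade. All figures are assumptions.

VAT_RATE = 0.20

base_cost = 100_000                    # provider university's own service costs (GBP)
input_vat = base_cost * VAT_RATE       # 20,000 paid by the provider on its purchases
recovery_rate = 0.05                   # assumed partial-exemption recovery rate
irrecoverable_vat = input_vat * (1 - recovery_rate)

recharge = base_cost + irrecoverable_vat   # irrecoverable VAT built into the recharge
output_vat = recharge * VAT_RATE           # VAT then charged on the whole recharge
total_to_consumer = recharge + output_vat  # much of which is also irrecoverable

print(f"Recharge: GBP {recharge:,.0f}; VAT on recharge: GBP {output_vat:,.0f}")
print(f"Total cost to the consuming university: GBP {total_to_consumer:,.0f}")
```

On these assumed figures the consuming university pays around GBP 142,800 for GBP 100,000 of underlying service, because VAT is effectively levied twice along the chain.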

Fortunately, a ray of light has appeared in the form of the White Paper issued in June 2011 by the UK Coalition Government (http://bit.ly/oBPsZE).  The White Paper recommends reviewing the VAT burden on shared services in particular, and HMRC is asking for feedback on the proposal in its review, published here: http://bit.ly/paBpI8

This may be the chance to influence taxation policy that we have been waiting for and then maybe the creation of a “U-Cloud” for universities will come a step closer to reality.

PUE Certification

I recently took the step of reporting our PUE figures to the Green Grid for visibility purposes.  This means we can now add the subscript codes “L2, MD” to our PUE, so that others will know how we have measured it and how accurately.  The following description (courtesy of the GG) explains what the mnemonics mean:

PUE Category 0

This is a demand-based calculation representing the peak load during a 12-month measurement period. IT power is represented by the demand (kW) reading of the UPS system output (or sum of outputs if more than one UPS system is installed) as measured during peak IT equipment utilization. Total data center power is measured at the data center boundary (e.g. point of electric feed for Mixed-Use Data Centers or utility meters for Dedicated Data Centers) and is typically reported as demand kW. As this is a snapshot measurement, the true impact of fluctuating IT or mechanical loads can be missed. However, consistent measurement can still provide valuable data that can assist in managing energy efficiency. PUE Category 0 may only be used for all-electric data centers, i.e. it cannot be used for data centers that also use other types of energy (e.g. natural gas, district chilled water, etc.).

PUE Category 1

This is a consumption-based calculation. The IT load is represented by a 12-month total kWh reading of the UPS system output (or sum of outputs if more than one UPS system is installed). This is a cumulative measurement and requires the use of kWh consumption meters at all measurement points. The total energy must include all fuel types that enter the data center boundary (electricity, natural gas, chilled water, etc.). In a Dedicated Data Center building, this will include all energy captured on utility bills; for a Mixed-Use Data Center, all the same fuels must be sub-metered if they cross into the data center boundary. The annual reading should reflect 12 consecutive months of energy data. This measurement method captures the impact of fluctuating IT and cooling loads and therefore provides a more accurate overall performance picture than PUE Category 0.

PUE Category 2

This is a consumption-based calculation. The IT load is represented by a 12-month total kWh reading taken at the output of the PDUs supporting IT loads (or sum of outputs if more than one PDU is installed). This is a cumulative measurement and requires the use of kWh consumption meters at all measurement points. The total energy is determined in the same way as for Category 1. This measurement method provides additional accuracy of the IT load reading by removing the impact of losses associated with PDU transformers and static switches.

PUE Category 3

This is a consumption-based calculation. The IT load is represented by a 12-month total kWh reading taken at the point of connection of the IT devices to the electrical system. This is a cumulative measurement and requires the use of kWh consumption meters at all measurement points. The total energy is determined in the same way as for Category 1. This measurement method provides the highest level of accuracy for measurement of the IT load by removing all impact of losses associated with electrical distribution components and non-IT devices, e.g. rack-mounted fans.

The “M”, “D” or “Y” after the Category number gives the frequency with which the measurements are taken (i.e. Monthly, Daily or Yearly).
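
To illustrate the difference the measurement point makes, here is a minimal sketch of the Category 1 and Category 2 calculations; the kWh figures are placeholders, not our own:

```python
# Sketch of the Category 1 vs Category 2 PUE calculations described above.
# All 12-month kWh figures below are placeholder assumptions.

total_facility_kwh = 1_300_000  # all energy crossing the data centre boundary
ups_output_kwh = 1_000_000      # Category 1 IT reading: UPS output
pdu_output_kwh = 960_000        # Category 2 IT reading: PDU outputs
                                # (smaller: excludes PDU transformer losses)

pue_cat1 = total_facility_kwh / ups_output_kwh  # 1.30
pue_cat2 = total_facility_kwh / pdu_output_kwh  # ~1.35

print(f"PUE Category 1 = {pue_cat1:.2f}")
print(f"PUE Category 2 = {pue_cat2:.2f}")
```

Note that the higher category gives a slightly higher (and more honest) PUE, because measuring closer to the IT devices strips distribution losses out of the “IT load” denominator.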

Blanking Panels

First post of the year – so I wish all my regular readers a very Happy and Prosperous one!

I have been busy in the data centre fitting blanking panels recently – we didn’t quite have enough (we do now), but I did manage to plug the main hot aisle almost completely.  This seems to have had a great effect, as my PUE has now dropped from 1.33/1.34 to 1.23/1.24 and I’m *still* running on the “Summer” setting on the CRAHs!!!  Our new support and maintenance provider started on 4 January 2011, so this will be sorted out soon.  It’s nice to see, though, that we are approaching our target PUE – if only for a month or so until the weather starts to warm up again.  This gives me confidence that once the data centre is in balance, we should be able to achieve the annualised PUE of 1.22 (a rough sense of what that drop is worth is sketched below).
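
For a rough sense of what that PUE drop is worth, a quick back-of-envelope sketch; the IT load is assumed for illustration, as I haven’t quoted it here:

```python
# What a PUE drop from ~1.335 to ~1.235 is worth, assuming (purely for
# illustration) a constant 100 kW IT load.

it_load_kw = 100.0                          # assumed IT load
saving_kw = it_load_kw * (1.335 - 1.235)    # 10 kW less facility overhead
annual_saving_kwh = saving_kw * 8760        # hours in a year

print(f"~{annual_saving_kwh:,.0f} kWh/year saved at this IT load")
```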

On another (related) note: I tried to fit some in-fill panels yesterday.  These are panels that supposedly block the sides of extra-wide cabinets to prevent air escaping around the insides of them.  We have six of these in our main comms room.  I have to say, I spent about an hour in there trying these panels in every configuration I could conceive, and I cannot see how they attach to the racks.  They are the correct type – from the same manufacturer – so I can’t see why it’s so difficult!  I’ll have to get someone in to assist.  If you are considering purchasing these – ask for a manual!

UH Wins Datacentre Leaders Award 2010

Hot off the press – I was honoured to pick up the award for “Innovation in the Micro-Data Centre” last night at the Lancaster Hotel in London.  We entered this category last year and, although we reached the finals, our project wasn’t complete and therefore couldn’t present a strong enough case – now it can…and has!

Recognition is a curious thing – I’ve spent a large part of this year going to conferences and speaking on this project, and while it is easy to dismiss awards as an industry giving itself a pat on the back, there are two reasons why I think they matter:

1.  The more awareness an award generates, the more likely other organisations are to realise just what is being achieved out in the industry and to seek to emulate it.  A lot of great work is being done, but it rarely breaks cover until publicised.  Green IT is strangely prevalent only in pockets of organisations and may even come about by accident (under the guise of “cost savings”) rather than by design.

2.  Awards like this are not just handed out like tea and biscuits at a village fete; having a prestigious panel of industry experts review and assess your project means that what you can give back to the industry in terms of advice and help carries a stamp of approval.  As a university, we believe it is vitally important to impart and disseminate knowledge and learning to where it’s needed most.  This award at least shows we know what we’re talking about and can also turn knowledge into practical results.

Overall a great year for us, and while I plan for two power outages over the next three weeks, I would like to wish all readers of this blog a very Merry Christmas, a Happy New Year or seasonal good cheer – wherever you happen to be!

We’re not finished yet! (Steve Phipps)

With his contract ended, Richard may have left us to build data centres in Liverpool, but the project rumbles on here.  We have a delay in Practical Completion due to an issue with the humidifier/dehumidifier, which brings us to an interesting discussion…

In the past, data centre environments were often operated at very low temperatures (18–20°C) and 50% relative humidity.   The ASHRAE Thermal Guidelines book establishes classes of IT equipment, of which Class 1 is typically data centre-specific and relates to servers and storage products.

ASHRAE environmental specifications include both recommended values and allowable values.

Recommended Environmental Conditions: Facilities should be designed and operated to target the recommended range.

Allowable Environmental Conditions: Equipment should be designed to operate within the extremes of the allowable operating environment. In addition to the allowable dry-bulb temperature and humidity ranges, the maximum dew point and maximum elevation values are part of the allowable operating environment definitions.

For a Class 1 Environment, the specifications are:

Equipment environment specifications (Class 1):

  • Allowable dry bulb: 59 to 90 °F (15 to 32 °C)
  • Recommended dry bulb: 68 to 77 °F (20 to 25 °C)
  • Allowable relative humidity: 20 – 80%
  • Recommended relative humidity: 40 – 55%

The Equipment Environmental Specifications are based on the equipment air inlet.

What this means for the data centre manager is that he or she no longer needs to run the data centre at a low temperature and within a very restrictive humidity range.  20–80% humidity is a world away from 50% close control, and should you adhere to these specifications and programme your humidifiers appropriately, you may save yourself a significant sum not only in operational costs but also in carbon output.  A word of warning, however – old servers may not operate well at such extremes of humidity, so take care to check the specs before applying this (a minimal range check is sketched below).
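
As a quick way of applying the figures above, here is a minimal sketch that checks a measured inlet condition against the Class 1 ranges; the thresholds are transcribed straight from the specification quoted earlier:

```python
# Classify a measured equipment air-inlet condition against the ASHRAE
# Class 1 ranges quoted above (thresholds transcribed from the table).

def classify_inlet(temp_f: float, rh_pct: float) -> str:
    """Return 'recommended', 'allowable' or 'out of range' for Class 1."""
    if 68 <= temp_f <= 77 and 40 <= rh_pct <= 55:
        return "recommended"
    if 59 <= temp_f <= 90 and 20 <= rh_pct <= 80:
        return "allowable"
    return "out of range"

print(classify_inlet(75, 45))  # recommended
print(classify_inlet(85, 30))  # allowable
print(classify_inlet(95, 30))  # out of range
```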

Coming back to our problem with the humidifier/dehumidifier: our supplier has furnished us with one of the most efficient on the market.  Compared to the units built into the CRAC units, this particular device is 18 times more efficient and just as effective.  However, it still uses energy that we shouldn’t need once we decommission our remaining old servers.  For now, we will set the range to 30–70% relative humidity and reap some benefit now, with the expectation of greater rewards to come!

Presenting and Exporting Green ICT to Germany (Steve Phipps)

Richard and I presented at the DataCenter Dynamics Public Sector Conference in Manchester on Monday 10 May.  Attended by around 300 Public Sector professionals and supplier organisations, this conference was targeted specifically at addressing the issues around managing, refurbishing and building data centres and the impact of the “G” (Government) Cloud on Public Sector strategy.

We had a good attendance for our case study “Micro Data Centre Refurbishment – Overcoming Physical and Budgetary Constraints in a Legacy Mixed-Use Facility” and the feedback afterwards was excellent.  More details on the conference can be found here: bit.ly/afDHUi

I also travelled to the University of Bremen last week, where I contributed to a joint exercise with Oxford University in exporting Green ICT to Germany.  Howard Noble from Ox Uni spoke on the Desktop PC angle, including tackling the social-psychological obstacles to changing to a sustainable future.  I presented Green ICT from the Data Centre perspective, including demonstrating a methodology for tackling the challenges of sustainability in a systemic way using my in-development Sustainable ICT Maturity Model.

More details on these projects can be found here: http://bit.ly/cUA7wR and http://bit.ly/9mABw5

Bremen University hope to use what they’ve learnt from our visit to build a Green ICT educational programme, imparting those skills to academics, students, apprentices, technicians and engineers, in collaboration with the Mittelstand (SMEs).  This may be rolled out across the region as a formalised pedagogy that would allow all universities within Germany to offer these training programmes and set Germany on the path to a more sustainable future.  It is important to note that Green ICT currently has no foothold in the educational sector over there.