Data Centers and Wildfires

As will be discussed in our upcoming note on managing data centers through the severe weather caused by climate change, we continue to highlight the need for data center managers not only to review existing emergency plans but also to anticipate previously unforeseen challenges. In short, they need to understand the new "normal."

Which brings us to wildfires… For any organization, the health and safety of its workers is the first priority. As we’ve learned through the pandemic, tiered responses and good communication are key to managing unexpected risks. Wildfires, however, present a range of specific challenges to mission-critical facilities. Below we’ll discuss a few key concerns.

The fire itself

To ensure clean, inexpensive electricity, many data centers are sited near hydroelectric power sources. That generally means nearby mountains and forests — areas increasingly at risk of wildfire in an age of climate change.

Road closures and the evacuation of towns in the path of the fire could pose immediate risks to staffing levels: workers may not be able to get to the site or may need to deal with their own emergency situations. Regularly review your contingency plans, and ensure personnel are advised of updates. Stay abreast of construction, road repairs or other impediments that might impact alternate routes. Develop and test notification plans in advance.

There are other risks as well. Wildfires could burn transmission lines, interrupting the flow of power to the facility. Post-fire, burn scar erosion can cause landslides that take out fiber cable — sometimes in multiple areas and in so doing, thwart redundancies. With climate change, the risks extend beyond the imminent, and preparation is key.

Wind

There doesn’t need to be a fire for high winds to affect data centers. After the disastrous 2018 Camp Fire in California that killed 86 people, PG&E (Pacific Gas and Electric Company) announced a policy of pre-emptively interrupting service to at-risk areas when forecasts indicate conditions may increase the risk of wildfire (e.g., high winds in an already hot, dry environment). These planned outages may continue for a decade, as the utility company upgrades and hardens its systems to deal with the risks more extreme conditions bring.

On-site power generation is one option data center operators might consider as a hedge against uncertain utility power supplies. However, a smooth transition to on-site power is never a given, no matter the amount of preparation. As a result, utility power outages — planned or unplanned — increase the risk of failure for data center operators.

Facility managers need to consider not only their risk from the direct line of fire but also from blown embers. Wind-blown embers can spark dry grasses at fence lines, collect near fuel tanks and even ignite employee vehicles in the parking lot (e.g., a canvas pickup bed cover). Think expansively about the possible risks and mitigation strategies.

Smoke and ash

Facilities many miles away from an active wildfire can be affected by it, potentially for weeks at a time.

All data centers are required to have some form of fresh air supply. This is necessary to replace the carbon dioxide we exhale with oxygen. That means managing the flow of ash and other particulates into the facility via the ventilation system. Beyond that, ash can be transported into the facility on workers’ clothes, maintenance tool kits, equipment and more – even slight movements can cause it to become airborne. All data centers near a wildfire are at some risk from smoke and ash infiltration and should take action to mitigate contamination.

Engine generators — which may well be pressed into service — pose particular concerns. Most have filters on the combustion air intake (and those will likely need to be replaced more often than usual), but there is rarely any filtration on the intake for the engine cooling. Consult with manufacturers to learn more about appropriate remedial measures that do not impact performance or affect warranties.

The fuel storage tanks and the fuel itself also require attention. Almost all diesel tanks are atmospheric types, which means they have vents. These vents allow air into the tanks when fuel is withdrawn and allow a path for air to escape when fuel is added. With the anticipated increased generator operation, the vents will be pulling in more air — which means ash can get into the tanks. Over time the ash will settle to the bottom of the tank but in the short term, it can be picked up by the fuel pumps and clog fuel filters. Longer term, the impact on fuel quality should be assessed.

The cooling system of almost all facilities will be affected in some way. Condenser units for air conditioners, air-cooled chillers, or cooling towers are all susceptible to ash in the air, as it will either contaminate the water in a cooling tower or clog condenser coils. This will result in less cooling capacity, and backup units may be similarly affected. Direct air-cooled data centers need to monitor air quality and adjust their filter maintenance schedules as conditions dictate.

Data center managers experiencing the impact of wildfires should document conditions and actions taken in response, then share lessons learned with others. As in the aftermath of Superstorm Sandy, knowledge sharing through organizations like the Uptime Institute Network can be invaluable in learning to cope with the new reality of a climate-changed world.

Pandemics: Operators plan to be ready next time

Data center managers, on both the facilities and the IT side of operations, are known for their preparedness. Even so, the pandemic caught most by surprise. Few had an effective pandemic plan in place, and most had to react and adapt on the fly, as best as they could. A small but significant number suffered outages or service brownouts as a result of changed traffic patterns, component shortages and staffing problems.

Operators do not intend to be caught out again. Virtually all (94%) of the respondents to a July 2020 Uptime Institute COVID-19 impact survey said they will improve their pandemic readiness and business continuity planning (see figure below).

[Figure: Uptime Institute COVID-19 impact survey, July 2020 — respondents planning to improve pandemic readiness]

Against a background expectation that another pandemic will occur, pandemic awareness and planning have already been added to the business continuity playbook at many organizations, both as an extension of management and operations procedures and as part of disaster recovery planning.

Some of the procedures and processes that managers expect to have in place in two to three years’ time require changes in technology and strategy and may involve major investments. For example, more are planning to adopt remote monitoring and some automation, and to strengthen resiliency — all of which require investment and planning.

But more routinely, operators expect to clean more regularly, to separate workers into teams, to conduct health screening for visitors, to change air filters more often, and to store more spare parts. These are all part of pandemic awareness and amended processes; they are not short-term measures, but long-term adaptations. Some plan to add emergency accommodation and food storage on-site.

A number of operators also told Uptime Institute they will enhance their rapid response plans so they will be ready to move to a high alert status at any moment. This will involve immediate implementation of staffing plans, organizing emergency fuel supplies, and changing maintenance processes, for example. To help, they may store personal protective equipment, pay for third-party services, buy reserve cloud capacity, and move to pre-agreed maintenance and management procedures. Staff are likely to be routinely trained in pandemic control and response — an area rarely touched upon before COVID-19.


For more detailed guidance on pandemic preparedness, see our report Pandemic planning and response: A guide for critical infrastructure. The full report Post-pandemic data centers is available to members of the Uptime Institute Network.

Lithium-ion batteries in the data center: An ethical dimension?

One of the emerging trends in data centers is the use of lithium-ion (Li-ion) batteries, both for distributed and centralized uninterruptible power supplies. Research by Uptime Institute and others predicts high levels of adoption in the years ahead. The primary reasons for this are technical, relating to energy density, rechargeability and management. But Li-ion energy storage is also regarded as a key component in renewable energy distribution, which is being adopted primarily to reduce carbon emissions.

But one question keeps coming up that has, on occasion, put proponents of Li-ion on the defensive: How “green” — and how ethically responsible — are Li-ion batteries? Is there an environmental dimension to the use of Li-ion technologies?

Before considering this question, we should note that data centers will probably only ever account for a tiny proportion of demand for Li-ion batteries. Even in 2019, with electric vehicles still in their infancy, the automotive sector accounted for 60% of Li-ion battery use. Overall demand is expected to grow tenfold by 2030, according to mining analysts Roskill, driven primarily by mobile applications. That notwithstanding, big data center operators, such as Google, Microsoft, Equinix and others, use a lot of batteries. They pride themselves on sound environmental policies, and they are paying attention. Indeed, activists may give them no choice.

There are two issues with these batteries: first, the use of rare or expensive metals, and the environmental impact of mining these; and second — related to this — the current lack of recycling.

In a Li-ion battery, there are two elements that are a concern: lithium and cobalt. It is possible to do without the latter, but at a cost of some energy density — a critical factor in deployment. Analysts in the mineral sector say that there are sufficient reserves of both metals to meet medium-term demand, although cobalt reserves are less plentiful. But there is strong commercial pressure among battery makers, car manufacturers and other big industrial consumers to secure access to the main reserves, extract the minerals profitably, and meet current demand. This led to some big speculative price rises in 2019, although increased mining activity since has brought prices (and price forecasts) down.

For lithium, half of the world’s resources are in the “lithium triangle,” mostly in pristine salt flats spanning areas of Argentina, Chile and Bolivia. Mining companies have been accused of exploiting local populations, extracting excessive amounts of valuable water, and damaging unspoiled habitats. For cobalt, more than half of the world’s reserves are in the Democratic Republic of Congo, where mining companies have been accused of child exploitation and serious health and safety violations. In 2019, a human rights group filed a lawsuit against Apple, Google, Dell, Microsoft and Tesla on behalf of 14 families in Congo.

For the data center industry, and the renewable energy industry, many of these concerns may seem far up the supply chain. But buyers can move the market: They can favor suppliers who source their components responsibly and pressure vendors to improve their suppliers’ practices. They could also choose suppliers who pursue alternative sources of these metals in regions with more robust monitoring of environmental and health and safety concerns.

Recycling and extending the life of batteries also plays a role. At present, while almost all lead-acid batteries are recycled, Li-ion batteries are not. The most common way to get rid of an old Li-ion battery is to burn it — they burn exceedingly well.

There is a big effort underway to improve Li-ion technology. Tesla and GM are both working on million-mile batteries, designed to outlast the car, as well as designing batteries for easier reconditioning and recycling. Such technical advances can also be applied to stationary Li-ion batteries (which mostly have a different chemistry). Tesla is among the battery suppliers now targeting the data center market.

Although currently few Li-ion batteries are recycled, there are now dozens of companies with Li-ion recycling services or technologies. This activity will eventually reduce the pressure on mining companies to extract the minerals at such a rapid rate, and in such a damaging way.

Repurposing second-use batteries from mobile use to stationary applications, such as solar battery farms, is also likely to prove economic; this may lead to a “residual value” market in batteries and more renewable applications. Although some tests have proven favorable, data centers are unlikely to be a suitable application for second-use batteries. The best way for data center operators to reduce the impact of Li-ion use will be to open a serious dialog with suppliers.

Data center workforce diversity makes good business sense

Increasingly, data centers cannot find qualified candidates for open jobs. Companies that commit to diverse and inclusive workplaces are more likely to have better financial performance; greater innovation and productivity; and higher employee-ambassador recruitment, employee retention and employee job satisfaction rates.

[Figure: Diversity and data center hiring]

Which regions have the most energy efficient data centers?

When the PUE (power usage effectiveness) metric for data centers was first agreed upon by the members of The Green Grid back in 2007, almost everyone in that crowded room in California agreed: PUE is not intended to be a comparative metric across sites (every situation is different), but it is an excellent metric for tracking the effect of improvements at a single site over time.

It is with some wariness, then, that Uptime Institute publishes the chart below showing that, overall, Europe has the most energy efficient data centers, and the Middle East and Africa, the least efficient.

The data comes from a question Uptime asks each year as part of our global data center survey: What is the average annual PUE for your largest data center? Among the nearly 450 respondents to that question in 2020, the average PUE of their largest data center was 1.59. This number was down slightly from last year, but effectively continued a trend of no improvement, or marginal improvement, since 2013.
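The arithmetic behind the metric is straightforward: PUE is total facility energy divided by the energy delivered to the IT equipment, so a value of 1.0 would mean zero cooling and power-distribution overhead. A minimal sketch (the function name is ours, for illustration):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    the energy consumed by IT equipment. 1.0 is the theoretical ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# A site drawing 1,590 kWh overall while delivering 1,000 kWh to the
# IT equipment matches the 2020 survey average:
print(round(pue(1590, 1000), 2))  # 1.59
```

Because the denominator is site-specific (climate, redundancy level, utilization all differ), the same overhead can produce very different PUEs at different sites — which is exactly why the metric works best for tracking one site over time.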

The chart is based on a subset of the data — those who reported PUEs between 1.0 and 2.19 (313 respondents). We considered others to be special cases or extreme outliers (see below). Regional averages ranged from a low of 1.46 (Europe) to 1.79 (Africa and the Middle East).

[Figure: Average annual PUE of respondents’ largest data center, by region, 2020]

In the areas with the highest PUEs — Latin America, Africa, the Middle East and much of the Asia-Pacific region — climate may be a factor. Regions with less temperate environmental profiles may be less able to take advantage of free cooling technologies, which can substantially reduce power use. This is especially so in areas with high humidity or water constraints.

Other factors may cause operators to take an approach that is less energy efficient than they would like, but that is less risky. For example, operators in regions that are prone to supply chain issues, that lack reliable access to service technicians, or that are subject to power grid instability may hesitate to use techniques or technologies that save energy but have a narrower margin for error. They may opt for higher levels of redundancy, which, of course, require more power.

The two regions reporting the lowest PUEs — Europe and the US/Canada — have roughly equivalent sample sizes, but Europe’s average PUE is almost 5% lower. Why? This might be because European energy prices tend to be higher, and/or that the attitudes of operators and regulators are more environmentally conscientious than those in the US. Another contributing factor: London, Amsterdam and Frankfurt house a big portion of the European continent’s data centers, along with the Nordic countries and Dublin. All have temperate climates.

What of those who said, “Our PUE is over 2.19”? Although in most cases the sample size on a per-region basis was too small to be significant, it was nevertheless noteworthy that in the Asia-Pacific region, more than one in 10 reported a PUE greater than 2.19.

When Uptime asked respondents for PUE information, there were several answer options not addressed above (e.g., don’t operate a data center, data center is filling/emptying, don’t know). Interestingly, one in six respondents (of a survey of operators/owners/managers) didn’t know the average PUE of their largest data center. European respondents were twice as likely to know as US/Canadian respondents: for many, a good PUE figure is a badge of honor. A poor one requires an explanation.


The full report Uptime Institute global data center survey 2020 is available to members of the Uptime Institute Network.

Humidity and COVID-19 in data centers

Data center managers have gone to some lengths to avoid transmission of the COVID-19 virus in their facilities. Fortunately, many factors help keep transmission rates low in data centers: few staff are required; most jobs do not require close physical interaction with colleagues; and air filtration may help to reduce, if not necessarily eliminate, airborne transmission.

Presently, researchers are still debating the extent to which the virus can be transmitted through the air. Early in July, over 200 scientists wrote the World Health Organization (WHO), asking it to officially acknowledge that coronavirus transmission could be airborne. (Airborne transmission requires that tiny particles containing infectious microorganisms stay suspended in the air for long periods of time. This is distinct from droplet transmission, which requires relatively close proximity and shorter time frames.) If this is the case, it means maintaining a 2-meter (approximately 6-foot) distance from others may be a less effective means of preventing transmission, especially indoors, than previously thought.

If a virus is transmitted in this way, the level of humidity can play a significant role in increasing or decreasing infection rates. Uptime Institute’s advisory report Pandemic planning and response: A guide for critical infrastructure addresses humidity and air filtration, but primarily in terms of ensuring appropriate air temperature and humidity for effective operation, especially if additional filtration is used.

Research suggests that not only does low humidity help particles stay aloft longer, but also dry air decreases the effectiveness of the initial human response to a viral infection. Cold, dry air brought in from outside, then warmed, may be a particular concern, because the humidity drops significantly.
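The drop is easy to quantify: warming air raises its saturation vapor pressure without adding any moisture, so relative humidity falls sharply. A short sketch using the Magnus approximation for saturation vapor pressure (the function names are ours):

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """Magnus approximation, in hPa (valid roughly -45 to 60 deg C)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def warmed_relative_humidity(rh_outside: float,
                             t_outside: float,
                             t_inside: float) -> float:
    """Relative humidity after heating outside air to t_inside,
    assuming no moisture is added or removed along the way."""
    return (rh_outside * saturation_vapor_pressure(t_outside)
            / saturation_vapor_pressure(t_inside))

# Winter air at 0 deg C and 80% RH, warmed to 20 deg C indoors,
# ends up at roughly 21% RH - well below the recommended range.
print(round(warmed_relative_humidity(80, 0, 20)))
```

This is why heated makeup air in cold weather can push indoor humidity far below comfortable or recommended levels unless humidification is added.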

At the same time, higher levels of humidity increase condensation, even if not perceptibly. This enables the virus to remain infectious on surfaces for a longer period of time.

As it turns out, humidity may not be such a concern for data centers — although it may be an issue for offices and other indoor spaces. The ideal level of humidity to minimize airborne transmission of a virus may be in the region of 40% to 60%. Fortunately, this is a good match for data centers: ASHRAE’s 2016 guidelines recommend data centers maintain ambient relative humidity of 50% (previously 45% to 55%). Data center operators should, however, review the ideal humidity for non-white space areas.