The Business Case for Smart Energy is Still in the Making

Smart Energy is getting a lot of airplay in the data center world at present. New or planned products that fall under this broad banner include Energy-as-a-Service uninterruptible power supplies, software-defined power systems, and adaptable redundant systems that enable operators to raise or reduce their level of redundancy according to business needs. It might be argued that some long-established products, such as power capping/management, belong here too.

All of these products share a common characteristic: a software control system sits above the hard-wired power switching or power controls in a data center — or between the data center and the utility — and adjusts, in real or near-real time, the demand for and availability of power according to a set of policies. Depending on the infrastructure and systems in place, the power can be controlled by availability, by directional flow, by demand, or even by frequency or voltage.

The policies embedded in the management system might be financial, but might equally be concerned with availability, redundancy, safety and other factors. The data to help make those decisions may come from price signals or models, demand (from IT), equipment status, battery levels or maintenance systems. Implemented effectively, the technology is clearly powerful.
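To make the idea concrete, below is a minimal sketch, in Python, of how such a policy layer might evaluate telemetry and choose an action. Every name, threshold and action in it is our own illustrative assumption, not a description of any vendor's product; a real system would act through building management and power control interfaces rather than plain functions.

```python
# A minimal, hypothetical sketch of a policy-driven power control loop.
# All names, thresholds and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SiteState:
    it_demand_kw: float   # current IT load, e.g., from DCIM/BMS telemetry
    battery_soc: float    # battery state of charge, 0.0 to 1.0
    grid_price: float     # price signal from the utility or market, $/kWh

def decide(state: SiteState, peak_price: float = 0.20) -> str:
    """Evaluate policies in priority order and return an action."""
    # Policy 1 (availability/safety): always keep a ride-through reserve.
    if state.battery_soc < 0.40:
        return "charge_battery"
    # Policy 2 (financial): discharge batteries during expensive peaks.
    if state.grid_price >= peak_price and state.battery_soc > 0.60:
        return "discharge_battery"
    # Default: no intervention; draw from the grid as normal.
    return "grid_only"

print(decide(SiteState(it_demand_kw=850.0, battery_soc=0.75, grid_price=0.25)))
# -> discharge_battery
```

Note how availability policy takes priority over the financial policy; that ordering, too, is an assumption, though it reflects the availability-first culture of the industry.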

In discussions with vendors, Uptime Institute finds another commonly shared, if not universal, characteristic: the business case for adoption is a little fuzzy. As with data center infrastructure management software, there is a long list of benefits (discussed in the Uptime Institute Intelligence report Smart Energy in the data center), but it is not clear that any single one is overwhelming, given the upfront investment and concerns about introducing complex and unfamiliar new technologies. In recent discussions among Uptime Institute members with considerable data center footprints, releasing stranded capacity and alleviating power needs at peak times emerged as the most pressing problems.

The technology certainly has high promise. As our chart above shows in simple graphical form, a big potential benefit is really a form of peak shaving: either trimming demand or increasing effective capacity. If this capability can be embedded and trusted, it could significantly cut the capex and opex of data centers.
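A simple worked example, using made-up numbers, shows the arithmetic of peak shaving: capping demand at a threshold and calculating how much energy storage (or shed load) must cover.

```python
# Illustrative arithmetic only: the demand profile and threshold are
# made-up numbers, not measurements from any facility.
hourly_demand_kw = [620, 640, 700, 810, 920, 980, 890, 760, 680, 630]
threshold_kw = 800  # capacity level we want to stay under

# Energy that storage (or shed load) must cover above the threshold
shaved_kwh = sum(max(d - threshold_kw, 0) for d in hourly_demand_kw)
new_peak_kw = min(max(hourly_demand_kw), threshold_kw)

print(f"Peak trimmed from {max(hourly_demand_kw)} kW to {new_peak_kw} kW")
print(f"Energy to supply from storage: {shaved_kwh} kWh")
# Peak trimmed from 980 kW to 800 kW; storage supplies 400 kWh
```

The capex case follows directly: the facility (or its utility contract) only needs to be sized for the threshold, not the raw peak.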

——————————————————————————–

For more on Smart Energy in the data center, a wealth of research is available to members of the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership instills a keen awareness of operational efficiency and best practices that can be put into action every day. For membership information, click here.

Renewed Pressure for Renewables to Power the Data Center

In a recent presentation at the Energy Smart data center conference in Stockholm, Gary Cook, the Greenpeace activist who has tracked data center carbon emissions for a decade, showed a slide of logos representing companies that have committed to using 100 percent renewable energy for their IT. Cook showed that the commitment started with big brand consumer-facing IT companies (such as Google and Apple), then spread to big data center operators (such as Equinix and Digital Realty), and is now being adopted by business-to-business companies such as HP Enterprise.

Our research supports Cook’s view that this small cluster of logos will grow to a forest in the years ahead, with a surge of renewed enthusiasm coming from top-level executives. The reason is not altruistic: corporate leaders, investors and shareholders are exerting increasing pressure on enterprises to actively address climate change risk, better manage natural resources, such as water, and become more energy efficient.

At present, data center operators may not be heavily exposed to the effects of this top-level interest in climate change, but Uptime Institute advises them to prepare for more interest and more pressure.

Financial pressure is one big reason: According to The Forum for Sustainable and Responsible Investment, a U.S.-based membership association formed to advance sustainable, responsible and impact investing, the amount of funds invested by money managers that incorporate environmental, social and governance (ESG) criteria increased from $8.1 trillion in 2016 to $11.6 trillion in 2018 (see chart below).

Source: The Forum for Sustainable and Responsible Investment, 2018

C-level executives have little choice but to prioritize company objectives and allocate funds in response to these growing investor calls for action on climate change and sustainability — it could affect the share price. For whatever reason, altruistic or financial, the investments are being made: In a recent report, Schneider Electric states that companies spend more than $450 billion on energy efficiency and sustainability initiatives, and that 63 percent of Fortune 100 companies have set one or more clean energy targets.

There is some evidence, although not conclusive, that companies that commit themselves to time-bound greenhouse gas emission reduction targets outperform other companies on the financial markets. This may be due to better management in the first place, the push to efficiency or access to more capital. In recent years, Ceres, the MIT Sloan Management Review (a Massachusetts Institute of Technology publication that covers management practices) and The Boston Consulting Group have all drawn similar conclusions linking a commitment to ESG with improved revenues and share prices.

Schneider Electric took notice of this investment trend in its 2019 Corporate Energy & Sustainability Progress Report, which it discussed in a recent webinar. Schneider reported that 42 percent of enterprises have customer/investor relations in mind when they publicly commit to energy- and carbon-reduction initiatives, only slightly trailing environmental concerns (44 percent).

Source: Schneider Electric, 2019

In recent weeks, no fewer than four data center operators in Sweden, Singapore, France and the U.S. have told us about the growing importance of reducing energy use and carbon emissions. There is a resurgence in green thinking, often coming from top management. These changes will eventually reach many others in IT and data center operations, requiring them to improve their environmental and sustainability performance as well as reduce risk.

——————————————————————————–

For more information on the renewed interest in, and financial pressure for, cleaner energy strategies for data center infrastructure, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership instills a keen awareness of operational efficiency and best practices that can be put into action every day. For membership information, click here.

For most, virtualization reduces data center capacity demand more than anything else

The public cloud is dampening demand for data center capacity and leading to closures, consolidation and a big rethink on data center ownership. Right? Not quite, according to the latest Uptime Intelligence research.

In enterprise and colocation data centers, we found that virtualization helps free up data center capacity more than any other technology or service, with public cloud and new server technologies coming some way behind. Even so, participants in our research told us enterprise data center demand (especially for storage) is still rising.

In an April 2019 report by Uptime Institute, “Capacity planning in a complex, hybrid world”, we asked more than 250 C-level executives and data center and IT managers at enterprises globally which technologies have the highest impact on data center demand. Virtualization was cited by 51 percent and public cloud by only 32 percent. This was a surprise to us — we had expected cloud to have a greater impact.

The findings underline the power of virtualization, which is mostly adopted for other purposes (such as rapid provisioning) but helps push up server utilization and thus saves space, capital (IT) equipment, cooling and, of course, power. Some 40 percent of respondents said virtual machine (VM) compression, increasing the number of VMs per host server, is further reducing capacity demand.
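The arithmetic behind this is simple. The sketch below, using assumed workload and density figures rather than survey data, shows how raising VM density shrinks the physical server count:

```python
# Illustrative only: workload size and VM densities are assumptions,
# not figures from the survey.
import math

workload_vms = 2000        # total VMs an enterprise needs to host
vms_per_host_before = 15   # density before "VM compression"
vms_per_host_after = 25    # density after raising VMs per host

hosts_before = math.ceil(workload_vms / vms_per_host_before)  # 134
hosts_after = math.ceil(workload_vms / vms_per_host_after)    # 80

saving = 1 - hosts_after / hosts_before
print(f"{hosts_before} -> {hosts_after} hosts ({saving:.0%} fewer)")
# 134 -> 80 hosts (40% fewer): less space, power and cooling demand
```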

A warning to operators, however: The capacity benefits of virtualization, once realized, may be short-lived. One-third of respondents said that virtualization helped initially but is no longer a factor in reducing capacity demand. Levels of virtualization are now very high in many organizations — above 90 percent is common.

Some operators are adopting a method of virtualization known as application containers (‘containers’), the most common of which is Docker. Unlike VMs, containers do not require a dedicated, pre-provisioned support environment and, therefore, usually require less compute and memory capacity. Just 23 percent of respondents said they are using containers. About one-quarter of those using or considering containers expect them to reduce their physical server footprint further (or offset growth).
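As a rough illustration of why containers can need less capacity, the sketch below compares per-instance memory overhead under assumed figures (the overhead numbers are ours, chosen only to show the shape of the comparison, not benchmark results):

```python
# Hypothetical illustration: every figure below is an assumption.
# A VM carries a full guest OS; a container shares the host kernel.
app_mem_gb = 0.5              # memory the application itself needs
vm_overhead_gb = 1.5          # assumed guest OS + hypervisor cost per VM
container_overhead_gb = 0.05  # assumed runtime cost per container

instances = 200
vm_total_gb = instances * (app_mem_gb + vm_overhead_gb)
container_total_gb = instances * (app_mem_gb + container_overhead_gb)

print(f"VMs: {vm_total_gb:.0f} GB, containers: {container_total_gb:.0f} GB")
# VMs: 400 GB, containers: 110 GB -> fewer physical hosts for the same apps
```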

——————————————————————————–

For more information on capacity planning and the role virtualization plays in strategic plans for essential IT service delivery, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership instills a keen awareness of operational efficiency and best practices that can be put into action every day. For membership information, click here.

The Data Center Staffing and Skills Shortage is here NOW!

Sometimes it can be hard to get people to talk about their issues — other times, it can be hard to keep them quiet. A recent Uptime Institute Network members’ meeting began as an open discussion but was soon dominated by one issue: data center staffing.

The members’ concerns reflect the growing disquiet in the industry. Data centers are struggling to recruit and retain sufficient qualified staff to provide and grow reliable operations. In Uptime Institute’s 2018 annual global survey of data center operators, over half of the respondents reported that they were either having difficulty finding candidates to fill open jobs or were having trouble retaining data center staff.

A tight labor market is exacerbating the issue: job vacancies in the United States hit a record high in December 2018, and the US is not the only country where the outlook favors job seekers. With a large number of experienced managers set to leave the workforce in the next decade or two, analysts now question whether labor shortages will prove a drag on growth. Data center operators have reported losing staff not only to mission-critical industries, such as hospitals and utilities, but also to unexpected enterprises — even fairgrounds. There is also competition from hyperscalers, which are luring experienced data center staff away with hard-to-resist salaries.

An aging workforce is of particular concern in the fast-growing IT/data center industry. Almost three-quarters of the respondents to our 2018 survey had more than 15 years of work experience, and more than a third had over 25 years’ experience.

Despite the need for more qualified workers, over half of respondents reported that women comprise less than six percent of their data center design, build or operations staff. But a majority (73 percent) felt that the lack of diversity was not a concern.

This may prove to be complacent. McKinsey’s longitudinal data on over 1,000 companies in 12 countries shows a significant correlation between diversity and business performance. And a large study (over 1,000 firms in 35 countries and 24 industries) recently profiled in the Harvard Business Review clarified two important questions about the impact of gender diversity on business performance: First, intention matters. Gender diversity yields benefits only in those industries that view inclusion as important — this may be an important issue for the data center sector to address. Second, the study distinguished cause and effect: Women weren’t just more attracted to high-performing companies; hiring more women led to better performance.

There are many strategies for attracting and keeping data center staff, but none will be a panacea. Watch for new Uptime Institute initiatives and research in the coming months, available to Uptime Institute Members.

——————————————————————————–

For more information on the staffing and skills needed for the data center, and the impact this growing concern is already having on operational execution, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership instills a keen awareness of operational efficiency and best practices that can be put into action every day. For membership information, click here.

Is PUE actually going UP?

One of the more intriguing results of the Uptime Institute Global Data Center Survey 2019 concerned energy efficiency. For years, data centers have become ever more efficient, with power usage effectiveness (PUE) ratings across the industry (apparently) falling. Big operators, such as hyperscale cloud companies and big colos, regularly claim annual or design PUE figures between 1.1 and 1.4. It is an industry success story — a response to both higher power prices and concerns about carbon dioxide emissions.

Uptime Institute has tracked industry average PUE numbers, at intervals, over 12 years (see figure below). And this year, for the first time, there was no recorded improvement. In fact, energy efficiency deteriorated slightly, from an average PUE of 1.58 in 2018 to 1.67 in 2019 (lower is better). Can this really be right, and if so, how do we explain it?

Has PUE Improvement Stalled?

The first question is, “Is the data good?” Our respondents are informed (data center operations staff and IT management from around the world) and our sample size for this topic was quite large (624) — those who didn’t know the answer were removed from the sample. And while there may be a margin of error, we can see, on a year-by-year basis, that the improvements have flattened out. We can at least conclude that energy efficiency has stopped improving.

The number is also realistic. We know that most operators cannot compete with the finely tuned, aggressively efficient hyperscale data centers in energy efficiency, nor indeed with newer, highly efficient colocation sites. As we said, in these sectors, PUE values of 1.1 to 1.4 are frequently claimed.

What explanations do we have? It is speculation, but we think several factors could have caused a slight, and probably temporary, halt in PUE improvements. For example, the higher and extreme temperatures experienced in the last year in many parts of the world where data centers are situated could account for increased use of cooling and, hence, higher PUEs. Another factor is that utilization in many data centers — although certainly not in all — has fallen as certain workloads are moved to public cloud services. This means more data centers may be operated below their optimal design efficiency, or they may be cooling inefficiently due to poor layout of servers. Another possible reason is that more operators have higher-density racks (we know this from separate data). This may push cooling systems to work harder or to switch from free cooling to mechanical modes.
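The utilization effect is easy to model. If part of the facility overhead is fixed, the same site becomes less "efficient" by the PUE measure as IT load falls. The sketch below uses assumed overhead figures purely to show the shape of the effect:

```python
# Simplified model with assumed numbers: facility overhead is partly
# fixed (baseline cooling, lighting, UPS losses) and partly proportional
# to IT load, so PUE worsens as IT load falls in the same facility.
def pue(it_load_kw: float, fixed_kw: float = 200.0,
        variable_frac: float = 0.35) -> float:
    overhead_kw = fixed_kw + variable_frac * it_load_kw
    return (it_load_kw + overhead_kw) / it_load_kw

print(f"PUE at 1,000 kW IT load: {pue(1000):.2f}")  # 1.55
print(f"PUE at   700 kW IT load: {pue(700):.2f}")   # 1.64
# Same site, same plant: moving workloads out raises the ratio.
```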

Certainly, there is an explanation for the flattening out of the numbers over the 12 years. The most dramatic increases in energy efficiency were achieved between 2007 and 2013, often by taking steps such as hot/cold air separation, raising temperatures, or applying more control on cooling, fans and power distribution. The widespread adoption of free air cooling (direct and indirect) in newer builds has also helped to bring the overall level of energy use down. But it is clear that the easiest steps have largely been taken.

Even so, we do still find these results a little puzzling. Smaller data centers tend to have much higher PUEs, and we know there is an industry trend of consolidation, so many are closing. And most colos, a thriving sector, have PUEs below 1.5. Finally, of course, there is the addition of new data centers — which tend to have lower PUEs. These factors, coupled with the overall improvement in technology and knowledge, mean PUEs should still be edging down.

One thing we do know and must emphasize: The average PUE per data center does not equal the overall PUE per kW of IT load. The latter is undoubtedly going down, although it is harder to track. Our data, along with everyone else’s, shows a rapid growth in the proportion of workloads in the public cloud — and there, PUEs are very low. Similarly, more work is moving into large colos.
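A small worked example, with a made-up fleet, shows why the two measures diverge:

```python
# Made-up three-site fleet, purely to illustrate the distinction.
sites = [
    # (PUE, IT load in MW)
    (1.9, 0.5),   # small enterprise site
    (1.6, 2.0),   # typical colo
    (1.2, 20.0),  # large cloud/hyperscale facility
]

simple_avg = sum(p for p, _ in sites) / len(sites)
weighted = sum(p * mw for p, mw in sites) / sum(mw for _, mw in sites)

print(f"Average PUE per data center: {simple_avg:.2f}")  # 1.57
print(f"PUE per unit of IT load:     {weighted:.2f}")    # 1.25
# As work shifts to the big efficient sites, the weighted figure falls
# even while the per-site average stands still or rises.
```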

But it would also be a mistake to think this is the solution. Most mission-critical enterprise IT is not currently going into the public cloud, and enterprise energy efficiency remains important.

A final point: PUE is not the only or even the most important metric to track energy efficiency. Data center operators should always watch and understand the total energy consumption of their data centers, with the goal of improving both IT and facility energy efficiency.

——————————————————————————–

The full report, Uptime Institute Global Data Center Survey 2019, is available to members of the Uptime Institute Network here. Our upcoming webinar (May 29, 2019 at 12 noon EDT) discussing the survey results is open to the general public.

Comparing the severity of IT service outages: Uptime Institute’s Outage Severity Rating

Avoiding IT service outages is a big concern for any operator or service provider, especially one providing a business-critical service. But when an outage does occur, the business impact can vary from “barely noticeable” to “huge and expensive.” Anticipating and modeling the impact of a service interruption should be part of incident planning and is key to determining the level of investment that should be made to reduce incidents and their impact.

In recent years, Uptime Institute has been collecting data about service outages, including the costs, the consequences and, most notably, the most common causes. One of our findings is that organizations often don’t collect full financial data about the impact of outages, or if they do, it might take months for the full costs to become apparent. Many of the costs are hidden, even if the outcry from managers and customers (including non-paying ones) is most certainly not. But cost is not a proxy for impact: even a relatively short and inexpensive outage at a big, consumer-facing service provider can attract negative national headlines.

Another clear trend, now that so many applications are distributed and interlinked, is that “outages” can often be partial, affecting users in different ways. This has, in some cases, enabled major operators to claim very impressive availability figures in spite of poor customer experience. Their argument: Just because a service is slow or can’t perform some functions doesn’t mean it is “down.”

To give managers a shorthand way to talk about the impact of a service outage, Uptime Institute developed the Outage Severity Rating (below). The rating is not scientific and might be compared to the internationally used Beaufort Scale, which describes how various wind speeds are experienced on land and sea.

The Uptime Institute Outage Severity Rating describes the business/service impact of an IT service interruption, regardless of the cause.
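The category definitions themselves are published as a table (shown as a graphic here). Purely as a hypothetical illustration of how an operator might encode such a scale in incident-tracking tooling, the sketch below uses placeholder criteria; only the “Serious” (Category 4) and “Severe” (Category 5) labels are taken from the text.

```python
# Hypothetical sketch of encoding a five-level outage severity scale in
# incident records. Only the labels "Serious" (4) and "Severe" (5) come
# from the text above; the other names and all criteria are placeholders.
from enum import IntEnum

class OutageSeverity(IntEnum):
    CATEGORY_1 = 1  # placeholder: negligible impact
    CATEGORY_2 = 2  # placeholder: minimal impact
    CATEGORY_3 = 3  # placeholder: noticeable service disruption
    SERIOUS = 4     # label cited in the text
    SEVERE = 5      # label cited in the text

def classify(customer_facing: bool, est_loss_usd: float) -> OutageSeverity:
    """Placeholder criteria; a real mapping would follow the published table."""
    if customer_facing and est_loss_usd > 1_000_000:
        return OutageSeverity.SEVERE
    if customer_facing and est_loss_usd > 100_000:
        return OutageSeverity.SERIOUS
    if customer_facing:
        return OutageSeverity.CATEGORY_3
    return OutageSeverity.CATEGORY_2

print(classify(True, 5_000_000).name)  # SEVERE
```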

By applying this scale to widely reported outages from 2016-2018, Uptime Institute tracked 11 “Severe” Category 5 outages and 46 “Serious” Category 4 outages. Of these 11 severe outages, no fewer than five occurred at airlines. In each case, multi-million-dollar losses occurred, as flights were cancelled and travelers stranded. Compensation was paid, and negative headlines ensued.

Analysis suggests both obvious and less obvious reasons why airlines were hit so hard. The obvious one is that airlines are not only highly dependent on IT for almost all elements of their operations, but the impact of disruption is also immediate and expensive. Less obviously, many airlines have been disrupted by low-cost competition and forced to “do more with less” in the field of IT. This leads to errors and over-thrifty outsourcing, and it makes incidents more likely.

If we consider Categories 4 and 5 together, the banking and financial services sector is the most over-represented. For this sector, outage causes varied widely, and in some cases, cost cutting was a factor. More commonly, the real challenge was simply managing complexity and recovering from failures fast enough to limit the impact.

——————————————————————————–

Members of the Uptime Institute Network experience HALF the incidents that cause these types of service disruptions. Members share a wealth of experiences with their peers from some of the largest companies in the world. Membership instills a keen awareness of operational efficiency and best practices that can be put into action every day. For membership information, click here.