Proposed NFPA 855 battery standard worries Data Center industry

A furious — but late — response to the National Fire Protection Association’s (NFPA’s) proposed standard 855, Standard for the Installation of Stationary Energy Storage Systems, should put the whole data center industry on notice that it needs to increase its participation in standards development worldwide. This is not the first time professional bodies outside the data center industry have developed standards without sufficient, considered input from data center owners and operators.
Standard 855 is controversial because, if it is ratified by the NFPA (a U.S.-based fire-safety standards organization), it will eventually become the basis of building codes in the U.S. and elsewhere. These codes will regulate how batteries (including nickel-cadmium and lithium-ion) can be deployed in data centers.
In this case, the concerns center on whether some of the safety provisions that make sense for renewable energy storage arrays will prove too costly, and even counterproductive, when applied to data centers. NFPA 855’s provisions include minimum spacing requirements and restrictions on siting and power capacity. In addition, exceptions specifically relating to data centers appear in Chapter 4 of the proposed standard rather than in its Scope, which may make them less persuasive to authorities having jurisdiction (AHJs).
According to our research, the data center industry became aware of the more controversial elements of the standard only after the public comment process had ended and two revisions had been completed. In this case and others, well-meaning — but self-selected — individuals had already made their decisions. This is important, because NFPA 855 forms the basis of codes that empower AHJs, as well as other code enforcement officials.
It is not clear that the NFPA committee included enough data center representatives, who might have been able to explain that batteries designed for data center use already incorporate significant protections and that older battery types have been used for many years without serious incident. In addition, IEEE (the Institute of Electrical and Electronics Engineers, an international engineering organization) and Underwriters Laboratories (UL, an international safety organization) have already developed a series of standards pertaining to the use of Li-ion batteries in data centers.
Background
According to opponents of the new standard as it stands, NFPA 855 was originally intended to apply only to Li-ion batteries. The California Energy Storage Alliance (a membership-based advocacy group formed to advance large-scale renewable energy projects) requested the standard. It wanted to make it easier to gain approvals to site energy storage systems in support of the state’s renewable energy programs.
The request was granted, and NFPA drew upon its membership to create a committee that included the expertise necessary to develop the standard. Not until near the close of the second review period (and long after public comment closed) did the data center industry note that the draft standard proposed to regulate battery use in data centers in a wholly new and unnecessary way.
In February 2019, an ad hoc group of 10 individuals from IEEE’s ESSB (Energy Storage and Stationary Battery) committee — five of whom are also members of NFPA’s 855 committee — launched a grassroots campaign to return the draft standard to the committee to be rewritten. The effort will require the support of two-thirds of the NFPA members at its technical meeting in San Antonio, TX, June 17-20, 2019. The campaign is unusual, as NFPA procedures normally allow for only limited revision of a proposed standard after the close of the public comment period.
In the short term, there is little that non-NFPA members can do but wait for the results of the June meeting. NFPA members, of course, can make their voices heard there. Long term, however, is a different matter: organizations should resolve to increase their industry participation, submit public comments on relevant standards and even become involved in the standards-making process. The data center industry can only influence relevant standards if it participates in their development. Unfortunately, the list of relevant standards-making organizations is lengthy (see ANSI Standards for a partial list). Click here to learn more about proposed NFPA standard 855.
——————————————————————————–
Get in the conversation! A wealth of data center design and operational information and guidance is available to members of the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share experiences with peers from some of the largest companies in the world. Membership keeps operational efficiency and best practices front of mind, ready to be put into action every day. For membership information, click here.
The Business Case for Smart Energy is Still in the Making
Smart Energy is getting a lot of airplay in the data center world at present. New or planned products that fall under this broad banner include Energy-as-a-Service uninterruptible power supplies, software-defined power systems, and adaptable redundancy systems that enable operators to raise or reduce their level of redundancy according to business needs. It might be argued that some long-established products, such as power capping/management, also qualify.
All of these products share a common characteristic. A software control system sits above the hard-wired power switching or power controls in a data center — or between the data center and the utility — and adjusts, in real or near-real time, the demand for and availability of power according to a set of policies. Depending on the infrastructure and systems in place, the power can be controlled by availability, by directional flow, by demand, or even by frequency or voltage.
The policies embedded in the management system might be financial, but might equally be concerned with availability, redundancy, safety and other factors. The data to help make those decisions may come from price signals or models, demand (from IT), equipment status, battery levels or maintenance systems. Implemented effectively, the technology is clearly powerful.
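To make the idea concrete, here is a minimal sketch in Python of such a policy layer. The names, thresholds and rules are invented for illustration, not drawn from any vendor’s product:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    grid_price: float    # price signal, $/kWh
    battery_soc: float   # battery state of charge, 0.0 to 1.0

# Illustrative policy values: shave peaks when energy is expensive,
# but never draw the battery below the reserve that protects
# ride-through during a utility outage (the availability policy).
PRICE_PEAK = 0.25    # $/kWh
SOC_RESERVE = 0.60   # keep 60% in reserve for availability

def choose_action(t: Telemetry) -> str:
    if t.grid_price >= PRICE_PEAK and t.battery_soc > SOC_RESERVE:
        return "discharge_battery"   # trim demand on the utility
    if t.grid_price < PRICE_PEAK and t.battery_soc < 1.0:
        return "recharge_battery"    # top up while energy is cheap
    return "grid_only"

print(choose_action(Telemetry(grid_price=0.31, battery_soc=0.80)))
# -> discharge_battery
```

The point of the sketch is the ordering of concerns: the availability rule (the reserve) overrides the financial rule, which is exactly the kind of policy trade-off these systems are meant to encode.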
In discussions with vendors, Uptime Institute finds another commonly shared, if not necessarily universal, characteristic: the business case for adoption is a little fuzzy. Rather like data center infrastructure management software, there is a long list of benefits (discussed in the Uptime Institute Intelligence report Smart Energy in the data center), but it is not clear that any single one is overwhelming, given the upfront investment and the concerns about introducing complex and unfamiliar new technologies. In recent discussions among Uptime Institute members with considerable data center footprints, releasing stranded capacity and alleviating power needs at peak times emerged as pressing problems.
The technology certainly has high promise. As our chart above shows in simple graphical form, a big potential benefit is really a form of peak shaving: either trimming demand or increasing available capacity. If this can be embedded and trusted, it could significantly cut the capex and opex of data centers.
——————————————————————————–
For more on Smart Energy in the data center, a wealth of research is available to members of the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share experiences with peers from some of the largest companies in the world. Membership keeps operational efficiency and best practices front of mind, ready to be put into action every day. For membership information, click here.
Renewed Pressure for Renewables to Power the Data Center
In a recent presentation at the Energy Smart data center conference in Stockholm, Gary Cook, the Greenpeace activist who has tracked data center carbon emissions for a decade, showed a slide of logos representing companies that have committed to using 100 percent renewable energy for their IT. Cook showed that the commitment started with big-brand, consumer-facing IT companies (such as Google and Apple), then spread to big data center operators (such as Equinix and Digital Realty), and is now being adopted by business-to-business companies such as Hewlett Packard Enterprise.
Our research supports Cook’s view that this small cluster of logos will grow to a forest in the years ahead, with a surge of renewed enthusiasm coming from top-level executives. The reason is not altruism: corporate leaders, investors and shareholders are exerting increasing pressure on enterprises to actively address climate change risk, better manage natural resources such as water, and become more energy efficient.
At present, data center operators may not be heavily exposed to the effects of this top-level interest in climate change, but Uptime Institute advises data center operators to prepare for more interest and more pressure.
Financial pressure is one big reason: according to The Forum for Sustainable and Responsible Investment (a U.S.-based membership association formed to advance sustainable, responsible and impact investing), the amount of funds invested by money managers that incorporate environmental, social and governance (ESG) criteria increased from $8.1 trillion in 2016 to $11.6 trillion in 2018 (see chart below).
C-level executives have little choice but to prioritize company objectives and allocate funds in response to these growing investor calls for action on climate change and sustainability — it could affect the share price. For whatever reason, altruistic or financial, the investments are being made: in a recent report, Schneider Electric says that companies spend more than $450 billion on energy efficiency and sustainability initiatives, and that 63 percent of Fortune 100 companies have set one or more clean energy targets.
There is some evidence, although not conclusive, that companies that commit themselves to time-bound greenhouse gas emission reduction targets outperform other companies on the financial markets. This may be due to better management in the first place, the push for efficiency or access to more capital. In recent years, Ceres, the MIT Sloan Management Review (a Massachusetts Institute of Technology publication that covers management practices) and The Boston Consulting Group have all drawn similar conclusions about a commitment to ESG and improved revenues and share prices.
Schneider Electric took note of this investment trend in its 2019 Corporate Energy & Sustainability Progress Report, which it discussed in a recent webinar. Schneider reported that 42 percent of enterprises have customer/investor relations in mind when they publicly commit to energy- and carbon-reduction initiatives, only slightly trailing environmental concerns (44 percent).
In recent weeks, no fewer than four data center operators in Sweden, Singapore, France and the U.S. have told us about the growing importance of reducing energy use and carbon emissions. There is a resurgence in green thinking, often coming from top management. These changes will eventually reach many others in IT and data center operations, requiring them to improve their environmental and sustainability performance as well as reduce risk.
——————————————————————————–
For more information on the renewed interest and the fiscal pressure on companies to adopt cleaner energy strategies for their infrastructure, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share experiences with peers from some of the largest companies in the world. Membership keeps operational efficiency and best practices front of mind, ready to be put into action every day. For membership information, click here.
For most, virtualization reduces data center capacity demand more than anything else
The public cloud is dampening demand for data center capacity and leading to closures, consolidation and a big rethink on data center ownership. Right? Not quite, according to the latest Uptime Intelligence research.
In enterprise and colocation data centers, we found that virtualization helps free up data center capacity more than any other technology or service, with public cloud and new server technologies coming some way behind. And despite this, participants in our research told us enterprise data center demand (especially for storage) is still rising.
In an April 2019 report by Uptime Institute, “Capacity planning in a complex, hybrid world”, we asked more than 250 C-level executives and data center and IT managers at enterprises globally which technologies have the highest impact on data center demand. Virtualization was cited by 51 percent and public cloud by only 32 percent. This was a surprise to us — we had expected cloud to have a greater impact.
The findings underline the power of virtualization, which is mostly adopted for other purposes (such as rapid provisioning) but helps push up server utilization and thus saves space, capital (IT) equipment, cooling and, of course, power. Some 40 percent of respondents said virtual machine (VM) compression (increasing the number of VMs per host server) is further reducing capacity demand.
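A back-of-the-envelope example shows why VM density matters; the workload and consolidation ratios below are invented for illustration, not survey figures:

```python
import math

vms_needed = 1200  # VMs the business requires (illustrative)

# Raising the consolidation ratio shrinks the physical footprint,
# and with it the space, power and cooling the servers demand.
for vms_per_host in (25, 40):
    hosts = math.ceil(vms_needed / vms_per_host)
    print(f"{vms_per_host} VMs per host -> {hosts} physical servers")

# 25 VMs per host -> 48 physical servers
# 40 VMs per host -> 30 physical servers
```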
A warning to operators, however: The capacity benefits of virtualization, once carried out, may be short lived. One-third of respondents said that virtualization helped initially but is no longer a factor in reducing capacity demand. Levels of virtualization are now very high in many organizations — above 90 percent is common.
Some operators are adopting a method of virtualization known as application containers (‘containers’), the most common of which is Docker. Unlike VMs, containers do not require a dedicated, pre-provisioned support environment and, therefore, will usually require less compute and memory capacity. Just 23 percent of respondents said they are using containers. About one-quarter of those using or considering containers expect them to reduce their physical server footprint further (or offset growth).
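A similarly rough sketch suggests where the container saving comes from: each VM carries its own guest operating system, while containers share the host’s kernel. All figures below are illustrative assumptions, not measurements:

```python
services = 200          # identical services to deploy (illustrative)
app_mem_gb = 2.0        # memory each service actually needs
vm_overhead_gb = 1.5    # guest OS and hypervisor reservation per VM
ctr_overhead_gb = 0.05  # per-container runtime overhead

vm_total = services * (app_mem_gb + vm_overhead_gb)    # 700 GB
ctr_total = services * (app_mem_gb + ctr_overhead_gb)  # 410 GB
print(f"as VMs: {vm_total:.0f} GB, as containers: {ctr_total:.0f} GB")
```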
——————————————————————————–
For more information on capacity planning and the role virtualization plays in strategic plans for essential IT service delivery, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share experiences with peers from some of the largest companies in the world. Membership keeps operational efficiency and best practices front of mind, ready to be put into action every day. For membership information, click here.
The Data Center Staffing and Skills Shortage is here NOW!
Sometimes it can be hard to get people to talk about their issues — other times, it can be hard to keep them quiet. A recent Uptime Institute Network members’ meeting began as an open discussion but was soon dominated by one issue: data center staffing.
The members’ concerns reflect the growing disquiet in the industry. Data centers are struggling to recruit and retain enough qualified staff to provide and grow reliable operations. In Uptime Institute’s 2018 annual global survey of data center operators, over half of the respondents reported that they were having difficulty either finding candidates to fill open jobs or retaining data center staff.
A tight labor market is exacerbating the issue: job vacancies in the United States hit a record high in December 2018, and the US is not the only country where job seekers hold the advantage. With a large number of experienced managers set to leave the workforce in the next decade or two, analysts now question whether labor shortages will prove a drag on growth. Data center operators have reported losing staff not only to mission-critical industries, such as hospitals and utilities, but also to unexpected enterprises — even fairgrounds. Nor does it help that hyperscalers are luring experienced data center staff away with hard-to-resist salaries.
An aging workforce is of particular concern in the fast-growing IT/data center industry. Almost three-quarters of the respondents to our 2018 survey had more than 15 years of work experience, and more than a third had over 25 years’ experience.
Despite the need for more qualified workers, over half of respondents reported that women comprise less than six percent of their data center design, build or operations staff. But a majority (73 percent) felt that the lack of diversity was not a concern.
This may prove to be complacent. McKinsey’s longitudinal data on over 1,000 companies in 12 countries shows a significant correlation between diversity and business performance. And a large study (over 1,000 firms in 35 countries and 24 industries) recently profiled in the Harvard Business Review clarified two important questions about the impact of gender diversity on business performance: First, intention matters. Gender diversity yields benefits only in those industries that view inclusion as important — this may be an important issue for the data center sector to address. Second, the study distinguished cause and effect: Women weren’t just more attracted to high-performing companies; hiring more women led to better performance.
There are many strategies for attracting and keeping data center staff, but none will be a panacea. Watch for new Uptime Institute initiatives and research in the coming months, available to Uptime Institute Members.
——————————————————————————–
For more information on the staffing and skills the data center industry needs, and the impact this growing concern is already having on operational execution, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share experiences with peers from some of the largest companies in the world. Membership keeps operational efficiency and best practices front of mind, ready to be put into action every day. For membership information, click here.
Is PUE actually going UP?
One of the more intriguing results of the Uptime Institute Global Data Center Survey 2019 concerned energy efficiency. For years, data centers have become ever more efficient, with power usage effectiveness (PUE) ratings across the industry (apparently) falling. Big operators, such as hyperscale cloud companies and big colos, regularly claim annual or design PUE figures between 1.1 and 1.4. It is an industry success story — a response to both higher power prices and concerns about carbon dioxide emissions.
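For readers new to the metric, PUE is total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.67 means the facility consumes two-thirds as much energy again in cooling, power distribution and other overhead. A minimal illustration (the energy figures are invented):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,670 kWh to deliver 1,000 kWh to the IT gear:
print(round(pue(1670, 1000), 2))  # -> 1.67
```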
Uptime Institute has tracked industry average PUE numbers, at intervals, over 12 years (see figure below). And this year, for the first time, there was no recorded improvement. In fact, energy efficiency deteriorated slightly, from an average PUE of 1.58 in 2018 to 1.67 in 2019 (lower is better). Can this really be right, and if so, how do we explain it?
Has PUE Improvement Stalled?
The first question is, “Is the data good?” Our respondents are informed (data center operations staff and IT management from around the world) and our sample size for this topic was quite large (624) — those who didn’t know the answer were removed from the sample. And while there may be a margin of error, we can see on a year-by-year basis that the improvements have flattened out. We can at least conclude that energy efficiency has stopped improving.
The number is also realistic. We know that most operators cannot compete with the finely tuned, aggressively efficient hyperscale data centers in energy efficiency, nor indeed with newer, highly efficient colocation sites. As we said, in these sectors, PUE values of 1.1 to 1.4 are frequently claimed.
What explanations do we have? It is speculation, but we think several factors could have caused a slight, and probably temporary, halt in PUE improvements. For example, the higher and extreme temperatures experienced in the past year in many parts of the world where data centers are situated could account for increased use of cooling and, hence, higher PUEs. Another factor is that utilization in many data centers — although certainly not in all — has fallen as certain workloads are moved to public cloud services. This means more data centers may be operated below their optimal design efficiency, or they may be cooling inefficiently due to poor layout of servers. Another possible reason is that more operators have higher-density racks (we know this from separate data). This may push cooling systems to work harder or to switch from free cooling to mechanical modes.
Certainly, there is an explanation for the flattening out of the numbers over the 12 years. The most dramatic increases in energy efficiency were achieved between 2007 and 2013, often by taking steps such as hot/cold air separation, raising temperatures, or applying more control on cooling, fans and power distribution. The widespread adoption of free air cooling (direct and indirect) in newer builds has also helped to bring the overall level of energy use down. But it is clear that the easiest steps have largely been taken.
Even so, we do still find these results a little puzzling. Smaller data centers tend to have much higher PUEs, and we know there is an industry trend of consolidation, so many are closing. And most colos, a thriving sector, have PUEs below 1.5. Finally, of course, there is the addition of new data centers, which tend to have lower PUEs. These factors, coupled with the overall improvement in technology and knowledge, mean PUEs should still be edging down.
One thing we do know and must emphasize: The average PUE per data center does not equal the overall PUE per kW of IT load. This is undoubtedly going down, although it is harder to track. Our data, along with everyone else’s, shows a rapid growth in the proportion of workloads in the public cloud — and there, PUEs are very low. Similarly, more work is in large colos.
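The distinction is easy to see with two hypothetical sites, a small, inefficient server room and one large, efficient facility (all numbers invented for the example):

```python
sites = [
    {"it_load_kw": 100,    "pue": 2.0},  # small enterprise room
    {"it_load_kw": 20_000, "pue": 1.2},  # large cloud/colo facility
]

# Average per data center (what a site-by-site survey measures):
simple_avg = sum(s["pue"] for s in sites) / len(sites)

# Average per kW of IT load (what the workloads actually experience):
weighted = (sum(s["pue"] * s["it_load_kw"] for s in sites)
            / sum(s["it_load_kw"] for s in sites))

print(f"per data center: {simple_avg:.2f}")    # 1.60
print(f"per kW of IT load: {weighted:.2f}")    # 1.20
```

Because most of the load sits in the efficient site, the load-weighted figure barely registers the inefficient one; as work shifts to the cloud and large colos, the overall PUE per kW can keep falling even while the per-site average stalls.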
But it would also be a mistake to think this is the solution. Most mission-critical enterprise IT is not currently going into the public cloud, and enterprise energy efficiency remains important.
A final point: PUE is not the only or even the most important metric to track energy efficiency. Data center operators should always watch and understand the total energy consumption of their data centers, with the goal of improving both IT and facility energy efficiency.
The full report Uptime Institute global data center survey 2019 is available to members of the Uptime Institute Network here. Our upcoming webinar (May 29, 2019 at 12 noon EDT) discussing the survey results is open to the general public.