PG&E Turns Power Off (a.k.a. Climate Change and the Data Center)

In our October 2018 report, A mission-critical industry unprepared for climate change, Uptime Institute Intelligence urged data center operators and owners to plan for the effects of climate change. We specifically encouraged data center owners and operators to meet with government officials and utility executives to learn about local and regional disaster preparation and response plans.
A recent public filing by Pacific Gas and Electric (PG&E), California’s largest electric utility, underlines our point and gives data center owners and operators in that state, including areas near Silicon Valley, a lot to discuss.
According to The Wall Street Journal, PG&E plans to dramatically expand the number and size of areas where it will cut power when hot and dry weather makes wildfires likely, effectively eliminating transmission and distribution gear as a cause of wildfires. In addition, the utility announced plans to spend $28 billion over the next four years to modernize infrastructure.
Extreme wildfires, arguably a result of climate change, have caused PG&E and its customers big problems. In 2018, PG&E intentionally interrupted service in two different areas, disrupting essential services and operations in one area (Calistoga) for two days. And on May 16, 2019, California confirmed that utility-owned power started last November’s Camp Fire, which killed 85 people and destroyed the town of Paradise.
The utility has been forced to take drastic steps: in January 2019, it sought bankruptcy protection, citing more than $30 billion in potential damages (including as much as $10.5 billion related to the Camp Fire) from wildfires caused by its aging infrastructure and by its failure to address the growing threat of extreme wildfires driven by climate change.
PG&E is on the front line, but it is not unique. The case demonstrates that it is unwise for data center operators and owners to address reliability in isolation. Circumstances affecting data centers in the PG&E service territory, for instance, can vary widely, which makes communication with utility officials and local authorities essential to maintaining operations through a disaster and to any recovery plan.
In this case, one might identify three distinct periods:
In the past, when climate change and aging infrastructure combined to gradually increase the risk of wildfire to a crisis point.
Now, when the bankrupt utility suddenly announced a plan to intentionally interrupt service to reduce wildfire risk, even though the experience in Calistoga suggests that customers and local governments are not prepared for the consequences of emergency power outages.
Sometime in the future, when PG&E’s major infrastructure investments begin to bear fruit and utility reliability begins to increase.
Each of these stages brings markedly different reliability and cost considerations, depending on how close a data center is to areas vulnerable to wildfires and where (and when) PG&E makes its infrastructure investments.
The full report, “A mission-critical industry unprepared for climate change,” is available to members of the Uptime Institute Network. Want to know more about this organization? Check out the complete benefits of membership here.
Enterprise IT and the public cloud: What the numbers tell us
The spectacular growth of the public cloud has many drivers, only one of which is the deployment, redevelopment or migration of enterprise IT into the cloud. But many groups within the industry — data center builders and operators, hardware and software suppliers, networking companies, and providers of skills and services alike — are closely watching the rate at which enterprise workloads are moving to the cloud, and for good reason: Enterprise IT is a proven, high-margin source of revenue, supported by large, reliable budgets. Attracting — or keeping — enterprise IT business is critical to the existing IT ecosystem.
The popular view is that enterprise IT is steadily migrating to the public cloud, with the infrastructure layers being outsourced to cheaper, more reliable, more flexible, pay-as-you-go cloud services. Amazon Web Services (AWS), Microsoft Azure and Google are the biggest beneficiaries of this shift. There is little disagreement about this direction of travel.
It is only when we add a timeframe that universal agreement starts to break down. It is frequently reported and forecast that we will reach a tipping point at which traditional, data center-centric approaches (that is, IT not delivered via public cloud or SaaS) become prohibitively expensive, less efficient and too difficult to support. Viewed as a complex system undergoing change, this is sound reasoning, but when will it happen? Some of the data published by industry analysts and (self-)proclaimed experts suggests it will take many years, while other studies suggest the point is near. The answer is a mixed bag, depending on who you talk to.
The Uptime Intelligence view is that this kind of fundamental change usually happens much more slowly than technologists predict, and we expect traditional infrastructure platforms (including in-house data centers and customer-managed co-location sites) to remain the bedrock of enterprise IT for many years to come.
So how can the views vary so widely? In 2018, 451 Research’s “Voice of the Enterprise Digital Pulse” survey asked 1,000 operators, “Where are the majority of your workloads/applications deployed — now, and in two years’ time?” In the second half of 2018, 60% said the bulk of their loads were “on-premises IT”. Only 20% said the bulk of their workloads were already in the public cloud or in SaaS.
That is still a fair portion in the public cloud or SaaS, but it has taken time: public cloud has been available for about 13 years (AWS debuted in 2006) and SaaS, for over 20 (Salesforce was launched in 1999). Over this period, enterprises have had the choice of co-location, managed hosting (cloud or otherwise), or cloud and SaaS. If we view all but traditional IT infrastructure as “new,” a rough summary would be that, over that time, just over a third of enterprises have mostly favored the public cloud as the location for their new applications, while two-thirds have favored other ways of running their IT that give them more control.
But what happens next? 451 Research’s data (see below) does suggest that the move away from on-site, traditional IT is really starting to step up. Thirty-nine percent (39%) of organizations say that by 2020, the bulk of their data will be in SaaS or a public cloud service. That’s a big number, although it is an aspiration — many organizations have not transitioned anywhere near as fast as they would have liked. But look at this another way: Even in 2020, nearly half of all organizations say their loads will still mostly be in enterprise data centers or under their control in a co-location facility. So a lot of enterprise infrastructure will remain in place, with over a third of these organizations still hosting applications mostly in their own data centers.
But the real story is buried in the details of survey design. The way 451 Research (and other research firms) framed its question does not capture all the nuances we see at Uptime Institute Intelligence. In particular, the simpler question misses the fact that total infrastructure capacity and the data center portion of it are both growing, albeit at different rates: data center-based capacity is growing, but more slowly than total infrastructure capacity. Some big organizations are indeed shifting the bulk of their new work to the cloud, but they are simultaneously maintaining, and even expanding, a premium data center and some edge facilities. This is a complex topic.
So let’s focus on new data gathered in 2019, which included much more granular questions about where capacity exists and where business applications, old and new, are being deployed. In its 2019 operator survey (results released in May 2019), Uptime Institute asked a more comprehensive question: “What percentage of your organization’s total IT would you describe as running in the following environments?” This question focused on the percentage of existing workloads, rather than on where the main workload sits or what might happen in the future, which gives a more objective representation.
This Uptime Institute study confirmed that the shift to public cloud and service-oriented capacity is happening, but revealed that it is less dramatic and much slower than most industry reporters and pundits suggest. More than 600 operators said that, in 2021, about half of all workloads will still be in enterprise data centers, and only 18% of workloads will be in public cloud/SaaS services. Just over three-quarters of all workloads, they believe, will still be managed by their own enterprise staff at a variety of locations, including enterprise data centers, co-location venues, server closets and micro/edge data centers. As with any survey, the more granular the question, the more actionable the intelligence.
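To make the roll-up concrete, here is a minimal sketch (in Python) of how venue-level workload shares combine into the enterprise-managed and cloud/SaaS totals quoted above. The venue-level splits below are assumed purely for illustration; only the totals echo the survey figures.

```python
# Illustrative roll-up of workload venue shares into "enterprise-managed" vs.
# "cloud/SaaS" totals. The venue-level numbers are hypothetical, chosen only so
# that the totals match the figures quoted in the survey discussion (about half
# in enterprise data centers, 18% in public cloud/SaaS, just over three-quarters
# enterprise-managed overall).

venue_share = {
    "enterprise_data_center": 0.50,  # assumed
    "co_location":            0.17,  # assumed
    "server_closets_edge":    0.10,  # assumed
    "managed_hosting":        0.05,  # assumed
    "public_cloud_saas":      0.18,  # quoted figure
}

ENTERPRISE_MANAGED = {"enterprise_data_center", "co_location", "server_closets_edge"}

enterprise_managed = sum(v for k, v in venue_share.items() if k in ENTERPRISE_MANAGED)
cloud_saas = venue_share["public_cloud_saas"]

print(f"Enterprise-managed workloads: {enterprise_managed:.0%}")  # ~77%
print(f"Public cloud/SaaS workloads:  {cloud_saas:.0%}")          # 18%
```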
So what does it all mean? The IT ecosystem, which already supports a diverse hybrid infrastructure, is not yet facing a tipping point for any one portion. That doesn’t mean it won’t come eventually. A basic dynamic applies: workloads that are re-engineered to be cloud-compatible are much more easily moved to the cloud. But many other factors are in play, so core infrastructure change will be gradual. These factors include the economics of the workload, dynamic scalability, risk management, supportability, application support, overall performance, service reliability and the changing business models facing every modern business. Simply put: the enterprise IT environment is changing every day and its hybrid nature is already very challenging, but enterprise IT is not in imminent terminal decline.
Public cloud operators, meanwhile, will continue to assess what they need to do to attract more critical IT workloads. Uptime Institute Intelligence has found that issues of transparency, trust, governance and service need addressing — outstanding tools and raw infrastructure capacity alone are not sufficient. We continue to monitor the actual shifts in workload placements and will report regularly as needed.
More detailed information on this topic, along with a wealth of other strategic planning tools, is available to members of the Uptime Institute Network, a members-only community of the leaders who drive the data center industry. Details can be found here.
Artificial Intelligence in the Data Center: Myth versus Reality
It is still very early days, but it is clear that artificial intelligence (AI) is set to transform the way data centers are designed, managed and operated — eventually. There has been a lot of misrepresentation and hype around AI, and it’s not always clear how it will be applied, and when. The Uptime Institute view is that it will be rolled out slowly, with initially conservative and limited use cases now and for the next few years. But its impact will grow.
There have been some standout applications to date — for example, predictive maintenance and peer benchmarking — and we expect there will be more as suppliers and large companies apply AI to analyze a wider range of relationships and patterns among a vast range of variables, including resource use, environmental impacts, resiliency and equipment configurations.
Today, however, AI is mostly being used in data centers to improve existing functions and processes. Use cases are focused on delivering tangible operational savings, such as cooling efficiency and alarm suppression/rationalization, as well as predicting known risks with greater accuracy than other technologies can offer.
Artificial intelligence is currently being applied to perform existing, well-understood and well-defined functions and processes faster and more accurately. In other words, not much new, just better. The table below is taken from the new Uptime Intelligence report “Very smart data centers: How artificial intelligence will power operational decisions” (available to Uptime Institute Network members) and shows AI functions/services that are being offered or are in development; with a few exceptions, they are likely to be familiar to data center managers, particularly those that have already deployed data center infrastructure management (DCIM) software.
So where might AI be applied beyond these examples? We think it is likely AI will be used to anticipate failure rates, as well as to model costs, budgetary impacts, supply-chain needs and the impact of design changes and configurations. Data centers not yet built could be modeled and simulated in advance, for example, to compare the operational and/or performance profile and total cost of ownership of a Tier II design data center versus a Tier III design.
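As an illustration of the kind of failure-rate modelling described above, the sketch below trains a simple classifier on synthetic equipment telemetry and scores a hypothetical unit. It is not any vendor’s method: the features, labels and values are invented for the example, and a real deployment would use a site’s own maintenance and monitoring history.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal failure-prediction sketch on synthetic telemetry (all values assumed).
rng = np.random.default_rng(1)
n = 2000
temperature_c = rng.normal(35, 5, n)       # component temperature
ripple_current = rng.normal(1.0, 0.3, n)   # e.g., UPS capacitor ripple
age_years = rng.uniform(0, 10, n)

# Synthetic ground truth: hotter, older, higher-ripple units fail more often.
risk = 0.04 * (temperature_c - 35) + 1.2 * (ripple_current - 1.0) + 0.25 * age_years
failed = (risk + rng.normal(0, 0.8, n) > 1.8).astype(int)

X = np.column_stack([temperature_c, ripple_current, age_years])
model = LogisticRegression().fit(X, failed)

# Score a current unit (feature values assumed for illustration).
prob = model.predict_proba([[42.0, 1.4, 7.0]])[0, 1]
print(f"Estimated failure probability: {prob:.1%}")
```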
Meanwhile, we can expect more marketing hype and misinformation, fueled by a combination of AI’s dazzling complexity, which only specialists can deeply understand, and by its novelty in most data centers. For example:
Myth #1: There is a best type of AI for data centers. The best type of AI will depend on the specific task at hand. Simpler big-data approaches (i.e., not AI) can be more suitable in certain situations. For this reason, new “AI-driven” products such as data center management as a service (DMaaS) often use a mix of AI and non-AI techniques.
Myth #2: AI replaces the need for human knowledge. Domain expertise is critical to the usefulness of any big-data approach, including AI. Human data center knowledge is needed to train AI to make reasonable decisions/recommendations and, especially in the early stages of a deployment, to ensure that any AI outcome is appropriate for a particular data center.
Myth #3: Data centers need a lot of data to implement AI. While this is true for those developing AI, it is not the case for those looking to buy the technology. DMaaS and some DCIM systems use pre-built AI models that can provide limited but potentially useful insights within days.
The advent of DMaaS, which was first commercialized in 2016, is likely to drive widespread adoption of AI in data centers. With DMaaS, large sets of monitored data about equipment and operational environments from different facilities (and different customers) are encrypted, pooled in data lakes, and analyzed using AI, anomaly detection, event-stream playback and other approaches.
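As a rough illustration of one technique in that mix, the sketch below flags anomalous sensor readings using a rolling z-score. It is a deliberately simple stand-in for the proprietary models DMaaS providers actually run; the synthetic data, window size and threshold are assumptions.

```python
import numpy as np

# Flag sensor readings that deviate sharply from recent behavior.
rng = np.random.default_rng(0)
supply_temp_c = 22 + 0.3 * rng.standard_normal(500)   # synthetic supply-air temps
supply_temp_c[420:430] += 4.0                          # injected cooling anomaly

WINDOW = 50        # readings used to establish "normal" behavior (assumed)
THRESHOLD = 3.0    # z-score above which a reading is flagged (assumed)

anomalies = []
for i in range(WINDOW, len(supply_temp_c)):
    history = supply_temp_c[i - WINDOW:i]
    z = (supply_temp_c[i] - history.mean()) / (history.std() + 1e-9)
    if abs(z) > THRESHOLD:
        anomalies.append(i)

first = anomalies[0] if anomalies else None
print(f"Flagged {len(anomalies)} readings, first at index {first}")
```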
Several suppliers now offer DMaaS, a service that parallels the practice of large data center operators who use internal data from across their portfolios to inform decision-making and optimize operations. DCIM suppliers are also beginning to embed AI functions into their software.
Data center AI is here today and is available to almost any facility. The technology has moved beyond just hyper-scale facilities and will move beyond known processes and functions — but probably not for another two or three years.
——————————————————————————–
For more information on artificial intelligence and how it is already being applied in the data center, along with a wealth of other research, consider becoming part of the Uptime Institute Network community. Members of this owner-operator community enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership fosters a focus on operational efficiency and best practices that can be put into action every day. For membership information, click here.
Recover heat, re-charge power
Recently I attended the Data Center Dynamics (DCD) Smart Energy conference in Stockholm. During a panel discussion on energy, data centers and innovation, David Hall (Senior Director of Technology Innovation for Equinix) made two observations, almost in passing, about metrics and monitoring. Both were intriguing and, to my ears, suggested that operators’ thinking about sustainability and data center energy use is starting to evolve.
The first metric was power usage effectiveness (PUE) — the universally applied but widely criticized standard measure of data center energy efficiency, calculated as total facility energy divided by IT energy. In essence, he asked: what does it matter if your data center PUE is (a very inefficient) 4 or 5 if you are capturing and re-using the heat (i.e., the byproduct of wasted energy)? After all, as many at the conference pointed out, waste energy can be used for district, building or campus heating; for greenhouses; for heating swimming pools; or, as both Facebook (in Northern Sweden) and the National Renewable Energy Laboratory (in Colorado) do, for melting snow.
As moderator of the discussion, I took David a little to task, asking if Equinix was really doing a lot of heat recovery (no) and noting the pride that Equinix takes in its mostly low and very healthy PUE numbers. (I might have added that computers don’t make the most efficient heaters.)
But small details aside, his point is right: It does not make economic or environmental sense to “boil the sky” with warm air or the rivers with warm water. Large data centers, even very efficient ones, can and do put out a lot of heat, which wastes money, burns up energy and pushes up carbon emissions.
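A back-of-envelope calculation shows why. Since PUE is total facility energy divided by IT energy, and essentially all of the electricity entering the facility ultimately leaves it as low-grade heat, even a modest site rejects a great deal of recoverable thermal energy. The IT load and PUE values in the sketch below are illustrative only.

```python
# Back-of-envelope sketch of why even an "inefficient" site can be attractive
# for heat recovery. Nearly all electrical power drawn by the facility ends up
# as heat, so the heat available scales with IT load times PUE.

def rejected_heat_mw(it_load_mw: float, pue: float) -> float:
    """Approximate total heat rejected by the facility, in MW (thermal)."""
    return it_load_mw * pue  # essentially all input power becomes heat

for pue in (1.2, 1.5, 4.0):   # illustrative PUE values
    heat = rejected_heat_mw(it_load_mw=1.0, pue=pue)
    print(f"PUE {pue}: ~{heat:.1f} MW of heat available per 1 MW of IT load")
```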
Heat recovery has, until now, seen little adoption. In Europe, it is probably viewed as peculiarly Nordic. But it might gain traction in the future: as capacity moves nearer the edge and more processing takes place in cities, there is more opportunity to use the heat; as data centers run at higher temperatures, the “quality” of that heat improves; and as (or if) liquid cooling finally wins over more operators, more people will be attracted by the opportunity to plumb the water directly into a local or district heating system. Bottom line: there is likely to be a lot more heat recovery from data centers in the future.
The second metric? In the years ahead, David predicted that “managing state of charge” for batteries will become a key operational concern — perhaps not a metric, exactly, but that may come. This doesn’t sound wildly exciting, but there is an important reason: At present, most batteries are lead-acid and, because this technology is not suited to multiple or rapid recharges, the batteries are kept in a fully charged state more or less permanently.
But lithium-ion (Li-ion) batteries are a different matter. They can be cycled thousands of times with relatively minimal degradation. Over time, Li-ion batteries, mostly but not only in uninterruptible power supplies, will be used in ways that utilize this capability. They will be charged when energy is cheap and available, then discharged when energy is needed, when it provides additional capacity, when it can be sold, or when it can be re-allocated/distributed to a different area of the data center in greater need. Smart energy systems are already making use of this capability — battery monitoring is coupled with data about IT loads, redundancy, utility power, cooling and other data. In a more advanced, “energy-smart” data center, operators will need to know the state of charge of every energy storage device in the system at all times.
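The sketch below illustrates one plausible form such a state-of-charge policy could take: charge when grid energy is cheap, discharge to shave load when it is expensive, and always protect a reserve for ride-through. The battery parameters, price thresholds and price curve are assumptions, not a description of any particular product.

```python
# Minimal state-of-charge (SoC) policy sketch for a Li-ion UPS string.
CAPACITY_KWH = 500.0    # usable battery capacity (assumed)
RESERVE_SOC = 0.60      # SoC always held back for ride-through (assumed)
CHARGE_KW = 100.0       # maximum charge rate (assumed)
DISCHARGE_KW = 100.0    # maximum discharge rate (assumed)
CHEAP, EXPENSIVE = 40.0, 90.0   # $/MWh decision thresholds (assumed)

def step(soc: float, price: float, hours: float = 1.0) -> tuple[float, str]:
    """Advance one interval; return the new SoC and the action taken."""
    if price <= CHEAP and soc < 1.0:
        soc = min(1.0, soc + CHARGE_KW * hours / CAPACITY_KWH)
        return soc, "charge"
    if price >= EXPENSIVE and soc > RESERVE_SOC:
        soc = max(RESERVE_SOC, soc - DISCHARGE_KW * hours / CAPACITY_KWH)
        return soc, "discharge"
    return soc, "hold"

soc = 0.80
for hour, price in enumerate([35, 38, 55, 95, 110, 70, 30]):  # assumed price curve
    soc, action = step(soc, price)
    print(f"hour {hour}: price ${price}/MWh -> {action}, SoC {soc:.0%}")
```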
Another interesting fact came up at the DCD Smart Energy event: several delegates said they were working on projects using previously owned lithium-ion batteries, most notably from electric vehicles (which have more demanding rapid-charge requirements). Questions aside about whether these batteries use the most suitable Li-ion chemistry for the data center, former vehicle batteries are very economical and should cause few problems as long as they are monitored. Again, this suggests the economics and the use case for Li-ion are steadily tilting in favor of wider adoption: expect more Li-ion and more “energy-smart” data centers.
——————————————————————————–
For more information on advanced energy management and the potential for lithium-ion batteries to transform the data center power cost model, join the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership fosters a focus on operational efficiency and best practices that can be put into action every day. For membership information, click here.
Proposed NFPA 855 battery standard worries Data Center industry
A furious — but late — response to the National Fire Protection Association’s (NFPA’s) proposed Standard 855, Standard for the Installation of Stationary Energy Storage Systems, should put the whole data center industry on notice that it needs to increase its participation in standards development worldwide. This is not the first time professional bodies outside the data center industry have developed standards without sufficient, considered input from data center owners and operators.
NFPA’s Standard 855 is controversial because if it is ratified by the NFPA (a U.S.-based fire-safety standards organization), it will eventually become the basis of building codes in the U.S. and elsewhere. These codes will regulate how batteries (including nickel-cadmium and lithium-ion) can be deployed in data centers.
In this case, the concerns center on whether some of the safety provisions that make sense for renewable energy storage arrays will prove too costly and even counterproductive when applied to data centers. NFPA 855’s provisions include minimum spacing requirements and restrictions on siting and power capacity. In addition, exceptions specifically relating to data centers are found in Chapter 4 of the proposed standard rather than its Scope, which may make them less convincing to jurisdictional authorities.
According to our research, the data center industry became aware of the more controversial elements of the standard only after the public comment process had ended and two revisions completed. In this case and others, well-meaning — but self-selected — individuals had already made their decisions. This is important, because NFPA 855 forms the basis of codes that empower jurisdictional authorities, as well as other code enforcement officials.
It is not clear that the NFPA committee included enough data center representatives, who might have been able to explain that batteries designed for data center use already incorporate significant protections and that older battery types have been used for many years without serious incident. In addition, IEEE (the Institute of Electrical and Electronics Engineers, an international engineering organization) and Underwriters Laboratories (UL, an international safety organization) have already developed a series of standards pertaining to the use of Li-ion batteries in data centers.
Background
According to opponents of the new standard as it stands, NFPA 855 was originally intended to apply only to Li-ion batteries. The California Energy Storage Alliance (a membership-based advocacy group formed to advance large-scale renewable energy projects) requested the standard. It wanted to make it easier to gain approvals to site energy storage systems in support of the state’s renewable energy programs.
The request was granted, and NFPA drew upon its membership to create a committee that included the expertise necessary to develop the standard. Not until near the close of the second review period (and long after public comment closed) did the data center industry note that the draft standard proposed to regulate battery use in data centers in a wholly new and unnecessary way.
In February 2019, an ad hoc group of 10 individuals from IEEE’s ESSB (Energy Storage and Stationary Battery) committee — five of whom are also members of NFPA’s 855 committee — launched a grassroots campaign to return the draft standard to the committee to be rewritten. The effort will require the support of two-thirds of the NFPA members at its technical meetings in San Antonio, TX, June 17-20, 2019. The campaign is unusual, as NFPA procedures normally allow only limited revision of a proposed standard after the close of the public comment period.
In the short term, there is little that non-NFPA members can do but wait for the results of the June meeting. NFPA members, of course, can make their voices heard there. The long term, however, is a different matter: organizations should resolve to increase their industry participation, submit public comments on relevant standards and even become involved in the standards-making process. The data center industry can influence relevant standards only if it participates in their development. Unfortunately, the list of relevant standards-making organizations is lengthy (see ANSI Standards for a partial list). Click here to learn more about proposed NFPA Standard 855.
——————————————————————————–
Join the conversation! A wealth of data center design and operational information and guidance is available to members of the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership fosters a focus on operational efficiency and best practices that can be put into action every day. For membership information, click here.
The Business Case for Smart Energy is Still in the Making
Smart Energy is getting a lot of airplay in the data center world at present. New or planned products that fall under this broad banner include Energy-as-a-Service uninterruptible power supplies, software-defined power systems, and adaptable redundant systems that enable operators to raise or reduce their level of redundancy according to business needs. It might be argued that some long-established products, such as power capping/management, fall under this banner.
All of these products share a common characteristic. A software control system sits above the hard-wired power switching or power controls in a data center — or between the data center and the utility — and adjusts, in real or near-real time, the demand for and availability of power according to a set of policies. Depending on the infrastructure and systems in place, the power can be controlled by availability, by directional flow, by demand, or even by frequency or voltage.
The policies embedded in the management system might be financial, but might equally be concerned with availability, redundancy, safety and other factors. The data to help make those decisions may come from price signals or models, demand (from IT), equipment status, battery levels or maintenance systems. Implemented effectively, the technology is clearly powerful.
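One plausible way to structure such a policy layer is as an ordered rule set in which safety and redundancy rules always outrank cost rules; the sketch below shows the idea. The rule set, field names and thresholds are illustrative assumptions rather than a description of any shipping product.

```python
from dataclasses import dataclass

# Ordered policy evaluation: the first matching rule wins, so safety and
# redundancy policies take precedence over cost policies. All values assumed.

@dataclass
class Telemetry:
    it_demand_kw: float
    site_capacity_kw: float
    battery_soc: float        # 0.0 - 1.0
    utility_price: float      # $/MWh
    on_utility_power: bool

POLICIES = [  # (predicate, action), highest priority first
    (lambda t: not t.on_utility_power,                      "preserve batteries for ride-through"),
    (lambda t: t.battery_soc < 0.60,                        "recharge to redundancy reserve"),
    (lambda t: t.it_demand_kw > 0.95 * t.site_capacity_kw,  "discharge batteries to shave peak"),
    (lambda t: t.utility_price > 90.0,                      "discharge batteries to cut energy cost"),
    (lambda t: t.utility_price < 40.0,                      "charge batteries on cheap energy"),
]

def decide(t: Telemetry) -> str:
    for predicate, action in POLICIES:
        if predicate(t):
            return action
    return "hold"

print(decide(Telemetry(it_demand_kw=960, site_capacity_kw=1000,
                       battery_soc=0.85, utility_price=55, on_utility_power=True)))
# -> "discharge batteries to shave peak"
```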
In discussions with vendors, Uptime Institute finds another commonly shared, if not necessarily universal, characteristic: the business case for adoption is a little fuzzy. Rather as with data center infrastructure management software, there is a long list of benefits (discussed in the Uptime Institute Intelligence report Smart Energy in the data center), but it is not clear that any single one is overwhelming, given the upfront investment and the concerns about introducing complex and unfamiliar new technologies. In recent discussions among Uptime Institute members with considerable data center footprints, releasing stranded capacity and alleviating power constraints at peak times emerged as pressing problems.
The technology certainly has high promise. As our chart above shows in simple graphical form, a big potential benefit is really a form of peak shaving … either trimming demand or increasing capacity. If this can be embedded and trusted, it could significantly cut the capex and opex of data centers.
——————————————————————————–
For more on Smart Energy in the data center, a wealth of research is available to members of the Uptime Institute Network. Members enjoy a continuous stream of relevant and actionable knowledge from our analysts and share a wealth of experiences with their peers from some of the largest companies in the world. Membership fosters a focus on operational efficiency and best practices that can be put into action every day. For membership information, click here.