Data centers have become victims of their own success. Ever-larger data centers have mushroomed across the globe in line with an apparently insatiable demand for computing and storage capacity. The associated energy use is not only expensive (and generating massive carbon emissions) but is also putting pressure on the grid. Most data center developments tend to be concentrated in and around metropolitan areas — making their presence even more palpable and attracting scrutiny.
This success has created challenges for data center builders and operators, despite major achievements in energy performance throughout the 2010s, as witnessed by Uptime data on industry-average PUE. Delivering bulletproof and energy-efficient infrastructure at a competitive cost is already a difficult balancing act, even without having to engage with local government, regulators and the public at large on energy use, environmental impact and carbon footprint.
IT is conspicuously absent from this dialogue. Server and storage infrastructure accounts for the largest proportion of a data center's power consumption and physical footprint. As such, it also offers the greatest potential for energy-efficiency gains and footprint compression. Often the issue is not wasted but unused power: poor capacity-planning practices create demand for additional data center developments even where unused (but provisioned) capacity is available.
Nonetheless, despite growing costs and sustainability pressures, enterprise IT operators — as well as IT vendors — continue to show little interest in the topic.
This will be increasingly untenable in the years ahead. In the face of limited power availability in key data center markets, together with high power prices and mounting pressure to meet sustainability legislation, enterprise IT’s energy footprint will have to be addressed more seriously. This will involve efficiency-improvement measures aimed at using dramatically fewer server and storage systems for the same workload.
Uptime has identified four key areas where pressure on IT will continue to build — all of them pointing in the same direction:
Municipal (local) resistance to new large data centers.
The limited availability of grid power to support increasing data center capacity.
Increasing regulation governing sustainability and carbon reduction, and more stringent reporting requirements.
High energy costs.
Municipalities — and utility providers — need the pace to drop
Concerns over power and land availability have, since 2019, led to greater restrictions on the construction of new data centers (Table 1). This is likely to intensify. Interventions on the part of local government and utility providers typically involve more rigorous application processes, more stringent energy-efficiency requirements and, in some cases, the outright denial of new grid connections for major developments. These restrictions have resulted in costly project delays (and, in some cases, cancellations) for major cloud and colocation providers.
Frankfurt, a key financial hub and home to one of the world’s largest internet exchange ecosystems, set an example. Under a new citywide masterplan (announced in 2022), the city stipulates densified, multistory and energy-optimized data center developments — chiefly out of concerns for sprawling land use and changes to the city’s skyline.
The Dublin area (Ireland) and Loudoun County (Northern Virginia, US) are two stand-out examples (among others) of the grid being under strain and power utilities having temporarily paused or capped new connections because of current shortfalls in generation or transmission capacity. Resolving these limitations is likely to take several years. A number of data center developers in both Dublin and Loudoun County have responded to these challenges by seeking locations further afield.
Table 1 Restrictions on new data centers since 2019 — selected examples
New sustainability regulations
Following years of discussion with key stakeholders, authorities have begun introducing regulation governing performance improvements and sustainability reporting for data centers — a key example being the EC’s Energy Efficiency Directive recast (EED), which will subject data centers directly to regulation aimed at reducing both energy consumption and carbon emissions (see Critical regulation: the EU Energy Efficiency Directive recast).
This regulation creates new, detailed reporting requirements for data centers in the EU and will force operators to improve their energy efficiency and to make their energy performance metrics publicly available — meaning investors and customers will be better equipped to weigh business decisions on the basis of the organizations’ performance. The EED is expected to enter into force in early 2023. At the time of writing (December 2022), the EED could still be amended to include higher targets for efficiency gains (increasing from 9% to 14.5%) by 2030. The EC has already passed legislation mandating regulated organizations to report on climate-related risks, their potential financial impacts and environmental footprint data every year from 2025, a requirement that will affect swathes of data centers.
Similar initiatives are now appearing in the US, with the White House Office of Science and Technology Policy’s (OSTP’s) Climate and Energy Implications of Crypto-assets in the US report, published in September 2022. Complementary legislation is being drafted that addresses both crypto and conventional data centers and sets the stage for the introduction of regulation similar to the EED over the next three to five years (see First signs of federal data center reporting mandates appear in US).
Current and draft regulation is predominantly focused on the performance of data center facility infrastructure (power and cooling systems) in curbing the greenhouse gas (GHG) emissions associated with utility power consumption (Scope 2). While definitions and metrics remain vague (and are subject to ongoing development), it is clear that EC regulators intend to ultimately extend the scope of such regulation to include IT efficiency.
Expensive energy is here to stay
The current energy crises in the UK, Europe and elsewhere are masking some fundamental energy trends. Energy prices and, consequently, power prices were on an upward trajectory before Russia’s invasion of Ukraine. Wholesale forward prices for electricity were already shooting up — in both the European and US markets — in 2021.
Certain long-term trends also underpin the trajectory towards costlier power and create an environment conducive to volatility. Structural elements to long-term power-price inflation include:
The global economy’s continued dependence on (and continued increasing consumption of) oil and gas.
Underinvestment in fossil-fuel supply capacities while alternative low-carbon generation and energy storage capacities remain in development.
Gargantuan build-out of intermittent power generation capacity (overwhelmingly wind and solar) as opposed to firm low-carbon generation.
Steady growth in power demand arising from economic growth and electrification in transport and industry.
More specifically, baseload power is becoming more expensive because of the economic displacement effect of intermittent renewable energy. Regardless of how much wind and solar (or even hydro) is connected to the grid, reliability and availability considerations mean the grid has to be fully supported by dispatchable generation such as nuclear, coal and, increasingly, gas.
However, customer preference for renewable energy (and its low operational costs) means fleets of dispatchable power plants operate at reduced capacity, with an increasing number on standby. Grid operators — and, ultimately, power consumers — still need to pay for the capital costs and upkeep of this redundant capacity, to guarantee grid security.
IT power consumption will need to be curbed
High energy prices, carbon reporting, grid capacity shortfalls and efficiency issues have been, almost exclusively, a matter of concern for facility operators. But facility operators have now passed the point of diminishing returns, with greater intervention delivering fewer and fewer benefits. In contrast, every watt saved by IT reduces pressures elsewhere. Reporting requirements will, sooner or later, shed light on the vast potential for greater energy efficiency (or, to take a harsher view, expose the full extent of wasted energy) currently hidden in IT infrastructure.
For these reasons, other stakeholders in the data center industry are likely to call upon IT infrastructure buyers and vendors to engage more deeply in these conversations, and to commit to major initiatives. These demands will be completely justified: currently, IT has considerable scope for delivering improved power management and energy efficiency, where required.
Architecting IT infrastructure for energy efficiency — through better hardware configuration choices, dynamic workload consolidation practices and the use of power-management features (including energy-saving states and power throttling / capping) — will deliver major energy-efficiency gains. Server utilization, and the inherent efficiency of server hardware, are two key dials that could bring manifold improvements in energy performance compared with typical enterprise IT.
These efficiency gains are not just theoretical: web technology and cloud services operators exploit them wherever they can. There is no reason why other organizations cannot adopt some of these practices and move closer to the performance levels these operators achieve. In an era of ever-more expensive (and scarce) power resources, together with mounting regulatory pressure, it will be increasingly difficult for IT C-level managers to deny calls to engage in the battle for better energy efficiency.
The full report, Five data center predictions for 2023, is available here.
See our Five Data Center Predictions for 2023 webinar here.
Daniel Bizo
Douglas Donnellan
Energy-efficiency focus to shift to IT — at last (published 2023-05-03). Daniel Bizo, Research Director, Uptime Institute Intelligence, [email protected]
The past decade has seen numerous reports of so-called cloud “repatriations” — the migration of applications back to on-premises venues following negative experiences with, or unsuccessful migrations to, the public cloud.
A recent Uptime Update (High costs drive cloud repatriation, but impact is overstated) examined why these migrations might occur. The Update revealed that unexpected costs were the primary reason for cloud repatriation, with the cost of data storage being a significant factor in driving expenditure.
Software vendor 37signals recently made headlines after moving its project management platform Basecamp and email service HEY from Amazon Web Services (AWS) and Google Cloud to a colocation facility.
The company has published data on its monthly AWS bills for HEY (Figure 1). The blue line in Figure 1 shows the company’s monthly AWS expenditure. This Update examines this data to understand what lessons can be learned from 37signals’ experience.
37signals’ AWS spend — observations
Based on the spend charts included in 37signals’ blog (simplified in Figure 1), some observations stand out:
The applications that are part of HEY scale proportionally. When database costs increase, for example, the cost of other services increases similarly. This proportionality suggests that applications (and the total resources used across various services) have been architected to scale upwards and downwards, as necessary. As HEY’s costs scale proportionally, it is reasonable to assume that costs are proportional to resources consumed.
Costs (and therefore resource requirements) are relatively constant over the year — there are no dramatic increases or decreases from month to month.
Database and search are substantial components of 37signals’ bills. The company’s database is not expanding, however, suggesting that the company is effective in preventing sprawl. 37signals’ data does not appear to have “gravity” — “gravity” here meaning the greater the amount of data stored in a system the more data (and, very often, software applications) it will attract over time.
While 37signals’ applications are architected to scale upwards and downwards as necessary, these applications rarely need to scale rapidly to address unexpected demand. This consistency allows 37signals to purchase servers that are likely to be utilized effectively over their life cycle without performance being impacted due to low capacity.
This high utilization level supports the company’s premise that — at least for its own specific use cases — on-premises infrastructure may be cheaper than public cloud.
Return on server investment
As with any capital investment, a server is expected to provide a return — either through increased revenue, or higher productivity. If a server has been purchased but is sitting unused on a data center floor, no value is being obtained, and CAPEX is not being recovered while that asset depreciates.
At the same time, there is a downside to using every server at its maximum capacity. If asset utilization is too high, there is nowhere for applications to scale up if needed. The lack of a capacity buffer could result in application downtime, frequent performance issues, and even lost revenue or productivity.
Suppose 37signals decided to buy all server hardware one year in advance, predicted its peak usage over the year precisely, and purchased enough IT to deliver that peak (shown in orange on Figure 1). Under this ideal scenario, the company would achieve a 98% utilization of its assets over that period (in a financial, not computing or data-storage sense) — that is, 98% of its investment would be used over the year for a value-adding activity.
It is unlikely that the company could make such a perfect prediction. Overestimating capacity requirements would result in lower utilization and, accordingly, more waste. Underestimating them would result in performance issues. A more sensible approach would be to purchase servers as soon as required (shown in green on Figure 1). This strategy would achieve 92% utilization. In practice, however, the company would hold more idle servers to provide immediate capacity, decreasing utilization further.
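Financial utilization in this sense is simply the resources actually used divided by the resources provisioned (and depreciating) over the period. The sketch below illustrates the two purchasing strategies using an entirely hypothetical demand profile, not 37signals' actual figures, so the resulting percentages are illustrative rather than the 98% and 92% derived from Figure 1:

```python
def financial_utilization(used_per_month, provisioned_per_month):
    """Share of the provisioned (paid-for) capacity that did
    value-adding work over the period."""
    return sum(used_per_month) / sum(provisioned_per_month)

# Hypothetical monthly demand (capacity units), rising toward a peak of 100.
used = [88, 90, 92, 94, 96, 98, 99, 100, 99, 98, 97, 96]

# Strategy 1: predict the annual peak perfectly and buy it all upfront.
upfront = [max(used)] * len(used)

# Strategy 2: add capacity only when demand first requires it
# (installed capacity steps up and never comes back down).
as_needed, installed = [], 0
for u in used:
    installed = max(installed, u)
    as_needed.append(installed)

print(f"upfront purchase: {financial_utilization(used, upfront):.0%}")
print(f"buy as required:  {financial_utilization(used, as_needed):.0%}")
```

Which strategy scores higher depends entirely on the shape of the demand curve and on how far ahead of need servers must actually be ordered; the point is that any capacity held but not used drags the figure down.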
Cloud providers can never achieve such a high level of utilization (although non-guaranteed “spot” purchases can help). Their entire proposition relies on being able to deliver capacity when needed. As a result, cloud services must have servers available when required — and lots of them.
Why utilization matters
Table 1 makes simple assumptions that demonstrate the challenge a cloud provider faces in provisioning excess capacity.
These calculations show that this on-premises implementation costs $10,000 in total, with the cloud provider’s total costs being $16,000. Cloud buyers rent units of resources, however, with the price paid covering operating costs (such as power), the resources being used, and the depreciating value (and costs) of servers held in reserve. A cloud buyer would pay a minimum of $1,777 per unit, compared with a unit cost of $1,111 in an on-premises venue. The exact figures are not directly relevant: what is relevant is the fact that the input cost using public cloud is 60% more per unit — purely because of utilization.
Of course, this calculation is a highly simplified explanation of a complex situation. But, in summary, the cloud provider is responsible for making sure capacity is readily available (whether this be servers, network equipment, data centers, or storage arrays) while ensuring sufficient utilization such that costs remain low. In an on-premises data center this balancing act is in the hands of the organization. If enterprise capacity requirements are stable or slow-growing, it can be easier to balance performance against cost.
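The unit-cost arithmetic can be reconstructed under simple assumptions (Table 1 itself is not reproduced here, so the server count and price below are assumptions chosen to match the figures above): each server costs $1,000 and delivers one unit of capacity, nine units of demand are actually served, the on-premises operator holds 10 servers, and the cloud provider holds 16 to guarantee headroom.

```python
SERVER_COST = 1_000   # assumed cost per server (one capacity unit each)
UNITS_SERVED = 9      # units of demand that actually earn revenue

def cost_per_unit(servers_held):
    """Total fleet cost spread across the units actually sold:
    idle reserve servers must still be paid for by those units."""
    return servers_held * SERVER_COST / UNITS_SERVED

on_prem = cost_per_unit(10)   # $10,000 fleet
cloud = cost_per_unit(16)     # $16,000 fleet

print(f"on-premises: ${int(on_prem):,} per unit")   # $1,111 per unit
print(f"cloud:       ${int(cloud):,} per unit")     # $1,777 per unit
print(f"cloud premium: {cloud / on_prem - 1:.0%}")  # 60%
```

The 60% premium falls directly out of the utilization gap: the same nine revenue-earning units must carry the cost of 16 servers rather than 10.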
Sustaining utilization
It is likely that 37signals has done its calculations and is confident that migration is the right move. Success in migration relies on several assumptions. Organizations considering migrating from the public cloud back to on-premises infrastructure are best placed to make a cost-saving when:
There are unlikely to be sudden drops in resource requirements, such that on-premises servers are sitting idle and depreciating without adding value.
Unexpected spikes in resource requirements (that would mean the company could not otherwise meet demand, and the user experience and performance would be impacted) are unlikely. An exception here would be if a decline in user experience and performance did not impact business value — for example, if capacity issues meant employees were unable to access their CEO’s blog simultaneously.
Supply chains can deliver servers (and data center space) quickly in line with demand without the overheads involved in holding many additional servers (i.e., depreciating assets) in stock.
Skills are available to manage those aspects of the infrastructure for which the cloud provider was previously responsible (e.g., MySQL, capacity planning). These factors have not been considered in this Update.
The risk is that 37signals (or any other company moving away from the public cloud) might not be confident of these criteria being met in the longer term. Were the situation to change unexpectedly, the cost profile of on-premises versus public cloud could be substantially altered.
Asset utilization drives cloud repatriation economics (published 2023-04-26). Dr. Owen Rogers, Research Director for Cloud Computing, Uptime Institute, [email protected]
A proposed permanent network of electromagnetic monitoring stations across the continental US, operating in tandem with a machine learning (ML) algorithm, could facilitate accurate predictions of geomagnetic disturbances (GMDs). If realized, this predictive system could help grid operators avert disruption and reduce the likelihood of damage to their — and their customers’ — infrastructure, including data centers.
Geomagnetic disturbances, also referred to as “geomagnetic storms” or “geomagnetic EMP”, occur when violent solar events interact with Earth’s atmosphere and magnetic field. Solar events that cause geomagnetic EMP (such as coronal mass ejection, or solar flares) occur frequently but chaotically, and are often directed away from Earth. The only long-term available predictions are probabilistic, and imprecise: for example, an extreme geomagnetic EMP typically occurs once every 25 years. When a solar event occurs, the US Space Weather Prediction Center (SWPC) can give hours’ to days’ notice of when it is expected to reach Earth. At present, these warnings lack practical information regarding the intensity and the location of such EMPs’ effects on power infrastructure and customer equipment (such as data centers).
A GMD produces ground-induced currents (GICs) in electrical conductors. The low frequency of a GMD concentrates GICs in very long electrical conductors — such as, for example, the high-voltage transmission lines in a power grid. A severe GMD can cause high-voltage transformer damage and widespread power outages — which could last indefinitely: high-voltage transformers have long manufacturing lead times, even in normal circumstances. Some grid operators have begun protecting their infrastructure against GICs. Data centers, however, are at risk of secondary GIC effects through their connections to the power grid: and many data center operators have not taken protective measures against GMDs, or any other form of EMP (see Electromagnetic pulse and its threat to data centers).
In the event of a less intense GMD, grid operators can often compensate for GICs without failures. Data centers, however, may experience power-quality issues such as harmonic distortions (defects in AC voltage waveforms). Most data center uninterruptible power supply (UPS) systems are designed to accommodate some harmonics and protect downstream equipment, but the intense effects of a GMD can overwhelm these built-in protections — potentially damaging the UPS or other equipment. The effects of harmonics inside a data center can include inefficient UPS operation, UPS rectifier damage, tripped circuit breakers, overheated wiring, malfunctioning motors in mechanical equipment and, ultimately, physical damage to IT equipment.
The benefit to data center operators from improved forecasting of GMD effects is greatest in the event of these less intense incidents, which threaten equipment damage to power customers but are insufficient to bring down the power grid. An operator’s best defense against secondary GIC effects is to pre-emptively disconnect from the grid and run on backup generators. Actionable, accurate, and localized forecasting of GIC effects would better prepare operators to disconnect in time to avert damage (and to avoid unnecessary generator runtime in regions where this is strictly regulated).
An added challenge regarding the issue of geomagnetic effects on power infrastructure is that it is interdisciplinary: the interactions between Earth’s magnetic field and the power grid have historically not been well understood by experts in either geology or electrical infrastructure. Computationally simulating the effects of geomagnetic events on grid infrastructure is still not practically feasible.
This might change with rapid advancements in computer performance and modeling methods. At the 2022 Infragard National Disaster Resilience Council Summit in the US, researchers at Oregon State University presented a machine learning approach that could produce detailed geomagnetic forecasting — the objective here being to inform grid operators of their assessment of necessary protection of grid infrastructure.
Better modeling and forecasting of GMD effects requires many measurements spanning a geographic area of interest. The Magnetotelluric (MT) Array collects data across the continental US, using seven permanent stations, and over 1,600 temporary locations (as of 2022), arranged 43 miles (70 km) apart, on a grid. Over 1,900 temporary MT stations are planned by 2024. Station instruments measure time-dependent changes in Earth’s electric and magnetic fields, providing insight into the resistivity and electromagnetic impedance of Earth’s crust and upper mantle, in three dimensions. This data informs predictions of GIC intensity, which closely correlates with damaging effects on power infrastructure. The MT Array provides a dramatic and much-needed improvement to the resolution of data available on these geomagnetic effects.
Researchers trained a machine learning model on two months of continuous and simultaneous data output from an array of 25 MT stations in Alaska (US). The trained model effectively predicts geomagnetic effects, with 30 minutes’ advance notice. Fortunately, scaling these forecast abilities to the continental US will not require the long-term operation of thousands of MT stations. The trained model can forecast geomagnetic effects at the 43 miles (70 km) resolution of the full MT Array with significantly fewer permanent stations providing input.
The proposed permanent network is called the “Internet of MT” (IoMT) and would cover the continental US with just 500 permanently installed devices to produce ongoing forecasts, on a grid at 87 mile (140 km) spacing. These devices are designed differently from the equipment at today’s MT Array stations: while collecting the same types of data, they have several advantages. Powered by solar panels and allowing data to be uploaded automatically through a mobile network connection, the IoMT devices have a smaller footprint and a much lower cost of acquisition — approximately $5,000 per station (in contrast to current MT Array station equipment, which would cost $60,000 to install permanently).
The MT Array has, so far, been financed through funding from various US government agencies, including the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the United States Geological Survey (USGS). Though the IoMT’s equipment design promises a significantly lower cost of acquisition and installation than the technology used in today’s temporary array, funding for this next phase has not yet been secured.
Detailed geomagnetic forecasts could make it possible for grid operators to take proactive steps to protect their infrastructure — preventing prolonged power outages and sparing their customers (including data centers) damaging secondary effects. The predictions offered through the IoMT provide a model that could be used worldwide to address the risks inherent in the threat of geomagnetic EMP. Though it is too early to anticipate how this data could be distributed to data center operators, the value of proactive defense from GMDs may support a subscription service — for instance, on the part of companies that provide weather data.
Forecasting the solar storm threat (published 2023-04-19). Jacqueline Davis, Research Analyst, Uptime Institute, [email protected]
Big public-cloud operators have often had to compete against each other — sometimes ferociously. Only rarely have they had to compete against alternative platforms for corporate IT, however. More often than not, chief information officers (CIOs) responsible for mission-critical IT have seen a move to the public cloud as low-risk, flexible, forward-looking and, ultimately, inexpensive. But these assumptions are now coming under pressure.
As the coming years threaten to be economically and politically turbulent, infrastructure and supply chains will be subject to disruption. Increasing government and stakeholder interest will force enterprises to scrutinize the financial and other risks of moving on-premises applications to the public cloud. More effort, and more investment, may be required to ensure that resiliency is both maintained and clearly evident to customers. While cloud has, in the past, been viewed as a low-risk option, the balance of uncertainty is changing — as are the cost equations.
Although the picture is complicated, with many factors at play, there are some signs that these pressures may, already, be slowing down adoption. Amazon Web Services (AWS), the largest cloud provider, reported a historic slowdown in growth in the second half of 2022, after nearly a decade of 30% to 40% increases year-on-year. Microsoft, too, has flagged a likely slowdown in the growth of its Azure cloud service.
No one in the industry is suggesting that the adoption of public cloud has peaked, or that it is no longer of strategic value to large enterprises. Use of the public cloud is still growing dramatically and is still driving growth in the data center industry. Public cloud will continue to be the near-automatic choice for most new applications, but organizations with complex, critical and hybrid requirements are likely to slow down or pause their migrations from on-premises infrastructure to the cloud.
Is the cloud honeymoon over?
Many businesses have been under pressure to move applications to the cloud quickly, without comprehensive analysis of the costs, benefits and risks. CIOs, often prompted or backed by heads of finance or chief executives, have favored the cloud over on-premises IT for new and / or major projects.
Data from the Uptime Institute Global Data Center Survey 2022 suggests that, while many were initially wary, organizations are becoming more confident in using the cloud for their most important critical workloads. The proportion of respondents not placing mission-critical workloads into the public cloud has dropped from 74% in 2019 to 63% in 2022. Figure 1 shows the growth in on-premises to cloud migrations, encouraged by C-level enthusiasm and positive perceptions of inexpensive performance.
High-profile cloud outages, however, together with increasing regulatory interest, are encouraging some customers to take a closer look. Customers are beginning to recognize that not all applications have been architected to take advantage of key cloud features — and architecting applications properly can be very costly. “Lifting and shifting” applications that cannot scale, or that cannot track changes in user demand or resource supply dynamically, is unlikely to deliver the full benefits of the cloud and could create new challenges. Figure 1 shows how several internal (IT) and external (macroeconomic) pressures could suppress growth in the future.
One particular challenge is that many applications have not been rearchitected to meet business objectives — most notably resiliency. Many cloud customers are not fully aware of their responsibilities regarding the resiliency and scalability of their application architecture, in the belief that cloud companies take care of this automatically. Cloud providers, however, make it explicitly clear that zones will suffer outages occasionally and that customers are required to play their part. Cloud providers recommend that customers distribute workloads across multiple availability zones, thereby increasing the likelihood that applications will remain functional, even if a single availability zone falters.
Research by Uptime shows how vulnerable enterprise-cloud customers currently are to single-zone outages. Data from the Uptime Institute Global Data Center Survey 2022 shows that 35% of respondents believe the loss of an availability zone would result in significant performance issues, while only 16% indicated that such a loss would not impact their cloud applications.
To capture the full benefits of the cloud and to reduce the risk of outages, organizations need to (re)architect for resiliency. This resiliency has an upfront and ongoing cost implication, and this needs to be factored in when a decision is made to migrate applications from on-premises to the cloud. Uptime Intelligence has previously found that architecting an application across dual availability zones can cost 43% more than a non-duplicated application (see Public cloud costs versus resiliency: stateless applications). Building across regions, which further improves resiliency, can double costs. Some applications might not be worth migrating to the cloud, given the additional expense of resiliency being factored into application architecture.
Economic forces will reduce pressure to migrate to the cloud
Successful and fully functional cloud migrations of critical workloads carry additional costs that are often substantial — a factor that is only now starting to be fully understood by many organizations.
These costs include both the initial phase — when applications have to be redeveloped to be cloud-native, at a time when skills are in short supply and high demand — and the ongoing consumption charges that arise from long periods of operation across multiple zones. It is clear that the cost of the cloud has not always been factored in: cost is a major reason for organizations moving their workloads back to on-premises from the public cloud, cited by 43% of respondents to Uptime Institute’s Data Center Capacity Trends Survey 2022.
Server refresh cycles often act as a trigger for cloud migration. Rather than purchasing new physical servers, IT C-level leaders choose to lift-and-shift applications to the public cloud. Uptime’s 2015 global survey of data center managers showed that 35% of respondents kept their servers in operation for five years or more; this proportion had increased to 52% by 2022. During challenging economic times, CIOs may be choosing to keep existing servers running instead of investing in a migration to the cloud.
Even if CIOs continue to exert pressure for a move to the cloud, this will be muted by the need to justify the expense of migration. Although migration allows for a reduction in on-premises IT and data center footprints, many organizations do not have the leeway to handle the unexpected costs required to make cloud applications more resilient or performant. Poor access to capital, together with tighter budgets, will force executives to think carefully about the need for full cloud migrations. Application migrations with a clear return on investment will continue to move to the cloud; those that are borderline may be put on the back burner until conditions are clearer.
Additional pressure from regulators
Governments are also becoming concerned that cloud applications are not sufficiently resilient, or that they present other risks. The dominance of Amazon, Google and Microsoft (the “hyperscalers”) has raised concerns regarding “concentration risk” — an over-reliance on a limited number of cloud providers — in several countries and key sectors.
Regulators are taking steps to assess and manage this concentration risk, amid concerns that it could threaten the stability of many economies. The EC’s recently adopted Digital Operational Resilience Act (DORA) provides a framework for making the oversight of outsourced IT providers (including cloud) the responsibility of financial market players. The UK government’s Office of Communications (Ofcom) has launched a study into the country’s £15 billion public-cloud-services market. The long-standing but newly updated Gramm-Leach-Bliley Act (GLBA, also known as the Financial Services Modernization Act) in the US now requires regular cyber and physical security assessments.
The direction is clear. More organizations are going to be required to better evaluate and plan risks arising from third-party providers. This will not always be easy or accurate. Cloud providers face the same array of risks (arising from cyber-security issues, staff shortages, supply chains, extreme weather and unstable grids, etc.) as other operators. They are rarely transparent about the challenges associated with these risks.
Organizations are becoming increasingly aware that lifting and shifting applications from on-premises to public-cloud locations does not guarantee the same levels of performance or availability. Applications must be architected to take advantage of the public cloud — with the resulting upfront and ongoing cost implications. Many organizations may not have the funds (or indeed the expertise and / or staff) to rearchitect applications during these challenging times, particularly if the business benefits are not clear. Legislation will force regulated industries to consider all risks before venturing into the public cloud. Much of this legislation, however, is yet to be drafted or introduced.
How will this affect the overall growth of the public cloud and its appeal to C-level management? Hyperscaler cloud providers will continue to expand globally and to create new products and services. Enterprise customers, in turn, are likely to continue finding cloud services competitive. The rush to migrate workloads will slow down as organizations do the right thing: assess their risks, design architectures that help mitigate those risks, and move only when ready to do so (and when doing so will add value to the business).
The full report Five data center predictions for 2023 is available here.
See our Five Data Center Predictions for 2023 webinar here.
Dr. Owen Rogers, Research Director for Cloud Computing, Uptime Institute
A host of regulations worldwide have introduced (or will introduce) legal mandates forcing data center operators to report specific operational data and metrics. Key examples include the European Union’s Corporate Sustainability Reporting Directive (CSRD); the European Commission’s proposed Energy Efficiency Directive (EED) recast; the draft US Securities and Exchange Commission (SEC) climate disclosure proposal; and various national reporting requirements (including in Brazil, Hong Kong, Japan, New Zealand, Singapore, Switzerland and the UK) under the Task Force on Climate-related Financial Disclosures (TCFD). The industry, however, is not currently adequately prepared to address these requirements.
Current data-exchange practices lack consistency — and any recognized consensus — on reporting sustainability-related data, such as energy and water use, greenhouse gas (GHG) emissions and operational metrics. Many enterprises have, throughout discussions with Uptime Institute, indicated that it is difficult (and sometimes impossible) to obtain energy and emissions data from colocation and cloud operators.
The Uptime Institute Global Data Center Survey 2022 made clear that data center operators’ readiness to report GHG emissions has seen only incremental improvement over previous surveys, with just 37% of respondents indicating that they are prepared to publicly report their GHG emission inventories (up by only 4 percentage points on the previous year). Of these, less than one-third currently include their Scope 3 emissions inventories.
Fortunately, most of the reporting regimes will become effective for the 2024 reporting year, giving data center managers time to work with their colocation and cloud providers on obtaining the necessary data, and to put their carbon accounting processes in order. While the finer details will vary according to each enterprise’s digital infrastructure footprint, there are certain common steps that data center managers can implement to facilitate the collection of quality data to fulfill these new reporting mandates.
Colocation operations
The GHG Protocol classifies emissions as Scope 1 and Scope 2 where an entity has operational control or financial control. Having analyzed these definitions, Uptime’s position is that IT operators exercise both operational and financial control over their IT operations in colocation data centers.
From an operational control standpoint, IT operators specify and purchase the IT equipment installed in the colocation space, set the operating parameters for that equipment (power management settings, virtual machine creation and assignment, hardware utilization levels, etc.) and maintain and monitor operations. Similarly, IT operators have financial control: they purchase, install, operate and maintain the IT equipment. On this basis, GHG emissions from IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator. Emissions (and energy use) from facility functions, such as power distribution losses from the grid connection to the IT hardware, and cooling energy, should fall into Scope 3 for the IT operator tenant.
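The allocation described above can be sketched as a simple calculation: the tenant's metered IT energy drives its Scope 2 emissions, while its share of facility overhead (approximated here via PUE) falls into Scope 3. The PUE-based apportionment and all input figures are illustrative assumptions, not a prescribed methodology:

```python
def allocate_colo_emissions(tenant_it_mwh: float,
                            facility_pue: float,
                            grid_factor_tco2_per_mwh: float) -> dict:
    """Split a colocation tenant's emissions into Scope 2 (its metered IT
    load) and Scope 3 (its share of facility overhead: cooling, power
    distribution losses). Overhead is approximated via PUE."""
    # Facility overhead attributable to this tenant's IT load
    overhead_mwh = tenant_it_mwh * (facility_pue - 1.0)
    return {
        "tenant_scope2_tco2": tenant_it_mwh * grid_factor_tco2_per_mwh,
        "tenant_scope3_tco2": overhead_mwh * grid_factor_tco2_per_mwh,
    }

# 1,000 MWh of metered IT load, PUE 1.5, 0.4 tCO2/MWh grid factor (all hypothetical)
print(allocate_colo_emissions(1_000, 1.5, 0.4))
```

From the colocation operator's side, the same overhead energy would be its Scope 2 and the tenant's IT energy its Scope 3, mirroring the split above.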
Table 1 outlines Scope 2 and Scope 3 emissions reporting responsibilities for IT operations in enterprise (owned), colocation and public cloud data centers under GHG Protocol Corporate Accounting and Reporting Standards.
The GHG Protocol guidance itself, however, did not take a position on the assignment of Scope 2 and Scope 3 emissions in colocation operations, leaving this decision to individual colocation operators.
In practice, different operators use two different accounting criteria. Equinix, for example, accounts for all energy use and emissions as Scope 2, with emissions effectively passed to tenants as Scope 3. NTT follows the approach (also recommended by Uptime) that GHG emissions from the energy use of IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator.
The use of two different accounting criteria creates confusion and makes the comparison and understanding of emissions reports across the data center industry difficult. The industry needs to settle on a single accounting methodology for emissions reporting.
The GHG Protocol Corporate Accounting and Reporting Standards are likely to be cited as governing the classification of Scope 1, 2 and 3 emissions under legal mandates such as the CSRD and the proposed SEC climate disclosure requirements. Uptime recommends that colocation operators and their tenants conform to the GHG Protocol to meet these legal requirements.
Public cloud operations
Emissions accounting for IT operations in a public-cloud facility is straightforward: all emissions are Scope 2 for the cloud operator (since they own and operate the IT and facilities infrastructure) and Scope 3 for the IT operator (customer).
A cloud operation in a colocation facility adds another layer of allocation. Public-cloud IT energy use should be accounted for as Scope 3 by the colocation operator and Scope 2 by the cloud operator, with facility infrastructure-related emissions accounted for as Scope 2 and Scope 3 by each entity respectively. This represents no change for the IT operator: all emissions associated with its cloud-based applications and data — regardless of where that cloud footprint exists — will be accounted for as Scope 3.
IT operators report that they have difficulty obtaining energy-use and emissions information from their cloud providers. The larger cloud operators, and several of the large colocation providers, typically claim that there are zero emissions associated with operations at their facilities because they are carbon-neutral on account of buying renewable energy and carbon offsets. The same providers are typically unable or unwilling to provide more detailed information — making compliance with legally mandated reporting requirements difficult for IT operators.
If IT operators are to comply with forthcoming disclosure obligations they will, in accordance with the GHG Protocol, need data on their energy use and their location-based (grid power mix) and market-based (contractual mix) emissions. They will also need more granular information on renewable energy consumption and the application of renewable energy certificates (RECs) in offsetting grid power use and the associated emissions if they are to fully understand the underlying details.
Required cloud and colocation provider sustainability data
With new sustainability reporting regulations due to take effect in the medium term, IT operators will clearly need more detailed data on energy and emissions from their infrastructure providers, both to meet their compliance responsibilities and to assess the total environmental impact of their operations. Colocation and cloud services providers (and others providing hosting and various IT infrastructure services) will be expected to provide the data listed below — ideally as a condition of any service contract. This data will provide the information necessary to complete TCFD climate disclosures, as well as the IT operator’s sustainability report. Additional data may need to be added to this list to address specific local reporting or operating-efficiency mandates.
Data-transfer requirements for colocation and cloud services contracts should facilitate the annual reporting of operational data including:
IT power consumption as reported through the operator-specific meter.
12-month average PUE for the space supporting the racks.
Quantity of waste heat recovered and reused.
Total-facility electricity consumption (over the year).
The percentage of each type of generation supplying electricity to the facility (e.g., coal, natural gas, wind, solar, biomass).
Quantity of renewable energy consumed by the facility (megawatt hours, MWh).
MWh of RECs / grid offsets (guarantees of origin, GOs) matched to grid purchases to claim renewable energy “use” and / or to offset grid emissions (to include generation type(s) and the avoided emissions value for each group of RECs or GOs used to match grid electricity consumption).
Percentage of renewable energy “used” (consumed and matched) at the facility as declared by the supplier.
Reported location-based emissions for the facility (MT CO2).
Reported market-based emissions for the facility (MT CO2).
Average annual emissions factor of electricity supplied (by utility, energy retailer or country or grid region) (MT CO2/MWh).
Total-facility water consumption (over the year).
Note: GHG emissions values should be reported for the facility’s total fuel and electricity consumption. Scope 1 emissions should include any refrigerant emissions (fugitive or failure).
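For illustration, the relationship between several of the items above (total facility consumption, the grid emissions factor, and REC/GO matching) can be sketched as follows. The simplification that matched MWh carry a zero emissions factor, and all input values, are assumptions:

```python
def facility_emissions(total_mwh: float,
                       grid_factor_tco2_per_mwh: float,
                       rec_mwh: float) -> dict:
    """Location-based emissions apply the grid factor to all consumption;
    market-based emissions zero-rate the MWh matched by RECs/GOs
    (a simplification: matched MWh are assumed to carry a zero factor)."""
    matched = min(rec_mwh, total_mwh)  # cannot match more than was consumed
    return {
        "location_based_tco2": total_mwh * grid_factor_tco2_per_mwh,
        "market_based_tco2": (total_mwh - matched) * grid_factor_tco2_per_mwh,
        "renewable_share_pct": 100.0 * matched / total_mwh,
    }

# 10,000 MWh consumed, 0.35 tCO2/MWh grid factor, 6,000 MWh of RECs applied (hypothetical)
print(facility_emissions(10_000, 0.35, 6_000))
```

The gap between the location-based and market-based figures is exactly why IT operators need the REC/GO detail listed above: without it, a provider's "zero emissions" claim cannot be validated against actual grid consumption.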
Energy-use data should be requested monthly or quarterly (recognizing that data reports from service providers will typically lag by one to three months) to allow tracking of power consumption, emissions metrics and objectives throughout the year. Current-year emissions can be estimated using the previous year’s emissions factor for electrical consumption at a facility.
Mandated reporting requirements will typically require data to be submitted in March following the end of the reporting year. Therefore, service agreements should allow for this to be delivered to clients by February.
Colocation and cloud-service providers need to develop methodologies to provide energy-use and location- and market-based-emissions estimates to their clients. Colocation providers should install metering to measure tenants’ IT power consumption, simplifying allocated energy use and emissions reporting. Cloud providers have several different approaches available for measuring or estimating energy use. Algorithms can be created that use IT-system and equipment power and utilization data (with tracking capabilities and / or knowledge of the power-use characteristics of the deployed IT equipment configurations) to estimate a customer’s energy use and associated location-based emissions.
Any calculation methodology should be transparent for customers. Cloud providers will need to choose a methodology that fits with their data collection capabilities and start providing data to their customers as soon as possible.
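One hedged sketch of such an estimation algorithm uses a linear power model between idle and maximum draw, scaled by facility PUE. The linear model and all figures are illustrative assumptions, not any provider's actual methodology:

```python
def estimate_customer_energy_kwh(hours: float,
                                 idle_w: float,
                                 max_w: float,
                                 avg_utilization: float,
                                 pue: float) -> float:
    """Estimate one server's allocated energy use: linear interpolation
    between idle and maximum power draw by average utilization, scaled
    by facility PUE to include the customer's share of overhead."""
    server_w = idle_w + (max_w - idle_w) * avg_utilization
    return server_w * pue * hours / 1_000.0  # W·h -> kWh

# One server over a 30-day month (720 h): 120 W idle, 400 W max,
# 25% average utilization, PUE 1.4 (all hypothetical)
kwh = estimate_customer_energy_kwh(720, 120, 400, 0.25, 1.4)
emissions_kg = kwh * 0.4  # hypothetical 0.4 kgCO2/kWh location-based factor
print(f"{kwh:.2f} kWh, {emissions_kg:.2f} kgCO2")
```

Summing such per-system estimates across a customer's deployed configurations is one plausible way a provider could produce the per-customer energy and location-based emissions figures that reporting mandates will require.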
IT operators need to obtain information on the RECs, GOs and carbon offsets applied to the overall energy use at each facility at which they operate. This data will allow IT operators to validate the actual emissions associated with the energy consumed by the data center, as well as claims regarding renewable energy use and GHG emissions reductions. IT operators will need to exercise due diligence to ensure that data is accurately reported, and will need to match the service provider’s data to the operator’s chosen sustainability metrics.
Data required from IT tenants at colocation facilities
Colocation operators may require operational information from their tenants. The proposed EED recast is likely to require colocation operators to report specific IT operational data — which will have to be supplied by their tenants. At a minimum, colocation operators need to incorporate a clause into their standard contracts requiring tenants to provide legally mandated operational data. Contract language can be made more specific to the facilities covered by the forthcoming mandates as new regulations are promulgated.
Conclusion
The reporting of data center energy use and GHG emissions is undergoing a major transition — from a voluntary effort subject to limited scrutiny to legally mandated reporting requiring third-party assurance. These legal requirements can extend to smaller enterprise and colocation operators: the EED recast, for example, will apply to operations with just 100 kW of installed IT equipment power. These forthcoming requirements will require IT operators to take responsibility for their operations across all data-center categories — owned, colocation and cloud.
This new regulatory environment means digital infrastructure managers will have to facilitate collaboration between their facilities teams, IT teams and data center service providers to create a coherent sustainability strategy across their operations. Processes will need to be created to generate, collect and report the data and metrics needed to comply with these requirements. At the industry level, standards need to be developed to create a consistent framework for data and metrics reporting.
These efforts need to be undertaken with some urgency since many of these new reporting obligations will take effect from the 2023 or 2024 operating year.
Jay Dietrich, Research Director of Sustainability, Uptime Institute
Up until two years ago, the cost of building and operating data centers had been falling reasonably steeply. Improving technology, greater production volumes as the industry expanded and consolidated, large-scale builds, prefabricated and modular construction techniques, stable energy prices and the low costs of capital have all played a part. While labor costs have risen during this time, better management, processes and automation have helped to prevent spiraling wage bills.
The past two years, however, have seen these trends come to a halt. Ongoing supply chain issues and rising labor, energy and capital costs are all set to make building and running data centers more expensive in 2023 and beyond.
But the impact of these cost increases — affecting IT as well as facilities — will be muted due to the durable growth of the data center industry, fueled by global digitization and the overwhelming appetite for more IT. In response, most large data center operators (and data center capacity buyers) are continuing to move forward with expansion projects and taking on more space.
Smaller and medium-sized data center operators, however, that lack the resources to weather higher costs are likely to find this particularly challenging, with some smaller colocation operators (and enterprise data centers) struggling to remain competitive. Increasing overhead costs arising from new regulatory requirements and climbing interest rates will further challenge some operators, but an immediate rush to the public cloud is unlikely since this strategy, too, has non-trivial (and often high) costs.
Capital costs
Capital plays a major part in data center life cycle costs. Capital has been both cheap and readily available to data center builders for more than a decade, but the market changed in 2022. Countries that are home to major data center markets, or to major companies that own and build data centers, are now facing decades-high inflation rates (see Table 1), making it more difficult and more expensive to raise capital. Even so, with demand for capacity increasing, partly due to pent-up demand resulting from construction bottlenecks during the COVID-19 pandemic and, more recently, permitting and energy supply problems, the most active and best positioned operators are funding their capacity expansion.
Uptime Institute’s Data Center and IT Spending Survey 2022 shows that more than two-thirds of enterprise and colocation operators expect to spend more on data centers in 2023. Most enterprise data centers (90%) say they will be adding IT or data center capacity over the next two to three years, with half expecting to construct new facilities (although they may be closing down others).
The recent rise in construction costs may have come as a shock to some. Data center construction costs and lead-times had improved significantly in the 2010s, but we are now seeing a reversal of this trend. An average Tier III enterprise data center (a technical facility with concurrently maintainable site infrastructure) would have cost approximately $12 million per megawatt (MW) in 2010 per Uptime’s estimates (not including land and civil works) and would have taken up to two years to build.
Changes in design and construction had resulted in these costs dropping — in the best cases, to as little as $6 to $8 million per MW immediately before the COVID-19 pandemic, with lead-times cut to less than 12 months. While Uptime has not verified these claims, some projects were reported to have been budgeted at less than $4 million per MW and taken just six months to complete.
The view today is markedly different. Long waiting times for some significant components (such as certain engine generators and centralized UPS systems) are driving up prices. By 2022, costs for Tier III specifications had risen by $1 million to $2 million per MW according to Uptime’s estimates. Lead-times can now reach or exceed 12 months, prolonging capacity expansion and refurbishment projects — and sometimes preventing operators from earning revenue from near complete facilities.
While prices for some construction materials have started to stabilize at the elevated levels reached since the COVID-19 pandemic, overall costs are expected to increase further in 2023. Product shortages, together with higher prices for labor, semiconductors and power, are all having an inflationary effect across the industry. Concurrently, site acquisitions at major data center hubs with low-latency network connections now come at a premium, as popular data center locations run out of suitable land and power.
Uptime Institute’s Supply Chain Survey 2022 shows computer room cooling units, UPS systems and power distribution components to be the data center equipment most severely impacted by shortages. Of the 678 respondents to this survey, 80% said suppliers had increased their prices over the past 18 months. Notably, Li-ion battery prices, which had been trending downwards every year until 2021, increased in 2022 due to shortages of raw materials coupled with high demand.
More stringent sustainability requirements, too, contribute to higher capital costs. Regulations in some major data center hubs (such as Amsterdam and Singapore) mean only developments with highly energy efficient designs can move forward. But meeting these requirements will come at a cost (engineering fees, structural changes, different cooling systems), lifting the barriers to entry. New energy efficiency standards (as stipulated under the EC’s Energy Efficiency Directive recast, for example) will stress budgets still further (see Critical regulation: the EU Energy Efficiency Directive recast).
Operators are looking to recover the cost of sustainability requirements through efficiency gains. Surging power costs, which are likely to remain high in the coming years, now mean the calculation has shifted in favor of more aggressive energy optimization — but upfront capital requirements will often be higher.
Operating and IT costs
The operating expenditures associated with data centers and IT infrastructure are also set to increase in 2023, due to steep rises in major input costs. Uptime Institute’s Data Center and IT Spending Survey 2022 showed power to be driving the greatest unit cost increases for most operators (see Figure 1) — the result of high gas prices, the transition to renewable energy, imbalances in grid supply and the war in Ukraine.
The UK and the EU have been most affected by these increases, with certain colocation operators passing down some significant increases in energy costs to their customers. While energy prices are expected to drop (at least against the record highs of 2022), they are likely to remain well above the average levels of the past two decades.
Second only to power, IT hardware showed the next greatest increase in unit costs for enterprise data center respondents, partly because of various dislocations in the hardware supply chain, shortages of some processors and switching silicon, and inflation. Demand for IT hardware has continued to outpace supply, and manufacturing backlogs resulting from the COVID-19 pandemic have yet to catch up.
Uptime sees promising signs of improvements in data center hardware supply, largely due to a recent sag in global demand (caused by economic headwinds and IT investment cycles). As a result, prices and lead-times for generic IT hardware (with some exceptions) will likely moderate in the first half of 2023.
If history is any guide, demand for data center IT will rise again some time in 2023 once some major IT infrastructure buyers accelerate their capacity expansion, which will yet again lead to tightness in the supply of select hardware later in the year.
Staffing will also play a major role in the increased cost of running data centers, and is likely to continue to impact the industry beyond 2023. Many operators say they are spending more on labor costs in a bid to retain current staff (see Figure 2). This presents a further challenge for those enterprises that are unable to match salary offers made by some of the booming tech giants.
The aggregate view is clear: the overall costs of building and running data centers are set to rise significantly over the next few years. While businesses can deploy various strategies and technologies — such as automation, energy efficiency and tactical migration to the cloud — to reduce operational costs, these are likely to entail capital investment, new skills and technical complexity.
Will rising data center costs drive more operators towards colocation or the cloud? It seems unlikely that higher on-premises costs will cause greater migration per se. Results from Uptime Institute’s Data Center and IT Spending Survey 2022 show that despite increasing costs, many operators find that keeping workloads on-premises is still cheaper than colocation (54%, n=96) or migrating to the cloud (64%, n=84).
Estimating the costs of each of these options, however, is difficult in a rapidly changing market, in which some costs are opaque. Given the high costs associated with migrating to the cloud, it is likely to be cheaper for enterprises to endure higher construction and refurbishment costs in the near term and benefit from lower operating costs over the longer term. Not all companies will be able to capitalize on this strategy, however.
Those larger organizations with the financial resources to benefit from economies of scale, with the ability to raise capital more easily and with sufficient purchasing power to leverage suppliers, are likely to have lower costs compared with smaller companies (and most enterprise data centers). Given their scale, however, they are still likely to face higher costs elsewhere, such as sustainability reporting and calls for proving — and improving — their infrastructure resiliency and security.
The full report Five data center predictions for 2023 is available to download here.
See our Five Data Center Predictions for 2023 webinar here.
Max Smolaks, Research Analyst, Uptime Institute
Douglas Donnellan, Uptime Institute
Energy-efficiency focus to shift to IT — at last
Daniel Bizo, Research Director, Uptime Institute Intelligence
Uptime has identified four key areas where pressure on IT will continue to build — all of them pointing in the same direction:
Municipalities — and utility providers — need the pace to drop
Concerns over power and land availability have, since 2019, led to greater restrictions on the construction of new data centers (Table 1). This is likely to intensify. Interventions on the part of local government and utility providers typically involve more rigorous application processes, more stringent energy-efficiency requirements and, in some cases, the outright denial of new grid connections for major developments. These restrictions have resulted in costly project delays (and, in some cases, cancellations) for major cloud and colocation providers.
Frankfurt, a key financial hub and home to one of the world’s largest internet exchange ecosystems, set an example. Under a new citywide masterplan (announced in 2022), the city stipulates densified, multistory and energy-optimized data center developments — chiefly out of concerns for sprawling land use and changes to the city’s skyline.
The Dublin area (Ireland) and Loudoun County (Northern Virginia, US) are two stand-out examples (among others) of the grid being under strain and power utilities having temporarily paused or capped new connections because of current shortfalls in generation or transmission capacity. Resolving these limitations is likely to take several years. A number of data center developers in both Dublin and Loudoun County have responded to these challenges by seeking locations further afield.
Table 1 Restrictions on new data centers since 2019 — selected examples
New sustainability regulations
Following years of discussion with key stakeholders, authorities have begun introducing regulation governing performance improvements and sustainability reporting for data centers — a key example being the EC’s Energy Efficiency Directive recast (EED), which will subject data centers directly to regulation aimed at reducing both energy consumption and carbon emissions (see Critical regulation: the EU Energy Efficiency Directive recast).
This regulation creates new, detailed reporting requirements for data centers in the EU and will force operators to improve their energy efficiency and to make their energy performance metrics publicly available — meaning investors and customers will be better equipped to weigh business decisions on the basis of the organizations’ performance. The EED is expected to enter into force in early 2023. At the time of writing (December 2022), the EED could still be amended to include higher targets for efficiency gains (increasing from 9% to 14.5%) by 2030. The EC has already passed legislation mandating regulated organizations to report on climate-related risks, their potential financial impacts and environmental footprint data every year from 2025; this will affect swathes of data centers.
Similar initiatives are now appearing in the US, with the White House Office of Science and Technology Policy’s (OSTP’s) Climate and Energy Implications of Crypto-assets in the US report, published in September 2022. Complementary legislation is being drafted that addresses both crypto and conventional data centers and sets the stage for the introduction of regulation similar to the EED over the next three to five years (see First signs of federal data center reporting mandates appear in US).
Current and draft regulation is predominantly focused on the performance of data center facility infrastructure (power and cooling systems) in curbing the greenhouse gas (GHG) emissions associated with utility power consumption (Scope 2). While definitions and metrics remain vague (and are subject to ongoing development), it is clear that EC regulators intend to ultimately extend the scope of such regulation to also include IT efficiency.
Expensive energy is here to stay
The current energy crises in the UK, Europe and elsewhere are masking some fundamental energy trends. Energy prices and, consequently, power prices were on an upward trajectory before Russia’s invasion of Ukraine. Wholesale forward prices for electricity were already shooting up — in both the European and US markets — in 2021.
Certain long-term trends also underpin the trajectory towards costlier power and create an environment conducive to volatility. Structural elements to long-term power-price inflation include:
More specifically, baseload power is becoming more expensive because of the economic displacement effect of intermittent renewable energy. Regardless of how much wind and solar (or even hydro) is connected to the grid, reliability and availability considerations mean the grid has to be fully supported by dispatchable generation such as nuclear, coal and, increasingly, gas.
However, customer preference for renewable energy (and its low operational costs) means fleets of dispatchable power plants operate at reduced capacity, with an increasing number on standby. Grid operators — and, ultimately, power consumers — still need to pay for the capital costs and upkeep of this redundant capacity, to guarantee grid security.
IT power consumption will need to be curbed
High energy prices, carbon reporting, grid capacity shortfalls and efficiency issues have been, almost exclusively, a matter of concern for facility operators. But facility operators have now passed the point of diminishing returns, with greater intervention delivering fewer and fewer benefits. In contrast, every watt saved by IT reduces pressures elsewhere. Reporting requirements will, sooner or later, shed light on the vast potential for greater energy efficiency (or, to take a harsher view, expose the full extent of wasted energy) currently hidden in IT infrastructure.
For these reasons, other stakeholders in the data center industry are likely to call upon IT infrastructure buyers and vendors to engage more deeply in these conversations, and to commit to major initiatives. These demands will be completely justified: currently, IT has considerable scope for delivering improved power management and energy efficiency, where required.
Architecting IT infrastructure for energy efficiency — through better hardware configuration choices, dynamic workload consolidation practices and the use of power-management features (including energy-saving states and power throttling / capping) — will deliver major energy-efficiency gains. Server utilization, and the inherent efficiency of server hardware, are two key dials that could bring manifold improvements in energy performance compared with typical enterprise IT.
These efficiency gains are not just theoretical: web technology and cloud services operators exploit them wherever they can. There is no reason why other organizations cannot adopt some of these practices and move closer to the energy performance these operators achieve. In an era of ever-more expensive (and scarce) power resources, together with mounting regulatory pressure, it will be increasingly difficult for IT C-level managers to deny calls to engage in the battle for better energy efficiency.
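To make the consolidation dial concrete, here is a minimal sketch using a linear idle-to-peak power model; all wattages, server counts and utilization levels are illustrative assumptions, not Uptime data:

```python
# Illustrative sketch: energy impact of consolidating lightly loaded servers.
# All figures below are assumptions for illustration, not Uptime measurements.

IDLE_W = 200.0   # assumed power draw of an idle server (watts)
PEAK_W = 500.0   # assumed power draw at full load (watts)

def server_power(utilization: float) -> float:
    """Approximate server power with a linear idle-to-peak model."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilization

# 100 servers averaging 10% utilization...
before_w = 100 * server_power(0.10)

# ...consolidated onto 25 servers at 40% utilization (same total work).
after_w = 25 * server_power(0.40)

print(f"before: {before_w / 1000:.1f} kW, after: {after_w / 1000:.1f} kW")
```

Under these assumptions the fleet's draw falls from 23 kW to 8 kW for the same work; the disproportionate saving comes from eliminating the idle power of 75 servers.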
The full report Five data center predictions for 2023 is available here.
See our Five Data Center Predictions for 2023 webinar here.
Daniel Bizo
Douglas Donnellan
Asset utilization drives cloud repatriation economics
By Dr. Owen Rogers, Research Director for Cloud Computing, Uptime Institute
The past decade has seen numerous reports of so-called cloud “repatriations” — the migration of applications back to on-premises venues following negative experiences with, or unsuccessful migrations to, the public cloud.
A recent Uptime Update (High costs drive cloud repatriation, but impact is overstated) examined why these migrations might occur. The Update revealed that unexpected costs were the primary reason for cloud repatriation, with the cost of data storage being a significant factor in driving expenditure.
Software vendor 37signals recently made headlines after moving its project management platform Basecamp and email service HEY from Amazon Web Services (AWS) and Google Cloud to a colocation facility.
The company has published data on its monthly AWS bills for HEY (Figure 1). The blue line in Figure 1 shows the company’s monthly AWS expenditure. This Update examines this data to understand what lessons can be learned from 37signals’ experience.
37signals’ AWS spend — observations
Based on the spend charts included in 37signals’ blog (simplified in Figure 1), some observations stand out:
While 37signals’ applications are architected to scale up and down as necessary, these applications rarely need to scale rapidly to address unexpected demand. This consistency allows 37signals to purchase servers that are likely to be utilized effectively over their life cycle, without performance suffering from a shortage of capacity.
This high utilization level supports the company’s premise that — at least for its own specific use cases — on-premises infrastructure may be cheaper than public cloud.
Return on server investment
As with any capital investment, a server is expected to provide a return — either through increased revenue, or higher productivity. If a server has been purchased but is sitting unused on a data center floor, no value is being obtained, and CAPEX is not being recovered while that asset depreciates.
At the same time, there is a downside to using every server at its maximum capacity. If asset utilization is too high, there is nowhere for applications to scale up if needed. The lack of a capacity buffer could result in application downtime, frequent performance issues, and even lost revenue or productivity.
Suppose 37signals decided to buy all server hardware one year in advance, predicted its peak usage over the year precisely, and purchased enough IT to deliver that peak (shown in orange on Figure 1). Under this ideal scenario, the company would achieve a 98% utilization of its assets over that period (in a financial, not computing or data-storage sense) — that is, 98% of its investment would be used over the year for a value-adding activity.
The company is unlikely to be able to make such a perfect prediction. Overestimating capacity requirements would result in lower utilization and, accordingly, more waste. Underestimating capacity requirements would result in performance issues. A more sensible approach would be to purchase servers as soon as required (shown in green on Figure 1). This strategy would achieve 92% utilization. In practice, however, the company would keep more servers idle to provide immediate capacity, decreasing utilization further.
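The asset-utilization figure used in this discussion can be sketched as the ratio of capacity consumed to capacity provisioned over a period. The monthly numbers below are invented for illustration; the 98% and 92% results quoted above come from 37signals’ actual spend data in Figure 1:

```python
# Financial asset utilization: the fraction of provisioned (paid-for)
# capacity that performed value-adding work over a period.
# The monthly figures below are invented for illustration only.

def financial_utilization(used, provisioned):
    """sum(used) / sum(provisioned), computed over the same months."""
    assert all(u <= p for u, p in zip(used, provisioned)), \
        "cannot consume more capacity than is provisioned"
    return sum(used) / sum(provisioned)

used        = [70, 80, 90, 100, 95, 90]        # capacity actually consumed
provisioned = [100, 100, 100, 110, 110, 110]   # capacity purchased and owned

print(f"utilization: {financial_utilization(used, provisioned):.0%}")
```

Any gap between the two series is over-provisioning, which shows up directly as depreciating, unrecovered CAPEX.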
Cloud providers can never achieve such a high level of utilization (although non-guaranteed “spot” purchases can help). Their entire proposition relies on being able to deliver capacity when needed. As a result, cloud services must have servers available when required — and lots of them.
Why utilization matters
Table 1 makes simple assumptions that demonstrate the challenge a cloud provider faces in provisioning excess capacity.
These calculations show that this on-premises implementation costs $10,000 in total, with the cloud provider’s total costs being $16,000. Cloud buyers rent units of resources, however, with the price paid covering operating costs (such as power), the resources being used, and the depreciating value (and costs) of servers held in reserve. A cloud buyer would pay a minimum of $1,777 per unit, compared with a unit cost of $1,111 in an on-premises venue. The exact figures are not directly relevant: what is relevant is the fact that the input cost using public cloud is 60% more per unit — purely because of utilization.
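Table 1 itself is not reproduced here, but the per-unit arithmetic can be reconstructed under one assumption (not stated explicitly in the text): both venues serve the same nine units of value-adding demand.

```python
# Reconstruction of the unit-cost arithmetic behind Table 1.
# Assumption (illustrative): both venues serve nine units of demand;
# the $10,000 and $16,000 totals are the figures quoted in the text.

ONPREM_TOTAL = 10_000   # total cost of the on-premises implementation ($)
CLOUD_TOTAL  = 16_000   # cloud provider's total cost, incl. reserve capacity ($)
UNITS_USED   = 9        # assumed units of value-adding demand in both venues

onprem_per_unit = ONPREM_TOTAL // UNITS_USED   # $1,111
cloud_per_unit  = CLOUD_TOTAL // UNITS_USED    # $1,777

premium = CLOUD_TOTAL / ONPREM_TOTAL - 1       # 0.6, i.e. "60% more per unit"
print(f"on-prem ${onprem_per_unit}/unit, cloud ${cloud_per_unit}/unit, "
      f"premium {premium:.0%}")
```

The premium is driven entirely by the denominator: the cloud provider carries more provisioned-but-unused capacity per unit actually consumed.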
Of course, this calculation is a highly simplified explanation of a complex situation. But, in summary, the cloud provider is responsible for making sure capacity is readily available (whether this be servers, network equipment, data centers, or storage arrays) while ensuring sufficient utilization such that costs remain low. In an on-premises data center this balancing act is in the hands of the organization. If enterprise capacity requirements are stable or slow-growing, it can be easier to balance performance against cost.
Sustaining utilization
It is likely that 37signals has done its calculations and is confident that migration is the right move. Success in migration relies on several assumptions. Organizations considering migrating from the public cloud back to on-premises infrastructure are best placed to make a cost-saving when:
The risk is that 37signals (or any other company moving from the public cloud back on-premises) might not be confident of these criteria being met in the longer term. Were the situation to change unexpectedly, the cost profile of on-premises versus public cloud could be substantially altered.
Forecasting the solar storm threat
By Jacqueline Davis, Research Analyst, Uptime Institute
A proposed permanent network of electromagnetic monitoring stations across the continental US, operating in tandem with a machine learning (ML) algorithm, could facilitate accurate predictions of geomagnetic disturbances (GMDs). If realized, this predictive system could help grid operators avert disruption and reduce the likelihood of damage to their — and their customers’ — infrastructure, including data centers.
Geomagnetic disturbances, also referred to as “geomagnetic storms” or “geomagnetic EMP”, occur when violent solar events interact with Earth’s atmosphere and magnetic field. Solar events that cause geomagnetic EMP (such as coronal mass ejection, or solar flares) occur frequently but chaotically, and are often directed away from Earth. The only long-term available predictions are probabilistic, and imprecise: for example, an extreme geomagnetic EMP typically occurs once every 25 years. When a solar event occurs, the US Space Weather Prediction Center (SWPC) can give hours’ to days’ notice of when it is expected to reach Earth. At present, these warnings lack practical information regarding the intensity and the location of such EMPs’ effects on power infrastructure and customer equipment (such as data centers).
A GMD produces ground-induced currents (GICs) in electrical conductors. The low frequency of a GMD concentrates GICs in very long electrical conductors — such as, for example, the high-voltage transmission lines in a power grid. A severe GMD can cause high-voltage transformer damage and widespread power outages — which could last indefinitely: high-voltage transformers have long manufacturing lead times, even in normal circumstances. Some grid operators have begun protecting their infrastructure against GICs. Data centers, however, are at risk of secondary GIC effects through their connections to the power grid: and many data center operators have not taken protective measures against GMDs, or any other form of EMP (see Electromagnetic pulse and its threat to data centers).
In the event of a less intense GMD, grid operators can often compensate for GICs, without failures. Data centers, however, may experience power-quality issues such as harmonic distortions (defects in AC voltage waveforms). Most data center uninterruptable power supply (UPS) systems are designed to accommodate some harmonics and protect downstream equipment, but the intense effects of a GMD can overwhelm these built-in protections — potentially damaging the UPS or other equipment. The effects of harmonics inside a data center can include inefficient UPS operation, UPS rectifier damage, tripped circuit breakers, overheated wiring, malfunctioning motors in mechanical equipment and, ultimately, physical damage to IT equipment.
The benefit to data center operators from improved forecasting of GMD effects is greatest in the event of these less intense incidents, which threaten equipment damage to power customers but are insufficient to bring down the power grid. An operator’s best defense against secondary GIC effects is to pre-emptively disconnect from the grid and run on backup generators. Actionable, accurate, and localized forecasting of GIC effects would better prepare operators to disconnect in time to avert damage (and to avoid unnecessary generator runtime in regions where this is strictly regulated).
An added challenge is that the issue of geomagnetic effects on power infrastructure is interdisciplinary: the interactions between Earth’s magnetic field and the power grid have historically not been well understood by experts in either geology or electrical infrastructure. Computationally simulating the effects of geomagnetic events on grid infrastructure is still not practically feasible.
This might change with rapid advancements in computer performance and modeling methods. At the 2022 Infragard National Disaster Resilience Council Summit in the US, researchers at Oregon State University presented a machine learning approach that could produce detailed geomagnetic forecasting — with the objective of informing grid operators’ assessments of what protection their grid infrastructure requires.
Better modeling and forecasting of GMD effects requires many measurements spanning a geographic area of interest. The Magnetotelluric (MT) Array collects data across the continental US, using seven permanent stations, and over 1,600 temporary locations (as at 2022), arranged 43 miles (70 km) apart, on a grid. Over 1,900 temporary MT stations are planned by 2024. Station instruments measure time-dependent changes in Earth’s electric and magnetic fields, providing insight into the resistivity and electromagnetic impedance of Earth’s crust and upper mantle, in three dimensions. This data informs predictions of GIC intensity, which closely correlates with damaging effects on power infrastructure. The MT Array provides a dramatic and much-needed improvement to the resolution of data available on these geomagnetic effects.
Researchers trained a machine learning model on two months of continuous and simultaneous data output from an array of 25 MT stations in Alaska (US). The trained model effectively predicts geomagnetic effects, with 30 minutes’ advance notice. Fortunately, scaling these forecast abilities to the continental US will not require the long-term operation of thousands of MT stations. The trained model can forecast geomagnetic effects at the 43 miles (70 km) resolution of the full MT Array with significantly fewer permanent stations providing input.
The proposed permanent network is called the “Internet of MT” (IoMT) and would cover the continental US with just 500 permanently installed devices to produce ongoing forecasts, on a grid at 87 mile (140 km) spacing. These devices are designed differently from the equipment at today’s MT Array stations: while collecting the same types of data, they have several advantages. Powered by solar panels and allowing data to be uploaded automatically through a mobile network connection, the IoMT devices have a smaller footprint and a much lower cost of acquisition — approximately $5,000 per station (in contrast to current MT Array station equipment, which would cost $60,000 to install permanently).
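A quick back-of-envelope comparison using the per-station figures quoted above (approximate, since a permanent build-out of today's equipment at this scale was never proposed):

```python
# Back-of-envelope cost of the proposed 500-station IoMT network versus
# permanently installing the same number of today's MT Array stations,
# using the per-station costs quoted in the text.

STATIONS = 500
IOMT_PER_STATION   = 5_000    # $ per IoMT device
LEGACY_PER_STATION = 60_000   # $ to permanently install current MT equipment

iomt_total   = STATIONS * IOMT_PER_STATION     # $2.5 million
legacy_total = STATIONS * LEGACY_PER_STATION   # $30 million

print(f"IoMT: ${iomt_total:,} vs legacy: ${legacy_total:,} "
      f"({LEGACY_PER_STATION // IOMT_PER_STATION}x cheaper per station)")
```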
The MT Array has, so far, been financed through funding from various US government agencies, including the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the United States Geological Survey (USGS). Though the IoMT’s equipment design promises a significantly lower cost of acquisition and installation than the technology used in today’s temporary array, funding for this next phase has not yet been secured.
Detailed geomagnetic forecasts could make it possible for grid operators to take proactive steps to protect their infrastructure — preventing prolonged power outages and sparing their customers (including data centers) damaging secondary effects. The predictions offered through the IoMT provide a model that could be used worldwide to address the risks inherent in the threat of geomagnetic EMP. Though it is too early to anticipate how this data could be distributed to data center operators, the value of proactive defense from GMDs may support a subscription service — for instance, on the part of companies that provide weather data.
Cloud migrations to face closer scrutiny
By Dr. Owen Rogers, Research Director for Cloud Computing, Uptime Institute
Big public-cloud operators have often had to compete against each other — sometimes ferociously. Only rarely have they had to compete against alternative platforms for corporate IT, however. More often than not, chief information officers (CIOs) responsible for mission-critical IT have seen a move to the public cloud as low-risk, flexible, forward-looking and, ultimately, inexpensive. But these assumptions are now coming under pressure.
As the coming years threaten to be economically and politically turbulent, infrastructure and supply chains will be subject to disruption. Increasing government and stakeholder interest will force enterprises to scrutinize the financial and other risks of moving on-premises applications to the public cloud. More effort, and more investment, may be required to ensure that resiliency is both maintained and clearly evident to customers. While cloud has, in the past, been viewed as a low-risk option, the balance of uncertainty is changing — as are the cost equations.
Although the picture is complicated, with many factors at play, there are some signs that these pressures may, already, be slowing down adoption. Amazon Web Services (AWS), the largest cloud provider, reported a historic slowdown in growth in the second half of 2022, after nearly a decade of 30% to 40% increases year-on-year. Microsoft, too, has flagged a likely slowdown in the growth of its Azure cloud service.
No one in the industry is suggesting that the adoption of public cloud has peaked, or that it is no longer of strategic value to large enterprises. Use of the public cloud is still growing dramatically and is still driving growth in the data center industry. Public cloud will continue to be the near-automatic choice for most new applications, but organizations with complex, critical and hybrid requirements are likely to slow down or pause their migrations from on-premises infrastructure to the cloud.
Is the cloud honeymoon over?
Many businesses have been under pressure to move applications to the cloud quickly, without comprehensive analysis of the costs, benefits and risks. CIOs, often prompted or backed by heads of finance or chief executives, have favored the cloud over on-premises IT for new and / or major projects.
Data from the Uptime Institute Global Data Center Survey 2022 suggests that, while many were initially wary, organizations are becoming more confident in using the cloud for their most important critical workloads. The proportion of respondents not placing mission-critical workloads into the public cloud has dropped from 74% in 2019 to 63% in 2022. Figure 1 shows the growth in on-premises to cloud migrations, encouraged by C-level enthusiasm and positive perceptions of inexpensive performance.
High-profile cloud outages, however, together with increasing regulatory interest, are encouraging some customers to take a closer look. Customers are beginning to recognize that not all applications have been architected to take advantage of key cloud features — and architecting applications properly can be very costly. “Lifting and shifting” applications that cannot scale, or that cannot track changes in user demand or resource supply dynamically, is unlikely to deliver the full benefits of the cloud and could create new challenges. Figure 1 shows how several internal (IT) and external (macroeconomic) pressures could suppress growth in the future.
One particular challenge is that many applications have not been rearchitected to meet business objectives — most notably resiliency. Many cloud customers are not fully aware of their responsibilities regarding the resiliency and scalability of their application architecture, in the belief that cloud companies take care of this automatically. Cloud providers, however, make it explicitly clear that zones will suffer outages occasionally and that customers are required to play their part. Cloud providers recommend that customers distribute workloads across multiple availability zones, thereby increasing the likelihood that applications will remain functional, even if a single availability zone falters.
Research by Uptime shows how vulnerable enterprise-cloud customers currently are to single-zone outages. Data from the Uptime Institute Global Data Center Survey 2022 shows that 35% of respondents believe the loss of an availability zone would result in significant performance issues, while only 16% indicated that such a loss would not impact their cloud applications.
To capture the full benefits of the cloud and to reduce the risk of outages, organizations need to (re)architect for resiliency. This resiliency has an upfront and ongoing cost implication, and this needs to be factored in when a decision is made to migrate applications from on-premises to the cloud. Uptime Intelligence has previously found that architecting an application across dual availability zones can cost 43% more than a non-duplicated application (see Public cloud costs versus resiliency: stateless applications). Building across regions, which further improves resiliency, can double costs. Some applications might not be worth migrating to the cloud, given the additional expense of resiliency being factored into application architecture.
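The cost multipliers cited above can be applied as a simple sanity check when scoping a migration; the $1,000 monthly baseline below is an assumed figure for illustration:

```python
# Resiliency cost uplift for a migrated application, using the multipliers
# from the text: dual availability zones ~ +43%; cross-region ~ 2x.
# The baseline monthly cost is an assumption for illustration.

BASELINE_MONTHLY = 1_000.0   # assumed single-zone, non-duplicated cost ($)
DUAL_AZ = 1.43               # "can cost 43% more" (Uptime Intelligence)
CROSS_REGION = 2.0           # "can double costs"

dual_az_cost = BASELINE_MONTHLY * DUAL_AZ
cross_region_cost = BASELINE_MONTHLY * CROSS_REGION

print(f"single zone ${BASELINE_MONTHLY:.0f}/mo -> dual-AZ ${dual_az_cost:.0f}/mo "
      f"-> cross-region ${cross_region_cost:.0f}/mo")
```

If the business value of an application does not cover the uplifted figure, it may be one of those not worth migrating once resiliency is properly architected in.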
Economic forces will reduce pressure to migrate to the cloud
Successful and fully functional cloud migrations of critical workloads carry additional costs that are often substantial — a factor that is only now starting to be fully understood by many organizations.
These costs include both the initial phase — when applications have to be redeveloped to be cloud-native, at a time when skills are in short supply and high demand — and the ongoing consumption charges that arise from long periods of operation across multiple zones. It is clear that the cost of the cloud has not always been factored in: cost is a major reason organizations move workloads back on-premises from the public cloud, cited by 43% of respondents to Uptime Institute’s Data Center Capacity Trends Survey 2022.
Server refresh cycles often act as a trigger for cloud migration. Rather than purchasing new physical servers, IT C-level leaders choose to lift-and-shift applications to the public cloud. Uptime’s 2015 global survey of data center managers showed that 35% of respondents kept their servers in operation for five years or more; this proportion had increased to 52% by 2022. During challenging economic times, CIOs may be choosing to keep existing servers running instead of investing in a migration to the cloud.
Even if CIOs continue to exert pressure for a move to the cloud, this will be muted by the need to justify the expense of migration. Although migration allows a reduction in on-premises IT and in data center footprints, many organizations do not have the leeway to handle the unexpected costs required to make cloud applications more resilient or performant. Poor access to capital, together with tighter budgets, will force executives to think carefully about the need for full cloud migrations. Application migrations with a clear return on investment will continue to move to the cloud; those that are borderline may be put on the back burner until conditions are clearer.
Additional pressure from regulators
Governments are also becoming concerned that cloud applications are not sufficiently resilient, or that they present other risks. The dominance of Amazon, Google and Microsoft (the “hyperscalers”) has raised concerns regarding “concentration risk” — an over-reliance on a limited number of cloud providers — in several countries and key sectors.
Regulators are taking steps to assess and manage this concentration risk, amid concerns that it could threaten the stability of many economies. The EC’s recently adopted Digital Operational Resilience Act (DORA) provides a framework for making the oversight of outsourced IT providers (including cloud) the responsibility of financial market players. The UK government’s Office of Communications (Ofcom) has launched a study into the country’s £15 billion public-cloud-services market. The long-standing but newly updated Gramm-Leach-Bliley Act (GLBA, also known as the Financial Services Modernization Act) in the US now requires regular cyber and physical security assessments.
The direction is clear. More organizations are going to be required to better evaluate and plan risks arising from third-party providers. This will not always be easy or accurate. Cloud providers face the same array of risks (arising from cyber-security issues, staff shortages, supply chains, extreme weather and unstable grids, etc.) as other operators. They are rarely transparent about the challenges associated with these risks.
Organizations are becoming increasingly aware that lifting and shifting applications from on-premises to public-cloud locations does not guarantee the same levels of performance or availability. Applications must be architected to take advantage of the public cloud — with the resulting upfront and ongoing cost implications. Many organizations may not have the funds (or indeed the expertise and / or staff) to rearchitect applications during these challenging times, particularly if the business benefits are not clear. Legislation will force regulated industries to consider all risks before venturing into the public cloud. Much of this legislation, however, is yet to be drafted or introduced.
How will this affect the overall growth of the public cloud and its appeal to the C-level management? Hyperscaler cloud providers will continue to expand globally and to create new products and services. Enterprise customers, in turn, are likely to continue finding cloud services competitive. The rush to migrate workloads will slow down as organizations do the right thing: assess their risks, design architectures that help mitigate those risks, and move only when ready to do so (and when doing so will add value to the business).
The full report Five data center predictions for 2023 is available here.
See our Five Data Center Predictions for 2023 webinar here.
Accounting for digital infrastructure GHG emissions
By Jay Dietrich, Research Director of Sustainability, Uptime Institute
A host of regulations worldwide have introduced (or will introduce) legal mandates forcing data center operators to report specific operational data and metrics. Key examples include the European Union’s Corporate Sustainability Reporting Directive (CSRD); the European Commission’s proposed Energy Efficiency Directive (EED) recast; the US Securities and Exchange Commission’s (SEC) draft climate disclosure proposal; and various national reporting requirements (including in Brazil, Hong Kong, Japan, New Zealand, Singapore, Switzerland and the UK) under the Task Force on Climate-related Financial Disclosures (TCFD). The industry is not, currently, adequately prepared to address these requirements, however.
Current data-exchange practices lack consistency — and any recognized consensus — on reporting sustainability-related data, such as energy and water use, greenhouse gas (GHG) emissions and operational metrics. Many enterprises have, throughout discussions with Uptime Institute, indicated that it is difficult (and sometimes impossible) to obtain energy and emissions data from colocation and cloud operators.
The Uptime Institute Global Data Center Survey 2022 made clear that data center operators’ readiness to report GHG emissions has seen incremental improvement over previous surveys, with only 37% of respondents indicating that they are prepared to publicly report their GHG emission inventories (up by just 4 percentage points on the previous year). Of these, less than one-third of respondents are currently including their Scope 3 emissions inventories.
Fortunately, most of the reporting regimes will become effective for the 2024 reporting year, giving data center managers time to work with their colocation and cloud providers on obtaining the necessary data, and to put their carbon accounting processes in order. While the finer details will vary according to each enterprise’s digital infrastructure footprint, there are certain common steps that data center managers can implement to facilitate the collection of quality data to fulfill these new reporting mandates.
Colocation operations
The GHG Protocol classifies emissions as Scope 1 and Scope 2 where an entity has operational control or financial control. Having analyzed these definitions, Uptime takes the position that IT operators exercise both operational and financial control over their IT operations in colocation data centers.
From an operational control standpoint, IT operators specify and purchase the IT equipment installed in the colocation space, set the operating parameters for that equipment (power management settings, virtual machine creation and assignment, hardware utilization levels, etc.) and maintain and monitor operations. Similarly, IT operators have financial control: they purchase, install, operate and maintain the IT equipment. On this basis, GHG emissions from IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator. Emissions (and energy use) from facility functions, such as power distribution losses from the grid connection to the IT hardware, and cooling energy, should fall into Scope 3 for the IT operator tenant.
Table 1 outlines Scope 2 and Scope 3 emissions reporting responsibilities for IT operations in enterprise (owned), colocation and public cloud data centers under GHG Protocol Corporate Accounting and Reporting Standards.
In collaboration with colocation and IT operators, Business for Social Responsibility (a sustainable business network and consultancy) published some initial guidance on emissions accounting in 2017: GHG Emissions Accounting, Renewable Energy Purchases, and Zero-Carbon Accounting: Issues and Considerations for the Colocation Data Center Industry.
This guidance did not take a position on the assignment of Scope 2 and 3 emissions in colocation operations, however, leaving this decision to individual colocation operators.
In practice, operators follow two different accounting approaches. Equinix, for example, accounts for all energy use and emissions as Scope 2, with emissions effectively passed to tenants as Scope 3. NTT follows the approach (also recommended by Uptime) that GHG emissions from the energy use of IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator.
The use of two different accounting criteria creates confusion and makes the comparison and understanding of emissions reports across the data center industry difficult. The industry needs to settle on a single accounting methodology for emissions reporting.
The GHG Protocol Corporate Accounting and Reporting Standards are likely to be cited as governing the classification of Scope 1, 2 and 3 emissions under legal mandates such as the CSRD and the proposed SEC climate disclosure requirements. Uptime recommends that colocation operators and their tenants conform to the GHG Protocol to meet these legal requirements.
Public cloud operations
Emissions accounting for IT operations in a public-cloud facility is straightforward: all emissions are Scope 2 for the cloud operator (since they own and operate the IT and facilities infrastructure) and Scope 3 for the IT operator (customer).
A cloud operation in a colocation facility adds another layer of allocation. Public-cloud IT energy use should be accounted for as Scope 3 by the colocation operator and Scope 2 by the cloud operator, with facility infrastructure-related emissions accounted for as Scope 2 by the colocation operator and Scope 3 by the cloud operator. This represents no change for the IT operator: all emissions associated with its cloud-based applications and data — regardless of where that cloud footprint exists — will be accounted for as Scope 3.
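The allocation rules described above can be condensed into a small lookup. The following Python sketch encodes them; the facility and entity labels are our own shorthand for illustration, not terms from the GHG Protocol.

```python
# Sketch: Scope 2 / Scope 3 allocation of IT energy emissions by facility
# type, per Uptime's recommended reading of the GHG Protocol. Labels are
# illustrative shorthand, not standard terminology.

ALLOCATION = {
    # facility type -> {reporting entity: scope for IT energy emissions}
    "owned":         {"it_operator": "Scope 2"},
    "colocation":    {"it_operator": "Scope 2", "colo_operator": "Scope 3"},
    "cloud":         {"cloud_operator": "Scope 2", "it_operator": "Scope 3"},
    "cloud_in_colo": {"cloud_operator": "Scope 2",
                      "colo_operator": "Scope 3",
                      "it_operator": "Scope 3"},
}

def scope_for(facility: str, entity: str) -> str:
    """Return the emissions scope for IT energy use at a given facility type."""
    return ALLOCATION[facility][entity]

print(scope_for("colocation", "it_operator"))  # Scope 2
print(scope_for("cloud", "it_operator"))       # Scope 3
```

A table like this can serve as a sanity check when assembling a carbon inventory across mixed owned, colocation and cloud footprints.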
IT operators report that they have difficulty obtaining energy-use and emissions information from their cloud providers. The larger cloud operators, and several of the large colocation providers, typically claim that there are zero emissions associated with operations at their facilities because they are carbon-neutral on account of buying renewable energy and carbon offsets. The same providers are typically unable or unwilling to provide more detailed information — making compliance with legally mandated reporting requirements difficult for IT operators.
If IT operators are to comply with forthcoming disclosure obligations they will, in accordance with the GHG Protocol, need data on their energy use and their location-based (grid power mix) and market-based (contractual mix) emissions. They will also need more granular information on renewable energy consumption and the application of renewable energy certificates (RECs) in offsetting grid power use and the associated emissions if they are to fully understand the underlying details.
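As a rough illustration of the dual-reporting arithmetic, the sketch below computes location-based and market-based emissions in the manner the GHG Protocol describes. All energy figures and emissions factors are illustrative assumptions, not values from any provider or grid.

```python
# Sketch: location- vs market-based Scope 2 emissions under the GHG Protocol
# dual-reporting approach. All figures here are illustrative assumptions.

def location_based_emissions(energy_mwh: float, grid_factor: float) -> float:
    """Emissions (tCO2e) using the average grid power mix at the facility's location."""
    return energy_mwh * grid_factor

def market_based_emissions(energy_mwh: float, rec_covered_mwh: float,
                           residual_mix_factor: float) -> float:
    """Emissions (tCO2e) after applying RECs/GOs: energy not covered by
    certificates is charged at the residual (contractual) mix factor."""
    uncovered_mwh = max(energy_mwh - rec_covered_mwh, 0.0)
    return uncovered_mwh * residual_mix_factor

# Example: 1,000 MWh consumed; 600 MWh matched with RECs
annual_mwh = 1_000.0
print(round(location_based_emissions(annual_mwh, 0.35), 1))       # 350.0
print(round(market_based_emissions(annual_mwh, 600.0, 0.40), 1))  # 160.0
```

The gap between the two results is exactly why IT operators need visibility into how certificates are applied: a market-based figure alone can mask the facility's actual grid-related emissions.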
Required cloud and colocation provider sustainability data
With new sustainability reporting regulations due to take effect in the medium term, IT operators will clearly need more detailed energy and emissions data from their infrastructure providers, both to meet their compliance responsibilities and to assess the total environmental impact of their operations. Colocation and cloud services providers (and others providing hosting and various IT infrastructure services) will be expected to provide the data listed below — ideally as a condition of any service contract. This data will provide the information necessary to complete TCFD climate disclosures, as well as the IT operator’s sustainability report. Additional data may need to be added to this list to address specific local reporting or operating-efficiency mandates.
Data-transfer requirements for colocation and cloud services contracts should facilitate the annual reporting of operational data including:
Note: GHG emissions values should be reported for the facility’s total fuel and electricity consumption. Scope 1 emissions should include any refrigerant emissions (fugitive or failure).
Energy-use data should be requested monthly or quarterly (recognizing that data reports from service providers will typically lag by one to three months) to allow tracking of power consumption, emissions metrics and objectives throughout the year. Current-year emissions can be estimated using the previous year’s emissions factor for electrical consumption at a facility.
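The in-year estimation approach described above might be sketched as follows; the monthly energy figures and the prior-year emissions factor are purely illustrative assumptions.

```python
# Sketch: running estimate of current-year emissions from monthly energy
# reports, applying the previous year's emissions factor until the current
# year's factor is available. All values are illustrative assumptions.

PREV_YEAR_FACTOR = 0.38  # assumed prior-year facility factor, tCO2e per MWh

# Monthly reports from the provider (typically lagging by one to three months)
monthly_energy_mwh = {"Jan": 120.0, "Feb": 110.0, "Mar": 125.0}

def estimated_emissions_to_date(monthly_mwh: dict, factor: float) -> float:
    """Estimated year-to-date emissions (tCO2e) from the months reported so far."""
    return sum(monthly_mwh.values()) * factor

print(round(estimated_emissions_to_date(monthly_energy_mwh, PREV_YEAR_FACTOR), 1))  # 134.9
```

Once the actual current-year factor is published, the estimate can simply be recomputed against the same monthly energy series.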
Mandated reporting requirements will typically require data to be submitted in March following the end of the reporting year. Service agreements should therefore provide for annual data to be delivered to clients by February.
Colocation and cloud-service providers need to develop methodologies to provide energy-use and location- and market-based-emissions estimates to their clients. Colocation providers should install metering to measure tenants’ IT power consumption, simplifying allocated energy use and emissions reporting. Cloud providers have several different approaches available for measuring or estimating energy use. Algorithms can be created that use IT-system and equipment power and utilization data (with tracking capabilities and / or knowledge of the power-use characteristics of the deployed IT equipment configurations) to estimate a customer’s energy use and associated location-based emissions.
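A minimal sketch of the kind of estimation algorithm described above is given below. The linear idle-to-max power model, the PUE value and the instance figures are all illustrative assumptions, not any provider's actual methodology.

```python
# Sketch: estimating a cloud customer's energy use and location-based
# emissions from the power characteristics of deployed configurations plus
# utilization telemetry. All figures and the linear power model are
# illustrative assumptions.

def server_power_w(idle_w: float, max_w: float, utilization: float) -> float:
    """Approximate draw via linear interpolation between idle and max power."""
    return idle_w + (max_w - idle_w) * utilization

def customer_energy_kwh(servers: list, hours: float, pue: float = 1.4) -> float:
    """Sum IT power across the customer's instances, then scale by facility PUE."""
    it_watts = sum(server_power_w(s["idle_w"], s["max_w"], s["util"]) for s in servers)
    return it_watts * hours * pue / 1000.0

fleet = [
    {"idle_w": 120.0, "max_w": 400.0, "util": 0.30},
    {"idle_w": 120.0, "max_w": 400.0, "util": 0.55},
]

# Monthly energy, then location-based emissions via an assumed grid factor (tCO2e/MWh)
energy_kwh = customer_energy_kwh(fleet, hours=730.0)
print(round(energy_kwh, 1))                       # 488.5
print(round(energy_kwh / 1000.0 * 0.35, 3))       # 0.171
```

Whatever model a provider adopts, the inputs (power characteristics, utilization, PUE, grid factor) should be disclosed so customers can reproduce the figures.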
Any calculation methodology should be transparent for customers. Cloud providers will need to choose a methodology that fits with their data collection capabilities and start providing data to their customers as soon as possible.
IT operators need to obtain information on the RECs, guarantees of origin (GOs) and carbon offsets applied to the overall energy use at each facility at which they operate. This data will allow IT operators to validate the actual emissions associated with the energy consumed by the data center, as well as claims regarding renewable energy use and GHG emissions reductions. IT operators will need to exercise due diligence to ensure that data is accurately reported, and will need to match the service provider’s data to the operator’s chosen sustainability metrics.
Data required from IT tenants at colocation facilities
Colocation operators may require operational information from their tenants. The proposed EED recast is likely to require colocation operators to report specific IT operational data — which will have to be supplied by their tenants. At a minimum, colocation operators need to incorporate a clause into their standard contracts requiring tenants to provide legally mandated operational data. Contract language can be made more specific to the facilities covered by the forthcoming mandates as new regulations are promulgated.
Conclusion
The reporting of data center energy use and GHG emissions is undergoing a major transition — from a voluntary effort subject to limited scrutiny to legally mandated reporting requiring third-party assurance. These legal requirements can extend to smaller enterprise and colocation operators: the EED recast, for example, will apply to operations with just 100 kW of installed IT equipment power. These forthcoming requirements will require IT operators to take responsibility for their operations across all data-center categories — owned, colocation and cloud.
This new regulatory environment means digital infrastructure managers will have to facilitate collaboration between their facilities teams, IT teams and data center service providers to create a coherent sustainability strategy across their operations. Processes will need to be created to generate, collect and report the data and metrics needed to comply with these requirements. At the industry level, standards need to be developed to create a consistent framework for data and metrics reporting.
These efforts need to be undertaken with some urgency since many of these new reporting obligations will take effect from the 2023 or 2024 operating year.
Data center costs set to rise and rise
By Max Smolaks, Research Analyst
Up until two years ago, the cost of building and operating data centers had been falling reasonably steeply. Improving technology, greater production volumes as the industry expanded and consolidated, large-scale builds, prefabricated and modular construction techniques, stable energy prices and the low cost of capital have all played a part. While labor costs have risen during this time, better management, processes and automation have helped to prevent spiraling wage bills.
The past two years, however, have seen these trends come to a halt. Ongoing supply chain issues and rising labor, energy and capital costs are all set to make building and running data centers more expensive in 2023 and beyond.
But the impact of these cost increases — affecting IT as well as facilities — will be muted due to the durable growth of the data center industry, fueled by global digitization and the overwhelming appetite for more IT. In response, most large data center operators (and data center capacity buyers) are continuing to move forward with expansion projects and taking on more space.
Small and medium-sized data center operators that lack the resources to weather higher costs, however, are likely to find this environment particularly challenging, with some smaller colocation operators (and enterprise data centers) struggling to remain competitive. Increasing overhead costs arising from new regulatory requirements and climbing interest rates will further challenge some operators, but an immediate rush to the public cloud is unlikely since this strategy, too, has non-trivial (and often high) costs.
Capital costs
Capital plays a major part in data center life cycle costs. It has been both cheap and readily available to data center builders for more than a decade, but the market changed in 2022. Countries that are home to major data center markets, or to major companies that own and build data centers, are now facing decades-high inflation rates (see Table 1), making it more difficult and more expensive to raise capital. Even so, demand for capacity keeps rising, partly pent up by construction bottlenecks during the COVID-19 pandemic and, more recently, by permitting and energy supply problems, and the most active and best-positioned operators are continuing to fund their capacity expansion.
Uptime Institute’s Data Center and IT Spending Survey 2022 shows that more than two-thirds of enterprise and colocation operators expect their data center spending to increase in 2023. Most enterprise data center operators (90%) say they will be adding IT or data center capacity over the next two to three years, with half expecting to construct new facilities (although they may be closing down others).
The recent rise in construction costs may have come as a shock to some. Data center construction costs and lead-times had improved significantly in the 2010s, but we are now seeing a reversal of this trend. An average Tier III enterprise data center (a technical facility with concurrently maintainable site infrastructure) would have cost approximately $12 million per megawatt (MW) in 2010 per Uptime’s estimates (not including land and civil works) and would have taken up to two years to build.
Changes in design and construction had resulted in these costs dropping — in the best cases, to as little as $6 to $8 million per MW immediately before the COVID-19 pandemic, with lead-times cut to less than 12 months. While Uptime has not verified these claims, some projects were reported to have been budgeted at less than $4 million per MW and taken just six months to complete.
The view today is markedly different. Long waiting times for some significant components (such as certain engine generators and centralized UPS systems) are driving up prices. By 2022, costs for Tier III specifications had risen by $1 million to $2 million per MW, according to Uptime’s estimates. Lead-times can now reach or exceed 12 months, prolonging capacity expansion and refurbishment projects — and sometimes preventing operators from earning revenue from near-complete facilities.
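To put these per-MW estimates in context, the arithmetic below works through a hypothetical 10 MW Tier III build (excluding land and civil works). The 10 MW capacity is our own illustrative choice, and the 2022 range simply adds the cited $1 million to $2 million increase to the pre-pandemic best-case figures.

```python
# Illustrative arithmetic only: what the per-MW estimates above imply for a
# hypothetical 10 MW Tier III build (land and civil works excluded). The
# 2022 range adds the cited $1M-$2M rise to the pre-pandemic best case.

CAPACITY_MW = 10

cost_per_mw_musd = {
    "2010 estimate": (12, 12),
    "pre-pandemic best case": (6, 8),
    "2022": (7, 10),
}

for period, (low, high) in cost_per_mw_musd.items():
    lo_total, hi_total = low * CAPACITY_MW, high * CAPACITY_MW
    label = f"${lo_total}M" if lo_total == hi_total else f"${lo_total}M to ${hi_total}M"
    print(f"{period}: {label} total build cost")
```

Even at the 2022 price levels, a build of this size remains cheaper than the 2010 baseline; the reversal is in the trend, not (yet) in absolute terms.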
While prices for some construction materials have started to stabilize (albeit at an elevated level) since the COVID-19 pandemic, overall construction costs are expected to increase further in 2023. Product shortages, together with higher prices for labor, semiconductors and power, are all having an inflationary effect across the industry. Concurrently, site acquisitions at major data center hubs with low-latency network connections now come at a premium, as popular data center locations run out of suitable land and power.
Uptime Institute’s Supply Chain Survey 2022 shows computer room cooling units, UPS systems and power distribution components to be the data center equipment most severely impacted by shortages. Of the 678 respondents to this survey, 80% said suppliers had increased their prices over the past 18 months. Notably, Li-ion battery prices, which had been trending downwards every year until 2021, increased in 2022 due to shortages of raw materials coupled with high demand.
More stringent sustainability requirements, too, contribute to higher capital costs. Regulations in some major data center hubs (such as Amsterdam and Singapore) mean only developments with highly energy-efficient designs can move forward. But meeting these requirements will come at a cost (engineering fees, structural changes, different cooling systems), raising the barriers to entry. New energy efficiency standards (as stipulated under the EU’s Energy Efficiency Directive recast, for example) will stress budgets still further (see Critical regulation: the EU Energy Efficiency Directive recast).
Operators are looking to recover the cost of sustainability requirements through efficiency gains. Surging power costs, which are likely to remain high in the coming years, now mean the calculation has shifted in favor of more aggressive energy optimization — but upfront capital requirements will often be higher.
Operating and IT costs
The operating expenditures associated with data centers and IT infrastructure are also set to increase in 2023, due to steep rises in major input costs. Uptime Institute’s Data Center and IT Spending Survey 2022 showed power to be driving the greatest unit cost increases for most operators (see Figure 1) — the result of high gas prices, the transition to renewable energy, imbalances in grid supply and the war in Ukraine.
The UK and the EU have been most affected by these increases, with certain colocation operators passing down some significant increases in energy costs to their customers. While energy prices are expected to drop (at least against the record highs of 2022), they are likely to remain well above the average levels of the past two decades.
Second only to power, IT hardware showed the next greatest increase in unit costs for enterprise data center respondents, partly because of various dislocations in the hardware supply chain, shortages of some processors and switching silicon, and inflation. Demand for IT hardware has continued to outpace supply, and manufacturers have yet to clear the backlogs that built up during the COVID-19 pandemic.
Uptime sees promising signs of improvements in data center hardware supply, largely due to a recent sag in global demand (caused by economic headwinds and IT investment cycles). As a result, prices and lead-times for generic IT hardware (with some exceptions) will likely moderate in the first half of 2023.
If history is any guide, demand for data center IT will rise again some time in 2023 once some major IT infrastructure buyers accelerate their capacity expansion, which will yet again lead to tightness in the supply of select hardware later in the year.
Staffing will also play a major role in the increased cost of running data centers, and is likely to continue to impact the industry beyond 2023. Many operators say they are spending more on labor costs in a bid to retain current staff (see Figure 2). This presents a further challenge for those enterprises that are unable to match salary offers made by some of the booming tech giants.
The aggregate view is clear: the overall costs of building and running data centers are set to rise significantly over the next few years. While businesses can deploy various strategies and technologies — such as automation, energy efficiency and tactical migration to the cloud — to reduce operational costs, these are likely to entail capital investment, new skills and technical complexity.
Will rising data center costs drive more operators toward colocation or the cloud? It seems unlikely that higher on-premises costs will cause greater migration per se. Results from Uptime Institute’s Data Center and IT Spending Survey 2022 show that despite increasing costs, many operators find that keeping workloads on-premises is still cheaper than colocation (54%, n=96) or migrating to the cloud (64%, n=84).
Estimating the costs of each of these options, however, is difficult in a rapidly changing market, in which some costs are opaque. Given the high costs associated with migrating to the cloud, it is likely to be cheaper for enterprises to endure higher construction and refurbishment costs in the near term and benefit from lower operating costs over the longer term. Not all companies will be able to capitalize on this strategy, however.
Those larger organizations with the financial resources to benefit from economies of scale, with the ability to raise capital more easily and with sufficient purchasing power to leverage suppliers, are likely to have lower costs compared with smaller companies (and most enterprise data centers). Given their scale, however, they are still likely to face higher costs elsewhere, such as sustainability reporting and calls for proving — and improving — their infrastructure resiliency and security.
The full report Five data center predictions for 2023 is available to download here.
See our Five Data Center Predictions for 2023 webinar here.
Douglas Donnellan