US operators scour Inflation Reduction Act for incentives

In the struggle to reduce carbon emissions and increase renewable energy, the US Inflation Reduction Act (IRA), passed in August 2022, is a landmark development. The misleadingly named Act, which is lauded by environmental experts and castigated by foreign leaders, is intended to rapidly accelerate the decarbonization of the world’s largest economy by introducing nearly $400 billion in federal funding over the next 10 years.

Reducing the carbon intensity of electricity production is a major focus of the act, and the US clean energy industry will greatly benefit from the tax credits encouraging renewable energy development. But it also includes provisions intended to “re-shore” manufacturing and create jobs in the US and to ensure that US companies have greater control over the energy supply chain. Abroad, foreign leaders have raised objections over these protectionist provisions, which are creating (or aggravating) a political rift between the US and its allies and trading partners. In response to the IRA, the EU has redirected funds to buoy its low-carbon industries, threatened retaliatory measures and is considering the adoption of similar legislation.

While the politicians argue, stakeholders in the US have been scouring the IRA’s 274 pages for opportunities to capitalize on these lucrative incentives. Organizations that use a lot of electricity are also likely to benefit — including large data centers and their suppliers. Lawyers, accountants and investors working for organizations planning large-scale digital infrastructure investments will see opportunities too. Some of these will be substantial.

Summary of opportunities

For digital infrastructure companies, it may be possible to secure support in the following areas:

  • Renewable energy prices / power purchase agreements. Demand for renewable energy and the associated renewable energy credits is likely to be very high in the next decade, so prices are likely to rise. The tax incentives in the IRA will help to bring these prices down for renewable energy generators. By working with electricity providers and possibly co-investing, some data center operators will be able to secure lower energy prices.
  • Energy efficiency. Commercial building operators will find it easier to earn tax credits for reducing energy use. However, data centers that have already improved energy efficiency will struggle to reach the 25% reduction required to qualify. Operators may want to reduce energy use on the IT side, but this would not meet the eligibility requirements for this credit.
  • Equipment discounts / tax benefits. The act provides incentives for energy storage equipment (batteries or other technologies) which are necessary to operate a carbon-free grid. There are tax concessions for low-carbon energy generation, storage and microgrid equipment. Vendors may also qualify for tax benefits that can be sold.
  • Renewable energy generation. Most data centers generate little or no onsite renewable energy. In most cases, the power generated on site can support only a tiny fraction of the IT load. Even so, the many new incentives for equipment and batteries may make this more cost effective; and large operators may find it worthwhile to invest in generation at scale.

Detailed provisions

Any US operator considering significant investments in renewable generation and/or energy storage (including, for example, a UPS) is advised to study the act closely.

Uptime Institute Intelligence’s view is that the following IRA tax credits will apply to operators:

  • Investment tax credit (ITC), section 48. Of the available tax credits, the most significant for operators is the ITC. The ITC encourages renewable, low-carbon energy use by reducing capital costs by up to 30% through 2032. It applies to capital spending on assets including solar, wind and geothermal equipment, electrochemical fuel cells, energy storage and microgrid controllers. By making these investments more attractive, the ITC is likely to catalyze investment in, and the deployment of, low-carbon energy technologies.
  • Energy efficiency commercial buildings deduction, section 179D. This tax credit will encourage investment in energy efficiency and retrofits in commercial buildings. The incentive applies to projects that deliver at least a 25% energy efficiency improvement (reduced from the existing 50% threshold) for a building, compared with ASHRAE’s 90.1 standard reference building. The energy efficiency tax credit applies to projects in the following categories: interior lighting, heating, cooling, ventilation or the building envelope. To meet the 25% threshold, operators can retrofit several building systems. Qualified projects earn a tax credit of between 50 cents and $5 a square foot, depending on efficiency gains and labor requirements.
  • Production tax credit (PTC), section 45. This incentive does not directly apply to data center operators but will affect their business if they buy renewable energy. This tax credit rewards low-carbon energy producers by increasing their profit margin. The PTC only applies to energy producers that sell to a third party, rather than consume it directly. Qualifying projects include wind, solar and hydropower facilities. The PTC scales with inflation and lasts for 10 years. In 2022, the maximum value of the PTC was 2.6 cents per kilowatt-hour (kWh) — for reference, the average US industrial energy price in September 2022 was 10 cents per kWh. If the credit is fully passed on to consumers, energy costs will be reduced by about 25%. (Note: eligible projects must choose between the PTC and the ITC.)
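The PTC arithmetic above can be checked in a few lines (figures as quoted in the text):

```python
# Illustrative arithmetic for the PTC's potential effect on energy costs.
# Figures are those quoted in the text: a 2.6 cents/kWh maximum PTC (2022)
# against an average US industrial price of 10 cents/kWh (September 2022).
PTC_PER_KWH = 0.026      # maximum 2022 PTC, $/kWh
INDUSTRIAL_PRICE = 0.10  # average US industrial price, $/kWh (Sept 2022)

# If the credit were fully passed through to the consumer:
saving_fraction = PTC_PER_KWH / INDUSTRIAL_PRICE
print(f"Potential cost reduction: {saving_fraction:.0%}")  # → 26%, which the text rounds to "about 25%"
```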

For the tax credits mentioned above, organizations must meet the prevailing wage and apprenticeship requirements (initial guidance by the US Treasury Department and the Internal Revenue Service can be found here) to receive the maximum credit, unless the nameplate generation capacity of the project is less than 1 megawatt (for the ITC and PTC).

The incentives listed above will be available until 2032, creating certainty for operators considering an investment in renewables, efficiency retrofits or the renewable energy industry. Additionally, these tax credits are transferable: they can be sold — for cash — to another company with tax liability, such as a bank.

Hyperscalers and large colocation providers are best positioned to capitalize on these tax credits: they are building new capacity quickly, possess the expertise and staffing capacity to navigate the legal requirements, and have ambitious net-zero targets.

However, data center operators of all sizes will pursue these incentives where there is a compelling business case. Owners and operators responding to Uptime Institute’s 2022 global data center survey said more renewable energy purchasing options would deliver the most significant gains in sustainability performance in the next three to five years.

The IRA may also lower the cost barriers for innovative data center designs and typologies. For example, IRA incentives will strengthen the business case for pairing a facility on a microgrid with renewable generation and long-duration energy storage (LDES). Emerging battery chemistries in development (including iron-air, liquid metal and nickel-zinc) offer discharge durations of 10 hours to 10 days and would benefit from large deployments to prove their viability.

LDES is essential for a reliable low-carbon grid. As the IRA speeds up the deployment of renewables, organizations will need multi-day energy storage to smooth out the high variability of intermittent generators such as solar and wind. Data center facilities may be ideal sites for LDES, even if the storage is not dedicated to data center use.

Additionally, low-carbon baseload generators such as nuclear, hydrogen and geothermal — all eligible for IRA tax credits — will be needed to replace reliable fossil fuel generators, such as gas turbines and coal power plants.

The incentives in the IRA, welcomed with enthusiasm by climate campaigners the world over, will strengthen the business case in the US for reducing energy consumption, deploying low-carbon energy and energy storage, and/or investing in the clean energy economy.

There is, however, a more problematic side: the rare earth materials and critical components the US will need to meet the objectives of the IRA may be hard to source in sufficient quantities, and allegations of protectionism may cause political rifts with other countries.


Lenny Simon, Senior Research Associate [email protected]

Andy Lawrence, Executive Director of Research [email protected]

Energy-efficiency focus to shift to IT — at last

Data centers have become victims of their own success. Ever-larger data centers have mushroomed across the globe in line with an apparently insatiable demand for computing and storage capacity. The associated energy use is not only expensive (and a source of massive carbon emissions) but is also putting pressure on the grid. Data center developments tend to be concentrated in and around metropolitan areas — making their presence even more palpable and attracting scrutiny.

Despite major achievements in energy performance throughout the 2010s — as witnessed by Uptime data on industry-average PUE — this scrutiny creates challenges for data center builders and operators. Delivering bulletproof and energy-efficient infrastructure at a competitive cost is already a difficult balancing act, even without having to engage with local government, regulators and the public at large on energy use, environmental impact and carbon footprint.

IT is conspicuously absent from this dialogue. Server and storage infrastructure account for the largest proportion of a data center’s power consumption and physical footprint. As such, they also offer the greatest potential for energy-efficiency gains and footprint compression. Often the issue is not wasted but unused power: poor capacity-planning practices create demand for additional data center developments even where unused (but provisioned) capacity is available.

Nonetheless, despite growing costs and sustainability pressures, enterprise IT operators — as well as IT vendors — continue to show little interest in the topic.

This will be increasingly untenable in the years ahead. In the face of limited power availability in key data center markets, together with high power prices and mounting pressure to meet sustainability legislation, enterprise IT’s energy footprint will have to be addressed more seriously. This will involve efficiency-improvement measures aimed at using dramatically fewer server and storage systems for the same workload.

Uptime has identified four key areas where pressure on IT will continue to build — all of them pointing in the same direction:

  • Municipal (local) resistance to new large data centers.
  • The limited availability of grid power to support increasing data center capacity.
  • Increasing regulation governing sustainability and carbon reduction, and more stringent reporting requirements.
  • High energy costs.

Municipalities — and utility providers — need the pace to drop

Concerns over power and land availability have, since 2019, led to greater restrictions on the construction of new data centers (Table 1). This is likely to intensify. Interventions on the part of local government and utility providers typically involve more rigorous application processes, more stringent energy-efficiency requirements and, in some cases, the outright denial of new grid connections for major developments. These restrictions have resulted in costly project delays (and, in some cases, cancellations) for major cloud and colocation providers.

Frankfurt, a key financial hub and home to one of the world’s largest internet exchange ecosystems, set an example. Under a new citywide masterplan (announced in 2022), the city stipulates densified, multistory and energy-optimized data center developments — chiefly out of concerns for sprawling land use and changes to the city’s skyline.

The Dublin area (Ireland) and Loudoun County (Northern Virginia, US) are two stand-out examples (among others) of the grid being under strain and power utilities having temporarily paused or capped new connections because of current shortfalls in generation or transmission capacity. Resolving these limitations is likely to take several years. A number of data center developers in both Dublin and Loudoun County have responded to these challenges by seeking locations further afield.

Table 1 Restrictions on new data centers since 2019 — selected examples


New sustainability regulations

Following years of discussion with key stakeholders, authorities have begun introducing regulation governing performance improvements and sustainability reporting for data centers — a key example being the EC’s Energy Efficiency Directive recast (EED), which will subject data centers directly to regulation aimed at reducing both energy consumption and carbon emissions (see Critical regulation: the EU Energy Efficiency Directive recast).

This regulation creates new, detailed reporting requirements for data centers in the EU and will force operators to improve their energy efficiency and to make their energy performance metrics publicly available — meaning investors and customers will be better equipped to weigh business decisions on the basis of the organizations’ performance. The EED is expected to enter into force in early 2023. At the time of writing (December 2022), the EED could still be amended to include higher targets for efficiency gains (increasing from 9% to 14.5%) by 2030. The EC has already passed legislation mandating regulated organizations to report on climate-related risks, their potential financial impacts and environmental footprint data every year from 2025; this will affect swathes of data centers.

Similar initiatives are now appearing in the US, with the White House Office of Science and Technology Policy’s (OSTP’s) Climate and Energy Implications of Crypto-assets in the US report, published in September 2022. Complementary legislation is being drafted that addresses both crypto and conventional data centers and sets the stage for the introduction of regulation similar to the EED over the next three to five years (see First signs of federal data center reporting mandates appear in US).

Current and draft regulation is predominantly focused on the performance of data center facility infrastructure (power and cooling systems) in curbing the greenhouse gas (GHG) emissions associated with utility power consumption (Scope 2). While definitions and metrics remain vague (and are subject to ongoing development), it is clear that EC regulators intend to ultimately extend the scope of such regulation to also include IT efficiency.

Expensive energy is here to stay

The current energy crises in the UK, Europe and elsewhere are masking some fundamental energy trends. Energy prices and, consequently, power prices were on an upward trajectory before Russia’s invasion of Ukraine. Wholesale forward prices for electricity were already shooting up — in both the European and US markets — in 2021.

Certain long-term trends also underpin the trajectory towards costlier power and create an environment conducive to volatility. Structural contributors to long-term power-price inflation include:

  • The global economy’s continued dependence on (and continued increasing consumption of) oil and gas.
  • Underinvestment in fossil-fuel supply capacities while alternative low-carbon generation and energy storage capacities remain in development.
  • Gargantuan build-out of intermittent power generation capacity (overwhelmingly wind and solar) as opposed to firm low-carbon generation.
  • Steady growth in power demand arising from economic growth and electrification in transport and industry.

More specifically, baseload power is becoming more expensive because of the economic displacement effect of intermittent renewable energy. Regardless of how much wind and solar (or even hydro) is connected to the grid, reliability and availability considerations mean the grid has to be fully supported by dispatchable generation such as nuclear, coal and, increasingly, gas.

However, customer preference for renewable energy (and its low operational costs) means fleets of dispatchable power plants operate at reduced capacity, with an increasing number on standby. Grid operators — and, ultimately, power consumers — still need to pay for the capital costs and upkeep of this redundant capacity, to guarantee grid security.

IT power consumption will need to be curbed

High energy prices, carbon reporting, grid capacity shortfalls and efficiency issues have been, almost exclusively, a matter of concern for facility operators. But facility operators have now passed the point of diminishing returns, with greater intervention delivering fewer and fewer benefits. In contrast, every watt saved by IT reduces pressures elsewhere. Reporting requirements will, sooner or later, shed light on the vast potential for greater energy efficiency (or, to take a harsher view, expose the full extent of wasted energy) currently hidden in IT infrastructure.

For these reasons, other stakeholders in the data center industry are likely to call upon IT infrastructure buyers and vendors to engage more deeply in these conversations, and to commit to major initiatives. These demands will be completely justified: currently, IT has considerable scope for delivering improved power management and energy efficiency, where required.

Architecting IT infrastructure for energy efficiency — through better hardware configuration choices, dynamic workload consolidation practices and the use of power-management features (including energy-saving states and power throttling / capping) — will deliver major gains. Server utilization, and the inherent efficiency of server hardware, are two key dials that could bring manifold improvements in energy performance compared with typical enterprise IT.
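As a hypothetical illustration of why utilization and consolidation matter (the power figures below are assumptions for the sketch, not Uptime data), a common linear model puts server power draw at a fixed idle component plus a utilization-proportional component:

```python
# Hypothetical sketch: energy saved by consolidating workloads onto fewer,
# better-utilized servers. Idle/max wattages are invented for illustration.
def server_power(util, idle_w=150.0, max_w=400.0):
    """Approximate power draw (watts) of one server at a given utilization."""
    return idle_w + (max_w - idle_w) * util

# 20 servers at 10% average utilization...
before = 20 * server_power(0.10)
# ...versus the same total work (2.0 server-equivalents of load)
# consolidated onto 4 servers at 50% utilization.
after = 4 * server_power(0.50)

print(f"Before: {before:.0f} W, after: {after:.0f} W "
      f"({1 - after / before:.0%} saving)")
```

Because the idle component dominates at low utilization, the consolidated fleet does the same work for roughly a third of the power in this hypothetical.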

These efficiency gains are not just theoretical: web technology and cloud services operators exploit them wherever they can. There is no reason why other organizations cannot adopt some of these practices and move closer to the performance levels these operators achieve. In an era of ever-more expensive (and scarce) power resources, together with mounting regulatory pressure, it will be increasingly difficult for IT C-level managers to deny calls to engage in the battle for better energy efficiency.

The full report Five data center predictions for 2023 is available here.

See our Five Data Center Predictions for 2023 webinar here.


Daniel Bizo

Douglas Donnellan

Asset utilization drives cloud repatriation economics

The past decade has seen numerous reports of so-called cloud “repatriations” — the migration of applications back to on-premises venues following negative experiences with, or unsuccessful migrations to, the public cloud.

A recent Uptime Update (High costs drive cloud repatriation, but impact is overstated) examined why these migrations might occur. The Update revealed that unexpected costs were the primary reason for cloud repatriation, with the cost of data storage being a significant factor in driving expenditure.

Software vendor 37signals recently made headlines after moving its project management platform Basecamp and email service HEY from Amazon Web Services (AWS) and Google Cloud to a colocation facility.

The company has published data on its monthly AWS bills for HEY (Figure 1). The blue line in Figure 1 shows the company’s monthly AWS expenditure. This Update examines this data to understand what lessons can be learned from 37signals’ experience.

Figure 1 37signals’ monthly AWS spend to support HEY in 2022, with capacity profiles

37signals’ AWS spend — observations

Based on the spend charts included in 37signals’ blog (simplified in Figure 1), some observations stand out:

  • The applications that are part of HEY scale proportionally. When database costs increase, for example, the cost of other services increases similarly. This proportionality suggests that applications (and the total resources used across various services) have been architected to scale upwards and downwards, as necessary. As HEY’s costs scale proportionally, it is reasonable to assume that costs are proportional to resources consumed.
  • Costs (and therefore resource requirements) are relatively constant over the year — there are no dramatic increases or decreases from month to month.
  • Database and search are substantial components of 37signals’ bills. The company’s database is not expanding, however, suggesting that the company is effective in preventing sprawl. 37signals’ data does not appear to have “gravity” — “gravity” here meaning the greater the amount of data stored in a system the more data (and, very often, software applications) it will attract over time.

While 37signals’ applications are architected to scale upwards and downwards as necessary, these applications rarely need to scale rapidly to address unexpected demand. This consistency allows 37signals to purchase servers that are likely to be utilized effectively over their life cycle, without performance suffering from a lack of capacity.

This high utilization level supports the company’s premise that — at least for its own specific use cases — on-premises infrastructure may be cheaper than public cloud.

Return on server investment

As with any capital investment, a server is expected to provide a return — either through increased revenue, or higher productivity. If a server has been purchased but is sitting unused on a data center floor, no value is being obtained, and CAPEX is not being recovered while that asset depreciates.

At the same time, there is a downside to using every server at its maximum capacity. If asset utilization is too high, there is nowhere for applications to scale up if needed. The lack of a capacity buffer could result in application downtime, frequent performance issues, and even lost revenue or productivity.

Suppose 37signals decided to buy all server hardware one year in advance, predicted its peak usage over the year precisely, and purchased enough IT to deliver that peak (shown in orange on Figure 1). Under this ideal scenario, the company would achieve a 98% utilization of its assets over that period (in a financial, not computing or data-storage sense) — that is, 98% of its investment would be used over the year for a value-adding activity.

The company is unlikely to be able to make such a perfect prediction. Overestimating capacity requirements would result in lower utilization and, accordingly, more waste. Underestimating capacity requirements would result in performance issues. A more sensible approach would be to purchase servers only as they are required (shown in green on Figure 1). This strategy would achieve 92% utilization. In practice, however, the company would hold more idle servers to provide immediate capacity, decreasing utilization further.
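The utilization arithmetic can be sketched as follows. The monthly demand figures are invented for illustration (37signals’ actual data is not reproduced here), so the result differs slightly from the article’s 98%:

```python
# Hedged illustration: "financial utilization" here means resources consumed
# divided by resources provisioned, summed over the year. Monthly demand
# figures are invented, not 37signals' actual data.
monthly_demand = [90, 92, 95, 97, 100, 98, 96, 97, 99, 100, 98, 96]

# Provision for the annual peak up front, as in the text's ideal scenario.
provisioned = max(monthly_demand)
utilization = sum(monthly_demand) / (provisioned * len(monthly_demand))

print(f"Utilization when provisioned at peak: {utilization:.1%}")
```

With near-constant demand, even peak provisioning keeps financial utilization in the mid-90s; the flatter the demand curve, the closer utilization gets to 100%.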

Cloud providers can never achieve such a high level of utilization (although non-guaranteed “spot” purchases can help). Their entire proposition relies on being able to deliver capacity when needed. As a result, cloud services must have servers available when required — and lots of them.

Why utilization matters

Table 1 makes simple assumptions that demonstrate the challenge a cloud provider faces in provisioning excess capacity.

Table 1 Demonstration of how utilization affects server economics

These calculations show that this on-premises implementation costs $10,000 in total, with the cloud provider’s total costs being $16,000. Cloud buyers rent units of resources, however, with the price paid covering operating costs (such as power), the resources being used, and the depreciating value (and costs) of servers held in reserve. A cloud buyer would pay a minimum of $1,777 per unit, compared with a unit cost of $1,111 in an on-premises venue. The exact figures are not directly relevant: what is relevant is the fact that the input cost using public cloud is 60% more per unit — purely because of utilization.
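A minimal reconstruction of this arithmetic (Table 1 itself is not reproduced here; the server counts and per-server cost below are assumptions chosen to match the headline figures in the text):

```python
# Reconstruction of the unit-cost arithmetic. Server counts and cost are
# inferred assumptions, not taken from the (unreproduced) table.
SERVER_COST = 1_000   # assumed cost per server, $
UNITS_SOLD = 9        # units of resource actually consumed by buyers

onprem_total = 10 * SERVER_COST   # on-premises fleet: $10,000 total
cloud_total = 16 * SERVER_COST    # cloud fleet with reserve capacity: $16,000

onprem_unit = onprem_total / UNITS_SOLD   # ≈ $1,111 per unit
cloud_unit = cloud_total / UNITS_SOLD     # ≈ $1,778 per unit

premium = cloud_unit / onprem_unit - 1
print(f"Cloud unit cost premium: {premium:.0%}")  # → 60%
```

The per-unit premium depends only on the ratio of total fleet costs, which is why the text stresses that the exact figures matter less than the utilization gap itself.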

Of course, this calculation is a highly simplified explanation of a complex situation. But, in summary, the cloud provider is responsible for making sure capacity is readily available (whether this be servers, network equipment, data centers, or storage arrays) while ensuring sufficient utilization such that costs remain low. In an on-premises data center this balancing act is in the hands of the organization. If enterprise capacity requirements are stable or slow-growing, it can be easier to balance performance against cost.

Sustaining utilization

It is likely that 37signals has done its calculations and is confident that migration is the right move. Success in migration relies on several assumptions. Organizations considering migrating from the public cloud back to on-premises infrastructure are best placed to make a cost-saving when:

  • There are unlikely to be sudden drops in resource requirements, such that on-premises servers are sitting idle and depreciating without adding value.
  • Unexpected spikes in resource requirements (that would mean the company could not otherwise meet demand, and the user experience and performance would be impacted) are unlikely. An exception here would be if a decline in user experience and performance did not impact business value — for example, if capacity issues meant employees were unable to access their CEO’s blog simultaneously.
  • Supply chains can deliver servers (and data center space) quickly in line with demand without the overheads involved in holding many additional servers (i.e., depreciating assets) in stock.
  • Skills are available to manage those aspects of the infrastructure for which the cloud provider was previously responsible (e.g., MySQL, capacity planning). These factors have not been considered in this Update.

The risk is that 37signals (or any other company moving away from the public cloud) might not be confident of these criteria being met in the longer term. Were the situation to change unexpectedly, the cost profile of on-premises versus public cloud could be substantially altered.

Forecasting the solar storm threat

A proposed permanent network of electromagnetic monitoring stations across the continental US, operating in tandem with a machine learning (ML) algorithm, could facilitate accurate predictions of geomagnetic disturbances (GMDs). If realized, this predictive system could help grid operators avert disruption and reduce the likelihood of damage to their — and their customers’ — infrastructure, including data centers.

Geomagnetic disturbances, also referred to as “geomagnetic storms” or “geomagnetic EMP”, occur when violent solar events interact with Earth’s atmosphere and magnetic field. Solar events that cause geomagnetic EMP (such as coronal mass ejections or solar flares) occur frequently but chaotically, and are often directed away from Earth. The only long-term predictions available are probabilistic and imprecise: for example, an extreme geomagnetic EMP typically occurs once every 25 years. When a solar event occurs, the US Space Weather Prediction Center (SWPC) can give hours’ to days’ notice of when it is expected to reach Earth. At present, these warnings lack practical information regarding the intensity and location of such EMPs’ effects on power infrastructure and customer equipment (such as data centers).

A GMD produces geomagnetically induced currents (GICs) in electrical conductors. The low frequency of a GMD concentrates GICs in very long electrical conductors — such as the high-voltage transmission lines in a power grid. A severe GMD can cause high-voltage transformer damage and widespread power outages — which could last indefinitely: high-voltage transformers have long manufacturing lead times, even in normal circumstances. Some grid operators have begun protecting their infrastructure against GICs. Data centers, however, are at risk of secondary GIC effects through their connections to the power grid — and many data center operators have not taken protective measures against GMDs, or any other form of EMP (see Electromagnetic pulse and its threat to data centers).

In the event of a less intense GMD, grid operators can often compensate for GICs without failures. Data centers, however, may experience power-quality issues such as harmonic distortions (defects in AC voltage waveforms). Most data center uninterruptible power supply (UPS) systems are designed to accommodate some harmonics and protect downstream equipment, but the intense effects of a GMD can overwhelm these built-in protections — potentially damaging the UPS or other equipment. The effects of harmonics inside a data center can include inefficient UPS operation, UPS rectifier damage, tripped circuit breakers, overheated wiring, malfunctioning motors in mechanical equipment and, ultimately, physical damage to IT equipment.
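As background, harmonic content in an AC supply is commonly quantified as total harmonic distortion (THD): the RMS sum of the harmonic components relative to the fundamental. The sketch below uses invented amplitudes purely for illustration:

```python
import math

# Illustrative THD calculation. The voltage amplitudes are invented;
# real limits and measurements depend on the installation and standard used.
def thd(fundamental, harmonics):
    """THD given the fundamental amplitude and a list of harmonic amplitudes."""
    return math.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# A 230 V fundamental carrying small 3rd, 5th and 7th harmonics:
print(f"THD: {thd(230.0, [12.0, 8.0, 5.0]):.1%}")
```

A UPS rated to tolerate modest THD could handle this example; the point of the text is that GMD-driven distortion can push well past such design margins.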

The benefit to data center operators from improved forecasting of GMD effects is greatest in the event of these less intense incidents, which threaten equipment damage to power customers but are insufficient to bring down the power grid. An operator’s best defense against secondary GIC effects is to pre-emptively disconnect from the grid and run on backup generators. Actionable, accurate, and localized forecasting of GIC effects would better prepare operators to disconnect in time to avert damage (and to avoid unnecessary generator runtime in regions where this is strictly regulated).

A further challenge is that the issue of geomagnetic effects on power infrastructure is interdisciplinary: the interactions between Earth’s magnetic field and the power grid have historically not been well understood by experts in either geology or electrical infrastructure. Computationally simulating the effects of geomagnetic events on grid infrastructure is still not practically feasible.

This might change with rapid advancements in computer performance and modeling methods. At the 2022 InfraGard National Disaster Resilience Council Summit in the US, researchers at Oregon State University presented a machine learning approach that could produce detailed geomagnetic forecasting — the objective being to help grid operators assess what protection their infrastructure needs.

Better modeling and forecasting of GMD effects requires many measurements spanning a geographic area of interest. The Magnetotelluric (MT) Array collects data across the continental US, using seven permanent stations and over 1,600 temporary locations (as of 2022), arranged 43 miles (70 km) apart on a grid. Over 1,900 temporary MT stations are planned by 2024. Station instruments measure time-dependent changes in Earth’s electric and magnetic fields, providing insight into the resistivity and electromagnetic impedance of Earth’s crust and upper mantle in three dimensions. This data informs predictions of GIC intensity, which closely correlates with damaging effects on power infrastructure. The MT Array provides a dramatic and much-needed improvement to the resolution of data available on these geomagnetic effects.

Figure 1 Magnetotelluric Array stations (2022). Map image © Google

Researchers trained a machine learning model on two months of continuous, simultaneous data from an array of 25 MT stations in Alaska (US). The trained model effectively predicts geomagnetic effects with 30 minutes’ advance notice. Fortunately, scaling this forecasting capability to the continental US will not require the long-term operation of thousands of MT stations: the trained model can forecast geomagnetic effects at the 43-mile (70 km) resolution of the full MT Array with significantly fewer permanent stations providing input.
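The research itself uses a far richer model and real MT station data, but the basic shape of window-based forecasting can be sketched with a simple linear autoregressor on synthetic readings (everything below, the signal, window length and model, is illustrative and is not the Oregon State method):

```python
import numpy as np

# Minimal sketch of window-based geomagnetic forecasting (illustrative only).
# Synthetic "magnetic field" readings sampled once per minute are used to fit
# a linear predictor that looks 30 minutes ahead.
rng = np.random.default_rng(0)
minutes = np.arange(4000)
# Hypothetical field signal: a slow oscillation plus measurement noise
signal = np.sin(2 * np.pi * minutes / 720) + 0.1 * rng.standard_normal(minutes.size)

WINDOW = 60   # use the past 60 minutes of readings as features
HORIZON = 30  # predict the value 30 minutes into the future

# Build a (samples, WINDOW) feature matrix and the 30-minutes-ahead targets
n = signal.size - WINDOW - HORIZON
X = np.stack([signal[i : i + WINDOW] for i in range(n)])
y = signal[WINDOW + HORIZON : WINDOW + HORIZON + n]

# Least-squares fit on the first 80% of samples; evaluate on the remainder
split = int(0.8 * n)
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"30-minute-ahead RMSE: {rmse:.3f}")
```

In practice the forecasting problem is multivariate (many stations, electric and magnetic components) and nonlinear, which is why a machine learning approach is needed rather than a per-station linear model like this one.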

The proposed permanent network, called the “Internet of MT” (IoMT), would cover the continental US with just 500 permanently installed devices on a grid at 87-mile (140 km) spacing, producing ongoing forecasts. These devices are designed differently from the equipment at today’s MT Array stations: while they collect the same types of data, they have several advantages. Powered by solar panels and uploading data automatically over a mobile network connection, the IoMT devices have a smaller footprint and a much lower cost of acquisition — approximately $5,000 per station (in contrast to current MT Array station equipment, which would cost $60,000 to install permanently).
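Using the per-station figures above, a back-of-the-envelope comparison of the proposed 500-station IoMT network against building equivalent permanent coverage with today's equipment:

```python
# Cost comparison using the figures cited in the text; everything else is
# simple arithmetic.
IOMT_STATIONS = 500
IOMT_COST_PER_STATION = 5_000   # USD, solar-powered IoMT device
MT_COST_PER_STATION = 60_000    # USD, permanent install of current equipment

iomt_total = IOMT_STATIONS * IOMT_COST_PER_STATION
mt_total = IOMT_STATIONS * MT_COST_PER_STATION

print(f"IoMT network:      ${iomt_total:,}")            # $2,500,000
print(f"Current equipment: ${mt_total:,}")              # $30,000,000
print(f"Cost ratio:        {mt_total // iomt_total}x")  # 12x
```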

The MT Array has, so far, been financed through funding from various US government agencies, including the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the United States Geological Survey (USGS). Though the IoMT’s equipment design promises a significantly lower cost of acquisition and installation than the technology used in today’s temporary array, funding for this next phase has not yet been secured.

Detailed geomagnetic forecasts could make it possible for grid operators to take proactive steps to protect their infrastructure — preventing prolonged power outages and sparing their customers (including data centers) damaging secondary effects. The predictions offered through the IoMT provide a model that could be used worldwide to address the risks inherent in the threat of geomagnetic EMP. Though it is too early to anticipate how this data could be distributed to data center operators, the value of proactive defense from GMDs may support a subscription service — for instance, on the part of companies that provide weather data.

Cloud migrations to face closer scrutiny


Big public-cloud operators have often had to compete against each other — sometimes ferociously. Only rarely have they had to compete against alternative platforms for corporate IT, however. More often than not, chief information officers (CIOs) responsible for mission-critical IT have seen a move to the public cloud as low-risk, flexible, forward-looking and, ultimately, inexpensive. But these assumptions are now coming under pressure.

As the coming years threaten to be economically and politically turbulent, infrastructure and supply chains will be subject to disruption. Increasing government and stakeholder interest will force enterprises to scrutinize the financial and other risks of moving on-premises applications to the public cloud. More effort, and more investment, may be required to ensure that resiliency is both maintained and clearly demonstrable to customers. While cloud has, in the past, been viewed as a low-risk option, the balance of uncertainty is changing — as are the cost equations.

Although the picture is complicated, with many factors at play, there are some signs that these pressures may already be slowing adoption. Amazon Web Services (AWS), the largest cloud provider, reported a historic slowdown in growth in the second half of 2022, after nearly a decade of 30% to 40% increases year-on-year. Microsoft, too, has flagged a likely slowdown in the growth of its Azure cloud service.

No one in the industry is suggesting that the adoption of public cloud has peaked, or that it is no longer of strategic value to large enterprises. Use of the public cloud is still growing dramatically and is still driving growth in the data center industry. Public cloud will continue to be the near-automatic choice for most new applications, but organizations with complex, critical and hybrid requirements are likely to slow down or pause their migrations from on-premises infrastructure to the cloud.

Is the cloud honeymoon over?

Many businesses have been under pressure to move applications to the cloud quickly, without comprehensive analysis of the costs, benefits and risks. CIOs, often prompted or backed by heads of finance or chief executives, have favored the cloud over on-premises IT for new and / or major projects.

Data from the Uptime Institute Global Data Center Survey 2022 suggests that, while many were initially wary, organizations are becoming more confident in using the cloud for their most important critical workloads. The proportion of respondents not placing mission-critical workloads into the public cloud has dropped from 74% in 2019 to 63% in 2022. Figure 1 shows the factors driving on-premises to cloud migrations, including C-level enthusiasm and positive perceptions of inexpensive performance.

Figure 1 Drivers for and barriers to cloud migration (infrastructure-level factors)

High-profile cloud outages, however, together with increasing regulatory interest, are encouraging some customers to take a closer look. Customers are beginning to recognize that not all applications have been architected to take advantage of key cloud features — and architecting applications properly can be very costly. “Lifting and shifting” applications that cannot scale, or that cannot track changes in user demand or resource supply dynamically, is unlikely to deliver the full benefits of the cloud and could create new challenges. Figure 1 shows how several internal (IT) and external (macroeconomic) pressures could suppress growth in the future.

One particular challenge is that many applications have not been rearchitected to meet business objectives — most notably resiliency. Many cloud customers are not fully aware of their responsibilities regarding the resiliency and scalability of their application architecture, in the belief that cloud companies take care of this automatically. Cloud providers, however, make it explicitly clear that zones will suffer outages occasionally and that customers are required to play their part. Cloud providers recommend that customers distribute workloads across multiple availability zones, thereby increasing the likelihood that applications will remain functional, even if a single availability zone falters.

Research by Uptime shows how vulnerable enterprise cloud customers currently are to single-zone outages. Data from the Uptime Institute Global Data Center Survey 2022 shows that 35% of respondents believe the loss of an availability zone would result in significant performance issues, and only 16% of respondents indicated that the loss of an availability zone would not impact their cloud applications.

To capture the full benefits of the cloud and to reduce the risk of outages, organizations need to (re)architect for resiliency. This resiliency has an upfront and ongoing cost implication, and this needs to be factored in when a decision is made to migrate applications from on-premises to the cloud. Uptime Intelligence has previously found that architecting an application across dual availability zones can cost 43% more than a non-duplicated application (see Public cloud costs versus resiliency: stateless applications). Building across regions, which further improves resiliency, can double costs. Some applications might not be worth migrating to the cloud, given the additional expense of resiliency being factored into application architecture.
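As a rough illustration of how these resiliency premiums compound, the multipliers cited above can be applied to a hypothetical monthly cloud bill (the baseline figure is invented; only the 43% and roughly 2x multipliers come from the Uptime research):

```python
# Illustrative cost model for the resiliency premiums cited in the text.
baseline_monthly = 10_000  # USD/month, hypothetical single-zone cloud spend

dual_az = baseline_monthly * 1.43       # +43% for a dual-availability-zone design
cross_region = baseline_monthly * 2.0   # ~2x for a multi-region architecture

print(f"Single zone:  ${baseline_monthly:,.0f}/month")
print(f"Dual AZ:      ${dual_az:,.0f}/month")
print(f"Cross-region: ${cross_region:,.0f}/month")
```

Scaled over a multi-year commitment, premiums of this size can erase the savings that motivated the migration, which is why some applications may not be worth moving at all.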

Economic forces will reduce pressure to migrate to the cloud

Successful and fully functional cloud migrations of critical workloads carry additional costs that are often substantial — a factor that is only now starting to be fully understood by many organizations.

These costs include both the initial phase — when applications have to be redeveloped to be cloud-native, at a time when skills are in short supply and high demand — and the ongoing consumption charges that arise from long periods of operation across multiple zones. It is clear that the cost of the cloud has not always been factored in: cost is a major reason for organizations moving workloads back from the public cloud to on-premises, cited by 43% of respondents to Uptime Institute’s Data Center Capacity Trends Survey 2022.

Server refresh cycles often act as a trigger for cloud migration. Rather than purchasing new physical servers, IT C-level leaders choose to lift-and-shift applications to the public cloud. Uptime’s 2015 global survey of data center managers showed that 35% of respondents kept their servers in operation for five years or more; this proportion had increased to 52% by 2022. During challenging economic times, CIOs may be choosing to keep existing servers running instead of investing in a migration to the cloud.

Even if CIOs continue to exert pressure for a move to the cloud, this will be muted by the need to justify the expense of migration. Although migration allows for a reduction in on-premises IT and in data center footprints, many organizations do not have the leeway to absorb the unexpected costs required to make cloud applications more resilient or performant. Poor access to capital, together with tighter budgets, will force executives to think carefully about the need for full cloud migrations. Application migrations with a clear return on investment will continue to move to the cloud; those that are borderline may be put on the back burner until conditions are clearer.

Additional pressure from regulators

Governments are also becoming concerned that cloud applications are not sufficiently resilient, or that they present other risks. The dominance of Amazon, Google and Microsoft (the “hyperscalers”) has raised concerns regarding “concentration risk” — an over-reliance on a limited number of cloud providers — in several countries and key sectors.

Regulators are taking steps to assess and manage this concentration risk, amid concerns that it could threaten the stability of many economies. The EC’s recently adopted Digital Operational Resilience Act (DORA) provides a framework for making the oversight of outsourced IT providers (including cloud) the responsibility of financial market players. The UK government’s Office of Communications (Ofcom) has launched a study into the country’s £15 billion public-cloud-services market. The long-standing but newly updated Gramm-Leach-Bliley Act (GLBA, also known as the Financial Services Modernization Act) in the US now requires regular cyber and physical security assessments.

The direction is clear. More organizations are going to be required to better evaluate and plan risks arising from third-party providers. This will not always be easy or accurate. Cloud providers face the same array of risks (arising from cyber-security issues, staff shortages, supply chains, extreme weather and unstable grids, etc.) as other operators. They are rarely transparent about the challenges associated with these risks.

Organizations are becoming increasingly aware that lifting and shifting applications from on-premises to public-cloud locations does not guarantee the same levels of performance or availability. Applications must be architected to take advantage of the public cloud — with the resulting upfront and ongoing cost implications. Many organizations may not have the funds (or indeed the expertise and / or staff) to rearchitect applications during these challenging times, particularly if the business benefits are not clear. Legislation will force regulated industries to consider all risks before venturing into the public cloud. Much of this legislation, however, is yet to be drafted or introduced.

How will this affect the overall growth of the public cloud and its appeal to the C-level management? Hyperscaler cloud providers will continue to expand globally and to create new products and services. Enterprise customers, in turn, are likely to continue finding cloud services competitive. The rush to migrate workloads will slow down as organizations do the right thing: assess their risks, design architectures that help mitigate those risks, and move only when ready to do so (and when doing so will add value to the business).


The full report Five data center predictions for 2023 is available here.

See our Five Data Center Predictions for 2023 webinar here.

Accounting for digital infrastructure GHG emissions


A host of regulations worldwide have introduced (or will introduce) legal mandates forcing data center operators to report specific operational data and metrics. Key examples include the European Union’s Corporate Sustainability Reporting Directive (CSRD); the European Commission’s proposed Energy Efficiency Directive (EED) recast; the US Securities and Exchange Commission’s (SEC) draft climate disclosure proposal; and various national reporting requirements (including in Brazil, Hong Kong, Japan, New Zealand, Singapore, Switzerland, and the UK) under the Task Force on Climate-related Financial Disclosures (TCFD). The industry is not currently adequately prepared to address these requirements, however.

Current data-exchange practices lack consistency — and any recognized consensus — on reporting sustainability-related data, such as energy and water use, greenhouse gas (GHG) emissions and operational metrics. Many enterprises have indicated, in discussions with Uptime Institute, that it is difficult (and sometimes impossible) to obtain energy and emissions data from colocation and cloud operators.

The Uptime Institute Global Data Center Survey 2022 made clear that data center operators’ readiness to report GHG emissions has seen incremental improvement over previous surveys, with only 37% of respondents indicating that they are prepared to publicly report their GHG emission inventories (up by just 4 percentage points on the previous year). Of these, less than one-third of respondents are currently including their Scope 3 emissions inventories.

Fortunately, most of the reporting regimes will become effective for the 2024 reporting year, giving data center managers time to work with their colocation and cloud providers on obtaining the necessary data, and to put their carbon accounting processes in order. While the finer details will vary according to each enterprise’s digital infrastructure footprint, there are certain common steps that data center managers can implement to facilitate the collection of quality data to fulfill these new reporting mandates.

Colocation operations

The GHG Protocol classifies emissions as Scope 1 and Scope 2 where an entity has operational control or financial control. Having analyzed these definitions, Uptime’s position is that IT operators exercise both operational and financial control over their IT operations in colocation data centers.

From an operational control standpoint, IT operators specify and purchase the IT equipment installed in the colocation space, set the operating parameters for that equipment (power management settings, virtual machine creation and assignment, hardware utilization levels, etc.), and maintain and monitor operations. Similarly, IT operators have financial control: they purchase, install, operate and maintain the IT equipment. On this basis, GHG emissions from IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator. Emissions (and energy use) from facility functions, such as power distribution losses from the grid connection to the IT hardware, and cooling energy, should fall into Scope 3 for the IT operator tenant.

Table 1 outlines Scope 2 and Scope 3 emissions reporting responsibilities for IT operations in enterprise (owned), colocation and public cloud data centers under GHG Protocol Corporate Accounting and Reporting Standards.

Table 1 Emissions Scope assignments for IT operations in different data center types
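The assignments described above reduce to a small lookup. This sketch encodes Uptime's recommended classification for IT energy emissions; the key and value names are illustrative, not a reproduction of Table 1:

```python
# Scope assignments for IT-related energy emissions, per the reasoning in the
# text. Keys: (data_center_type, party). Structure is illustrative only.
SCOPE_ASSIGNMENTS = {
    ("enterprise", "it_operator"): "Scope 2",    # owns and runs IT and facility
    ("colocation", "it_operator"): "Scope 2",    # controls the IT equipment
    ("colocation", "colo_operator"): "Scope 3",  # tenant IT energy is indirect
    ("cloud", "cloud_operator"): "Scope 2",      # owns IT and facility
    ("cloud", "it_operator"): "Scope 3",         # customer's use is indirect
}

def emissions_scope(dc_type: str, party: str) -> str:
    """Return the emissions Scope for IT energy use in a given setting."""
    return SCOPE_ASSIGNMENTS[(dc_type, party)]

print(emissions_scope("colocation", "it_operator"))    # Scope 2
print(emissions_scope("colocation", "colo_operator"))  # Scope 3
```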

In collaboration with colocation and IT operators, Business for Social Responsibility (a sustainable business network and consultancy) published initial guidance on emissions accounting in 2017: GHG Emissions Accounting, Renewable Energy Purchases, and Zero-Carbon Accounting: Issues and Considerations for the Colocation Data Center Industry. This guidance did not, however, take a position on the assignment of Scope 2 and 3 emissions in colocation operations, leaving the decision to individual colocation operators.

In practice, different operators use two different accounting criteria. Equinix, for example, accounts for all energy use and emissions as Scope 2, with emissions effectively passed to tenants as Scope 3. NTT follows the approach (also recommended by Uptime) that GHG emissions from the energy use of IT operations in a colocation facility should be classified as Scope 2 for the IT operator and Scope 3 for the colocation operator.

The use of two different accounting criteria creates confusion and makes the comparison and understanding of emissions reports across the data center industry difficult. The industry needs to settle on a single accounting methodology for emissions reporting.

The GHG Protocol Corporate Accounting and Reporting Standards are likely to be cited as governing the classification of Scope 1, 2 and 3 emissions under legal mandates such as the CSRD and the proposed SEC climate disclosure requirements. Uptime recommends that colocation operators and their tenants conform to the GHG Protocol to meet these legal requirements.

Public cloud operations

Emissions accounting for IT operations in a public-cloud facility is straightforward: all emissions are Scope 2 for the cloud operator (since they own and operate the IT and facilities infrastructure) and Scope 3 for the IT operator (customer).

A cloud operation in a colocation facility adds another layer of allocation. Public-cloud IT energy use should be accounted for as Scope 3 by the colocation operator and Scope 2 by the cloud operator, with facility infrastructure-related emissions accounted for as Scope 2 and Scope 3 by each entity respectively. This represents no change for the IT operator: all emissions associated with its cloud-based applications and data — regardless where that cloud footprint exists — will be accounted for as Scope 3.

IT operators report that they have difficulty obtaining energy-use and emissions information from their cloud providers. The larger cloud operators, and several of the large colocation providers, typically claim that there are zero emissions associated with operations at their facilities because they are carbon-neutral on account of buying renewable energy and carbon offsets. The same providers are typically unable or unwilling to provide more detailed information — making compliance with legally mandated reporting requirements difficult for IT operators.

If IT operators are to comply with forthcoming disclosure obligations they will, in accordance with the GHG Protocol, need data on their energy use and their location-based (grid power mix) and market-based (contractual mix) emissions. They will also need more granular information on renewable energy consumption and the application of renewable energy certificates (RECs) in offsetting grid power use and the associated emissions if they are to fully understand the underlying details.

Required cloud and colocation provider sustainability data

With new sustainability reporting regulations due to take effect in the medium term, IT operators will clearly need more detailed data on energy and emissions from their infrastructure providers, both to meet their compliance responsibilities and to assess the total environmental impact of their operations. Colocation and cloud services providers (and others providing hosting and various IT infrastructure services) will be expected to provide the data listed below — ideally as a condition of any service contract. This data will provide the information necessary to complete TCFD climate disclosures, as well as the IT operator’s sustainability report. Additional data may need to be added to this list to address specific local reporting or operating-efficiency mandates.

Data-transfer requirements for colocation and cloud services contracts should facilitate the annual reporting of operational data including:

  • IT power consumption as reported through the operator-specific meter.
  • 12-month average PUE for the space supporting the racks.
  • Quantity of waste heat recovered and reused.
  • Total-facility electricity consumption (over the year).
  • The percentage of each type of generation supplying electricity to the facility (e.g., coal, natural gas, wind, solar, biomass).
  • Quantity of renewable energy consumed by the facility (megawatt hours, MWh).
  • MWh of RECs / grid offsets (guarantees of origin, GOs) matched to grid purchases to claim renewable energy “use” and / or to offset grid emissions (to include generation type(s) and the avoided emissions value for each group of RECs or GOs used to match grid electricity consumption).
  • Percentage of renewable energy “used” (consumed and matched) at the facility as declared by the supplier.
  • Reported location-based emissions for the facility (metric tons of CO2, MT CO2).
  • Reported market-based emissions for the facility (MT CO2).
  • Average annual emissions factor of electricity supplied (by utility, energy retailer or country or grid region) (MT CO2/MWh).
  • Total-facility water consumption (over the year).

Note: GHG emissions values should be reported for the facility’s total fuel and electricity consumption. Scope 1 emissions should include any refrigerant emissions (fugitive or failure).

Energy-use data should be requested monthly or quarterly (recognizing that data reports from service providers will typically lag by one to three months) to allow tracking of power consumption, emissions metrics and objectives throughout the year. Current-year emissions can be estimated using the previous year’s emissions factor for electrical consumption at a facility.
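The two accounting bases, and the prior-year estimation method described above, reduce to simple arithmetic. This sketch uses invented figures and makes the common simplifying assumption that REC-matched consumption carries zero market-based emissions:

```python
# Location-based emissions apply the grid's average emissions factor to all
# electricity consumed; market-based emissions subtract consumption matched
# by RECs/GOs first (assumed here to have a zero emissions factor).
grid_mwh = 12_000.0  # annual facility electricity purchased from the grid
rec_mwh = 8_000.0    # MWh matched by RECs / guarantees of origin
grid_ef = 0.38       # metric tons CO2 per MWh (prior-year grid factor)

location_based = grid_mwh * grid_ef
market_based = max(grid_mwh - rec_mwh, 0.0) * grid_ef

print(f"Location-based: {location_based:,.0f} MT CO2")  # 4,560
print(f"Market-based:   {market_based:,.0f} MT CO2")    # 1,520

# Current-year estimate from a partial year, per the text: apply the
# previous year's emissions factor to this year's metered consumption.
ytd_mwh = 5_500.0
estimated_ytd_emissions = ytd_mwh * grid_ef
print(f"YTD estimate:   {estimated_ytd_emissions:,.0f} MT CO2")  # 2,090
```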

Mandated reporting will typically require data to be submitted in March, following the end of the reporting year. Service agreements should therefore require this data to be delivered to clients by February.

Colocation and cloud-service providers need to develop methodologies to provide energy-use and location- and market-based emissions estimates to their clients. Colocation providers should install metering to measure tenants’ IT power consumption, simplifying the reporting of allocated energy use and emissions. Cloud providers have several approaches available for measuring or estimating energy use: algorithms can be created that use IT-system and equipment power and utilization data (with tracking capabilities and / or knowledge of the power-use characteristics of the deployed IT equipment configurations) to estimate a customer’s energy use and associated location-based emissions.
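One plausible shape for such an estimation algorithm is a linear power model between idle and maximum server draw, grossed up by PUE and multiplied by a grid emissions factor. All figures below are hypothetical, and no provider's actual methodology is implied:

```python
# Hedged sketch of a customer energy/emissions estimator. All power figures
# and factors are invented for illustration.
IDLE_W = 120.0   # server idle power draw (watts), hypothetical
MAX_W = 400.0    # server power at full utilization (watts), hypothetical
PUE = 1.4        # facility power usage effectiveness
GRID_EF = 0.38   # metric tons CO2 per MWh, location-based

def customer_emissions(avg_utilization: float, server_count: int,
                       hours: float) -> tuple[float, float]:
    """Estimate (MWh, MT CO2) for a customer's share of IT capacity.

    Uses a linear power model between idle and max draw -- a common
    simplification, not any provider's actual methodology.
    """
    watts_per_server = IDLE_W + (MAX_W - IDLE_W) * avg_utilization
    it_mwh = watts_per_server * server_count * hours / 1_000_000
    facility_mwh = it_mwh * PUE  # gross up for cooling and distribution losses
    return facility_mwh, facility_mwh * GRID_EF

mwh, mt_co2 = customer_emissions(avg_utilization=0.35, server_count=200,
                                 hours=8760)
print(f"Estimated energy:    {mwh:.1f} MWh")
print(f"Estimated emissions: {mt_co2:.1f} MT CO2")
```

Whatever model a provider chooses, the inputs (utilization tracking, hardware power characteristics, PUE, emissions factors) and the calculation itself should be transparent to customers, as the text argues below.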

Any calculation methodology should be transparent for customers. Cloud providers will need to choose a methodology that fits with their data collection capabilities and start providing data to their customers as soon as possible.

IT operators need to obtain information on the RECs, GOs and carbon offsets applied to the overall energy use at each facility at which they operate. This data will allow IT operators to validate the actual emissions associated with the energy consumed by the data center, as well as claims regarding renewable energy use and GHG emissions reductions. IT operators will need to exercise due diligence to ensure that data is accurately reported, and will need to match the service provider’s data to the operator’s chosen sustainability metrics.

Data required from IT tenants at colocation facilities

Colocation operators may require operational information from their tenants. The proposed EED recast is likely to require colocation operators to report specific IT operational data — which will have to be supplied by their tenants. At a minimum, colocation operators need to incorporate a clause into their standard contracts requiring tenants to provide legally mandated operational data. Contract language can be made more specific to the facilities covered by the forthcoming mandates as new regulations are promulgated.

Conclusion

The reporting of data center energy use and GHG emissions is undergoing a major transition — from a voluntary effort subject to limited scrutiny to legally mandated reporting requiring third-party assurance. These legal requirements can extend to smaller enterprise and colocation operators: the EED recast, for example, will apply to operations with just 100 kW of installed IT equipment power. These forthcoming requirements will require IT operators to take responsibility for their operations across all data-center categories — owned, colocation and cloud.

This new regulatory environment will mean digital infrastructure managers will now have to facilitate collaboration between their facilities teams, IT teams and data center service providers to create a coherent sustainability strategy across their operations. Processes will need to be created to generate, collect and report the data and metrics needed to comply with these requirements. At the industry level, standards need to be developed to create a consistent framework for data and metrics reporting.  

These efforts need to be undertaken with some urgency since many of these new reporting obligations will take effect from the 2023 or 2024 operating year.