Data center costs set to rise and rise

Up until two years ago, the cost of building and operating data centers had been falling reasonably steeply. Improving technology, greater production volumes as the industry expanded and consolidated, large-scale builds, prefabricated and modular construction techniques, stable energy prices and a low cost of capital all played a part. While labor costs rose during this time, better management, processes and automation helped to prevent spiraling wage bills.

The past two years, however, have seen these trends come to a halt. Ongoing supply chain issues and rising labor, energy and capital costs are all set to make building and running data centers more expensive in 2023 and beyond.

But the impact of these cost increases — affecting IT as well as facilities — will be muted due to the durable growth of the data center industry, fueled by global digitization and the overwhelming appetite for more IT. In response, most large data center operators (and data center capacity buyers) are continuing to move forward with expansion projects and taking on more space.

Small and medium-sized data center operators that lack the resources to weather higher costs, however, are likely to find this environment particularly challenging, with some smaller colocation operators (and enterprise data centers) struggling to remain competitive. Increasing overhead costs arising from new regulatory requirements and climbing interest rates will further challenge some operators, but an immediate rush to the public cloud is unlikely since this strategy, too, carries non-trivial (and often high) costs.

Capital costs

Capital costs play a major part in data center life-cycle costs. Capital has been both cheap and readily available to data center builders for more than a decade, but the market changed in 2022. Countries that are home to major data center markets, or to major companies that own and build data centers, are now facing decades-high inflation rates (see Table 1), making it more difficult and more expensive to raise capital. Yet demand for capacity continues to grow, partly because of pent-up demand caused by construction bottlenecks during the COVID-19 pandemic and, more recently, by permitting and energy supply problems, and the most active and best-positioned operators are still funding their capacity expansion.

Table 1 Inflation rates as at September 2022

Uptime Institute’s Data Center and IT Spending Survey 2022 shows that more than two-thirds of enterprise and colocation operators expect to spend more on their data centers in 2023. Most enterprise data center operators (90%) say they will be adding IT or data center capacity over the next two to three years, with half expecting to construct new facilities (although they may be closing down others).

The recent rise in construction costs may have come as a shock to some. Data center construction costs and lead-times had improved significantly in the 2010s, but we are now seeing a reversal of this trend. An average Tier III enterprise data center (a technical facility with concurrently maintainable site infrastructure) would have cost approximately $12 million per megawatt (MW) in 2010 per Uptime’s estimates (not including land and civil works) and would have taken up to two years to build.

Changes in design and construction had resulted in these costs dropping — in the best cases, to as little as $6 to $8 million per MW immediately before the COVID-19 pandemic, with lead-times cut to less than 12 months. While Uptime has not verified these claims, some projects were reported to have been budgeted at less than $4 million per MW and taken just six months to complete.

The view today is markedly different. Long waiting times for some significant components (such as certain engine generators and centralized UPS systems) are driving up prices. By 2022, costs for Tier III specifications had risen by $1 million to $2 million per MW according to Uptime’s estimates. Lead-times can now reach or exceed 12 months, prolonging capacity expansion and refurbishment projects — and sometimes preventing operators from earning revenue from near complete facilities.
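To put these per-MW estimates in project terms, the short Python sketch below computes indicative budget ranges for a hypothetical 10 MW Tier III build, excluding land and civil works. The figures are the rough estimates cited above rather than a costing model.

```python
# Illustrative only: rough construction budget range for a Tier III build,
# using the per-MW cost estimates discussed above (land and civil works excluded).

def build_cost_range(capacity_mw: float, cost_per_mw_low: float, cost_per_mw_high: float) -> tuple:
    """Return (low, high) construction cost estimates in US dollars."""
    return capacity_mw * cost_per_mw_low, capacity_mw * cost_per_mw_high

CAPACITY_MW = 10  # hypothetical facility size

# Pre-pandemic best cases: ~$6M-$8M per MW; 2022: roughly $1M-$2M per MW higher.
pre_pandemic = build_cost_range(CAPACITY_MW, 6e6, 8e6)
estimate_2022 = build_cost_range(CAPACITY_MW, 7e6, 10e6)

print(f"Pre-pandemic estimate: ${pre_pandemic[0]/1e6:.0f}M to ${pre_pandemic[1]/1e6:.0f}M")
print(f"2022 estimate:         ${estimate_2022[0]/1e6:.0f}M to ${estimate_2022[1]/1e6:.0f}M")
```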

While prices for some construction materials have started to stabilize, at elevated levels, following the COVID-19 pandemic, overall construction costs are expected to increase further in 2023. Product shortages, together with higher prices for labor, semiconductors and power, are all having an inflationary effect across the industry. Concurrently, site acquisitions at major data center hubs with low-latency network connections now come at a premium, as popular data center locations run out of suitable land and power.

Uptime Institute’s Supply Chain Survey 2022 shows computer room cooling units, UPS systems and power distribution components to be the data center equipment most severely impacted by shortages. Of the 678 respondents to this survey, 80% said suppliers had increased their prices over the past 18 months. Notably, Li-ion battery prices, which had been trending downwards every year until 2021, increased in 2022 due to shortages of raw materials coupled with high demand.

More stringent sustainability requirements, too, contribute to higher capital costs. Regulations in some major data center hubs (such as Amsterdam and Singapore) mean only developments with highly energy efficient designs can move forward. But meeting these requirements will come at a cost (engineering fees, structural changes, different cooling systems), raising the barriers to entry. New energy efficiency standards (as stipulated under the EC’s Energy Efficiency Directive recast, for example) will stress budgets still further (see Critical regulation: the EU Energy Efficiency Directive recast).

Operators are looking to recover the cost of sustainability requirements through efficiency gains. Surging power costs, which are likely to remain high in the coming years, now mean the calculation has shifted in favor of more aggressive energy optimization — but upfront capital requirements will often be higher.

Operating and IT costs

The operating expenditures associated with data centers and IT infrastructure are also set to increase in 2023, due to steep rises in major input costs. Uptime Institute’s Data Center and IT Spending Survey 2022 showed power to be driving the greatest unit cost increases for most operators (see Figure 1) — the result of high gas prices, the transition to renewable energy, imbalances in grid supply and the war in Ukraine.

The UK and the EU have been most affected by these increases, with some colocation operators passing on significant increases in energy costs to their customers. While energy prices are expected to drop (at least against the record highs of 2022), they are likely to remain well above the average levels of the past two decades.

Figure 1 Enterprise data centers most impacted by IT hardware costs

Second only to power, IT hardware showed the next greatest increase in unit costs for enterprise data center respondents, partly because of various dislocations in the hardware supply chain, shortages of some processors and switching silicon, and inflation. Demand for IT hardware has continued to outpace supply, and manufacturing backlogs resulting from the COVID-19 pandemic have yet to catch up.

Uptime sees promising signs of improvements in data center hardware supply, largely due to a recent sag in global demand (caused by economic headwinds and IT investment cycles). As a result, prices and lead-times for generic IT hardware (with some exceptions) will likely moderate in the first half of 2023.

If history is any guide, demand for data center IT will rise again some time in 2023 once some major IT infrastructure buyers accelerate their capacity expansion, which will yet again lead to tightness in the supply of select hardware later in the year.

Staffing will also play a major role in the increased cost of running data centers, and is likely to continue to impact the industry beyond 2023. Many operators say they are spending more on labor costs in a bid to retain current staff (see Figure 2). This presents a further challenge for those enterprises that are unable to match salary offers made by some of the booming tech giants.

Figure 2 Labor spending driven by staff retention initiatives

The aggregate view is clear: the overall costs of building and running data centers are set to rise significantly over the next few years. While businesses can deploy various strategies and technologies — such as automation, energy efficiency and tactical migration to the cloud — to reduce operational costs, these are likely to entail capital investment, new skills and technical complexity.

Will rising data center costs drive more operators towards colocation or the cloud? It seems unlikely that higher on-premises costs will cause greater migration per se. Results from Uptime Institute’s Data Center and IT Spending Survey 2022 show that despite increasing costs, many operators find that keeping workloads on-premises is still cheaper than colocation (54%, n=96) or migrating to the cloud (64%, n=84).

Estimating the costs of each of these options, however, is difficult in a rapidly changing market, in which some costs are opaque. Given the high costs associated with migrating to the cloud, it is likely to be cheaper for enterprises to endure higher construction and refurbishment costs in the near term and benefit from lower operating costs over the longer term. Not all companies will be able to capitalize on this strategy, however.

Those larger organizations with the financial resources to benefit from economies of scale, with the ability to raise capital more easily and with sufficient purchasing power to leverage suppliers, are likely to have lower costs compared with smaller companies (and most enterprise data centers). Given their scale, however, they are still likely to face higher costs elsewhere, such as sustainability reporting and calls for proving — and improving — their infrastructure resiliency and security.

The full report Five data center predictions for 2023 is available to download here.



Max Smolaks

Douglas Donnellan

High costs drive cloud repatriation, but impact is overstated

Unexpected costs are driving some data-heavy and legacy applications back from public-cloud to on-premises locations. However, very few organizations are moving away from the public cloud strategically — let alone altogether.

The past decade has seen numerous reports of so-called cloud “repatriations” — the migration of applications back to on-premises venues following negative experiences with, or unsuccessful migrations to, the public cloud. These reports have been cited by some colocation providers and private-cloud vendors as evidence of the public cloud’s failures, particularly concerning cost and performance.

Cloud-storage vendor Dropbox brought attention to this issue after migrating from Amazon Web Services (AWS) in 2017. Documents submitted to the US Securities and Exchange Commission suggest the company saved an estimated $75 million over the following two years as a result. Software vendor 37signals also made headlines after moving its project management platform Basecamp and email service Hey from AWS and Google Cloud to a colocation facility.

Responses to Uptime Institute’s 2022 Data Center Capacity Trends Survey also indicated that some applications are moving back from the public cloud. One-third (33%) of respondents said their organizations had moved production applications from a public-cloud provider to a colocation facility or data center on a permanent basis (Figure 1). The terms “permanently” and “production” were included in this survey question specifically to ensure that respondents did not consider applications being moved between venues due to application development processes or redistribution across hybrid-cloud deployments.

Figure 1 Many organizations are moving some applications out of public cloud

Poor planning for scale is driving repatriation

Respondents to Uptime Institute’s 2022 Data Center Capacity Trends Survey cited cost as the biggest driver behind migration back to on-premises facilities (Figure 2).

Figure 2 Unexpected costs are driving repatriation

Why are costs greater than expected?

Data is often described as having “gravity” — meaning the greater the amount of data stored in a system, the more data (and, very often, software applications) it will attract over time. This growth is logical in light of two major drivers: data accumulation and storage economics. Most users and applications accumulate more data automatically (and perhaps inadvertently) over time: cleaning and deleting data, on the other hand, is a far more manual and onerous (and, therefore, costly) task. At the same time, the economics of data storage promote centralization, largely driven by strong scale efficiencies arising from better storage management. Dropbox’s data-storage needs were always going to grow over time because it aggregated large volumes of consumer and business users — with each gradually storing more data with the service.

A key benefit of cloud computing is scalability — not just upwards during periods of high demand (to meet performance requirements), but also downwards when demand is low (to reduce expenditure). Because its data has gravity, Dropbox cannot easily shrink its capacity, and so cannot easily reduce costs by scaling back resources. Moreover, as a collection of private file repositories, Dropbox does not benefit from other cloud services (such as web-scale databases, machine learning or Internet of Things technologies) that might use this data. Dropbox needs ever-growing storage capacity — and very little else — from a cloud provider. At Dropbox’s level of scale, the company would inevitably save money by buying storage servers as required and adding them to its data centers.
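To make the economics concrete, the short Python sketch below compares cumulative spending on rented cloud storage with owning storage hardware for a data-heavy service. All prices, volumes and the cost model itself are illustrative assumptions, not Dropbox’s or any provider’s actual figures.

```python
# Illustrative sketch of the "data gravity" economics discussed above.
# All prices are placeholder assumptions, not actual cloud or hardware pricing.

def cloud_storage_cost(tb_stored: float, price_per_tb_month: float, months: int) -> float:
    """Cumulative cost of renting storage capacity month by month."""
    return tb_stored * price_per_tb_month * months

def owned_storage_cost(tb_stored: float, capex_per_tb: float,
                       opex_per_tb_month: float, months: int) -> float:
    """Up-front hardware purchase plus ongoing power/space/staff overhead."""
    return tb_stored * capex_per_tb + tb_stored * opex_per_tb_month * months

TB = 100_000          # a data-heavy service holding 100 PB
MONTHS = 36           # three-year horizon

cloud = cloud_storage_cost(TB, price_per_tb_month=20.0, months=MONTHS)
owned = owned_storage_cost(TB, capex_per_tb=100.0, opex_per_tb_month=5.0, months=MONTHS)

print(f"Cloud (rented):  ${cloud/1e6:.1f}M over {MONTHS} months")
print(f"Owned hardware:  ${owned/1e6:.1f}M over {MONTHS} months")
# Because the data only grows (it has "gravity"), the rental cost keeps scaling
# with capacity, while owned hardware amortizes its capex over time.
```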

Does this mean all data-heavy customers should avoid the public cloud?

No. Business value may be derived from cloud services that use this growing data as a source, and this value often justifies the expense of storing data in the cloud. For example, a colossal database of DNA sequences might create a significant monthly spend. But if a cloud analytics service (one that would otherwise be time consuming and costly to deploy privately) could use this data source to help create new drugs or treatments, the price would probably be worth paying.

Many companies will not have the scale of Dropbox to make on-premises infrastructure cost efficient in comparison with the public cloud. Companies with only a few servers’ worth of storage might not have the appetite (or the staff) to manage storage servers and data centers when they could, alternatively, upload data to the public cloud. However, the ever-growing cost of storage is by no means trivial, even for some smaller companies: 37signals’ main reason for leaving the public cloud was the cost of data storage — which the company stated was over $500,000 per year.

Other migrations away from the public cloud may be due to “lifting and shifting” existing applications (from on-premises environments to the public cloud) without rearchitecting them to be scalable. An application that can neither grow to meet demand, nor shrink to reduce costs, rarely benefits from deployment on the public cloud (see Cloud scalability and resiliency from first principles). According to Uptime Institute’s 2022 Data Center Capacity Trends Survey, the largest share of applications (41%) migrated back to on-premises infrastructure were existing applications that had previously been lifted and shifted to the public cloud.
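A simple way to see why unscaled, lifted-and-shifted applications disappoint on cost is to compare an always-on deployment sized for peak demand with one that scales with load. The sketch below uses an assumed hourly price and a hypothetical 24-hour demand profile; both are illustrative, not measured figures.

```python
# Illustrative comparison of a "lifted and shifted" fixed-size deployment
# versus an application rearchitected to scale with demand.
# Instance counts, prices and the demand curve are assumptions for illustration.

HOURLY_PRICE = 0.50          # assumed price per instance-hour
PEAK_INSTANCES = 20          # capacity needed at the busiest hour

# A simple 24-hour demand profile: quiet overnight, busy during the day.
demand = [2] * 8 + [20] * 10 + [6] * 6   # instances actually needed per hour

fixed_cost = PEAK_INSTANCES * HOURLY_PRICE * len(demand)   # always provisioned for peak
scaled_cost = sum(demand) * HOURLY_PRICE                   # pay only for what is used

print(f"Fixed (lift-and-shift) daily cost: ${fixed_cost:.2f}")
print(f"Autoscaled daily cost:             ${scaled_cost:.2f}")
# An application that cannot shrink when demand is low pays the peak price all day,
# which is one reason lifted-and-shifted workloads often cost more than expected.
```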

The extent of repatriation is exaggerated

Since Dropbox’s migration, many analyses of cloud repatriation (and the associated commentary) have assumed an all-or-nothing approach to public cloud, forgetting that a mixed approach is a viable option. Organizations have many applications. Some applications can be migrated to the public cloud and perform as expected at an affordable price; others may be less successful. Just because 34% of respondents have migrated some applications back from the public cloud, it does not necessarily mean the public cloud has universally failed at those organizations. Nor does it suggest that the public cloud is not a viable model for all applications.

Only 6% of respondents to Uptime Institute’s 2022 Data Center Capacity Trends Survey stated that repatriation had led them to abandon the public cloud altogether (Figure 3). Some 23% indicated that repatriation had no impact on public-cloud usage, with 59% indicating that cloud usage had been somewhat reduced by repatriation.

Figure 3 Overall impacts of moving from public cloud

The low number of respondents abandoning the public cloud suggests most are pursuing a hybrid approach, involving both on-premises and public-cloud venues. These venues don’t necessarily work together as an integrated platform. Hybrid IT here refers to an open-minded strategy regarding which venue best suits each application’s requirements.

Conclusion

Some applications are moving back to on-premises locations from the public cloud, with unexpected costs being the most significant driver here. These applications are likely to be slow-growing, data-heavy applications that don’t benefit from other cloud services, or applications that have been lifted and shifted without being refactored for scalability (upwards or downwards). The impact of repatriation on public-cloud adoption is, however, moderate at most. Some applications are moving away from the public cloud, but very few organizations are abandoning the public cloud altogether. Hybrid IT — at both on-premises and cloud venues — is the standard approach. Organizations need to thoroughly analyze the costs, risks and benefits of migrating to the public cloud before they move — not in retrospect.

Too hot to handle? Operators to struggle with new chips

Standard IT hardware was a boon for data centers: for almost two decades, mainstream servers have had relatively constant power and cooling requirements. This technical stability moored the planning and design of facilities (for both new builds and retrofits) and has helped attract investment in data center capacity and technical innovation. Furthermore, many organizations are operating data centers near or beyond their design lifespan because, at least in part, they have been able to accommodate several IT refreshes without major facility upgrades.

This stability has helped data center designers and planners. Data center developers could confidently plan for design power averaging between 4 kilowatts (kW) and 6 kW per rack, while (in specifying thermal management criteria) following US industry body ASHRAE’s climatic guidelines. This maturity and consistency in data center power density and cooling standards has, of course, been dependent on stable, predictable power consumption by processors and other server components.

The rapid rise in IT power density, however, now means that plausible design assumptions regarding future power density and environmental conditions are starting to depart from these standard, narrow ranges.

This increases technical and business risks, particularly because future scenarios may diverge widely. The business costs of incorrect design assumptions can be significant: be too conservative (i.e., retain low-density approaches) and a data center may quickly become limited or even obsolete; be too technically aggressive (i.e., assume or predict highly densified racks and heat reuse) and there is a risk of significant overspend on underutilized capacity and capabilities.

Facilities built today need to remain economically competitive and technically capable for 10 to 15 years. This means designers must make certain assumptions speculatively, without knowing the future specifications of IT racks. As a result, engineers and decision-makers need to grapple with the uncertainty surrounding data center technical requirements for the second half of the 2020s and beyond.

Server heat turns to high

Driven by the rising power demand of IT silicon, server power and, in turn, typical rack power are both escalating. Extreme-density racks are also increasingly prevalent in technical computing, high-performance analytics and artificial intelligence training. New builds and retrofits will be more difficult to optimize for future generations of IT.

While server heat output remained relatively modest, it was possible to establish industry standards around air cooling. ASHRAE’s initial recommendations on supply temperature and humidity ranges (in 2004, almost 20 years ago) met the needs and risk appetites of most operators. ASHRAE subsequently encouraged incrementally wider ranges, helping drive industry gains in facilities’ energy efficiency.

Uptime Institute research shows a trend of consistent, if modest, increases in rack power density over the past decade. Contrary to some (aggressive) expectations, the typical rack remains under 10 kW. This long-running trend has picked up pace more recently, and Uptime expects it to accelerate further. The uptick in rack power density is not exclusively due to more heavily loaded racks. It is also due to greater power consumption per server, which is being driven primarily by the mass-market emergence of higher-powered server processors that are attractive for their performance and, if well utilized, their often superior energy efficiency (Figure 1).

Figure 1 Server power consumption on a steep climb

This trend will soon reach a point when it starts to destabilize existing facility design assumptions. As semiconductor technology slowly — but surely — approaches its physical limits, there will be major consequences for both power delivery and thermal management (see Silicon heatwave: the looming change in data center climates).

“Hotter” processors are already a reality. Intel’s latest server processor series, expected to be generally available from January 2023, achieves thermal design power (TDP) ratings as high as 350 watts (W) — with optional configuration to more than 400 W should the server owner seek ultimate performance (compared with 120 W to 150 W only 10 years ago). Product roadmaps call for 500 W to 600 W TDP processors in a few years. This will result in mainstream “workhorse” servers approaching or exceeding 1 kW in power consumption each — an escalation that will strain not only cooling, but also power delivery within the server chassis.
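A rough power budget shows how processor TDPs in this range push whole servers towards 1 kW. The Python sketch below uses the processor figures cited above; the memory, drive, fan and power-supply assumptions are our own illustrative values, not vendor specifications.

```python
# Back-of-the-envelope power budget for a hypothetical dual-socket server,
# using processor TDPs in the range discussed above. Non-CPU figures are
# rough assumptions for illustration, not vendor specifications.

def server_power_estimate(cpu_tdp_w: float, sockets: int = 2,
                          dimms: int = 16, dimm_w: float = 5.0,
                          drives_w: float = 40.0, nics_fans_misc_w: float = 120.0,
                          psu_efficiency: float = 0.94) -> float:
    """Estimate wall-power draw of a fully loaded server in watts."""
    component_power = cpu_tdp_w * sockets + dimms * dimm_w + drives_w + nics_fans_misc_w
    return component_power / psu_efficiency   # account for power-supply losses

for tdp in (150, 350, 500):   # roughly 2012-era, current, and roadmap-level CPU TDPs
    print(f"CPU TDP {tdp} W -> ~{server_power_estimate(tdp):.0f} W per server")
```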

Servers for high-performance computing (HPC) applications can act as an early warning of the cooling challenges that mainstream servers will face as their power consumption rises. ASHRAE, in a 2021 update, defined a new thermal standard (Class H1) for high-density servers requiring restricted air supply temperatures (of up to 22°C / 71.6°F) to allow for sufficient cooling, adding a cooling overhead that will worsen energy consumption and power usage effectiveness (PUE). This is largely because of the number of tightly integrated, high-power components. HPC accelerators, such as graphics processing units, can use hundreds of watts each at peak power — in addition to server processors, memory modules and other electronics.
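The PUE penalty follows directly from the definition of the metric (total facility energy divided by IT energy): any extra cooling power needed to hold lower supply temperatures inflates the numerator. The sketch below illustrates this with assumed overhead fractions rather than measured data.

```python
# Illustrative effect of extra cooling overhead on PUE (total facility energy
# divided by IT energy). The overhead fractions are assumptions, not measurements.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

IT_LOAD_KW = 1000.0
OTHER_KW = 80.0               # power distribution losses, lighting, etc. (assumed)

baseline_cooling = 0.25 * IT_LOAD_KW     # assumed cooling share at wider temperature ranges
restricted_cooling = 0.35 * IT_LOAD_KW   # assumed share when supply air must be held cooler

print(f"Baseline PUE:   {pue(IT_LOAD_KW, baseline_cooling, OTHER_KW):.2f}")
print(f"Restricted PUE: {pue(IT_LOAD_KW, restricted_cooling, OTHER_KW):.2f}")
```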

The coming years will see more mainstream servers requiring similar restrictions, even without accelerators or densification. In addition to processor heat output, cooling is also constrained by markedly lower limits on processor case temperatures — e.g., 55°C, down from a typical 80°C to 82°C — for a growing number of models. Other types of data center chips, such as computing accelerators and high-performance switching silicon, are likely to follow suit. This is the key problem: removing greater volumes of lower-temperature heat is thermodynamically challenging.

Data centers strike the balance

Increasing power density may prove difficult at many existing facilities. Power or cooling capacity may be limited by budgetary or facility constraints — and upgrades may be needed for live electrical systems such as UPS, batteries, switchgear and generators. This is expensive and carries operational risks. Without it, however, more powerful IT hardware will result in considerable stranded space. In a few years, the total power of just a few servers will exceed 5 kW, and a quarter-rack of richly configured servers can reach 10 kW if concurrently stressed.

Starting with a clean sheet, designers can optimize new data centers for a significantly denser IT configuration. There is, however, a business risk in overspending on costly electrical gear, unless this is managed by designing for flexible power capacity and technical space (e.g., prefabricated modular infrastructure). Power requirements for the next 10 to 15 years are still too far ahead to be forecast with confidence. Major chipmakers are ready to offer technological guidance covering the next three to five years, at most. Will typical IT racks reach average power capacities of 10 kW, 20 kW or even 30 kW by the end of the decade? What will be the highest power densities a new data center will be expected to handle? Today, even the best informed can only speculate.

Thermal management is becoming tricky too. There are multiple intricacies inherent in any future cooling strategy. Many “legacy” facilities are limited in their ability to supply the necessary air flow to cool high-density IT. Moreover, the restricted temperatures typically needed by (or preferable for) high-density racks and upcoming next-generation servers demand higher cooling power, at the risk of otherwise losing IT performance (modern silicon throttles itself when it exceeds temperature limits). To this end, ASHRAE recommends dedicated low-temperature areas to minimize the hit on facilities’ energy efficiency.

A growing number of data center operators will consider support for direct liquid cooling (DLC), often as a retrofit. Although DLC engineering and operations practices have matured, and now offer a wider array of options (cold plates or immersion) than ever before, its deployment will come with its own challenges. A current lack of standardization raises fears of vendor lock-in and supply-chain constraints for key parts, as well as a reduced choice in server configurations. In addition, large parts of enterprise IT infrastructure (chiefly storage systems and networking equipment) cannot currently be liquid-cooled.

Although IT vendors are offering (and will continue to offer) more server models with integrated DLC systems, this approach requires bulk buying of IT hardware. For facilities’ management teams, this will lead to technical fragmentation involving multiple DLC vendors, each with its own set of requirements. Data center designers and operations teams will have to plan not only for mixed-density workloads, but also for a more diverse technical environment. The finer details of DLC system maintenance procedures, particularly for immersion-type systems, will be unfamiliar to some data center staff, highlighting the importance of training and codified procedure over muscle memory. The propensity for human error can only increase in such an environment.

The coming changes in data center IT will be powerful. Semiconductor physics is, fundamentally, the key factor behind this dynamic but infrastructure economics is driving it: more powerful chips tend to help deliver infrastructure efficiency gains and, through the applications they run, generate more business value. In a time of technological flux, data center operators will find there are multiple opportunities for gaining an edge over peers and competitors — but not without a level of risk. Going forward, adaptability is key.




Jacqueline Davis, Research Analyst, Uptime Institute

Max Smolaks, Research Analyst, Uptime Institute

Geopolitics deepens supply chain worries

The COVID-19 pandemic — and the subsequent disruption to supply chains — demonstrated the data center industry’s reliance on interdependent global markets and the components they produce. Although the data center sector was just one of many industries affected, the extensive variety of the often complex electrical and mechanical equipment involved exacerbated supply chain problems.

Engine generators illustrate the problem: they typically comprise hundreds of constituent parts shipped from at least a dozen countries spanning North America, Europe and Asia. Shortages of seemingly ordinary components, such as voltage regulators, air filters, valves or battery terminals, can lead to major delays in delivery. Even when the production of data center equipment (such as lead-acid batteries and fiber optic cables) is relatively localized, pricing and availability will be subject to changing dynamics in global markets.

The end of the pandemic does not mean a return to the normality of previous years, as strong pent-up demand, higher costs and abnormally long lead-times persist.

The Uptime Institute Supply Chain Survey 2022 illustrates the extent of the problem, with one in five operators reporting major delays or disruption to their procurement over the previous 18 months. Unsurprisingly, satisfaction with vendors’ supply chain management has nosedived: nearly half of respondents are unhappy with at least some of their suppliers. The availability of computer room cooling units and major electrical equipment — specifically, UPS, engine generators and switchgear — appears to be the greatest pain point as of the end of 2022. Larger operators (superior purchasing power notwithstanding) are apparently bearing the brunt of supply problems.

A total of 40% of operators responding to the survey confirmed they were investigating additional sources of supply in response to these issues. A similar number reported increasing their inventories of parts and materials to safeguard maintenance schedules and operational resilience. Vendors, too, have taken similar measures to address shortages and delays. Supply appears to be improving as of the second half of 2022, with more than half of operators reporting improvements, albeit slow improvements for most (Figure 1).

Figure 1 Operators see slow improvements in the data center supply chain

Rising geopolitical tensions generate risks

Crucially, geopolitical dynamics — specifically between the US-led Western alliance, China and, to a lesser degree, Russia — are giving rise to additional threats. Even with more diversified supplies, higher inventory targets and a build-up of muscle memory, the data center industry remains particularly exposed to the threats posed by current geopolitical trajectories.

The profile of these emerging geopolitical risks is starkly different from that of other major, if rare, events. In contrast to a pandemic, natural disaster or grid energy crisis, it is more difficult to model the occurrence and fallout of geopolitical events — and, consequently, more difficult to develop effective contingency plans. This is because these threats are primarily the result of highly centralized political decision-making in Beijing, Brussels, Moscow and Washington, DC.

The shock therapy of the COVID-19 pandemic has made industries more resilient and mindful of potential future disruptions. Nonetheless, if some of the more radical threats posed by the current geopolitical situation become reality, their effects are likely to be longer lasting and more dramatic than anything experienced up to now.

Uptime Intelligence sees two major areas where the combination of global interdependency and concentration has made digital infrastructure vulnerable to potential economic and military confrontations, should the current geopolitical environment deteriorate further:

  • Semiconductor supply chains.
  • Subsea cable systems.

Semiconductors pose a unique problem

Nothing demonstrates the problem of global interdependency and systemic fragility better than the world’s reliance on advanced semiconductors. This issue is not just about IT hardware: controllers, processors, memory chips and power electronics are embedded in virtually every product of any complexity. Chips are not present just to add functionality or to enhance controls: they have become essential.

The health monitoring, real-time analysis, power correction and accident prevention functions offered by modern electrical gear occur through the use of high-performance chips such as signal processors, field-programmable logic and microprocessors. Some recent delays in data center equipment deliveries (including switchgear and UPS shipments) have been caused by shortages of certain specialist chips.

This reliance is precarious because chip production is globally entangled. Semiconductor manufacturing supply chains span thousands of suppliers, across a wide range of industries, including ultra-pure metals and gases, chemical agents, high-performance lasers and optics, various pieces of wafer processing equipment, clean-room filtration systems and components, and the fine-mechanical packaging of chips. At every stage, only a small number of highly specialist suppliers are able to meet the required quality and performance standards.

The production of state-of-the-art photolithography machines, for example, relies on just three noteworthy vendors — ASML, Canon and Nikon. Of these, only ASML has the capability to produce the most advanced equipment, which uses extreme ultraviolet wavelengths to create the smallest transistor structures.

The level of complexity and specialization required to manufacture advanced semiconductors means that no single country or trading bloc — no matter how large or resource-rich — is entirely self-sufficient, or will become so within reasonable timeframes and at reasonable economic cost. This means that multiple single points of failure (and potential bottlenecks) in the data center and IT equipment supply chain will persist.

Governments have become acutely aware of these issues. The US government and the European Commission (EC) have responded with legislation directed at supporting and stimulating investment in local production capacity (the US CHIPS and Science Act and the European Chips Act). China, too, while still lagging some five to 10 years behind its international competitors, continues to invest in its capabilities to develop a more competitive semiconductor industry. In the meantime, political battles over intellectual property (combined with problems over the supply of and access to materials, components and expertise) remain ongoing.

Any impact from these legislative initiatives is likely to take a decade or more, however, and will largely only address the “onshoring” of chip manufacturing capacity. Even assuming unsparing political will (and bottomless fiscal support for private investment) to promote self-sufficiency in chipmaking, decoupling semiconductor supply chains (all the way from raw materials to their processing) borders on the impossible. The complexity and costs involved cannot be overstated.

It is for this reason that the US government’s increasingly stringent measures, designed to limit China’s access to cutting-edge semiconductor technology, are proving effective. But it is precisely because they are effective that the situation is becoming more volatile for the entire industry, increasing as it does the likelihood of reprisal.

Taiwan is of particular concern. The high concentration of global semiconductor fabrication and IT hardware manufacturing in and around the island, home to the world’s largest and most advanced contract chipmaking cluster, creates major supply chain vulnerabilities. Any major confrontation (economic or military) would result in profound and far-reaching disruption for the entire IT industry and many others.

Deep risks around subsea network

The vulnerability of subsea fiber optic cables is another concern — and, as is the case regarding semiconductors, by no means a new issue. Growing geopolitical tensions, however, have raised questions regarding the likelihood of sovereign states engaging in acts of sabotage.

Subsea fiber optic networks consist of hundreds of subsea cables that carry nearly all intercontinental data traffic, supporting trillions of dollars of global economic activity. There are currently more than 500 international and domestic networks in operation, which are owned and operated (almost exclusively) by private companies. The length of these cables makes them very difficult to protect against potential threats.

Some subsea cables represent high-value targets for certain actors — and are attractive because they can be damaged or broken in secrecy and without the blowback of a traditional attack.

Most subsea cable breakages do not result in widespread outages. Typically, traffic can be rerouted through other cables, albeit at the cost of increased latency. But when multiple lines are simultaneously severed in the same region (undermining path diversity), the effect can be more substantial.

In 2006, a major earthquake (with multiple aftershocks) in the Luzon Strait (between Taiwan and the Philippines) resulted in seven of nine subsea cables being taken offline. This caused severe and widespread outages across the Asia-Pacific region, significantly disrupting businesses and consumers in Hong Kong, Japan, Singapore, South Korea and Taiwan. Fixing these vital network connections ultimately involved more than 40% of the global cable repair fleet. Full restoration of services was not complete until seven weeks after the initial outage.

Cables are also vulnerable to human activities — both accidental and deliberate. Ships are the most common cause, as fishing equipment or anchors can catch a cable and damage it. Malicious state actors are also a threat: for example, unidentified individuals seized telecommunications nodes and destroyed terrestrial cables in 2014, as Russia occupied the Crimean peninsula. The same could happen to submarine cables.

Such acts of sabotage fall into the category of hybrid warfare: any such attack would be unlikely to trigger a conflict but, if successfully coordinated, would cause severe disruption. Protecting against such threats — and detecting and monitoring potential threats, or identifying those responsible when attacks occur — is difficult, particularly with regard to subsea cables often spanning thousands of miles. Since the location of these cables is in the public domain, and international law prohibits the boarding of foreign vessels in international waters, protecting these vital facilities is particularly fraught. Taiwan, as an island, is especially vulnerable to attacks on its subsea cables.




Daniel Bizo, Research Director, Uptime Institute

Lenny Simon, Senior Research Associate, Uptime Institute

Max Smolaks, Research Analyst, Uptime Institute

Data center staffing — an ongoing struggle

Attracting and retaining qualified data center staff has been a major industry challenge for years — and continues to cause substantial problems for operators worldwide. Uptime Institute’s 2022 Management and Operations Survey shows staffing and organization (54%) is the leading requirement cited by operators (see Figure 1).

Figure 1. Staffing is operators’ key requirement

In an environment where rapid growth in data center capacity has led to an increase in job openings that continues to outpace recruitment, it’s hardly surprising that attracting and retaining staff is a key pain point for data center operators.

How difficult is it to attract and retain staff?

More than half (53%) of the respondents to Uptime’s 2022 Global Data Center Survey report that their organizations are having difficulties finding qualified candidates — up from 47% in last year’s survey and 38% in 2018.

Staff retention is also an issue: 42% of respondents say their organization is having difficulty retaining staff because they are being hired away, compared with just 17% four years ago. Moreover, a majority of those workers changing jobs are being hired by direct competitors (see Figure 2).

Figure 2. More operators struggle with attracting and retaining staff

Women an underrepresented force

Historically, data center design and operations teams have employed few women, and this hasn’t improved much since Uptime first started collecting information on gender demographics in our data center surveys in 2018.

More than three-quarters of operators (77%) report that women account for 10% or less of their design and operations staff, unchanged since 2018. Strikingly, one-fifth of respondents (20%) still do not employ any women at all in their design and operations teams, although this number is down from 26% in 2018 (see Figure 3).

Figure 3. Women remain underrepresented in the data center industry

In short, a growing number of unfilled positions coupled with the low and stagnating proportion of women workers suggests the data center industry still has much work to do to leverage the untapped potential of the female workforce.

How the inability to hire is affecting management and operations

The shortage of qualified staff is seen as a root cause of a host of data center operational issues.

Data center staff execution (36%), insufficient staff (28%), and incorrect staff processes / procedures (21%) all rank among the top four most common root causes of data center issues in Uptime’s 2022 Management and Operations Survey (see Figure 4).

Figure 4. Staffing – a major root cause of management and operations issues

In addition, when we asked about the key challenges experienced in the last two years, 30% of respondents describe staffing issues — by far the highest response category.

As one respondent explained: they’re seeing a “lack of available and qualified resources for both technical and coordination tasks.” A separate respondent reports that their organization has “insufficient levels of operation and maintenance personnel,” and another finds “staff turnover and inadequate training / experience” to be their company’s biggest pain point.

Will AI come to the rescue? Not in the near term

Although artificial intelligence (AI)-based components are currently being incorporated into data center power and cooling systems, it’s unclear when — or if — AI will begin to replace data center employees.

When asked about the potential impact of AI on data center staffing, only 19% of respondents believe it will reduce their operations staffing levels within the next five years — down from 29% in 2019. This decrease hints at lowered industry expectations that are more in line with the near-term capabilities of AI.

Just over half (52%) of respondents expect AI to reduce their staffing numbers in the longer term, but not in the next five years (see Figure 5).

Figure 5. Fewer operators expect AI to reduce staffing in the near term

In a related finding from this year’s annual survey, more than half of respondents (57%) say they would trust an adequately trained machine-learning model to make operational decisions, which is up from 49% last year.

Bottom line: staffing remains a major concern for operators

In its most recent forecast of data center workforce requirements, Uptime estimates that staffing requirement levels will grow globally from about 2.0 million full-time equivalents in 2019 to nearly 2.3 million by 2025 (see The people challenge: Global data center staffing forecast 2021-2025).
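For context, the forecast implies only modest annual growth in absolute terms, as the small calculation below shows (using the rounded figures cited above).

```python
# Implied compound annual growth rate behind the staffing forecast cited above
# (about 2.0 million FTEs in 2019 growing to roughly 2.3 million by 2025).

start_ftes, end_ftes = 2.0e6, 2.3e6
years = 2025 - 2019

cagr = (end_ftes / start_ftes) ** (1 / years) - 1
print(f"Implied growth: ~{cagr * 100:.1f}% per year, "
      f"or ~{(end_ftes - start_ftes) / years:,.0f} additional staff per year")
```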

As the need for qualified staff increases, however, operators are finding it increasingly difficult to fill critical data center roles and to retain staff in these positions. Additionally, an aging workforce in the more mature data center markets, such as North America and Western Europe, means a significant proportion of the existing workforce will retire around the same time — leaving data centers with a shortfall in both headcount and experience.

Hiring efforts are often hampered by the sector’s poor visibility among jobseekers, but some employers are looking at more effective ways to attract and retain talent, such as training and mentoring programs, and improving their diversity / inclusion efforts. Staffing is a serious concern for the data center industry now and going forward. Uptime will continue to monitor the industry’s ongoing difficulties in maintaining adequate staffing levels.

Unravelling net zero

Many digital infrastructure operators have set themselves carbon-neutral or net-zero emissions goals: some large hyperscale operators claim net-zero emissions for their current operating year. Signatories to the Climate Neutral Data Center Pact, a European organization for owners and operators, aim to be using 100% clean energy by 2030.

All these proclamations appear laudable and seem to demonstrate strong progress for the industry in achieving environmental goals. But there is a hint of greenwashing in these commitments, which raises some important questions: how well defined is “net zero”? What are the boundaries of the commitments? How aggressive are the time frames, and how meaningful are the commitments?

There are good reasons for asking these questions. Analysis by Uptime Institute Intelligence suggests most operators will struggle to meet these commitments, given the projected availability of zero-carbon energy, equipment and materials and carbon offsets in the years and decades ahead. There are three core areas of concern that critics and regulators are likely to raise with operators: the use of renewable energy certificates (RECs), guarantees of origin (GOs) and carbon offsets; the boundaries of net-zero commitments; and the industry’s current lack of consensus on the time frame for attaining true net-zero operations.

RECs, GOs and carbon offsets

RECs and GOs are tradeable certificates each representing 1 MWh of zero-emissions renewable energy generation; carbon offsets are certificates representing a quantity of carbon emissions avoided or removed from the atmosphere. All of these tools can be used to offset operational carbon emissions. Most current net-zero claims, and the achievement of near-term commitments (i.e., 2025 to 2035), depend on the application of these tools to offset the use of fossil-fuel-based energy.
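The mechanics of such offsetting can be sketched in a few lines. The example below applies a simplified market-based calculation; the grid emission factor, volumes and the function itself are illustrative assumptions, not a full GHG Protocol implementation.

```python
# Simplified sketch of market-based carbon accounting with RECs/GOs and offsets.
# This illustrates the mechanics described above; it is not a complete GHG
# Protocol implementation, and the grid emission factor is an assumed value.

def net_reported_emissions(consumption_mwh: float, rec_mwh: float,
                           grid_factor_t_per_mwh: float, offsets_t: float) -> float:
    """Tonnes of CO2e reported after applying RECs/GOs (1 MWh each) and offsets."""
    uncovered_mwh = max(consumption_mwh - rec_mwh, 0.0)   # energy not matched by certificates
    gross_t = uncovered_mwh * grid_factor_t_per_mwh
    return max(gross_t - offsets_t, 0.0)

# A hypothetical facility consuming 50 GWh per year on a 0.4 tCO2e/MWh grid:
print(net_reported_emissions(50_000, rec_mwh=40_000, grid_factor_t_per_mwh=0.4, offsets_t=2_000))
# -> 2000.0 tonnes reported, even though the facility still physically consumed
#    grid electricity with far higher associated emissions.
```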

Under currently accepted climate accounting methodologies, the use of RECs and offsets is a viable and acceptable way for digital infrastructure operators to help reduce emissions globally. However, using these certificates can distract from the real and difficult work of driving infrastructure efficiency improvements (to increase the workload delivered per unit of energy consumed) and of procuring clean energy for consumption (to reduce the carbon emissions per unit of energy consumed toward zero).

Uptime Institute believes that stakeholders, regulators, and other parties will increasingly expect, and perhaps require, operators to focus on moving towards the consumption of 100% zero-emissions energy for all operations — that is, true net-zero emissions. This objective, against which net-zero claims will ultimately be judged, is not likely to be achieved in 2030 and perhaps not even by 2040: but it is an objective that deserves the industry’s full attention, and the application of the industry’s broad and varied technical expertise.

The boundaries of the net-zero commitment

Digital infrastructure operators have not adopted consistent accounting boundaries for their net-zero commitments. While all operators include Scope 1 and 2 emissions, and some address Scope 3 emissions in full, others ignore Scope 3 partially or completely. There is no clear sector-wide consensus on this topic, either on which applicable Scope 3 emissions should be included in a goal, or on the classification and allocation of emissions in colocation facilities. This can create wide discrepancies in the breadth and scope of different operators’ commitments.

For most organizations running IT and / or digital infrastructure operations, the most consequential and important Scope 3 category is Category 1: Purchased goods and services — that is, the procurement and use of colocation and cloud services. The CO2 emissions associated with these services can be quantified with reasonable certainty and assigned to the proper scope category for each of the three parties involved in these services offerings — IT operators, cloud service operators and colocation providers (see Table 1).

Table 1 Scope 2 and 3 assignments for digital infrastructure (Category 1: Purchased goods and services emissions)

Uptime Intelligence recommends that digital infrastructure operators set a net-zero goal that addresses all Scope 1, 2 and 3 emissions associated with their IT operations and / or facilities in directly owned, colocation and cloud data centers. IT operators will need to collaborate with their cloud and colocation service providers to collect the necessary data and set collaborative emissions-reductions goals. Service providers should be required to push towards 100% renewable energy consumption at their facilities. Colocation operators, in turn, will need to collaborate with their IT tenants to promote and record improvements in workload delivered per unit of energy consumed.

Addressing the other five Scope 3 categories applicable to the data center industry (embedded carbon in equipment and building material purchases; management of waste and end-of-life equipment; fuel and other energy-related activities; business travel; and employee commuting) necessitates a lighter touch. Emissions quantifications for these categories typically have high degrees of uncertainty, and suppliers or employees are best positioned to drive their own operations to net-zero emissions.

Rather than trying to create numerical Scope 3 inventories and offsetting the emissions relating to these categories, Uptime Intelligence recommends that operators require their suppliers to maintain sustainability strategies and net-zero GHG emissions-reduction goals, and provide annual reports on progress in achieving these.

Operators, in turn, should set (and execute) consequences for those companies that fail to make committed progress — up to and including their removal from approved supplier lists. Such an approach means suppliers are made responsible for delivering meaningful emissions reductions, without data center operators duplicating either emissions-reduction efforts or offsets.

Insufficient consensus on the time frame for attaining true net-zero operations

There appears to be no industry-wide consensus on the time frame for attaining true net-zero operations. Many operators have declared near-term net-zero commitments (i.e., 2025 to 2035), but these depend on the use of RECs, GOs and carbon offsets.

Achieving a true net-zero operating portfolio by 2050 will require tremendous changes and innovations in both the data center equipment and energy markets over the next 28 years. The depth and breadth of the changes required makes it impossible to fully and accurately predict the timing and technical details of this essential transformation.

The transition to zero carbon will not be achieved with old or inefficient equipment. Investments will need to be made in more efficient IT equipment, software management tools, and in clean energy generation. Rather than buying certificates to offset emissions, operators need to invest in impactful technologies that increase data centers’ workload delivered per unit of energy consumed while reducing the carbon intensity of that energy to zero.

What does all this mean for net-zero commitments? Given the difficulties involved, Uptime Intelligence recommends that data center operators establish a real net-zero commitment for their IT operations that falls between 2040 and 2050 — with five- to eight-year sequential interim goals. The current goal period should deliver emissions reductions based on current and emerging technologies and energy sources.

After the first goal period, each subsequent interim goal should incorporate recent advances in technologies and energy generation; on this basis, operators will be able to go further, delivering more workload and lower carbon emissions per unit of energy consumed. Suppliers in other Scope 3 categories, meanwhile, should be held responsible for achieving real net-zero emissions for their products and services.

The bottom line? Instead of relying on carbon accounting, the digital infrastructure industry needs to focus investment on the industry-wide deployment of more energy-efficient facilities and IT technologies, and on the direct use of clean energy.