Critical Infrastructure Innovation in 2021

A Surge of Innovation

Data center operators (and enterprise IT) are generally cautious adopters of new technologies. Only a few (beyond hyperscale operators) try to gain a competitive advantage through early use of technology. Rather, they have a strong preference for technologies that are proven, reliable and well-supported. This reduces risks and costs, even if it means missing opportunities to jump ahead in efficiency, agility or functionality.

But innovation does occur, and sometimes it comes in waves, perhaps triggered by the opportunity for a significant leap forward in efficiency, the sudden maturing of a technology, or some external catalyst. The threat of having to close critical data centers to move workloads to the public cloud may be one such driver; the need to operate a facility without staff during a weather event, or a pandemic crisis, may be another; the need to operate with far fewer carbon emissions may be yet another. Sometimes one new technology needs another to make it more economic.

The year 2021 may be one of those standouts in which a number of emerging technologies begin to gain traction. Among the technologies on the edge of wider adoption are:

  • Storage-class memory – A long-awaited class of semiconductors with ramifications for server performance, storage strategies and power management.
  • Silicon photonics – A way of connecting microchips that may revolutionize server and data center design.
  • ARM servers – Low-powered compute engines that, after a decade of stuttering adoption, are now attracting attention.
  • Software-defined power – A way to unleash and virtualize power assets in the data center.

All of these technologies are complementary; all have been much discussed, sampled and tested for several years, but so far with limited adoption. Three of these four were identified as highly promising technologies in the Uptime Institute/451 Research Disrupted Data Center research project summarized in the report Disruptive Technologies in the Datacenter: 10 Technologies Driving a Wave of Change, published in 2017. As the disruption profile below shows, these technologies were clustered to the left of the timeline, meaning they were, at that time, not yet ready for widespread adoption.


In 2017, Uptime Institute and 451 Research identified 10 technologies that had the potential to make a big, disruptive impact on the data center. Those to the right of the chart would do so sooner than those on the left, while those nearest the top would be most likely to have a disruptive impact. The size of the circle indicates how large an impact the technology would likely make.

Now the time may be coming, with hyperscale operators particularly interested in storage-class memory and silicon photonics. But small operators, too, are trying to solve new problems — to match the efficiency of their larger counterparts, and, in some cases, to deploy highly efficient, reliable, and powerful edge data centers.

Storage-class memory

Storage-class memory (SCM) is a generic label for emerging types of solid-state media that offer the same or similar performance as dynamic random access memory (DRAM) or static random access memory, but at lower cost and with far greater data capacities. By allowing servers to be fitted with larger memories, SCM promises a substantial boost in processing speeds. SCM is also nonvolatile, or persistent: it retains data even if power to the device is lost, promising greater application availability through far faster restarts of servers after reboots and crashes.

SCM can be used not just as memory, but also as an alternative to flash for high-speed data storage. For data center operators, the (widespread) use of SCM could reduce the need for redundant facility infrastructure, as well as promote higher-density server designs and more dynamic power management (software-defined power is discussed below).
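The programming model that persistent memory enables — data structures that survive a restart — can be loosely approximated with a memory-mapped file. Real SCM is byte-addressable, DRAM-class hardware accessed through dedicated libraries; the file-backed sketch below (file name and size are invented) only illustrates the persistence idea.

```python
# Approximates the persistent-memory programming model with a
# memory-mapped file: bytes written through the mapping survive a
# process restart. Real SCM is byte-addressable hardware; this
# file-backed sketch only illustrates the persistence concept.
import mmap

PATH = "state.bin"   # hypothetical file standing in for an SCM region
SIZE = 4096

# Create/extend the backing file, then map it into memory.
with open(PATH, "a+b") as f:
    f.truncate(SIZE)
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mem:
        mem[0:5] = b"hello"   # looks like a plain in-memory write
        mem.flush()           # ensure it reaches the backing store

# After a crash or reboot, the data is still there:
with open(PATH, "rb") as f:
    print(f.read(5))          # b'hello'
```

The point of the model is the middle section: the application writes to what looks like ordinary memory, yet the state outlives the process — which is what makes fast restarts after reboots and crashes possible.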

However, the continuing efforts to develop commercially viable SCM have faced major technical challenges. Currently only one SCM exists with the potential to be used widely in servers. That memory was jointly developed by Intel and Micron Technology, and is now called Optane by Intel, and 3D XPoint by Micron. Since 2017, it has powered storage drives made by Intel that, although far faster than flash equivalents, have enjoyed limited sales because of their high cost. More promisingly, Intel last year launched the first memory modules powered by Optane.

Software suppliers such as Oracle and SAP are changing the architecture of their databases to maximize the benefits of SCM devices, and major cloud providers are offering services based on Optane used as memory. Meanwhile, a second generation of Optane/3D XPoint is expected to ship soon; by lowering prices, it should see wider use in storage drives.

Silicon photonics

Silicon photonics enables optical switching functions to be fabricated on silicon substrates. This means electronic and optical devices can be combined into a single connectivity/processing package, reducing transceiver/switching latency, costs, size and power consumption (by up to 40%). While this innovation has uses across the electronics world, data centers are expected to be the biggest market for the next decade.

In the data center, silicon photonics allows components (such as processors, memory, input/output [I/O]) that are traditionally packaged on one motherboard or within one server to be optically interconnected, and then spread across a data hall — or even far beyond. Effectively, it has the potential to turn a data center into one big computer, or for data centers to be built out in a less structured way, using software to interconnect disaggregated parts without loss of performance. The technology will support the development of more powerful supercomputers and may be used to support the creation of new local area networks at the edge. Networking switches using the technology can also save 40% on power and cooling (this adds up in large facilities, which can have up to 50,000 switches).
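A back-of-the-envelope estimate shows why a 40% switch-level saving matters at facility scale. The 40% figure and the 50,000-switch count come from the text; the 250 watt per-switch draw below is an illustrative assumption, not a measured figure.

```python
# Rough estimate of facility-wide savings from silicon photonics
# switches. The 40% saving and 50,000-switch count come from the
# text; the 250 W per-switch draw is an illustrative assumption.
SWITCH_COUNT = 50_000
WATTS_PER_SWITCH = 250          # assumed conventional switch draw
SAVING_FRACTION = 0.40          # power/cooling saving cited in the text
HOURS_PER_YEAR = 8_760

saved_watts = SWITCH_COUNT * WATTS_PER_SWITCH * SAVING_FRACTION
saved_mwh_per_year = saved_watts * HOURS_PER_YEAR / 1e6  # W·h -> MWh

print(f"Instantaneous saving: {saved_watts / 1e6:.1f} MW")
print(f"Annual saving: {saved_mwh_per_year:,.0f} MWh")
```

Under these assumptions, the saving is on the order of megawatts of continuous load — before counting the cooling energy avoided.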

Acquisitions by Intel (Barefoot Networks), Cisco (Luxtera, Acacia Communications) and Nvidia (Mellanox Networking) signal a much closer integration between network switching and processors in the future. Hyperscale data center operators are the initial target market because the technology can combine with other innovations (as well as with Open Compute Project rack and networking designs). As a result, we expect to see the construction of flexible, large-scale networks of devices in a more horizontal, disaggregated way.

ARM servers

The Intel x86 processor family is one of the building blocks of the internet age, of data centers and of cloud computing. Whether supplied by Intel or a competitor such as Advanced Micro Devices, almost every server in every data center is built around this processor architecture. With its powerful (and power-hungry) cores, the x86 defines motherboard and server design and is the foundation of the software stack. It dictates technical standards, how workloads are processed and allocated, and how data centers are designed, powered and organized.

This hegemony may be about to break down. Servers based on the ARM processor design — the processors used in billions of mobile phones and other devices (and soon, in Apple MacBooks) — are now being used by Amazon Web Services (AWS) in its proprietary designs. Commercially available ARM systems offer dramatic price, performance and energy consumption improvements over current Intel x86 designs. When Nvidia announced its (proposed) $40 billion acquisition of ARM in September 2020, it identified the data center market as its main opportunity. The server market is currently worth $67 billion a year, according to market research company IDC (International Data Corporation).

Skeptics may point out that many servers using alternative, smaller, low-power processors have been developed and offered, but none has been widely adopted to date. Hewlett Packard Enterprise’s Moonshot server system, initially launched using low-powered Intel Atom processors, is the best known, but due to a variety of factors its market adoption has been low.

Will that change? The commitment to use ARM chips by Apple (currently for MacBooks) and AWS (for cloud servers) will make a big difference, as will the fact that even the world’s most powerful supercomputer (as of mid-2020) uses an ARM Fujitsu microprocessor. But innovation may make the biggest difference. The UK-based company Bamboo Systems, for example, designed its system to support ARM servers from the ground up, with extra memory, connectivity and I/O processors at each core. It claims to save around 60% of the costs, 60% of the energy and 40% of the space when compared with a Dell x86 server configured for the same workload.

Software-defined power

In spite of its intuitive appeal and the apparent importance of the problems it addresses, the technology that has come to be known as “software-defined power” has to date received little uptake among operators. Software-defined power, also known as “smart energy,” is not one system or single technology but a broad umbrella term for technologies and systems that can be used to intelligently manage and allocate power and energy in the data center.

Software-defined power systems promise greater efficiency and use of capacity, more granular and dynamic control of power availability and redundancy, and greater real-time management of resource use. In some instances, it may reduce the amount of power that needs to be provisioned, and it may allow some energy storage to be sold back to the grid, safely and easily.

Software-defined power adopts some of the architectural designs and goals of software-defined networks, in that it virtualizes power switches as if they were network switches. The technology has three components: energy storage, usually lithium-ion (Li-ion) batteries; intelligently managed power switches or breakers; and, most importantly, management software that has been designed to automatically reconfigure and allocate power according to policies and conditions. (For a more detailed description, see our report Smart energy in the data center).

Software-defined power has taken a long time to break into the mainstream — and even 2021 is unlikely to be the breakthrough year. But a few factors are swinging in its favor. These include the widespread adoption of Li-ion batteries for uninterruptible power supplies, an important precondition; growing interest from the largest operators and the biggest suppliers (which have so far assessed technology, but viewed the market as unready); and, perhaps most importantly, an increasing understanding by application owners that they need to assess and categorize their workloads and services for differing resiliency levels. Once they have done that, software-defined power (and related smart energy technologies) will enable power availability to be applied more dynamically to the applications that need it, when they need it.
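The policy layer described above — power applied dynamically to the applications that need it, according to their assessed resiliency tier — can be sketched in a few lines. The tiers, workload names and numbers below are invented for illustration; real products use their own policy models and control hardware.

```python
# Hypothetical sketch of the policy layer in a software-defined
# power system: given a facility power budget, grant capacity to
# workloads in order of their assessed resiliency tier. All names
# and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    tier: int          # 1 = most critical, higher = less critical
    demand_kw: float

def allocate(budget_kw: float, workloads: list[Workload]) -> dict[str, float]:
    """Grant each workload its demand, most critical first,
    until the budget is exhausted; the remainder is curtailed."""
    grants: dict[str, float] = {}
    remaining = budget_kw
    for w in sorted(workloads, key=lambda w: w.tier):
        grant = min(w.demand_kw, remaining)
        grants[w.name] = grant
        remaining -= grant
    return grants

loads = [
    Workload("payments-db", tier=1, demand_kw=60),
    Workload("analytics",   tier=3, demand_kw=80),
    Workload("web-cache",   tier=2, demand_kw=40),
]
print(allocate(120, loads))  # critical loads served first; analytics curtailed
```

This is why the workload-classification step matters: without tiers, the software has no basis for deciding which applications to curtail when capacity is constrained.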


The full report Five data center trends for 2021 is available to members of the Uptime Institute community.

Sustainability: More challenging, more transparent

Through 2021 and beyond, the world will begin to recover from its acute crisis — COVID-19 — and will turn its attention to other matters. Few if any of these issues will be as important as climate change, a chronic condition that will become more pressing and acute as each year passes.

In the critical digital infrastructure sector, as in all businesses, issues arising directly or indirectly from climate change will play a significant role in strategic decision-making and technical operations in the years ahead. And this is regardless of the attitude or beliefs of senior executives; stakeholders, governments, customers, lobbyists and watchdogs all want and expect to see more action. The year 2021 will be critical, with governments expected to act with greater focus and unity as the new US government rejoins the global effort.

We can group the growing impact of climate change into four areas:

  • Extreme weather/climate impact – As discussed in our report The gathering storm: Climate change and data center resiliency, extreme weather and climate change present an array of direct and indirect threats to data centers. For example, extreme heatwaves — which will challenge many data center cooling systems — are projected to occur once every three or four years, not once in 20.
  • Legislation and scrutiny – Nearly 2,000 pieces of climate-related legislation have been passed globally to date (covering all areas). Many more, along with more standards and customer mandates, can be expected in the next several years.
  • Litigation and customer losses – Many big companies are demanding rigorous standards through their supply chains — or their contracts will be terminated. Meanwhile, climate activists, often well-resourced, are filing lawsuits against technology companies and digital infrastructure operators to cover everything from battery choices to water consumption.
  • The need for new technologies – Management will be under pressure to invest more, partly to protect against weather events, and partly to migrate to cleaner technologies such as software-defined power or direct liquid cooling.

In the IT sector generally — including in data centers — it has not all been bad news to date. Led by the biggest cloud and colo companies, and judged by several metrics, the data center sector has made good progress in curtailing carbon emissions and wasteful energy use. According to the Carbon Trust, a London-based body focused on reducing carbon emissions, the IT sector is on course to meet its science-based target for 2030 — a target that will help keep the world to 1.5 degrees Celsius (2.7 degrees Fahrenheit) of warming (but still enough warming to create huge global problems). Its data shows IT sector carbon emissions from 2020 to 2030 are on a trajectory to fall significantly in five key areas: data centers, user devices, mobile networks, fixed networks and enterprise networks. Overall, the IT sector needs to cut carbon emissions by 50% from 2020 to 2030.

Data centers are just a part of this, accounting for more carbon emissions than mobile, fixed or enterprise networks, but significantly less than all the billions of user devices. Data center energy efficiency has been greatly helped by facility efficiencies, such as economizer cooling, improvements in server energy use, and greater utilization through virtualization and other IT/software improvements. Use of renewables has also helped: According to Uptime Institute data (our 2020 Climate Change Survey) over a third of operators now largely power their data centers using renewable energy sources or offset their carbon use (see figure below). Increasing availability of renewable power in the grid will help to further reduce emissions.



But there are some caveats to the data center sector’s fairly good performance. First, the reduction in carbon emissions achieved to date is contested by many who think the impact of overall industry growth on energy use and carbon emissions has been understated (i.e., energy use/carbon emissions are actually quite a lot higher than widely accepted models suggest — a debatable issue that Uptime Institute continues to review). Second, at an individual company or data center level, it may become harder to achieve carbon emissions reductions in the next decade than it has been in the past decade — just as the level of scrutiny and oversight, and the penalty for not doing enough, ratchets up. Why? There are many possible reasons, including the following:

  • Many of the facilities-level improvements in energy use at data centers have been achieved already — indeed, average industry power usage effectiveness values show only marginal improvements over the last five years. Some of these efficiencies may even go into reverse if other priorities, such as water use or resiliency, take precedence (economizers may have to be supplemented with mechanical chillers to reduce water use, for example).
  • Improvements in IT energy efficiency have also slowed — partly due to the slowing or even ending of Moore’s Law (i.e., IT performance doubling every two years) — and because the easiest gains in IT utilization have already been achieved.
  • Some of the improvements in carbon emissions over the next decade require looking beyond immediate on-site emissions, or those from energy supplies. Increasingly, operators of critical digital infrastructure — very often under external pressure and executive mandate — must start to record the embedded carbon emissions (known as Scope 3 emissions) in the products and services they use. This requires skills, tools and considerable administrative effort.
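The power usage effectiveness (PUE) metric mentioned in the first point is a simple ratio — total facility energy divided by IT equipment energy — which is why further facility-side gains move the number only marginally once overhead is already low. A quick illustration (the kWh figures are invented):

```python
# PUE = total facility energy / IT equipment energy.
# The kWh figures below are invented for illustration.
def pue(total_kwh: float, it_kwh: float) -> float:
    return total_kwh / it_kwh

it = 10_000.0                 # kWh consumed by IT equipment
overhead = 5_000.0            # cooling, power conversion, lighting, ...
print(round(pue(it + overhead, it), 2))   # 1.5

# Cutting the overhead of an already-efficient site yields
# ever-smaller movements in the ratio:
print(round(pue(it + 2_000, it), 2))      # 1.2
print(round(pue(it + 1_000, it), 2))      # 1.1
```

Going from 1.5 to 1.2 removes 3,000 kWh of overhead; going from 1.2 to 1.1 removes only 1,000 kWh more — the diminishing returns the bullet describes.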

The biggest operators of digital infrastructure — among them Amazon, Digital Realty, Equinix, Facebook, Google and Microsoft — have made ambitious and specific commitments to achieve carbon neutrality in line with science-based targets within the next two decades. That means, first, they are setting standards that will be difficult for many others to match, giving them a competitive advantage; and second, these companies will put pressure on their supply chains — including data center partners — to minimize emissions.



Edge Computing – The Next Frontier

One of the most widely anticipated trends in IT and infrastructure is significant new demand for edge computing, fueled by technologies such as 5G, IoT and AI. To date, net new demand for edge computing — processing, storing and integrating data close to where it is generated — has built slowly. As a result, some suppliers of micro data center and edge technologies have had to lower their investors’ expectations.

This slow build-out, however, does not mean that it will not happen. Demand for decentralized IT will certainly grow. There will be more workloads that need low latency, such as healthcare tech, high performance computing (notably more AI), critical IoT, and virtual and augmented reality, as well as more traffic from latency-sensitive internet companies (as Amazon famously said 10 years ago, every 100 milliseconds of latency costs them one percent in sales). There will also be more data generated by users and “things” at the edge, which will be too expensive to transport across long distances to large, centralized data centers (the “core”).

For all these reasons, new edge data center and connectivity capacity will be needed, and we expect a wave of new partnerships and deals in 2021. Enterprises will connect to clouds via as-a-service (on-demand, software-driven) interconnections at the edge, and the internet will extend its reach with new exchange points. Just as the internet is a network of tens of thousands of individual networks connected together, the edge will require not just new capacity but also a new ecosystem of suppliers working together. The year 2021 will likely see intense activity — but the long-expected surge in demand may have to wait.

The edge build-out will be uneven, in part because the edge is not a monolith. Different edge workloads need different levels of latency, bandwidth and resiliency, as shown in the data center schema below. Requirements for data transit and exchanges will also vary. Edge infrastructure service providers will need to rely on many partners, including specialist vendors that will serve different customer requirements. Enterprise customers will become increasingly dependent on third-party connections to different services.



So far, much attention has been focused on the local edge, where connectivity and IT capacity are sited within a kilometer or so from devices and users. In urban areas, where 5G is (generally) expected to flourish, and in places where a lot of IoT data is generated, such as factories and retail stores, we are slowly seeing more micro data centers being deployed. These small facilities can act either as private connections or internet exchange points (or both), handing off wireless data to a fiber connection and creating new “middle-mile” connections.

We expect that edge micro data centers will be installed both privately and as shared infrastructure, including for cloud providers, telcos and other edge platform providers, to reduce latency and keep transit costs in check. To get closer to users and “things,” fiber providers will also partner with more wireless operators.

In 2021, most of the action is likely to be one step further back from the edge, in regional locations where telcos, cloud providers and enterprises are creating — or consuming — new interconnections in carrier-neutral data centers such as colo and wholesale facilities. All major cloud providers are increasingly creating points of presence (PoPs) in more colos, creating software-defined WANs of public (internet) and private (enterprise) connections. Colo customers are then able to connect to various destinations, depending on their business needs, via software, hardware and networks that colos are increasingly providing. These interconnections are making large leased facilities a preferred venue for other suppliers to run edge infrastructure-as-a-service offerings, including for IoT workloads. For enterprises and suppliers alike, switching will become as important as power and space.

We expect more leased data centers will be built (and bought) in cities and suburbs in 2021 and beyond. Large and small colos alike will place more PoPs in third-party facilities. And more colos will provide more software-driven interconnection platforms, either via internal development, partnerships or acquisitions.

At the same time, content delivery networks (CDNs) that already have large edge footprints will further exploit their strong position by offering more edge services on their networks directly to enterprises. We’re also seeing more colos selling “value-add” IT and infrastructure-as-a-service products — and we expect they will extend further up the IT stack with more compute and storage capabilities.

The edge build-out will clearly lead to increased operational complexity, whereby suppliers will have to manage hundreds of application programming interfaces (APIs) and multiple service level agreements. For these reasons, the edge will need to become increasingly software-defined and driven by AI. We expect investment and partnerships across all these areas.

How exactly it will play out remains unclear; it is simply too early. Already we have seen major telco and data center providers pivot their edge strategies, including moving from partnerships to acquisitions.

One segment we are watching particularly closely is the big internet and cloud companies. Having built significant backbone infrastructure, they have made little or only modest investments to date at the edge. With their huge workloads and deep pockets, their appetite for direct ownership of edge infrastructure is not yet known but could significantly shape the ecosystem around them.



Accountability – the “new” imperative

Outsourcing the requirement to own and operate data center capacity is the cornerstone of many digital transformation strategies, with almost every large enterprise spreading its workloads across its own data centers, colocation sites and the public cloud. But ask any regulator, any chief executive, any customer: You can’t outsource responsibility — for incidents, outages, security breaches or even, in the years ahead, carbon emissions.

Chief information officers, chief technology officers and other operational heads knew this three or four decades ago (and many have learned the hard way since). That is why data centers became physical and logical fortresses, and why almost every component and electrical circuit has some level of redundancy.

In 2021, senior executives will grapple with a new iteration of the accountability imperative. Even the most cautious enterprises now want to make more use of the public cloud, while the use of private clouds is enabling greater choices of third-party venue and IT architecture. But this creates a problem: cloud service operators, software-as-a-service (SaaS) providers and even some colos are rarely fully accountable or transparent about their shortcomings — and they certainly do not expect to be held financially accountable for consequences of failures. Investors, regulators, customers and partners, meanwhile, want more oversight, more transparency and, where possible, more accountability.

This is forcing many organizations to take a hard look at which workloads can be safely moved to the cloud and which cannot. For some, such as the European financial services sector, regulators will require an assessment of the criticality of workloads — a trend that is likely to spread and grow to other sectors over time. The most critical applications and services will either have to stay in-house, or enterprise executives will need to satisfy themselves and their regulators that these services are run well by a third-party provider, and that they have full visibility into the operational practices and technical infrastructure of their provider.

The data suggests this is a critical period in the development of IT governance. The shift of enterprise IT workloads from on-premises data center to cloud and hosted services is well underway. But there is a long way to go, and some of the issues around transparency and accountability have arisen only recently as more critical and sensitive data and functionality is considered for migration to the cloud.

The first tranche of workloads moving to third parties often did not include the most critical or sensitive services. For many organizations, a public cloud is (or was initially) the venue of choice for specific types of workloads, such as application test and development; big-data processing, such as AI; and new applications that are cloud-native. But as more IT departments become familiar with the tool sets from cloud providers, such as for application development and deployment orchestration, more types of workloads have moved into public clouds only recently, with more critical applications to follow (or perhaps not). High-profile, expensive public cloud outages, increased regulatory pressures and an increasingly uncertain macroeconomic outlook will force many enterprises to assess — or reassess — where workloads should actually be running (a process that has been called “The Big Sort”).

Uptime Institute believes that many mission-critical workloads are likely to remain in on-premises or colo data centers — at least for many years to come: More than 70% of IT and critical infrastructure operators we surveyed in 2020 do not put any critical workloads in a public cloud, with over a quarter of this group (21% of the total sample) saying the reason is a lack of visibility/accountability about resiliency. And over a third of those who do place critical applications in a public cloud also say they do not have enough visibility (see chart below). Clearly, providers’ assurances of availability and of adherence to best practices are not enough for mission-critical workloads. (These results were almost identical when we asked the same question in our 2019 annual survey.)



The issues of transparency, reporting and governance are likely to ripple through the cloud, SaaS and hosting industries, as customers seek assurances of excellence in operations — especially when financial penalties for failures by third parties are extremely light. While even the largest cloud and internet application providers operate mostly concurrently maintainable facilities, experience has shown that unaudited (“mark your own homework”) assurances frequently lead to poor outcomes.

Creeping criticality

There is an added complication. While the definitions and requirements of criticality in IT are dictated by business requirements, they are not fixed in time. Demand patterns and growing IT dependency mean many workloads/services have become more critical — but the infrastructure and processes supporting them may not have been updated (“creeping criticality”). This is a particular concern for workloads subject to regulatory compliance (“compliance drift”).

COVID-19 may have already caused a reassessment of the criticality or risk profile of IT; extreme weather may provide another. When Uptime Institute recently asked over 250 on-premises and colo data center managers how the pandemic would change their operations, two-thirds said they expect to increase the resiliency of their core data center(s) in the years ahead. Many said they expected their costs to increase as a result. One large public cloud company recently asked its leased data center providers to upgrade their facilities to N+1 redundancy, if they had not already done so.

But even before the pandemic, there was a trend toward higher levels of redundancy for on-premises data centers. There is also an increase in the use of active-active availability zones, especially as more workloads are designed using cloud or microservices architectures. Workloads are more portable, and instances are more easily copied than in the past. But we see no signs that this is diminishing the need for site-level resiliency.

Colos are well-positioned to provide both site-level resiliency (which is transparent and auditable) and outsourced IT services, such as hosted private clouds. We expect more colos will offer a wider range of IT services, in addition to interconnections, to meet the risk (and visibility) requirements of more mission-critical workloads. The industry, it seems, has concluded that more resiliency at every level is the least risky approach — even if it means some extra expense and duplication of effort.

Uptime Institute expects that the number of enterprise (privately owned/on-premises) data centers will continue to dwindle but that enterprise investment in site-level resiliency will increase (as will investment in data-driven operations). Data centers that remain in enterprise ownership will likely receive more investment and continue to be run to the highest standards.



Five Trends for 2021: accountability, automation, edge, sustainability, innovation

What can we expect for mission-critical digital infrastructure in 2021?

Each autumn Uptime Institute, like many other organizations, puts together a list of some of the big trends and themes for the year ahead. This time, we have focused on five big trends that might not have been so obvious 12 months ago.

Heading into 2021, during a macroeconomic downturn, the critical digital infrastructure sector itself continues to expand and to attract enviable levels of new investment. The ongoing build-out of new data centers and networks is largely being driven by cloud, hosted, and “as-a-service” workloads, as more enterprises seek to outsource more of their IT and/or data center capacity. However, for many managers, the COVID-19 pandemic has forced a reassessment — of working practices and, in particular, of risk. The global economy’s dependence on IT is growing, and this is catching the attention of an increasing number of customers, governments and watchdogs.

The coming year (and beyond) also holds new opportunities: Edge computing, artificial intelligence (AI) and innovations in hardware and software promise greater efficiency and agility.

Here are Uptime Institute’s five trends for 2021, summarized:

1. Accountability — the “new” imperative

Enterprises want more cloud and greater agility, but they can’t outsource responsibility — for incidents, outages, security breaches or even, in the years ahead, carbon emissions. In 2021, hybrid IT, with workloads running in both on- and off-premises data centers, will continue to dominate, but investments will increasingly be constrained and shaped by the need for more transparency, oversight and accountability. More will be spent on cloud and other services, as well as in on-premises data centers.

2. Smarter, darker data centers

Following a scramble to effectively staff data centers during a pandemic, many wary managers are beginning to see remote monitoring and automation systems in a more positive light, including those driven by AI. An adoption cycle that has been slow and cautious will accelerate. But it will take more than just investment in software and services before the technology reduces staffing requirements.

3. Edge — the next frontier

Significant new demand for edge computing, fueled by technologies such as 5G, the internet of things and AI, is likely to build slowly — but the infrastructure preparation is underway. Expect new alliances and investments across enterprise, mobile and wireline networks, and for a wide range of edge data centers, small and large. Smart and automated software-defined networks and interconnections will become as important as the physical infrastructure.

4. Sustainability: More challenging, more transparent

For years, operators could claim environmental advances based on small, incremental and relatively inexpensive steps — or by adopting new technologies that would pay for themselves anyway. But the time of easy wins and greenwashing is ending: Regulators, watchdogs, customers and others will increasingly expect operators of digital infrastructure to provide hard and detailed evidence of carbon reductions, water savings and significant power savings — all while maintaining, if not improving, resiliency.

5. A surge of innovation

Data center operators (and enterprise IT) are mostly cautious, if not late, adopters of new technologies. Few beyond hyperscale operators can claim to have gained a competitive advantage through technology. However, several new technologies are maturing at the same time, promising advances in the performance and manageability of data centers and IT. Storage-class memory, silicon photonics, ARM servers and software-defined power are ready for greater adoption.

This is a summary of the full report, Five Data Center Trends for 2021, which is available to members of the Uptime Institute Inside Track community.

Rack Density is Rising

The power density per rack (kilowatts [kW] per cabinet) is a critical number in data center design, capacity planning, and cooling and power provisioning. There have been industry warnings about a meteoric rise in IT equipment rack power density for the past decade (at least). One reason for this prediction is the proliferation of compute-intensive workloads (e.g., AI, IoT, cryptocurrencies, and augmented and virtual reality), all of which drive the need for high-density racks.

Our recent annual surveys found that racks with densities of 20 kW and higher are becoming a reality for many data centers (we asked about the highest rack density), but not to the degree forewarned. Year over year, most respondents said their highest-density racks were in the 10-19 kW range, which is not enough to merit wholesale technical changes. When rack densities exceed 20-25 kW, direct liquid cooling and precision air cooling become more economical and efficient. Based on what we see in the field, such high densities are not yet pervasive enough to affect most data centers.

This does not mean the trend should be ignored. It is clear from our latest research that the mean rack density in data centers is rising steadily, as the figure below shows. Excluding respondents reporting above 30 kW as high-performance outliers, the mean density in our 2020 survey sample was 8.4 kW/rack. This is consistent with other industry estimates and safely within the provisioned range of most facilities.

In our 2020 survey, we asked about the most common (modal average) server rack density, which is perhaps a better metric than the overall average. More than two-thirds (71%) reported a modal average below 10 kW/rack, with just 16% widely deploying rack densities of 20 kW or higher (Figure 7). The most common density band was 5-9 kW/rack. Overprovisioning of power and cooling is likely a more common problem than underprovisioning caused by rising rack densities.
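The two summary statistics used above can be illustrated with a short sketch. The response values below are hypothetical, not Uptime Institute survey data, and the density bands mirror the survey's reporting buckets: we compute the mean after excluding >30 kW outliers, and the modal (most common) density band.

```python
from statistics import mean
from collections import Counter

# Hypothetical survey responses: reported rack density in kW/rack.
# Illustrative values only -- not actual survey data.
densities = [4, 6, 7, 8, 9, 5, 12, 15, 7, 8, 6, 22, 35, 40, 9, 11]

# Mean density, excluding responses above 30 kW as high-performance
# outliers (mirroring the survey's treatment of outliers).
trimmed = [d for d in densities if d <= 30]
mean_density = mean(trimmed)

def band(kw):
    """Bucket a response into a reporting band (assumed band edges)."""
    if kw < 5:
        return "0-4 kW"
    if kw < 10:
        return "5-9 kW"
    if kw < 20:
        return "10-19 kW"
    return "20+ kW"

# Modal average: the band reported most often across all responses.
modal_band = Counter(band(d) for d in densities).most_common(1)[0][0]

print(f"Mean (outliers excluded): {mean_density:.1f} kW/rack")
print(f"Modal band: {modal_band}")
```

The two metrics answer different questions: the trimmed mean tracks overall provisioning needs, while the modal band shows what a typical rack actually draws, which is why the survey reports both.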

Figure 7: Modal average rack power density

Assuming racks will increasingly be filled with higher-powered, well-utilized servers, we anticipate that the modal average kW/rack will increase over time. Figure 8 shows that for the largest group of organizations (roughly half of those surveyed), average density is increasing, albeit only slowly.

Figure 8: Rack density is changing

We expect density to keep rising. Our research shows that the use of virtualization and software containers pushes IT utilization up, in turn requiring more power and cooling. With Moore's law slowing, further IT performance gains increasingly depend on multi-core processors and, consequently, more power consumption per operation, especially if utilization is low. Even setting aside new workloads, rising density can be regarded as a long-term trend.

But, as our 2020 survey findings demonstrate, the expectation of 20 kW racks throughout the industry has not materialized. We believe that many compute-intensive workloads — those that will significantly push up power use, rack density and heat — currently reside in a relatively small group of hyperscale cloud data centers and are consumed by organizations as a service.

Want to know more about this and other data center trends and strategies? Download a copy of our complete 2020 survey.