Technology Refresh

Optimizing server refresh cycles with an aging Moore’s law

Hardware refresh is the process of replacing older, less efficient servers with newer machines that offer more compute capacity and better energy efficiency. However, there is a relatively recent complication to the refresh cycle: the slowing of Moore’s law. There is still a very strong case for energy savings when replacing servers that are up to nine years old. However, the case for refreshing more recent servers — say, up to three years old — may be far less clear, because of the stagnation in Moore’s law over the past few years.

Moore’s law refers to the observation made by Gordon Moore (co-founder of Intel) that the transistor count on microchips would double every two years. This implied that transistors would become smaller and faster, while drawing less energy. Over time, the doubling in performance per watt was observed to happen around every year and a half.

It is this doubling in performance per watt that underpins the major opportunity for increasing compute capacity while increasing efficiency through hardware refresh. But in the past five years, it has been harder for Intel (and its immediate rival AMD) to maintain the pace of improvement. This raises the question: Are we still seeing these gains from recent and forthcoming generations of central processing units (CPUs)? If not, the hardware refresh case will be undermined … and suppliers are unlikely to be making that point too loudly.

To answer this question, Uptime Institute Intelligence analyzed performance data from the Standard Performance Evaluation Corporation (SPEC; https://www.spec.org/). The SPECpower dataset used contains energy performance results from hundreds of servers, based on the SPECpower server energy performance benchmark. To keep the trend consistent and eliminate potential outlier bias among reported servers (e.g., high-end servers versus volume servers), only dual-socket servers were considered in our analysis. The dataset was then broken down into 18-month intervals (based on the published release date of each server in SPECpower) and the performance averaged for each period. The results (server performance per watt) are shown in Figure 1, along with the trend line (polynomial, order 3).
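
For illustration, the bucketing and averaging described above can be reproduced in a few lines. The sketch below assumes a hypothetical CSV export of SPECpower results with columns for socket count, publication date and overall performance per watt; the file and column names are invented for the example, and this is not Uptime Institute's actual analysis code.

```python
# Illustrative sketch only -- file and column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("specpower_results.csv", parse_dates=["publish_date"])

# Keep only dual-socket servers to reduce outlier bias (high-end vs. volume).
df = df[df["sockets"] == 2]

# Bucket results into 18-month periods and average performance per watt.
periods = (
    df.set_index("publish_date")
      .resample("18MS")["perf_per_watt"]
      .mean()
      .dropna()
)

# Fit an order-3 polynomial trend line over the period averages.
x = np.arange(len(periods))
trend = np.polyval(np.polyfit(x, periods.to_numpy(), deg=3), x)

print(periods)
print(trend)
```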

Figure 1 shows how performance increases have started to plateau, particularly over the past two periods. The data suggests that upgrading a 2015 server in 2019 might provide only a 20% boost in processing power for the same number of watts. In contrast, upgrading a 2008/2009 server in 2012 might have given a boost of 200% to 300%.

To further understand the reason behind this, we charted the way CPU technology (lithography) has evolved over time, along with performance and idle power consumption (see Figure 2).


Note: The color-coded vertical bars represent generations — lithography — of processor technology (usually, Intel). For each generation of around three to four years, hundreds of servers are released. The steeper the rise of the orange line (compute performance per watt), the better. For the blue line — power consumption at idle — the steeper the decline, the better.

Figure 2 reveals some interesting insights. At the beginning of the decade, the move from one CPU lithography to the next, e.g., 65 nanometers (nm) to 45 nm, 45 nm to 32 nm, etc., delivered major performance-per-watt gains (orange line), as well as a substantial reduction in idle power consumption (blue line), thanks to the reduction in transistor size and voltage.

However, it is also notable that the introduction of larger numbers of cores to maintain performance gains has had a negative impact on idle power consumption. This can be seen briefly during the 45 nm lithography and very clearly in recent years with 14 nm.

Over the past few years, while lithography stagnated at 14 nm, the increase in performance per watt (at full load) has been accompanied by a steady increase in idle power consumption (perhaps due to the increase in core count to achieve performance gains). This is one reason why the case for hardware refresh for more recent kit has become weaker: Servers in real-life deployments tend to spend a substantial part of their time in idle mode — 75% of the time, on average. As such, the increase in idle power may offset energy gains from performance.
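
To see how this offsetting works, a minimal sketch follows. It uses the 75% idle share quoted above but entirely hypothetical wattages; the figures are invented for illustration, not measured values.

```python
# Hypothetical wattages for illustration; only the 75% idle share comes
# from the text above.
HOURS_PER_YEAR = 8760
IDLE_SHARE = 0.75  # servers spend ~75% of their time in active idle

def annual_kwh(idle_watts: float, active_watts: float, idle_share: float = IDLE_SHARE) -> float:
    """Annual energy (kWh) for a server given its idle and active power draw."""
    idle_kwh = idle_watts * idle_share * HOURS_PER_YEAR / 1000
    active_kwh = active_watts * (1 - idle_share) * HOURS_PER_YEAR / 1000
    return idle_kwh + active_kwh

old_server = annual_kwh(idle_watts=60, active_watts=300)  # hypothetical older server
new_server = annual_kwh(idle_watts=90, active_watts=250)  # hypothetical newer server

print(f"Older server: {old_server:.0f} kWh/yr, newer server: {new_server:.0f} kWh/yr")
# With these made-up numbers the newer server consumes more over a year:
# its higher idle draw more than offsets its better efficiency under load.
```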

This is an important point that will likely have escaped many buyers and operators: If a server spends a disproportionate amount of time in active idle mode — as is the case for most — the focus should be on active idle efficiency (e.g., choosing servers with lower core count) rather than just on higher server performance efficiency, while satisfying overall compute capacity requirements.

It is, of course, a constantly moving picture. The more recent introduction of the 7 nm lithography by AMD (Intel’s main competitor) should give Moore’s law a new lease of life for the next couple of years. However, it has become clear that we are starting to reach the limits of the existing approach to CPU design. Innovation and efficiency improvements will need to be based on new architectures, entirely new technologies and more energy-aware software design practices.


The full report Beyond PUE: Tackling IT’s wasted terawatts is available to members of the Uptime Institute Network here.

Outages drive authorities and businesses to act

Big IT outages are occurring with growing regularity, many with severe consequences. Executives, industry authorities and governments alike are responding with more rules, calls for more transparency and a more formal approach to end-to-end, holistic resiliency.

Creeping criticality

IT outages and data center downtime can cause huge disruption. That is hardly news: veterans with long memories can remember severe IT problems caused by power outages, for example, back in the early 1990s.

Three decades on, the situation is vastly different. Almost every component and process in the entire IT supply chain has been engineered, re-engineered and architected for the better, with availability a prime design criterion. Failure avoidance and management, business continuity and data center resiliency have become disciplines, informed by proven approaches and supported by real-time data and a vast array of tools and systems.

But there is a paradox: The very success of IT, and of remotely delivered services, has created a critical dependency on IT in almost every business and for almost every business process. This dependency has radically increased in recent years. Outages — and there are more of them — have a more immediate, wider and bigger impact than in the past.

A particular issue that has affected many high-profile organizations, especially in industries such as air transport, finance and retail, is “asymmetric criticality” or “creeping criticality.” This refers to a situation in which the infrastructure and processes have not been upgraded or updated to reflect the growing criticality of the applications or business processes they support. Some of the infrastructure has a 15-year life cycle, a timeframe out of sync with the far faster pace of innovation and change in the IT market.

While the level of dependency on IT is growing, another big set of changes is still only partway through: the move to cloud and distributed IT architectures (which may or may not involve the public cloud). Cloud and distributed applications enable the move, in part or whole, to a more distributed approach to resiliency. This approach involves replicating data across availability zones (regional clusters of three or more data centers) and using a variety of software tools and approaches: distributed databases, decentralized traffic and workload management, data replication and disaster recovery as a service.
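
As a highly simplified illustration of the decentralized traffic management this involves, the sketch below probes hypothetical per-zone endpoints and fails over to the first healthy one. The endpoints and health-check path are invented; real deployments rely on load balancers, DNS and orchestration platforms rather than client-side loops like this.

```python
# Minimal failover sketch across availability zones; endpoints are hypothetical.
import urllib.request

ZONE_ENDPOINTS = [
    "https://app.zone-a.example.com",
    "https://app.zone-b.example.com",
    "https://app.zone-c.example.com",
]

def first_healthy_endpoint(endpoints=ZONE_ENDPOINTS, timeout=2):
    """Return the first zone endpoint that answers its health check."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(f"{url}/healthz", timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # zone unreachable or unhealthy -- try the next one
    raise RuntimeError("No healthy availability zone found")
```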

These approaches can be highly effective but bring two challenges. The first is complexity and cost: these architectures are difficult to set up, manage and configure, even for a customer with no direct responsibility for the infrastructure (Uptime Institute data suggests that difficulties with IT and software contribute to ever more outages). The second, for most customers, is a loss of control, visibility and accountability. This loss of visibility is now troubling regulators, especially in the financial services sector, where authorities in the United States (US), Europe, the United Kingdom (UK) and elsewhere now plan to exercise more oversight.

Will outages get worse?

Are outages becoming more common or more damaging? The answer depends on the exact phrasing of the question: neither the number nor the severity of outages is increasing as a proportion of the level of IT services being deployed — in fact, reliability and availability are probably increasing, albeit perhaps not significantly.

But the absolute number of outages is clearly increasing. In both our 2018 and 2019 global annual surveys, half of respondents (almost exactly 50%) said their organization had suffered a serious data center or IT outage in the past three years – and it is known that the number of data centers has risen significantly during this time. Our data also shows the impact of these outages is serious or severe in almost 20% of cases, with many industry sectors, including public cloud and colocation, suffering problems.

What next?

The industry is now at an inflection point; whatever the overall rate of outages, the impact of outages at all levels has become more public, has more consequential effects, and is therefore more costly. This trend will continue for several years, as networks, IT and cloud services take time to mature and evolve to meet the heavy availability demands put upon them. More high-profile outages can be expected, and more sectors and governments will start examining the nature of critical infrastructure.

This has already started in earnest: In the UK, the Bank of England is investigating large banks’ reliance on cloud as part of a broader risk-reduction initiative for financial digital services. The European Banking Authority specifically states that an outsourcer/cloud operator must allow site inspections of data centers. And in the US, the Federal Reserve has conducted a formal examination of at least one Amazon Web Services (AWS) data center, in Virginia, with a focus on its infrastructure resiliency and backup systems. More site visits are expected.

Authorities in the Netherlands, Sweden and the US have also been examining the resiliency of 911 services after a series of failures. And in the US, the Government Accountability Office published an analysis to determine what could be done about the impact and frequency of IT outages at airlines. Meanwhile, data centers themselves will continue to be the most resilient and mature component (and with Uptime Institute certification, can be shown to be designed and operated for resiliency). There are very few signs that any sector of the market (enterprise, colocation or cloud) plans on downgrading physical infrastructure redundancy.

As a result of the high impact of outages, a much greater focus on resiliency can be expected, with best practices and management, investment, technical architectures, transparency and reporting, and legal responsibility all under discussion.


The full report Ten data center industry trends in 2020 is available to members of the Uptime Institute Network here.

Data center energy use goes up and up and up

Energy use by data centers and IT will continue to rise, putting pressure on energy infrastructure and raising questions about carbon emissions. The drivers for more energy use are simply too great to be offset by efficiency gains.

Drivers

Demand for digital services has seen sustained, exceptional growth over the past few years — and with it, the energy consumption of the underlying infrastructure has risen steadily as well. This has given rise to concerns about the ability of the energy industry to effectively supply data centers in some geographies — and the continuing worries about the sector’s growing carbon footprint.

Although there is a shortage of reliable, comprehensive data about the industry’s use of energy, it is likely that some models have underestimated energy use and carbon emissions and that the issue will become more critical in the years ahead.

There are some standout examples of IT energy use. Bitcoin mining, for example, is reliably estimated to have consumed over 73 terawatt-hours (TWh) of energy in 2019. This equates to the electricity use of 6.8 million average US households, or 20 million UK households. This is one cryptocurrency — of over 1,500 — and just one application area of blockchains.

Social media provides another example of uncontrolled energy use. Research by Uptime Intelligence shows that every time an image is posted on Instagram by the Portuguese soccer star Cristiano Ronaldo (who at the time of writing had the most followers on the platform), his more than 188 million followers consume over 24 megawatt-hours (MWh) of energy to view it.

Media streaming, which represents the biggest proportion of global traffic and which is rising steadily and globally, has become the energy guzzler of the internet. According to our analysis, streaming a 2.5-hour high definition (HD) movie consumes 1 kilowatt-hour (kWh) of energy. But for 4K (Ultra HD) streaming — expected to become more mainstream in 2020 — this will be closer to 3 kWh, a three-fold increase.
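
As a back-of-the-envelope check, the sketch below converts the figures quoted above into per-hour and per-view energy. All inputs are taken directly from the text; only the arithmetic is added.

```python
# Back-of-the-envelope arithmetic using only figures quoted above.
HD_MOVIE_KWH, MOVIE_HOURS = 1.0, 2.5   # 2.5-hour HD movie ~ 1 kWh
UHD_MOVIE_KWH = 3.0                    # the same movie in 4K ~ 3 kWh
POST_MWH, FOLLOWERS = 24.0, 188e6      # one Instagram post viewed by 188M followers

print(f"HD streaming: {HD_MOVIE_KWH / MOVIE_HOURS:.2f} kWh per hour")        # 0.40
print(f"4K streaming: {UHD_MOVIE_KWH / MOVIE_HOURS:.2f} kWh per hour")       # 1.20
print(f"Energy per view: {POST_MWH * 1e6 / FOLLOWERS:.3f} Wh per follower")  # ~0.128
```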

Data from the most developed countries shows what can be expected elsewhere. In the UK, which has more than 94% internet penetration, monthly household broadband data consumption increased from 17 gigabytes (GB) in 2011 to 132 GB in 2016, according to official Ofcom data — a sustained 50% increase year-on-year for five years. (The growth figure is much higher in other parts of the world such as Asia and Africa.) Internet penetration, standing at 58% globally in 2019, is expected to increase by 10% in 2020.
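
The 50% year-on-year figure can be checked directly from the two Ofcom data points quoted above:

```python
# Compound growth implied by the Ofcom figures quoted above.
gb_2011, gb_2016, years = 17, 132, 5
annual_growth = (gb_2016 / gb_2011) ** (1 / years) - 1
print(f"Implied growth: {annual_growth:.0%} per year")  # ~51%, i.e., a sustained ~50%
```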

This increase in demand is a big driver — although not the only one — for more infrastructure and more energy consumption in cloud, colocation and some enterprise data centers. But a new factor has yet to kick in: 5G.

While it will take a few years for 5G to further mature and become widespread, it is widely expected that the rollout of 5G from 2020 will substantially accelerate data growth trends, with many new types of digital services in domains such as smart cities, the internet of things (IoT) and transportation, among many others. The increased bandwidth compared with 4G will lead to increased demand for higher resolution content and richer media formats (e.g., virtual reality) as soon as late 2020, with demand and energy consumption rising more steeply after that.

The role of blockchain (of which Bitcoin is just an example) and its impact on energy consumption is still to be fully determined, but if the takeup is on a large scale, it can only be an upward force. Most analysts in this area have predicted a dramatic rise in blockchain adoption beyond cryptocurrency in 2020, helped by new offerings such as the AWS blockchain service. Not all blockchain models are the same, but it inherently means a decentralized architecture, which requires extensive infrastructure to accommodate the replication of data. This consumes more energy than traditional centralized architectures.

Bitcoin is an example of a blockchain that uses Proof of Work as a consensus mechanism — and such models are extremely energy-intensive, requiring multiple parties to solve complex mathematical problems. While alternatives to this model (e.g., Proof of Stake) are likely to gain widespread commercial adoption, the uptake to date has been slow.
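
To make the energy argument concrete, the toy proof-of-work loop below shows why the mechanism is computationally (and therefore energy) intensive: a miner must grind through nonces until a hash falls below a difficulty target. This is a deliberately simplified sketch, not Bitcoin's actual protocol.

```python
# Toy proof-of-work loop -- a simplified sketch, not Bitcoin itself.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Search for a nonce whose SHA-256 hash clears the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce   # valid proof of work found
        nonce += 1         # otherwise keep hashing -- this is where the energy goes

print(mine(b"example block header"))  # expect roughly 2**20 (~1 million) hashes
```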

Energy consumption and global IT

Several reports have been published in recent years on IT energy consumption and its predicted growth rates. An International Energy Agency (IEA) report published in 2019 noted that workloads and internet traffic will double, but it also forecast that data center energy demand will remain flat to 2021, due to efficiency trends. It cited various references for the basic research.

But Uptime Institute Intelligence is wary of this prediction and intends to collaborate with various parties in 2020 to research this further. There are very strong factors driving up IT energy consumption, and some of the existing data on IT energy use contradicts the IEA figures. The IEA report, for example, stated that global data center energy consumption was 197.8 TWh in 2018 and is expected to drop slightly by 2021. However, research by the European Union’s (EU’s) EURECA (EU Resource Efficiency Coordination Action) Project found that European data centers consumed 130 TWh in 2017, whereas Greenpeace put energy consumption by the Chinese data center industry at 160 TWh in 2018. This suggests an annual total for China and Europe alone in the neighborhood of 290 TWh, far higher than the IEA global figures.

It is true that the explosive increase in IT demand will not translate directly into the same rate of growth for infrastructure energy consumption (due to increased IT energy efficiency). However, given the exponential rate of growth, it is likely that demand will substantially outpace the gains from efficiency practices over the next five years.

In US data centers, the law of diminishing returns may begin to limit the impact of energy savings. For example, at the data center level, best practices such as hot/cold aisle containment, installation of blanking plates and raising set point temperature have already been widely deployed; this can be seen in the substantial drop in power usage effectiveness (PUE) between 2011 and 2014. However, since 2014, PUE has not dropped much, and in 2019, we noticed a slight increase in the average annual PUE reported by respondents to our global data center survey. Similarly, with IT hardware, Moore’s law has slowed down, and newer servers are not maintaining the same efficiency improvements seen in the past.

Uptime Institute expects the strong growth in the IT sector to be sustained over the next five years, given the well-understood demand patterns and the existing technologies coming into large-scale adoption. Our preliminary research suggests that IT energy consumption will rise steadily too, by as much as 10% in 2020, but further research will be conducted to develop and validate these forecasts.


The full report Ten data center industry trends in 2020 is available to members of the Uptime Institute Network here.

Data Center Investment

Capital inflow boosts the data center market

Data centers are no longer a niche or exotic investment among mainstream institutional buyers, which are swarming to the sector. There is now a buyer for almost every type of data center — including traditional-infrastructure investors and sovereign wealth funds. How might this new mix of capital sources change the broader data center sector?

Traditional buyers will remain active

The number of data center acquisitions in 2019 will likely end up being a record high. Among the most active buyers, historically and to date, are data center companies, such as cloud, colocation and wholesale providers, as well as data center real estate investment trusts (REITs). They will continue to buy and sell, typically for strategic reasons — which means they tend to hold onto acquired assets.

Most are buying to expand their geographic footprint — not just to reach more customers, but also to be more appealing to large cloud providers, which are prized tenants because they attract additional customers and also tend to be long-term tenants. (Even if a cloud provider builds its own data center in a region, it will usually continue to lease space, provided costs are low, to avoid migration costs and challenges.) Facilities with rich fiber connectivity and/or in locations with high demand but limited space will be targets, including in Amsterdam, Frankfurt, Paris, northern Virginia and Singapore. Many will seek to expand in Asia, although to date there has been limited opportunity. They will also continue to sell off nonstrategic, third-tier assets or those in need of expensive refurbishment.

For many years, data center companies and REITs have competed with private equity firms for deals. Private equity companies have tended to snap up data centers, with a view toward selling them relatively quickly and at a profit. This has driven up the number of acquisition deals in the sector, and in some markets has driven up valuations.

More long-term capital sources move in

So, what is changing? More recently, firms such as traditional infrastructure investors and sovereign wealth funds have been acquiring data centers. Traditional infrastructure investors, which historically have focused on assets ranging from utility pipelines to transportation projects, have been the most active among new buyers.

The newcomers are similar in that they tend to have longer return periods (that is, longer investment timelines) and lower return thresholds than private equity investors. Many are buying data centers that can be leased, including wholesale for a single large cloud customer and colocation for multiple tenants.

Traditional infrastructure investors, in particular, will likely continue to take over data centers, which they now include as part of their definition of infrastructure. This means they view them as long-term assets that are needed regardless of macroeconomic changes and that provide a steady return. These investors include infrastructure funds, traditional (that is, non-data center) REITs and real estate investors.

In addition to being attracted to the sector’s high yields and long-term demand outlook, new investors are also simply responding to demand from enterprises looking to sell their data centers or to switch from owning to leasing. As discussed in the section “Pay-as-you-go model spreads to critical components” in our Trends 2020 report, the appetite of enterprises for public cloud, colo and third-party IT services continues to grow.

An influx of buyers is matching the needs of the sellers. As enterprises outsource more workloads, they need less capacity in their own data centers. Many are (or are in the process of) consolidating their owned footprints by closing smaller and/or regional data centers and moving mission-critical IT into centralized, often larger, facilities. Frequently, the data centers they’re exiting are in prime locations, such as cities, where demand for lease space is high, or in secondary regional markets that are under-served by colos. All types of investors are buying these enterprise data centers and converting them into multi-tenant operations.

Also common are sales with leaseback provisions (“leasebacks”), whereby an enterprise sells its data center and then leases it back from the new owner, either partially or wholly. This enables enterprises to maintain operational control over their IT but shed the long-term commitment and risk of ownership. This arrangement also affords flexibility, as the enterprise often only leases a portion of the data center. Small or regional colos are also seeking to sell their data centers, realizing that they lack the economies of scale to effectively compete with multi-national competitors.

More joint ventures

Another trend has been an increase in joint ventures (JVs), particularly by data center REITs, with these new types of investors. We expect more JVs will form, including with more infrastructure funds, to back more very large leased data centers for large cloud providers, which are struggling to expand fast and seek more leasing options (particularly build-to-suit facilities). At the same time, more enterprise data centers will be sold — increasingly to investors with long-term horizons — and converted into multi-tenant facilities.

The table below highlights some of the non-traditional capital sources that acquired data centers in 2019.

As more of this type of long-term-horizon money enters the sector, the portion of facilities that are owned by short-term-horizon private equity investors will be diluted.

Overall, the new types of capital investors in data centers, with their deep pockets and long return timelines, could boost the sector. They are likely, for example, to make it easier for any enterprise wishing to give up data center ownership to do so.


The full report Ten data center industry trends in 2020 is available to readers here.

Surveillance Capitalism and DCIM

In her book “The Age of Surveillance Capitalism,” the Harvard scholar Shoshana Zuboff describes how some software and service providers have been collecting vast amounts of data, with the goal of tracking, anticipating, shaping and even controlling the behavior of individuals. She sees it as a threat to individual freedom, to business and to democracy.

Zuboff outlines the actions, strategies and excesses of Facebook, Google and Microsoft in some detail. Much of this is well-known, and many legislators have been grappling with how they might limit the activities of some of these powerful companies. But the intense level of this surveillance extends far beyond these giants to many other suppliers, serving many markets. The emergence of the internet of things (IoT) accelerates the process dramatically.

Zuboff describes, for example, how a mattress supplier uses sensors and apps to collect data on sleepers’ habits, even after they have opted out; how a doll listens to and analyzes snippets of children’s conversations; and, nearer to home for many businesses, how Google’s home automation system Nest is able to harvest and exploit data about users’ behavior (and anticipated behavior) from their use of power, heating and cooling. Laws such as Europe’s General Data Protection Regulation offer theoretical protection but are mostly worked around by fine print: Privacy policies, Zuboff says, should be renamed surveillance policies.

All this is made possible, of course, because of ubiquitous connected devices; automation; large-scale, low-cost compute and storage; and data centers. And there is an irony — because many data center operators are themselves concerned about how software, service and equipment suppliers are collecting (or hope to collect) vast amounts of data about the operation of their equipment and their facilities. At one recent Uptime Institute customer roundtable with heavy financial services representation, some attendees strongly expressed the view that suppliers should not (and would not) be allowed to collect and keep data regarding their data centers’ performance. Others, meanwhile, see the suppliers’ request to collect data, and leverage the insights from that data, as benign and valuable, leading to better availability. If the supplier benefits in the process, they say, it’s a fair trade.

Of all the data center technology suppliers, Schneider Electric has moved furthest and fastest on this front. Its EcoStruxure for IT service is a cloud-based data center infrastructure management (DCIM) product (known as data center management as a service, or DMaaS) that pulls data from its customers’ many thousands of devices, sensors and monitoring systems and pools it into data lakes for analysis. (By using the service, customers effectively agree to share their anonymized data.) With the benefit of artificial intelligence (AI) and other big-data techniques, it is able to use this anonymized data to build performance models, reveal hitherto unseen patterns, make better products and identify optimal operational practices. Some of the insights are shared back with customers.

Schneider acknowledges that some potential customers have proven to be resistant and suspicious, primarily for security reasons (some prefer an air gap, with no direct internet connections for equipment). But the company also says that take-up of its DCIM/DMaaS products has risen sharply since it began offering the low-cost, cloud-based monitoring services. Privacy concerns are not so great that they deter operators from taking advantage of a service they like.

Competitors are also wary. Some worry about competitive advantage, that a big DMaaS company will have the ability to see into a data center as surely as if its staff were standing in the aisles — indeed, somewhat better. And it is true: a supplier with good data and models could determine, with a fairly high degree of certainty, what will likely happen in a data center tomorrow and probably next year — when it might reach full capacity, when it might need more cooling, when equipment might fail, even when more staff are needed. That kind of insight is hugely valuable to the customer — and to any forewarned supplier.

To be fair, these competitive concerns aren’t exactly new: services companies have always had early access to equipment needs, for example, and remote monitoring and software as a service are now common in all industries. But the ability to pool data, divine new patterns, and predict and even shape decisions, almost without competition … this is a newer trend and arguably could stifle competition and create vendor lock-in in a way not seen before. With the benefit of AI, a supplier may know when cooling capacity will need to be increased even before the customer has thought about it.

Uptime has discussed the privacy (surveillance?) issues with executives at several large suppliers. Unsurprisingly, those who are competitively at most risk are most concerned. For others, their biggest concern is simply that they don’t have enough data to do this effectively themselves.

Schneider, which has a big market share but is not, arguably, in a position of dominance, says that it addressed both privacy and security fears when it designed and launched EcoStruxure. It says that the way data is collected and used is fully under customer control. The (encrypted) machine data that is collected by the EcoStruxure DMaaS is seen only by a select number of trusted developers, all of whom are under nondisclosure agreements. Data is tagged to a particular customer via a unique identifier to ensure proper matching, but it is fully segregated from other customers’ data and anonymized when used to inform analytics. These insights from the anonymized models may be shared with all customers, but neither Schneider nor anyone else can identify particular sites.
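
As a generic illustration of the pattern described above (tagging for matching, segregation, and anonymization before pooling), the sketch below uses a salted one-way hash as the anonymization step. It is not Schneider's implementation; the field names and the hashing approach are invented for the example.

```python
# Illustrative sketch of tag-then-anonymize telemetry handling; field names
# and the salted-hash approach are hypothetical, not a vendor's actual design.
import hashlib
import uuid

def tag_reading(customer_id: str, device: str, metric: str, value: float) -> dict:
    """Telemetry record tagged with a customer identifier for proper matching."""
    return {
        "customer_key": customer_id,   # used only within the customer's segregated store
        "record_id": str(uuid.uuid4()),
        "device": device,
        "metric": metric,
        "value": value,
    }

def anonymize_for_pool(record: dict, salt: str) -> dict:
    """Strip the customer key before the record joins the shared analytics pool,
    keeping only a salted one-way hash so records from one site stay grouped."""
    site_hash = hashlib.sha256((salt + record["customer_key"]).encode()).hexdigest()[:16]
    return {
        "site": site_hash,             # not mappable back to a customer without the salt
        "device": record["device"],
        "metric": record["metric"],
        "value": record["value"],
    }
```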

Using Schneider’s services, customers can see their own data, and see it in context — they see how their data center or equipment compares with the aggregated pool of data, providing valuable insights. But still, outside of the small number of trusted developers, no one but the customer sees it — unless, that is, the customer appoints a reseller, advisor or Schneider to look at the data and give advice. At that point, the supplier does have an advantage, but the duty is on the customer to decide how to take that advice.

None of this seems to raise any major privacy flags, but it is not clear that Zuboff would be entirely satisfied. For example, it might be argued that the agreement between data center operators and their suppliers is similar to the common practice of the “surveillance capitalists.” These giant consumer-facing companies offer a superior service/product in exchange for a higher level of data access, which they can use as they like; anyone who denies access to the supplier is simply denied use of the product. Very few people ever deny Google or Apple access to their location, for example, because doing so will prevent many applications from working.

While DCIM is unusually wide in its scope and monitoring capability, this is not just about software tools. Increasingly, a lot of data center hardware, such as uninterruptible power supplies, power distribution units or blade servers, requires access to the host for updates and effective monitoring. Denying this permission reduces the functionality — probably to the point where it becomes impractical.

And this raises a secondary issue that is not well covered by most privacy laws: Who owns what data? It is clear that the value of a supplier’s AI services grows significantly with more customer data. The relationship is symbiotic, but some in the data center industry are questioning the balance. Who, they ask, benefits the most? Who should be paying whom?

The issue of permissions and anonymity can also get muddy. In theory, an accurate picture of a customer (a data center operator) could be built (probably by an AI-based system) using data from utilities, cooling equipment, network traffic and on-site monitoring systems — without the customer ever actually owning or controlling any of that data.

Speaking on regulation and technology at a recent Uptime meeting held in the United States, Alex Howard, a Washington-based technology regulation specialist, advised that customers could not be expected to track all this, and that more regulation is required. In the meantime, he urged businesses to take a vigilant stance.

Uptime’s advice, reflecting clients’ concerns, is that DMaaS provides a powerful and trusted service — but operators should always consider worst cases and how a dependency might be reversed. Big data provides many tempting opportunities, and anonymized data can, in theory, be breached and sometimes de-anonymized. Businesses, like governments, can change over time, and armed with powerful tools and data, they may cut corners or breach walls if the situation — or a new owner or government — demands it. This is now a reality in all business.

For specific recommendations on using DMaaS and related data center cloud services, see our report Very smart data centers: How artificial intelligence will power operational decisions.

Data centers without diesel generators: The groundwork is being laid…

In 2012, Microsoft announced that it planned to eliminate engine generators at its big data center campus in Quincy, Washington. Six years later, the same group, with much the same aspirations, filed for permission to install 72 diesel generators, which have an expected life of at least a decade. This example illustrates clearly just how essential engine generators are to the operation of medium and large data centers. Few — very few — can even contemplate operating production environments without diesel generators.

Almost every operator and owner would like to eliminate generators and replace them with a more modern, cleaner technology. Diesel generators are dirty — they emit both carbon dioxide and particulates, which means regulation and operating restrictions; they are expensive to buy; they are idle most of the time; and they have an operational overhead in terms of testing, regulatory conformity and fuel management (i.e., quality, supply and storage logistics).

But to date, no other technology so effectively combines low operating costs, energy density, reliability, local control and, as long as fuel can be delivered, open-ended continuous power.

Is this about to change? Not wholly, immediately or dramatically — but yes, significantly. The motivation to eliminate generators is becoming ever stronger, especially at the largest operators (most have eliminated reportable carbon emissions from their grid supply, leaving generators to account for most of the rest). And the combination of newer technologies, such as fuel cells, lithium-ion (Li-ion) batteries and a lot of management software, is beginning to look much more effective. Even where generators are not eliminated entirely, we expect more projects from 2020 onward will involve less generator cover.

Four areas of activity

There are four areas of activity in terms of technology, research and deployment that could mean that in the future, in some situations, generators will play a reduced role — or no role at all.

Fuel cells and on-site continuous renewables

The opportunity for replacing generators with fuel cells has been intensively explored (and to a lesser extent, tried) for a decade. At least three suppliers — Bloom Energy (US), Doosan (South Korea) and SOLIDPower (Germany) — have some data center installations. Of these, Bloom’s success with Equinix is best known. Fuel cells are arguably the only technology, after generators, that can provide reliable, on-site, continuous power at scale.

The use of fuel cells for data centers is controversial and hotly debated. Some, including the city of Santa Clara in California, maintain that fuel cells, like generators, are not clean and green, because most use fossil fuel-based gas (or hydrogen, which usually requires fossil fuel-based energy to isolate). Others say that using grid-supplied or local storage of gas introduces risks to availability and safety.

These arguments are possibly easily overcome, given the reliability of gas and the fact that very few safety issues ever occur. But fuel cells have two other disadvantages: first, they cost more than generators on a dollar per kilowatt-hour ($/kWh) basis and have mostly proven economic only when supported by grants; and second, they require a continuous, steady load (depending on the fuel cell architecture). This causes design and cost complications.

The debate will continue but even so, fuel cells are being deployed: a planned data center campus in Connecticut (owner/operator currently confidential) will have 20 MW of Doosan fuel cells, Equinix is committing to more installations, and Uptime Institute is hearing of new plans elsewhere. The overriding reason is not cost or availability, but rather the ability to achieve a dramatic reduction in carbon dioxide and other emissions and to build architectures in which the equipment is not sitting idle.

The idea of on-site renewables as a primary source of at-scale energy has gained little traction. But Uptime Institute is seeing one trend gathering pace: the colocation of data centers with local energy sources such as hydropower (or, in theory, biogas). At least two projects are being considered in Europe. Such data centers would draw from two separate but local sources, providing a theoretical level of concurrent maintainability should one fail. Local energy storage using batteries, pumped storage and other technologies would provide additional security.

Edge data centers

Medium and large data centers have large power requirements and, in most cases, need a high level of availability. But this is not always the case with smaller data centers, perhaps below 500 kilowatts (kW), of which there are expected to be many, many more in the decade ahead. Such data centers may more easily duplicate their loads and data to similar data centers nearby, may participate in distributed recovery systems, and may, in any case, cause fewer problems if they suffer an outage.

But above all, these data centers can deploy batteries (or small fuel cells) to achieve a sufficient ride-through time while the network redeploys traffic and workloads. For example, a small, shipping container-sized 500 kWh Li-ion battery could provide all uninterruptible power supply (UPS) functions, feed power back to the grid and provide several hours of power to a small data center (say, 250 kW) in the event of a grid outage. As the technology improves and prices drop, such deployments will become commonplace. Furthermore, when used alongside a small generator, these systems could provide power for extended periods.
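
A rough sizing sketch using the figures in the example above (500 kWh of Li-ion capacity, a 250 kW load) follows; a real design would also account for inverter losses, depth-of-discharge limits and battery aging.

```python
# Ride-through estimate from the example figures above; losses, depth-of-
# discharge limits and battery aging are deliberately ignored.
def ride_through_hours(usable_kwh: float, load_kw: float) -> float:
    return usable_kwh / load_kw

print(ride_through_hours(500, 250))  # 2.0 hours at a steady 250 kW load
print(ride_through_hours(500, 125))  # 4.0 hours if the load can be halved
```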

Cloud-based resiliency

When Microsoft, Equinix and others speak of reducing their reliance on generators, they are mostly referring to the extensive use of alternative power sources. But the holy grail for the hyperscale operators, and even smaller clusters of data centers, is to use availability zones, traffic switching, replication, load management and management software to rapidly re-configure if a data center loses power.

Such architectures are proving effective to a point, but they are expensive, complex and far from fail-safe. Even with full replication, the loss of an entire data center will inevitably cause performance problems. For this reason, all the major operators continue to build data centers with concurrent maintainability and on-site power at the data center level.

But as software improves and Moore’s law continues to advance, will this change? Based on the state of the art in 2019 and the plans for new builds, the answer is categorically “not yet.” But in 2019, at least one major operator conducted tests to determine its resiliency using these technologies. The likely goal would not be to eliminate generators altogether, but to reduce the portion of the workload that would need generator cover.

Li-ion and smart energy

For the data center designer, one of the most significant advances of the past several years is the maturing — technically and economically — of the Li-ion battery. From 2010 to 2018, the cost of Li-ion batteries (in $ per kWh) fell 85%, according to BloombergNEF (New Energy Finance). Most analysts expect prices to continue to fall steadily for the next five years, with large-scale manufacturing being the major reason. While this is no Moore’s law, it is creating an opportunity to introduce a new form of energy storage in new ways — including the replacement of some generators.

It is early days, but major operators, manufacturers and startups alike are all looking at how they can use Li-ion storage, combined with multiple forms of energy generation, to reduce their reliance on generators. Perhaps this should not be seen as the direct replacement of generators with Li-ion storage, since that is not likely to be economic for some time, but rather as the use of Li-ion storage not just as a standard UPS, but more creatively and more actively. For example, combined with load shifting and closing down applications according to their criticality, UPS ride-throughs can be dramatically extended and generators can be turned on much later (or not at all). Some may even be eliminated. Trials and pilots in this area are likely to be initiated or publicized in 2020 or soon after.
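
A minimal sketch of that idea follows, with hypothetical load tiers: shedding less-critical load in stages stretches the ride-through of a fixed battery reserve well beyond what running everything would allow.

```python
# Hypothetical load tiers for illustration: shedding load by criticality
# extends the ride-through of a fixed battery reserve.
def ride_through_minutes(battery_kwh: float, stages: list) -> float:
    """stages: list of (load_kw, minutes_at_this_load); the final stage
    runs until the battery is exhausted, so its duration is ignored."""
    total_minutes, remaining_kwh = 0.0, battery_kwh
    for i, (load_kw, minutes) in enumerate(stages):
        if i == len(stages) - 1:                 # final (most critical) tier
            return total_minutes + remaining_kwh / load_kw * 60
        energy_kwh = load_kw * minutes / 60
        if energy_kwh >= remaining_kwh:          # battery empties mid-stage
            return total_minutes + remaining_kwh / load_kw * 60
        remaining_kwh -= energy_kwh
        total_minutes += minutes
    return total_minutes

# 500 kWh reserve: full 400 kW load for 15 minutes, shed to 250 kW for
# 30 minutes, then carry only the most critical 100 kW until empty.
print(ride_through_minutes(500, [(400, 15), (250, 30), (100, 0)]))  # 210 minutes
print(ride_through_minutes(500, [(400, 0)]))                        # 75 minutes, no shedding
```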

(Alternative technologies that could compete with lithium-ion batteries in the data center include sodium-ion batteries based on Prussian blue electrodes.)


The full report Ten data center industry trends in 2020 is available to members of the Uptime Institute Network here.