Data center operators cautiously support nuclear

The value, role and safety of nuclear power have strongly divided opinion since the 1950s. In 2022, the debate has reached a critical point once again, as energy security and prices cause increasing concern globally (particularly in regions such as Europe) and as the climate crisis requires energy producers to shift toward non- or low-carbon sources — including nuclear.

At the beginning of 2022, Uptime Institute Intelligence forecast that data center operators, in their search for low-carbon, firm (non-intermittent) power sources, would increasingly favor — and even lobby for — nuclear power. The Uptime Institute Global Data Center Survey 2022 shows that data center operators / owners in major data center economies around the world are cautiously in favor of nuclear power. There are, however, significant regional differences (see Figure 1).

Figure 1 Nuclear is needed, say operators in most regions

In both North America and Europe, about three-quarters of data center operators believe nuclear should either play a core long-term role in providing grid power or is necessary for a period of transition. However, Europeans are more wary, with 35% saying nuclear should only play a temporary or transitional role (compared with just 23% in North America).

In Europe, attitudes to nuclear power are complex and politicized. Following the Chernobyl and Fukushima nuclear accidents, green parties in Europe lobbied strongly against nuclear power, with Germany eventually deciding to close all its nuclear power plants. More recently, the Russian invasion of Ukraine has exposed Germany’s over-reliance on energy imports, and many have called for a halt to this nuclear shutdown.

In the US, there is greater skepticism among the general population about climate change being caused by humans, and consequently surveys record lower levels of concern about carbon emissions. Uptime Intelligence’s survey appears to show data center operators in North America also have lower levels of concern about nuclear safety, given their greater willingness for nuclear to play a core role (Figure 1). As the issues of climate change and energy security intensify, this gap in opinion between the US and Europe is likely to close in the years ahead.

In China, not a single respondent thought nuclear power should be phased out — perhaps reflecting both its government’s stance and a strong faith in technology. China, more than most countries, faces major challenges meeting energy requirements and simultaneously reducing carbon emissions.

In Latin America, and in Africa and the Middle East, significantly lower proportions of data center operators think nuclear power should play a key role. This may reflect political reality: far less nuclear power is already in use in those regions, and concerns about political stability, nuclear proliferation and cost will likely limit even peaceful nuclear use.

In practice, data center operators will not have a major impact on the use (or non-use) of nuclear power. Decisions will primarily be made by grid-scale investors and operators and will be steered by government policy. However, large-scale energy buyers can make investments more feasible — and existing plants more economic — if they choose to class nuclear power as a clean (zero-carbon) energy source and include nuclear in power purchase agreements. They can also benefit by siting their data centers in regions where nuclear is a major energy source. Early-stage discussions around the use of small modular reactors (SMRs) for large data center campuses (see Data center operators ponder the nuclear option) are, at present, just that — exploratory discussions.

Vegetable oil promises a sustainable alternative to diesel

The thousands of gallons of diesel that data centers store on-site to fuel backup generators in the event of a grid outage are hard to ignore — and do little to assist operators’ drive toward tougher sustainability goals.

In the past two years, an alternative to diesel made from renewable materials has found support among a small number of data center operators: hydrotreated vegetable oil (HVO). HVO is a second-generation biofuel, chemically distinct from “traditional” biodiesel. It can be manufactured from waste food stocks, raw plant oils, used cooking oils or animal fats, and can either be blended with petroleum diesel or used at 100% concentration.

This novel fuel reduces carbon dioxide (CO2) emissions by up to 90%, particulate matter by 10% to 30% and nitrogen oxides by 6% to 15%, when compared with petroleum diesel.

Operators that have deployed, or are testing, 100% HVO in their data centers include the colocation providers Kao Data, Ark Data Centres and Datum Datacentres in the UK, Compass Datacenters in the US, Stack Infrastructure in Canada and Interxion in France. Equinix, the largest colocation provider, said in its most recent (2021) sustainability report that it was piloting HVO at multiple sites and investigating its supply chain for a “transition away from diesel.” Microsoft is using a blend of petroleum diesel and at least 50% HVO in its cloud data centers in Sweden, which launched at the end of 2021.

There are several benefits to using 100% HVO in addition to a lower carbon footprint (see Table 1). It can be stored for up to 10 years without major changes to its properties, unlike petroleum diesel, which can only be stored for up to a year. HVO is free from fatty acid methyl ester (FAME), which means it is not susceptible to “diesel bug” (a microbial contamination that grows in diesel) and does not require fuel polishing. HVO also has better cold-flow properties than petroleum diesel, which simplify operation in colder climates, and it delivers slightly more power to the engine due to its higher cetane value.

Table 1. Considerations when replacing diesel with hydrotreated vegetable oil

Importantly, HVO serves as a drop-in replacement for diesel: it can be stored in existing tanks, used by most (if not all) existing generators and mixed into existing diesel stocks without changes to parts or processes. Generator vendors that have tested and approved 100% HVO for use with their equipment include Rolls-Royce’s mtu, Cummins, Aggreko, Caterpillar and Kohler; others are likely to follow suit.

However, HVO comes at a price premium: in Europe, it costs about 30% more than traditional diesel fuels.

Another issue is HVO’s currently limited supply. Europe’s first HVO refinery was opened in Finland by oil refiner Neste in 2007; the company remains the world’s largest HVO producer, with four plants in operation.

In the US, there were five commercial HVO plants as of 2020, with a combined capacity of over 590 million gallons per year, according to the US Department of Energy. Most of this capacity was headed to California, due to the economic benefits under the state’s low-carbon fuel standard. For comparison, refineries in the US produced around 69.7 billion gallons of ultra-low-sulfur diesel in 2020; HVO capacity amounted to less than 1% of that volume.

Production of HVO has been expanding rapidly, owing to its potential as a sustainable aviation fuel. Biofuel broker Greenea expects the number of pure and co-processing HVO plants to triple in the EU, triple in Asia, and rise six-fold in the US between 2020 and 2025.

There is also a potential issue relating to source materials: while HVO can be made from used cooking oils, it can easily be made from palm oil – which can contribute to deforestation and other environmental issues. Large HVO producers are phasing out palm oil from their supply chains; the largest, Neste, plans to achieve this by the end of 2023. Visibility into manufacturing practices and distribution will be a key consideration if HVO is part of data center sustainability strategies.

What is the scale of HVO's potential impact on data center emissions? While highly resilient data centers are often designed to rely on gensets as their primary source of power, the gensets are rarely used for this purpose in mature data center regions, such as the US and Western Europe, because the electrical grids are relatively stable. Generator testing is, therefore, the main application for HVO in most data centers.

A data center might run each of its generators for only two hours of testing per month (24 hours per year). A typical industrial-grade 1 megawatt (MW) diesel generator consumes around 70 gallons of fuel per hour at full load. This means a 10 MW facility with 10 1-MW generators requires up to 16,800 gallons of diesel per year for testing purposes, emitting about 168 metric tons of carbon dioxide (CO2).
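
The arithmetic can be laid out explicitly. The short Python sketch below reproduces the estimate using the consumption and testing assumptions stated above, plus an assumed emission factor of roughly 10.2 kg of CO2 per gallon of diesel burned (a typical published value, not a figure from the survey; the ~168 metric ton estimate implies a factor of about 10 kg per gallon).

```python
# Worked estimate of annual testing fuel use and CO2 for a 10 MW facility.
# The emission factor is an assumption (~10.2 kg CO2 per gallon of diesel),
# not a figure taken from the article.

GENERATORS = 10                # ten 1-MW generators at a 10 MW facility
FUEL_RATE_GAL_PER_HR = 70      # consumption per generator at full load
TEST_HOURS_PER_YEAR = 24       # two hours of testing per month
CO2_KG_PER_GALLON = 10.2       # assumed diesel emission factor

annual_gallons = GENERATORS * FUEL_RATE_GAL_PER_HR * TEST_HOURS_PER_YEAR
annual_tonnes_co2 = annual_gallons * CO2_KG_PER_GALLON / 1000

print(f"Testing fuel use: {annual_gallons:,} gallons/year")       # 16,800
print(f"Testing emissions: {annual_tonnes_co2:.0f} t CO2/year")   # ~171
```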

Uptime Institute estimates that for a data center located in Virginia (US) operating at 75% utilization, such a testing regimen would represent just 0.8% of a facility’s Scope 1 and 2 emissions.

The impact on emissions would be greater if the generators were used for longer periods of time. If the same facility used generators to deliver 365 hours of backup power per year, in addition to testing, their contribution to overall Scope 1 and 2 emissions would increase to 8%. In this scenario, a switch to HVO would deliver higher emission reductions.
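
The 0.8% and 8% figures depend on the facility's total Scope 1 and 2 footprint, which in turn depends on grid carbon intensity and facility energy use. The sketch below shows how such a share can be estimated; the PUE, grid emissions factor and diesel emission factor are illustrative assumptions, not the inputs Uptime used, so the output only approximates the published percentages.

```python
# Rough estimate of the generators' share of Scope 1 + 2 emissions for a
# 10 MW facility at 75% IT utilization. PUE, grid emissions factor and the
# diesel emission factor are illustrative assumptions, not Uptime's inputs.

IT_CAPACITY_KW = 10_000
UTILIZATION = 0.75
PUE = 1.4                        # assumed facility efficiency
GRID_KG_CO2_PER_KWH = 0.30       # assumed regional grid emissions factor
GEN_KG_CO2_PER_HOUR = 70 * 10.2  # per generator: 70 gal/h at ~10.2 kg CO2/gal
GENERATORS = 10

def genset_share(run_hours_per_year: float) -> float:
    """Generators' share of combined Scope 1 (genset) + Scope 2 (grid) CO2."""
    grid_kwh = IT_CAPACITY_KW * UTILIZATION * PUE * 8760
    scope2_t = grid_kwh * GRID_KG_CO2_PER_KWH / 1000
    scope1_t = GENERATORS * GEN_KG_CO2_PER_HOUR * run_hours_per_year / 1000
    return scope1_t / (scope1_t + scope2_t)

print(f"Testing only (24 h/yr): {genset_share(24):.1%}")        # under 1%
print(f"Testing + 365 h backup: {genset_share(24 + 365):.1%}")  # roughly 9%
```

Under these assumptions the shares come out at about 0.6% and 9%, in the same range as Uptime's 0.8% and 8% estimates.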

The relative environmental benefits of switching to HVO would be higher in regions where the grid is most dependent on fossil fuels. However, because generators account for a larger share of total emissions in regions that rely on low-carbon energy sources (and so have a lower grid emissions factor), a switch to HVO would be more visible in the reported footprint of facilities in those regions.

A key attraction of using HVO is that operators do not have to choose one type of fuel over another: since it is a drop-in replacement, HVO can be stored on-site for generator testing and supply contracts for petroleum diesel can be arranged for instances when generators need to be operated for longer than 12 hours at a time. Looking ahead, it remains to be seen whether data center operators are willing to pay a premium for HVO to back up their sustainability claims.

Is navigating cloud-native complexity worth the hassle?

Last month 7,000 developers traveled to Valencia to attend the combined KubeCon and CloudNativeCon Europe 2022 conference, the event for Kubernetes and cloud-native software development. A further 10,000 developers joined the conference online. The event is organized by the Cloud Native Computing Foundation (CNCF), part of the non-profit Linux Foundation. The CNCF supports and promotes over 1,200 projects, products and companies associated with developing and facilitating innovative cloud-native practices. All CNCF projects are open source – free and accessible to all – and its members work together to create scalable and adaptable applications.

The tools and services discussed at the event aim to solve the technical complexity of cloud-native practices, but the vast choice of tools and services now available creates a new kind of complexity. Organizations face a difficult task in choosing which projects and products will best meet their needs, and then in designing an application that uses these tools to meet large-scale requirements. A fuller explanation of cloud-native principles is available in the Uptime Institute report Cloud scalability and resiliency from first principles.

Kubernetes — one of CNCF’s core projects — is a management platform for software containers. Containers, one of the key technologies of cloud-native IT, offer a logical packaging mechanism so that workloads can be abstracted from the physical venues in which they run.

A core value of containers is agility:

  • Containers are small and can be created in seconds.
  • Rather than scaling up a whole application, a single function can be scaled up rapidly by adding new containers to suit changing requirements.
  • Containers can run on multiple operating systems, reducing lock-in and aiding portability.
  • Containers are also updatable: rather than having to update and rebuild a whole application, individual containers can be updated with new code and patched independently from the rest of the application.

Containers underpin microservices architecture, whereby an application is decomposed into granular components (via containers) that can be managed independently, enabling applications to respond rapidly to changing business requirements. Cloud-native practices are the array of tools and protocols used to keep track of microservices and operate them efficiently, by managing communication, security, resiliency and performance.
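
To illustrate that independence, the sketch below uses the official Kubernetes Python client to scale a single microservice without touching the rest of the application. The deployment name ("checkout") and replica count are invented for this example, and the sketch assumes a cluster is reachable via a local kubeconfig; in production, scaling is more often handled declaratively or by an autoscaler.

```python
# Minimal sketch: scale one microservice independently of the rest of the
# application. Assumes the official `kubernetes` Python client is installed
# and a cluster is reachable via the local kubeconfig. The deployment name
# "checkout" and the replica count are hypothetical.

from kubernetes import client, config

config.load_kube_config()        # read credentials from ~/.kube/config
apps = client.AppsV1Api()

# Raise the replica count of one component; other microservices are untouched.
apps.patch_namespaced_deployment_scale(
    name="checkout",
    namespace="default",
    body={"spec": {"replicas": 8}},
)
print("checkout scaled to 8 replicas")
```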

Organizations developing cloud-native applications face two sets of challenges, both related to complexity. First, containers and microservices architectures increase the complexity of applications by decomposing them into more parts to track and manage. In a complex application, thousands of containers may be running across hundreds of servers in multiple venues. Keeping track of these containers is only the first step: they must be secured, connected, load balanced, distributed and optimized for application performance. Once the application is operating, finding and fixing faults across such a large number of resources can be challenging. Many of the CNCF's projects relate to managing this complexity.

Ironically, the CNCF’s vast ecosystem of cloud-native projects, products and services creates the second set of challenges. The CNCF provides a forum and framework for developing open source, cloud-native libraries, of which there are now over 130. It does not, however, recommend which approach is best for what purpose.

Of the 1,200 projects, products, and companies associated with the CNCF, many are in direct competition. Although the common theme among all of them is open source, companies in the cloud-native ecosystem want to upsell support services or a curated set of integrated tools that elevate the free, open-source code to be a more robust, easy-to-use and customizable enabler of business value.

Not all of these 1,200 projects, products and companies will survive: cloud-native is a nascent sector, and the market will dictate which ones thrive and which ones fail. This means users also face a challenge in choosing a partner that will still exist in the mid-term. The open-source nature of cloud-native projects means users can still access and maintain the code even if a vendor chooses not to, but this is far from ideal.

There is currently a lack of clarity on how to balance risk and reward and cost versus benefit. What projects and technologies should enterprises take a chance on? Which ones will be supported and developed in the future? How much benefit will be realized for all the overhead in managing this complexity? And what applications should be prioritized for cloud-native redevelopment?

Attendees at KubeCon appear confident that the value of cloud-native applications is worth the effort, even if quantifying that value is more complex. Companies are willing to invest time and money in cloud-native development: 65% of attendees had not attended a KubeCon conference previously, and CNCF certifications have increased 216% since last year. It isn't only IT companies driving cloud-native adoption: at the event, Boeing was announced as a new Platinum sponsor of the CNCF. The aerospace multinational said it wants to use cloud-native architectures for high-integrity, high-availability applications.

Cloud-native software development needs to be at the forefront of cloud architectures. But users shouldn't rush: they should work with vendors and providers they already have a relationship with, and focus on new applications rather than rebuilding old ones. Time is best spent on applications where the value of scalability is clear.

The complexity of cloud native is worth navigating, but only for applications that are likely to grow and develop. For example, a retail website that must continue to take orders during an outage will likely derive significant value; for an internal wiki used by a team of five, it probably isn't worth the hassle.

Sustainability laws set to drive real change

Nearly 15 years ago – in 2008 – Uptime Institute presented a paper titled “The gathering storm.” The paper was about the inevitability of a struggle against climate change and how this might play out for the power-hungry data center and IT sector. The issues were explored in more detail in the 2020 Uptime Institute report, The gathering storm: Climate change and data center resiliency.

The original presentation has proven prescient, discussing the key role of certain technologies, such as virtualization and advanced cooling, an increase in data center power use, the possibility of power shortages in some cities, and growing legislative and stakeholder pressure to reduce emissions. Yet, all these years later, it is clear that the storm is still gathering and — for the most part — the industry remains unprepared for its intensity and duration.

This may be true both literally – a lot of digital infrastructure is at risk not just from storms but from gradual climate change – and metaphorically. The next decade will see demands from legislators, planning authorities, investors, partners, suppliers, customers and the public for data center operators to become ever more sustainable. Increasingly, many of these stakeholders will expect to see verified data to support claims of “greenness”, and will expect organizations to be held accountable for any false or misleading statements.

If this sounds unlikely, consider two reporting and legal initiatives, both of which are at relatively advanced stages:

  • Task Force on Climate-related Financial Disclosures (TCFD). A climate reporting initiative created by the international Financial Stability Board. TCFD reporting requirements will soon become part of financial reporting for public companies in the US, UK, Europe and at least four other jurisdictions in Asia and South America. Reports must include all financial risks associated with mitigating and adapting to climate change. In the digital infrastructure area, this will include the cost of remediating infrastructure risks (for example, protecting against floods, reduced availability of water for cooling or the need to invest in addressing higher temperatures), risks to equipment or service providers, and (critically) any potential exposure to financial or legal risks resulting from a failure to meet stated and often ambitious carbon goals.
  • European Energy Efficiency Directive (EED) recast. This is set to be passed into European Union law in 2022 and to be enacted by member states by 2024 (for the 2023 reporting year). As currently drafted, it will mandate that all organizations with more than approximately 100 kilowatts of IT load in a data center report their data center energy use, data traffic, data storage, efficiency improvements and various other facility data, and that they perform and publicly report periodic energy audits. Failure to show improvement may result in penalties.

While many countries, and some US states, may lag in mandatory reporting, the storm is global and legislation similar to the TCFD and EED is likely to be widespread before long.

As shown in Figure 1, most owners and operators of data centers and digital infrastructure in Uptime’s 2021 annual survey have some way to go before they are ready to track and report such data — let alone demonstrate the kind of measured improvements that will be needed. The standout number is that only about a third calculate carbon emissions for reporting purposes.

Figure 1 Carbon emissions and IT efficiency not widely reported

All organizations should have a sustainability strategy to achieve continuous, measurable and meaningful improvement in operational efficiency and environmental performance of their digital infrastructure (including enterprise data centers and IT in colocation and public cloud data centers). Companies without a sustainability strategy should take immediate action to develop a plan if they are to meet the expectations or requirements of their authorities, as well as their investors, executives and customers.

Developing an effective sustainability strategy is neither a simple reporting or box-ticking exercise, nor a market-led flag-waving initiative. It is a detailed, comprehensive playbook that requires executive management commitment and the operational funding, capital and personnel resources to execute the plan.

For a digital sustainability strategy to be effective, there needs to be cross-disciplinary collaboration, with the data center facilities (owned, colocation and cloud) and IT operations teams working together, alongside other departments such as procurement, finance and sustainability.

Uptime Institute has identified seven areas that a comprehensive sustainability strategy should address: greenhouse gas emissions; energy use (conservation, efficiency and reduction); renewable energy use; IT equipment efficiency; water use (conservation and efficiency); facility siting and construction; and disposal or recycling of waste and end-of-life equipment.

Effective metrics and reporting relating to the above areas are critical. Metrics to track sustainability and key performance indicators must be identified, and data collection and analysis systems put in place. Defined projects to improve operational metrics, with sufficient funding, should be planned and undertaken.
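
As a simple illustration of the kind of calculations such a data collection and analysis system performs, the sketch below computes three widely used facility-level KPIs (power, carbon and water usage effectiveness, as defined by The Green Grid). The input figures are invented for the example; in practice they would come from metering, utility bills and emissions factors.

```python
# Illustrative calculation of three common data center sustainability KPIs.
# The input figures are invented; real values come from metering, utility
# bills and emissions factors.

facility_energy_kwh = 12_000_000   # total annual facility energy
it_energy_kwh = 7_500_000          # annual IT equipment energy
scope1_2_tonnes_co2 = 3_600        # annual Scope 1 + 2 emissions
water_liters = 9_000_000           # annual site water consumption

pue = facility_energy_kwh / it_energy_kwh         # power usage effectiveness
cue = scope1_2_tonnes_co2 * 1000 / it_energy_kwh  # carbon usage effectiveness (kg CO2 per IT kWh)
wue = water_liters / it_energy_kwh                # water usage effectiveness (liters per IT kWh)

print(f"PUE: {pue:.2f}   CUE: {cue:.3f} kgCO2/kWh   WUE: {wue:.2f} L/kWh")
```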

Many executives and managers have yet to appreciate the technical, organizational and administrative/political challenges that implementing good sustainability strategies will likely entail. Selecting and assessing the viability of technology-based projects is always difficult and will involve forward-looking calculations of costs, energy and carbon risks.

For all operators of digital infrastructure, however, the first big challenge is to acknowledge that sustainability has now joined resiliency as a top-tier imperative.

Equipment shortages may ease soon — but not for good reasons

When Uptime Institute Intelligence surveyed data center infrastructure operators about supply chain issues in August 2021, more than two-thirds of respondents had experienced some shortages in the previous 18 months. Larger operations bore the brunt of disruptions, largely due to shortages or delays in sourcing major electrical equipment (such as switchgear, engine generators and uninterruptible power supplies) and cooling equipment. Smaller technical organizations more commonly saw issues around getting IT hardware on time, rather than mechanical or power systems.

A shared gating factor across the board was the scarcity of some key integrated circuits, particularly embedded controllers (including microprocessors and field-programmable gate arrays, or FPGAs) and power electronics of all sizes. These components are omnipresent in data center equipment. On balance, respondents expected shortages to gradually ease but persist for the next two to three years.

There are now signs that supply chains, at least in some areas, may regain balance sooner than had been expected. However, this is not because of sudden improvements in supply. True, manufacturers and logistics companies have been working for more than two years now to overcome supply issues — and with some success. The semiconductor industry, for example, committed billions of dollars in additional capital expenditure, not only to meet seemingly insatiable demand but also in response to geopolitical concerns in the US, Europe and Japan about exposure to IT supply chain concentration in and around China.

Instead, the reason for the expected improvement in supply is less positive: unforeseen weaknesses in demand are helping to alleviate shortages. Some major chipmakers, including Intel, Samsung Electronics and Micron, are expecting a soft second half to 2022, with worsening visibility. The world’s largest contract chipmaker TSMC (Taiwan Semiconductor Manufacturing Company) also warned of a build-up of inventories with its customers.

There are multiple reasons for this fall in demand, including:

  • Concerns about runaway energy prices, and about the availability of natural gas in Europe due to Russia’s geopolitical weaponization of its energy exports, have contributed to renewed economic uncertainty — forcing businesses to preserve cash rather than spend it.
  • Market research firms and manufacturers agree that consumers are spending less on personal computers and smartphones (following a post-Covid high) as a result of cost-of-living pressures.
  • China made a bad situation worse for vendors earlier in 2022: the severe lockdowns that resulted from enforcement of its zero-tolerance Covid-19 policy markedly reduced domestic demand for semiconductors — even as they dislocated the supply of other components.

All of this means production capacity and components can be freed up to meet demand elsewhere. A slowdown in demand for electronics should help availability of many types of products. Even though components made for consumer electronics are not necessarily suitable for other end products, the easing of shipments in those categories will help upstream suppliers reallocate capacity to meet a backlog of orders for other products.

Chipmakers that have been operating at capacity will shift some of their wafer output, and electronics manufacturers will refocus their production and logistics, to meet demand for power components needed elsewhere, including in data center equipment and IT hardware. Supply chains in China, reeling from a prolonged lockdown in Shanghai, are also recovering, and this should help equipment vendors close gaps in their component inventories.

In any case, the availability of key raw materials, substrates and components for the production of chips, circuit boards and complete systems is about to improve — if it hasn’t already. It will, however, take months for a rebalancing of supply and demand to propagate through supply chains to reach end products, and it will probably not be enough to reverse recent, shortage-induced price increases, which are also due to rising energy and commodity input costs. Barring any further shocks, lead times for IT hardware and data center equipment should improve in the short term.

Even if supply and demand reach a balance relatively soon, the long-term outlook is murkier. The outlines of greater risks are taking shape as the world enters a period of increased geopolitical uncertainty. In light of this, the US, Europe, China and other governments are pouring tens of billions of dollars into restructuring supply chains for increased regional resilience. How effective this will be remains to be seen.

Who will win the cloud wars?

Technology giants such as Microsoft and IBM didn’t view Amazon as a threat when it launched its cloud business unit, Amazon Web Services (AWS), in 2006. As a result, AWS has a significant first-to-market advantage, with more services, more variations in more regions and more cloud revenue than any other cloud provider. Today, industry estimates suggest AWS has about 33% of the global infrastructure-as-a-service and platform-as-a-service market, followed by Microsoft with about 20%, Google with 10%, Alibaba with 5% and IBM with 4%. Will AWS continue to dominate and, if so, what does this mean for cloud users?

Amazon has been successful by applying the same principles it uses in its retail business to its cloud business. AWS, just like the retailer Amazon.com, provides users with an extensive range of products to fit a wide range of needs that are purchased easily with a credit card and delivered rapidly.

Amazon.com has made big investments in automation and efficiency, not only to squeeze costs so its products are competitively priced, but also to meet consumer demand. Considering the range of products for sale on Amazon.com, the many regions it operates and the number of customers it serves, the company needs to operate efficiently at scale to deliver a consistent user experience — regardless of demand. AWS gives users access to many of the same innovations used to power Amazon.com so they can build their own applications that operate effectively at scale, with a simple purchase process and quick delivery.

AWS is sticking with this strategy of domination by aiming to be a one-stop shop for all IT products and services and the de facto choice for enterprise technology needs. Amazon’s vast empire, however, is also its most significant barrier. The global brand has interests in retail, video streaming, entertainment, telecom, publishing, supermarkets and, more recently, healthcare. Many companies competing in these diverse industries don’t want to rely on Amazon’s cloud computing brand to deliver their mission-critical applications.

Microsoft was relatively slow to see Amazon looming, but launched its own cloud service, Azure, in 2010. The advantage Microsoft Azure has over AWS is its incumbent position in enterprise software and its business focus. Thanks to the popularity of Windows and Microsoft 365 (formerly Office 365), few organizations have no Microsoft relationship. Existing Microsoft customers represent a vast engaged audience for upselling cloud computing.

Arguably, Microsoft isn’t particularly innovative in cloud computing, but it is striving to keep pace with AWS. By integrating its cloud services with its software (e.g., Microsoft 365 and OneDrive), Microsoft wants Azure to be the obvious choice for users already committed to its products and services. The company also has a natural affinity for hybrid deployments: as a supplier of both on-premises software and cloud services, it is well placed to make the two work better together.

Microsoft, unlike its biggest rivals in cloud, can offer financial benefits to its enterprise software users. For example, it allows porting of licenses between on-premises and cloud environments. With the aggregate effect of software and cloud spending, Microsoft can also offer large discounts to big spenders. Its strategy is to remove barriers to adoption by way of integration and licensing benefits, and to sell through existing relationships.

Like Amazon, Google has a core web business (search and advertising) that operates at huge scale, and, like Amazon, it needs to deliver a consistent user experience regardless of demand. This requirement to operate effectively at scale drives Google’s innovation, which often provides the basis for new cloud services. Google has a reputation for open-source and cloud-native developments, a key example being Kubernetes, now the de facto container orchestration platform. Its open-source approach wins favor from developers.

However, Google remains primarily a consumer business with little relationship management experience with organizations. Its support and professional services reputation has yet to be fully established. To the chagrin of many of its cloud customers, it has turned off services and increased prices. While winning some big-name brands as cloud customers over the past few years has helped it be perceived as more enterprise focused, Google Cloud Platform’s relationship management is still a work in progress.

Alibaba has a natural incumbent position in China. It has also expanded its data centers beyond Chinese borders to allow Chinese-based companies to expand into other regions more easily. As an online retailer that now offers cloud services, Alibaba’s approach has many similarities with AWS’ — but is targeted primarily toward Chinese-based customers.

Much like a systems integrator, IBM wants to be a trusted advisor that combines hardware, software and cloud services for specific customer use-cases and requirements. It has strong relationships with enterprises and government bodies, and credentials in meeting complex requirements. In practice, though, IBM’s vast range of products (new and legacy) is difficult to navigate. The story around its range is not clear or joined up. However, its acquisition of Red Hat in 2019 is helping the company develop its hybrid cloud story and open-source credentials.

How will the market change in the future? Estimates of cloud market share are highly variable, with one of the biggest challenges being that different providers report different products and services in the “cloud” revenue category. As a result, exact figures and longitudinal changes need to be treated with skepticism. Microsoft Azure and Google Cloud, however, are likely to take market share from AWS, simply because AWS has held such a leadership position with relatively little competition for so long.

The cloud market is expected to continue growing, raising the revenue of all cloud providers regardless of rank. Some estimates put global annual cloud revenue up by a third compared with last year. The rising tide of cloud adoption will raise all boats. AWS is likely to dominate for the foreseeable future, not only in revenue but also in users’ hearts and minds, thanks to its huge head start over its competitors.