Is navigating cloud-native complexity worth the hassle?

Last month 7,000 developers traveled to Valencia to attend the combined KubeCon and CloudNativeCon Europe 2022 conference, the event for Kubernetes and cloud-native software development. A further 10,000 developers joined the conference online. The event is organized by the Cloud Native Computing Foundation (CNCF), part of the non-profit Linux Foundation. The CNCF supports and promotes over 1,200 projects, products and companies associated with developing and facilitating innovative cloud-native practices. All CNCF projects are open source – free and accessible to all – and its members work together to create scalable and adaptable applications.

The tools and services discussed at the event aim to solve the technical complexity of cloud-native practices, but there is a new complexity in the vast choice of tools and services now available. Organizations face difficult decisions in selecting the projects and products that best meet their needs, and then in designing an application that uses these tools to meet large-scale requirements. A fuller explanation of cloud-native principles is available in the Uptime Institute report Cloud scalability and resiliency from first principles.

Kubernetes — one of CNCF’s core projects — is a management platform for software containers. Containers, one of the key technologies of cloud-native IT, offer a logical packaging mechanism so that workloads can be abstracted from the physical venues in which they run.

A core value of containers is agility:

  • Containers are small and can be created in seconds.
  • Rather than needing to scale up a whole application, a single function can scale up rapidly through the addition of new containers to suit changing requirements.
  • Containers can run on multiple operating systems, reducing lock-in and aiding portability.
  • Containers are also updatable: rather than having to update and rebuild a whole application, individual containers can be updated with new code and patched independently from the rest of the application.

Containers underpin microservices architecture, whereby an application is decomposed into granular components (via containers) that can be managed independently, enabling applications to respond rapidly to changing business requirements. Cloud-native practices are the array of tools and protocols used to keep track of microservices and operate them efficiently, by managing communication, security, resiliency and performance.
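
As an illustration of the agility described above, here is a minimal sketch of scaling a single microservice independently of the rest of an application, using the official Kubernetes Python client. The cluster access, namespace, deployment name ("checkout") and replica count are all hypothetical; this is a sketch of the pattern, not a recommended operational workflow.

    # Minimal sketch: scale one microservice (one deployment) independently
    # of the rest of the application. Assumes a reachable cluster, a valid
    # kubeconfig and a hypothetical deployment "checkout" in namespace "shop".
    from kubernetes import client, config


    def scale_deployment(name: str, namespace: str, replicas: int) -> None:
        # Patch the replica count of a single deployment (one microservice).
        config.load_kube_config()   # read credentials from ~/.kube/config
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )


    if __name__ == "__main__":
        # Scale only the hypothetical "checkout" service to 10 containers,
        # leaving every other part of the application untouched.
        scale_deployment("checkout", "shop", replicas=10)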

Organizations developing cloud-native applications face two sets of challenges, both related to complexity. First, containers and microservices architectures increase the complexity of applications by decomposing them into more parts to track and manage. In a complex application, thousands of containers may be running on hundreds of servers across multiple venues. Keeping track of these containers is only the first step: they must be secured, connected, load balanced, distributed and optimized for application performance. Once an application is operating, finding and fixing faults across such a large number of resources can be challenging. Many of the CNCF's projects relate to managing this complexity.

Ironically, the CNCF’s vast ecosystem of cloud-native projects, products and services creates the second set of challenges. The CNCF provides a forum and framework for developing open source, cloud-native libraries, of which there are now over 130. It does not, however, recommend which approach is best for what purpose.

Of the 1,200 projects, products and companies associated with the CNCF, many are in direct competition. Although the common theme among them is open source, companies in the cloud-native ecosystem want to upsell support services or a curated set of integrated tools that elevate the free, open-source code into a more robust, easy-to-use and customizable enabler of business value.

Not all of these 1,200 projects, products and companies will survive – cloud-native is a nascent sector, and the market will dictate which ones thrive and which fail. Users therefore also face the challenge of choosing a partner that will still exist in the medium term. The open-source nature of cloud-native projects means users can still access and support the code even if a vendor stops doing so – but this is far from ideal.

There is currently a lack of clarity on how to balance risk and reward and cost versus benefit. What projects and technologies should enterprises take a chance on? Which ones will be supported and developed in the future? How much benefit will be realized for all the overhead in managing this complexity? And what applications should be prioritized for cloud-native redevelopment?

Attendees at KubeCon appear confident that the value of cloud-native applications is worth the effort, even if quantifying that value is more complex. Companies are willing to invest time and money in cloud-native development: 65% of attendees had not attended a KubeCon conference previously, and CNCF certifications have increased 216% since last year. It isn't only IT companies driving cloud native: Boeing was announced as a new Platinum sponsor of the CNCF at the event. The aerospace multinational said it wants to use cloud-native architectures for high-integrity, high-availability applications.

Cloud-native software development needs to be at the forefront of cloud architectures. But users shouldn't rush: they should work with vendors and providers they already have a relationship with, and focus on new applications rather than rebuilding old ones. Time is best spent on applications where the value of scalability is clear.

The complexity of cloud native is worth navigating, but only for applications that are likely to grow and develop. For example, a retail website that must continue taking orders during an outage will likely derive significant value — for an internal wiki used by a team of five, it probably isn't worth the hassle.

Sustainability laws set to drive real change

Nearly 15 years ago – in 2008 – Uptime Institute presented a paper titled “The gathering storm.” The paper was about the inevitability of a struggle against climate change and how this might play out for the power-hungry data center and IT sector. The issues were explored in more detail in the 2020 Uptime Institute report, The gathering storm: Climate change and data center resiliency.

The original presentation has proven prescient, discussing the key role of certain technologies, such as virtualization and advanced cooling, an increase in data center power use, the possibility of power shortages in some cities, and growing legislative and stakeholder pressure to reduce emissions. Yet, all these years later, it is clear that the storm is still gathering and — for the most part — the industry remains unprepared for its intensity and duration.

This may be true both literally – much digital infrastructure is at risk not just from storms but from gradual climate change – and metaphorically. The next decade will see demands from legislators, planning authorities, investors, partners, suppliers, customers and the public for data center operators to become ever more sustainable. Increasingly, many of these stakeholders will expect to see verified data to support claims of “greenness”, and for organizations to be held accountable for any false or misleading statements.

If this sounds unlikely, then consider two different reporting or legal initiatives, which are both in relatively advanced stages:

  • Task Force on Climate-related Financial Disclosures (TCFD). A climate reporting initiative created by the international Financial Stability Board. TCFD reporting requirements will soon become part of financial reporting for public companies in the US, UK, Europe and at least four other jurisdictions in Asia and South America. Reports must include all financial risks associated with mitigating and adapting to climate change. In the digital infrastructure area, this will include remediating infrastructure risks (for example, protecting against floods, reduced availability of water for cooling or the need to invest in addressing higher temperatures), risks to equipment or service providers, and (critically) any potential exposure to financial or legal risks resulting from a failure to meet stated, and often ambitious, carbon goals.
  • European Energy Efficiency Directive (EED) recast. This is set to be passed into European Union law in 2022 and to be enacted by member states by 2024 (for the 2023 reporting year). As currently drafted, it will mandate that all organizations with more than approximately 100 kilowatts of IT load in a data center report their data center energy use, data traffic, storage, efficiency improvements and various other facility data, and that they perform and publicly report periodic energy audits. Failure to show improvement may result in penalties.

While many countries, and some US states, may lag in mandatory reporting, the storm is global and legislation similar to the TCFD and EED is likely to be widespread before long.

As shown in Figure 1, most owners and operators of data centers and digital infrastructure in Uptime’s 2021 annual survey have some way to go before they are ready to track and report such data — let alone demonstrate the kind of measured improvements that will be needed. The standout number is that only about a third calculate carbon emissions for reporting purposes.

Figure 1 Carbon emissions and IT efficiency not widely reported

All organizations should have a sustainability strategy to achieve continuous, measurable and meaningful improvement in operational efficiency and environmental performance of their digital infrastructure (including enterprise data centers and IT in colocation and public cloud data centers). Companies without a sustainability strategy should take immediate action to develop a plan if they are to meet the expectations or requirements of their authorities, as well as their investors, executives and customers.

Developing an effective sustainability strategy is neither a simple reporting or box-ticking exercise, nor a market-led flag-waving initiative. It is a detailed, comprehensive playbook that requires executive management commitment and the operational funding, capital and personnel resources to execute the plan.

For a digital sustainability strategy to be effective, there needs to be cross-disciplinary collaboration, with the data center facilities (owned, colocation and cloud) and IT operations teams working together, alongside other departments such as procurement, finance and sustainability.

Uptime Institute has identified seven areas that a comprehensive sustainability strategy should address:

  • Greenhouse gas emissions.
  • Energy use (conservation, efficiency and reduction).
  • Renewable energy use.
  • IT equipment efficiency.
  • Water use (conservation and efficiency).
  • Facility siting and construction.
  • Disposal or recycling of waste and end-of-life equipment.

Effective metrics and reporting relating to the above areas are critical. Metrics to track sustainability and key performance indicators must be identified, and data collection and analysis systems put in place. Defined projects to improve operational metrics, with sufficient funding, should be planned and undertaken.
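
As a simple illustration of the metric tracking described above, the sketch below computes two commonly used indicators, power usage effectiveness (PUE) and an approximate operational carbon figure, from facility and IT energy readings. All of the input values, including the grid emissions factor, are invented for the example; real reporting would use metered data and location- or market-based emissions factors.

    # Minimal sketch of sustainability metric tracking (illustrative numbers only).
    # PUE = total facility energy / IT equipment energy (dimensionless).
    # Operational carbon is approximated here as energy x grid emissions factor.

    def pue(total_facility_kwh: float, it_kwh: float) -> float:
        return total_facility_kwh / it_kwh

    def operational_carbon_tonnes(total_facility_kwh: float,
                                  grid_factor_kg_per_kwh: float) -> float:
        # Scope 2-style estimate: purchased electricity x emissions factor.
        return total_facility_kwh * grid_factor_kg_per_kwh / 1000.0

    if __name__ == "__main__":
        # Hypothetical annual figures for one site.
        total_kwh = 12_000_000    # total facility energy (kWh)
        it_kwh = 7_500_000        # energy delivered to IT equipment (kWh)
        grid_factor = 0.35        # kg CO2e per kWh (assumed, location-based)

        print(f"PUE: {pue(total_kwh, it_kwh):.2f}")
        print(f"Operational carbon: "
              f"{operational_carbon_tonnes(total_kwh, grid_factor):,.0f} tCO2e")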

Many executives and managers have yet to appreciate the technical, organizational and administrative/political challenges that implementing good sustainability strategies will likely entail. Selecting and assessing the viability of technology-based projects is always difficult and will involve forward-looking calculations of costs, energy and carbon risks.

For all operators of digital infrastructure, however, the first big challenge is to acknowledge that sustainability has now joined resiliency as a top-tier imperative.

Equipment shortages may ease soon — but not for good reasons

When Uptime Institute Intelligence surveyed data center infrastructure operators about supply chain issues in August 2021, more than two-thirds of respondents had experienced some shortages in the previous 18 months. Larger operations bore the brunt of disruptions, largely due to shortages or delays in sourcing major electrical equipment (such as switchgear, engine generators and uninterruptible power supplies) and cooling equipment. Smaller technical organizations more commonly saw issues around getting IT hardware on time, rather than mechanical or power systems.

A shared gating factor across the board was the scarcity of some key integrated circuits, particularly embedded controllers (including microprocessors and field-programmable gate arrays, or FPGAs) and power electronics of all sizes. These components are omnipresent in data center equipment. On balance, respondents expected shortages to gradually ease but persist for the next two to three years.

There are now signs that supply chains, at least in some areas, may regain balance sooner than had been expected. However, this is not because of sudden improvements in supply. True, manufacturers and logistics companies have been working for more than two years now to overcome supply issues — and with some success. The semiconductor industry, for example, committed billions of dollars in additional capital expenditure, not only to meet seemingly insatiable demand but also in response to geopolitical concerns in the US, Europe and Japan about exposure to IT supply chain concentration in and around China.

Instead, the reason for the expected improvement in supply is less positive: unforeseen weaknesses in demand are helping to alleviate shortages. Some major chipmakers, including Intel, Samsung Electronics and Micron, are expecting a soft second half to 2022, with worsening visibility. The world’s largest contract chipmaker TSMC (Taiwan Semiconductor Manufacturing Company) also warned of a build-up of inventories with its customers.

There are multiple reasons for this fall in demand, including:

  • Concerns about runaway energy prices and the availability of natural gas in Europe, driven by Russia’s geopolitical weaponization of its energy exports, have contributed to renewed economic uncertainty — forcing businesses to preserve cash rather than spend it.
  • Market research firms and manufacturers agree that consumers are spending less on personal computers and smartphones (following a post-Covid high) as a result of cost-of-living pressures.
  • China made a bad situation worse for vendors earlier in 2022: severe lockdowns under its zero-tolerance Covid-19 policy markedly reduced domestic demand for semiconductors — even as they dislocated the supply of other components.

All of this means production capacity and components can be freed up to meet demand elsewhere. A slowdown in demand for electronics should help availability of many types of products. Even though components made for consumer electronics are not necessarily suitable for other end products, the easing of shipments in those categories will help upstream suppliers reallocate capacity to meet a backlog of orders for other products.

Chipmakers that have been operating at capacity will reallocate some of their wafer capacity, and electronics manufacturers will refocus their production and logistics on meeting demand for power components needed elsewhere, including in data center equipment and IT hardware. Supply chains in China, reeling from a prolonged lockdown in Shanghai, are also recovering, and this should help equipment vendors close gaps in their component inventories.

In any case, the availability of key raw materials, substrates and components for the production of chips, circuit boards and complete systems is about to improve — if it hasn’t already. It will, however, take months for this rebalancing of supply and demand to propagate through supply chains to end products, and it will probably not be enough to reverse recent, shortage-induced price increases, which also reflect rising energy and commodity input costs. Barring any further shocks, though, lead times for IT hardware and data center equipment should improve in the short term.

Even if supply and demand reach a balance relatively soon, the long-term outlook is murkier. The outlines of greater risks are taking shape as the world enters a period of increased geopolitical uncertainty. In light of this, the US, Europe, China and other governments are pouring tens of billions of dollars into restructuring supply chains for increased regional resilience. How effective this will be remains to be seen.

Who will win the cloud wars?

Technology giants such as Microsoft and IBM didn’t view Amazon as a threat when it launched its cloud business unit, Amazon Web Services (AWS), in 2006. As a result, AWS has a significant first-to-market advantage, with more services and more variations in more regions, and more cloud revenue than any other cloud provider. Today, industry estimates suggest AWS has about 33% of the global infrastructure-as-a-service and platform-as-a-service market, followed by Microsoft with about 20%, Google with 10%, Alibaba with 5% and IBM with 4%. Will AWS continue to dominate and, if so, what does this mean for cloud users?

Amazon has been successful by applying the same principles it uses in its retail business to its cloud business. AWS, just like the retailer Amazon.com, provides users with an extensive range of products to fit a wide range of needs that are purchased easily with a credit card and delivered rapidly.

Amazon.com has made big investments in automation and efficiency, not only to squeeze costs so its products are competitively priced, but also to meet consumer demand. Considering the range of products for sale on Amazon.com, the many regions it operates and the number of customers it serves, the company needs to operate efficiently at scale to deliver a consistent user experience — regardless of demand. AWS gives users access to many of the same innovations used to power Amazon.com so they can build their own applications that operate effectively at scale, with a simple purchase process and quick delivery.

AWS is sticking with this strategy of domination by aiming to be a one-stop shop for all IT products and services and the de facto choice for enterprise technology needs. Amazon’s vast empire, however, is also its most significant barrier. The global brand has interests in retail, video streaming, entertainment, telecom, publishing, supermarkets and, more recently, healthcare. Many competitors in these diverse industries don’t want to rely on Amazon’s cloud computing brand to deliver their mission-critical applications.

Microsoft was relatively slow to see Amazon looming but launched its own cloud service Azure in 2010. The advantage Microsoft Azure has over AWS is its incumbent position in enterprise software and its business focus. Few organizations have no Microsoft relationship due to the popularity of Windows and Microsoft 365 (formerly Office 365). Existing Microsoft customers represent a vast engaged audience for upselling cloud computing.

Arguably, Microsoft isn’t particularly innovative in cloud computing, but it is striving to keep pace with AWS. By integrating its cloud services with software (e.g., Microsoft 365 and OneDrive), Azure wants to be the obvious choice for users already using Microsoft products and services. The company has a natural affinity for hybrid deployments, being a supplier of on-premises software and cloud services, which should be able to work better together.

Microsoft, unlike its biggest rivals in cloud, can offer financial benefits to its enterprise software users. For example, it allows porting of licenses between on-premises and cloud environments. With the aggregate effect of software and cloud spending, Microsoft can also offer large discounts to big spenders. Its strategy is to remove barriers to adoption by way of integration and licensing benefits, and to sell through existing relationships.

Like Amazon, Google has a core web business (search and advertising) that operates at huge scale. Also like Amazon, Google needs to deliver a consistent user experience regardless of demand. This requirement to operate effectively at scale drives Google’s innovation, which often provides the basis for new cloud services. Google has a reputation for open-source and cloud-native developments, a key example being Kubernetes, now the de facto container orchestration platform. Its open-source approach wins favor with developers.

However, Google remains primarily a consumer business with little relationship management experience with organizations. Its support and professional services reputation has yet to be fully established. To the chagrin of many of its cloud customers, it has turned off services and increased prices. While winning some big-name brands as cloud customers over the past few years has helped it be perceived as more enterprise focused, Google Cloud Platform’s relationship management is still a work in progress.

Alibaba has a natural incumbent position in China. It has also expanded its data centers beyond Chinese borders to allow Chinese-based companies to expand into other regions more easily. As an online retailer that now offers cloud services, Alibaba’s approach has many similarities with AWS’ — but is targeted primarily toward Chinese-based customers.

Much like a systems integrator, IBM wants to be a trusted advisor that combines hardware, software and cloud services for specific customer use-cases and requirements. It has strong relationships with enterprises and government bodies, and credentials in meeting complex requirements. In practice, though, IBM’s vast range of products (new and legacy) is difficult to navigate. The story around its range is not clear or joined up. However, its acquisition of Red Hat in 2019 is helping the company develop its hybrid cloud story and open-source credentials.

How will the market change in the future? Estimates of cloud market share are highly variable, with one of the biggest challenges being that different providers report different products and services in the “cloud” revenue category. As a result, exact figures and longitudinal changes need to be treated with skepticism. Microsoft Azure and Google Cloud, however, are likely to take market share from AWS simply because AWS has held such a leadership position, with relatively little competition, for so long.

The cloud market is estimated to continue growing, raising the revenue of all cloud providers regardless of rank. Some estimates put global annual cloud revenue up by a third compared with last year. The rising tide of cloud adoption will raise all boats. AWS is likely to dominate for the foreseeable future, not only in revenue but also in users’ hearts and minds due to its huge head start over its competitors.

Costlier new cloud generations increase lock-in risk

Cloud providers tend to adopt the latest server technologies early, often many months ahead of enterprise buyers, to stay competitive. Providers regularly launch new generations of virtual machines with identical quantities of resources (such as core counts, memory capacity, network and storage bandwidths) as the previous generation but powered by the latest technology.

Usually, the newer generations of virtual machines are also cheaper, incentivizing users to move to server platforms that are more cost-efficient for the cloud provider. Amazon Web Services’ (AWS’) latest Graviton-based virtual machines buck that trend by being priced higher than the previous generation. The new generation-seven (c7g) virtual machines, based on AWS’ own Arm-based Graviton3 chips, come at a premium of around 7% compared with the previous c6g generation.

A new generation doesn’t replace an older version; the older generation is still available to purchase. The user can migrate their workloads to the newer generation if they wish, but it is their responsibility to do so.

Cloud operators create significant price incentives so that users gravitate towards newer generations (and, increasingly, between server architectures). In turn, cloud providers reap the benefits of improved energy efficiency and lower cost of compute. Newer technology will also be supportable by the cloud provider for longer, compared with technology that is already close to its end of life.

AWS’ higher price isn’t unreasonable, considering that Graviton3 offers higher per-core performance and is built on the bleeding-edge 5 nanometer process of TSMC (Taiwan Semiconductor Manufacturing Company, the world’s largest semiconductor foundry), which carries a premium. The virtual machines also come with DDR5 memory, which has 1.5 times the bandwidth of the DDR4 memory used in the sixth-generation equivalent but costs slightly more. With more network and storage backend bandwidth as well, AWS claims performance improvements of around 25% over the previous generation.

Supply chain difficulties, rising energy costs and political issues are raising data center costs, so AWS may not have the margin to cut prices from the previous generation to the latest. However, this complicates the business case for AWS customers, who are used to getting both lower prices and better performance from a new generation. With the new c7g launch, better performance alone is the selling point. Quantifying this performance gain can be challenging because it depends strongly on the application, but Figure 1 shows how two characteristics — network and storage performance — are decreasing in cost per unit due to improved capability (based on a “medium”-sized virtual machine). This additional value isn’t reflected in the overall price increase of the virtual machine from generation six to seven.

Figure 1 Price and unit costs for AWS’ c6g and c7g virtual machines

A 7% increase in price seems fair if it does drive a performance improvement of 25%, as AWS claims — even if the application gains only half of that, price-performance improves.
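
The arithmetic behind that statement is worth making explicit. Using the figures quoted above (a roughly 7% price premium and a claimed 25% performance gain), the sketch below compares price-performance between generations, including the more cautious case where an application realizes only half of the claimed gain. The input numbers come from the text; the calculation itself is generic.

    # Price-performance comparison for a new VM generation, using figures
    # quoted in the text. A ratio above 1.0 means the new generation delivers
    # more performance per dollar than the old one.

    PRICE_PREMIUM = 0.07   # c7g priced roughly 7% above c6g

    def price_performance_ratio(perf_gain: float,
                                price_premium: float = PRICE_PREMIUM) -> float:
        return (1 + perf_gain) / (1 + price_premium)

    if __name__ == "__main__":
        for label, gain in [("AWS-claimed 25% gain", 0.25),
                            ("Application sees half (12.5%)", 0.125),
                            ("No realized gain", 0.0)]:
            print(f"{label}: {price_performance_ratio(gain):.3f}x "
                  f"price-performance vs previous generation")

On these figures, the new generation improves price-performance by about 17% at the claimed gain and by about 5% if only half of it materializes; with no realized gain, the user simply pays 7% more for the same result.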

Because more performance costs more money, the user must justify why paying more will benefit the business. For example, will it create more revenue, allow consolidation of more of the IT estate or aid productivity? These impacts are not so easy to quantify. For many applications, the performance improvements on the underlying infrastructure won’t necessarily translate to valuable application performance gains. If the application already delivers on quality-of-service requirements, more performance might not drive more business value.

Because they use Arm’s 64-bit instruction set, Graviton-based virtual machines aren’t as universal in their software stack as those based on x86 processors. Most of the comparatively few applications built for AWS’ Graviton-based range will have been architected and explicitly optimized to run best on AWS and Graviton systems for better price-performance. As a result, most existing users will probably migrate to the newer c7g virtual machines to gain further performance improvements, albeit more slowly than if they were cheaper.

Brand-new applications will likely be built for the latest generation, even if it is more expensive than before. We expect AWS to reduce the price of c7g Graviton-based virtual machines as the rollout ramps up, input costs gradually fall and sixth-generation systems become increasingly out of date and costlier to maintain.

It will be interesting to see whether this pattern repeats with x86 virtual machines, complicating the business case for migration. With next-generation platforms from both Intel and AMD launching soon, the next six to 12 months will show whether generational price increases form a trend, even if only a transitory one.

Any price hikes could create challenges down the line. If a generation becomes outdated and difficult to maintain, will cloud providers then force users to migrate to newer virtual machines? If they do and the cost is higher, will buyers – now invested in and reliant on the cloud provider – be forced to pay more, or will they move elsewhere? Or will cloud providers increase the prices of older, legacy generations to continue support for those willing to pay, just as many software vendors charge for extended enterprise support? Moving to another cloud provider isn’t trivial. A lack of standards and commonality in cloud provider services and application programming interfaces means cloud migration can be a complex and expensive task, with its own set of business risks. Being locked in to a cloud provider isn’t really a problem if the quality of service remains acceptable and prices remain flat or fall. But if users are being asked to pay more, they may be stuck between a rock and a hard place — pay more to move to the new generation, or pay more to move to a different cloud provider.

Extreme heat stress-tests European data centers – again

An extreme heat wave swept across much of Western Europe on July 18 and 19, hitting some of the largest metropolitan areas such as Frankfurt, London, Amsterdam and Paris — which are also global data center hubs with hundreds of megawatts of capacity each. In the London area, temperatures at Heathrow Airport surpassed 40°C / 104°F to set an all-time record for the United Kingdom.

Other areas in the UK and continental Europe did not see new records only because the heatwave of July 2019 had already set historic highs in Germany, France and the Netherlands, among other countries. Since most data centers in operation were built before 2019, this most recent heatwave either approached or surpassed their design specifications for ambient operating temperatures.

Extreme heat stresses cooling systems by making components such as compressors, pumps and fans work harder than usual, which increases the likelihood of failures. Failures happen not only because of increased wear on the cooling equipment, but also because of lapses in maintenance, such as the regular cleaning of heat exchange coils. Most susceptible are air-cooled mechanical compressors in direct-expansion (DX) units and water chillers without economizers. DX cooling systems are more likely to rely on ambient air for heat ejection because they tend to be relatively small in scale and are often installed in buildings that cannot accommodate the larger cooling infrastructure required for evaporative cooling units.

Cooling is not the only system at risk from extreme heat. Any external power equipment, such as backup power generators, is also susceptible. If a utility grid falters amid extreme temperatures and a generator needs to take the load, it may not be able to deliver its full nameplate power, and it may even shut down to avoid damage from overheating.

Although some cloud service providers reportedly saw disruption due to thermal events in recent weeks, most data centers likely made it through the heatwave without a major incident. Redundancy in power and cooling, combined with good upkeep of equipment, should nearly eliminate the chances of an outage even if some components fail. Many operators have an additional cushion because data centers typically don’t run at full utilization – tapping into extra cooling capacity that is normally held in reserve can help maintain acceptable data hall temperatures during the peak of a heatwave. In contrast, cloud providers tend to drive their IT infrastructure harder, leaving less margin for error.

As the effects of climate change become more pronounced, making extreme weather events more likely, operators may need to revisit the climate resiliency of their sites. Uptime Institute recommends reassessing climatic conditions for each site regularly. Design conditions against which the data center was built may be out of date — in some cases by more than a decade. This can adversely impact a data center’s ability to support the original design load, even if there are no failures. And a loss of capacity may mean losing some redundancy in the event of an equipment failure. Should that coincide with high utilization of the infrastructure, a data center may not have sufficient reserves to maintain the load.

Operators have several potential responses to choose from, depending on the business case and the technical reality of the facility. One is to derate the maximum capacity of the facility, with the view that its original sizing will not be needed. Operators can also decide to increase the target supply air temperature, or allow it to rise temporarily wherever there is headroom (e.g., from 70°F / 21°C to 75°F / 24°C), to reduce the load on cooling systems and maintain full capacity. This could also involve elevating chilled water temperatures if there is sufficient pumping capacity. Another option is to tap into some of the redundancy to bring more cooling capacity online, including operating at N capacity (no redundancy) on a temporary basis.

A lesson from the recent heatwaves in Europe is that the temperature extremes did not coincide with high humidity levels. This means that evaporative (and adiabatic) cooling systems remained highly effective throughout, and within design conditions for wet bulb temperature (the lowest temperature to which ambient air can be cooled by evaporating moisture into it). Adding sprinkler systems to DX and chiller units, or evaporative economizers to the heat rejection loop, will be attractive for many operators.
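
For readers who want to sanity-check this point against conditions at their own sites, the sketch below uses one published empirical approximation of wet-bulb temperature from dry-bulb temperature and relative humidity (Stull, 2011). It is a quick what-if tool under that approximation, valid only within its stated humidity and temperature range, and not a substitute for proper psychrometric design calculations; the example conditions are invented.

    # Approximate wet-bulb temperature from dry-bulb temperature (deg C) and
    # relative humidity (%), using Stull's 2011 empirical fit. Roughly valid
    # for RH between about 5% and 99% and temperatures of -20 to 50 deg C.
    import math

    def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
        return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
                + math.atan(t_c + rh_pct)
                - math.atan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
                - 4.686035)

    if __name__ == "__main__":
        # Hypothetical heatwave conditions: very hot but dry air stays well
        # below typical evaporative cooling design wet-bulb limits.
        print(f"40 deg C at 20% RH -> ~{wet_bulb_stull(40.0, 20.0):.1f} deg C wet bulb")
        print(f"30 deg C at 70% RH -> ~{wet_bulb_stull(30.0, 70.0):.1f} deg C wet bulb")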

Longer term, the threat of climate change will likely prompt further adjustments in data centers. The difficulty (or even impossibility) of modeling future extreme weather events mean that hardening infrastructure against them may require operators to reconsider siting decisions, and / or adopt major changes to their cooling strategies, such as heat rejection into bodies of water and the transition to direct liquid cooling of IT systems.


Comment from Uptime Institute:

Uptime’s team of global consultants has inspected and certified thousands of enterprise-grade data center facilities worldwide, thoroughly evaluating considerations around extreme outside temperature fluctuations and hundreds of other risk areas at every site. Our Data Center Risk Assessment brings this expertise directly to owners and operators, resulting in a comprehensive review of your facility’s infrastructure, mechanical systems and operations protocols.