How to avoid outages: Try harder!
by Kevin Heslin

Uptime Institute has spent years analyzing the root causes of data center and service outages, surveying thousands of IT professionals throughout the year on this topic. According to the data, the vast majority of data center failures are caused by human error. Some industry experts report numbers as high as 75%, but Uptime Institute generally reports about 70% based on the wealth of data we gather continuously. That finding immediately raises an important question: Just how preventable are most outages?
Certainly, the number of outages remains persistently high, and the associated costs of these outages are also high. Uptime Institute data from the past two years demonstrates that almost one-third of data center owners and operators experienced a downtime incident or severe degradation of service in the past year, and half in the previous three years. Many of these incidents had severe financial consequences, with 10% of the 2019 respondents reporting that their most recent incident cost more than $1 million.
These findings, and others related to the causes of outages, are perhaps not unexpected. But more surprisingly, in Uptime Institute’s April 2019 survey, 60% of respondents believed that their most recent significant downtime incident could have been prevented with better management/processes or configuration. For outages that cost more than $1 million, this figure jumped to 74%, and then leveled out around 50% as the outage costs increased to more than $40 million. These numbers remain persistently high, given how much is already known about the causes, sources and costs of downtime incidents.
Data center owners and operators know that on-premises power failures continue to cause the most outages (33%), with network and connectivity issues close behind (31%). Many of the additional failures attributed to colocation providers could likewise have been prevented by the provider.
These findings should be alarming to everyone in the digital infrastructure business. After years of building data centers, and adding complex layers of features and functionality, not to mention dynamic workload migration and orchestration, the industry’s report card on actual service delivery performance is less than stellar. And while these sorts of failures should be very rare in concurrently maintainable and fault-tolerant facilities when appropriate and complete procedures are in place, what we are finding is that the operational part of the story falls flat. Simply put, if humans worked harder to MANAGE well-designed and constructed facilities better, we would have fewer outages.
Uptime Institute consultants have underscored the critically important role procedures play in data center operations. They remind listeners that having and maintaining appropriate and complete procedures is essential to achieving performance and service availability goals. These same procedures can also help data centers meet efficiency goals, even in conditions that exceed planned design days. Among other benefits, well-conceived procedures and the extreme discipline to follow them help operators cope with strong storms, properly perform maintenance and upgrades, manage costs and, perhaps most relevant, restore operations quickly after an outage.
So why, then, does the industry continue to experience downtime incidents, given that the causes have been so well pinpointed, the costs are so well-known and the solution to reducing their frequency (better processes and procedures) is so obvious? We just don’t try hard enough.
When asking our constituents about the causes of their outages, there are perhaps as many explanations as there are respondents. Here are just a few questions to consider when looking internally at your own risks and processes:
Does the complexity of your infrastructure, especially the distributed nature of it, increase the risk that simple errors will cascade into a service outage?
Is your organization expanding critical IT capacity faster than it can attract and apply the resources to manage that infrastructure?
Has your organization begun to see a staffing or skills shortage that is starting to impair mission-critical operations?
Do your concerns about cyber vulnerability and data security outweigh concerns about physical infrastructure?
Does your organization shortchange training and education programs when budgeting?
Does your organization under-invest in IT operations, management, and other business management functions?
Does your organization truly understand change management, especially when many of your workloads may already be shared across multiple servers, in multiple facilities or in entirely different types of IT environments including co-location and the cloud?
Does your organization consider the needs at the application level when designing new facilities or cloud adoptions?
Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated. However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist. By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime. If we try harder, we can make progress. If we leverage the investments in physical infrastructure by applying the right level of operational expertise and business management, outages will decline.
We just need to try harder.
More information on this and similar topics is available to members of the Uptime Institute Network; membership can be initiated here.
Troubling for operators: Capacity forecasting and maintaining cost competitiveness
by Rabih Bashroush
In the recently published 2019 Uptime Institute supplier survey, participants told us they are witnessing higher than normal data center spending patterns. This is in line with general market trends, driven by the demand for data and digital services. It is also a welcome sign for those suppliers who witnessed a downturn two to three years ago, as public cloud began to take a bite.
The increase in spending is not only by hyperscalers known to be designing for 100x scalability and building for 10x growth. Smaller facilities (under 20 MW) are also seeing continued investment, including in higher levels of redundancy at primary sites (a trend that may have surprised some).
However, this growth continues to raise concerns. In this year’s survey, the top challenge operators face, as identified by suppliers, is forecasting future data center capacity requirements. This is followed by the need to maintain competitive and cost-efficient operations compared with cloud/colocation. Managing different data center environments dropped to fourth place, after coming second in last year’s supplier survey. This finding agrees with the results of our 2019 operator survey (of around 1,000 data center operators around the world). In that survey, our analysis attributed the change to advances in tools and growing market maturity.
The figure below shows the top challenges operators faced in 2018 and 2019, as reported by their suppliers:
Forecasting data center capacity is a long-standing issue. Rapid changes in technology and the difficulty of anticipating future workload growth at a time when there are so many choices complicate matters. Over-provisioning capacity, the most commonly adopted strategy, leads to inefficiencies in operations (and unnecessary upfront investment). Against this, under-provisioning capacity is an operational risk and could also mean facilities reach their limit before the end of their planned investment life-cycle.
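Forecasting is partly simple arithmetic; the hard part is the growth assumption. As a minimal sketch of the trade-off described above (all figures are assumed for illustration, not survey data), the following estimates how many months of headroom a facility has under different load-growth scenarios:

```python
# Illustrative sketch only: estimate how long provisioned capacity will last
# under different growth assumptions. All figures are invented examples,
# not Uptime Institute survey data.
from math import log

def months_until_full(provisioned_kw: float, current_kw: float,
                      monthly_growth: float) -> float:
    """Months until IT load reaches provisioned capacity, assuming
    compound monthly growth: current_kw * (1 + g)^n = provisioned_kw."""
    if current_kw >= provisioned_kw:
        return 0.0
    if monthly_growth <= 0:
        return float("inf")  # flat or shrinking load never hits the ceiling
    return log(provisioned_kw / current_kw) / log(1 + monthly_growth)

provisioned = 2000.0  # kW of provisioned critical capacity (assumed)
current = 1200.0      # kW of current IT load (assumed)
for growth in (0.01, 0.02, 0.04):  # 1%, 2% and 4% load growth per month
    months = months_until_full(provisioned, current, growth)
    print(f"{growth:.0%}/month growth -> about {months:.0f} months of headroom")
```

Running the same calculation across optimistic and pessimistic scenarios makes the cost of over-provisioning, and the risk of under-provisioning, easier to weigh.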
Depending on the sector and type of workload, many organizations have adopted modular data center designs, which can be an effective way to alleviate the expense of over-provisioning. Where appropriate, some operators also move highly unpredictable or the most easily/economically transported workloads to public cloud environments. These strategies, plus various other factors driving the uptake of mixed IT infrastructures, mean more organizations are accumulating expertise in managing hybrid environments. This may explain why the challenge of managing different data center environments dropped to fourth place in our survey this year. Additionally, cloud computing suppliers are offering more effective tools to help customers better manage their costs when running cloud services.
The adoption of cloud-first policies by many operators means managers are having to demonstrate cost-effectiveness more than ever. This means that understanding the true cost of maintaining in-house facilities versus the cost of cloud/colocation venues is becoming more important, as the survey results above show.
The 2019 Uptime Institute operator survey also reflects this. Forty percent of participants indicated that they are not confident in their organization’s ability to compare costs between in-house and cloud/colocation environments. Indeed, this is not a straightforward exercise. On the one hand, the structure of some enterprises (e.g., how budgets are split) makes calculating the cost of running owned sites tricky. On the other hand, calculating the true cost of moving to the cloud is also not straightforward. There may be costs inherent in the transition related to application re-engineering, potential repatriation or network upgrades, for example (and there is now a vast choice of cloud offerings that require careful costing and management). Among other issues, such as vendor lock-in, this complexity is now driving many to change their policies to be more about cloud appropriateness, rather than cloud-first.
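To see why the comparison is tricky, consider a deliberately simplified, hypothetical calculation of in-house cost per virtual machine (VM) per month against a cloud list price. Every figure below is a placeholder, and the sketch omits exactly the items that make real comparisons hard: migration, re-engineering, repatriation, network upgrades and organizational cost allocation.

```python
# Hypothetical, deliberately simplified comparison of in-house vs. cloud cost
# per VM-month. All inputs are placeholders; transition, re-engineering,
# repatriation and network costs are intentionally left out.

def in_house_cost_per_vm_month(capex: float, life_months: int,
                               opex_per_month: float, vm_count: int) -> float:
    """Amortized facility capex plus monthly opex, spread over hosted VMs."""
    return (capex / life_months + opex_per_month) / vm_count

capex = 12_000_000.0         # assumed build/refresh cost of an owned facility
life_months = 15 * 12        # assumed planned investment life-cycle
opex_per_month = 150_000.0   # assumed power, staff and maintenance per month
vm_count = 4_000             # assumed average number of hosted VMs

cloud_price_per_vm_month = 95.0  # assumed equivalent cloud price

in_house = in_house_cost_per_vm_month(capex, life_months, opex_per_month, vm_count)
print(f"In-house: ${in_house:.2f} per VM-month vs. cloud: ${cloud_price_per_vm_month:.2f}")
```

Even this toy model shows how sensitive the answer is to assumptions such as utilization, life-cycle length and how budgets are split, which is precisely the difficulty survey respondents report.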
Want to know more details? The full report Uptime Institute data center supply-side survey 2019 is available to members of the Uptime Institute Network and can be found here.
Data Center Free Air Cooling Trends
by Rabih Bashroush
With the recent expansion of the American Society of Heating, Refrigerating and Air-Conditioning Engineers’ (ASHRAE’s) acceptable data center operating temperature and humidity ranges — taken as an industry-standard best practice by many operators — the case for free air cooling has become much stronger. Free air cooling is an economical method of using low external air temperature to cool server rooms.
In the 2019 Uptime Institute Supply-side Survey (available to members of the Uptime Institute Network) we asked over 500 data center vendors, consultants and engineers about their customers’ adoption of free air economizer cooling (the use of outside air or a combination of water and air to supplement mechanical cooling) using the following approaches:
Indirect air: Outside air passes through a heat exchanger that separates the air inside the data center from the cooler outside air. This approach prevents particulates from entering the white space and helps control humidity levels.
Direct air: Outside air passes through an evaporative cooler and is then directed via filters to the data center cold aisle. When the temperature outside is too cold, the system mixes the outside air with exhaust air to achieve the correct inlet temperature for the facility.
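The mixing step described above for direct air systems can be reasoned about with a simple sensible-heat balance. The sketch below (temperatures are assumed examples; real controls also manage humidity, enthalpy and damper limits) solves for the fraction of outside air needed to hit a target supply temperature:

```python
# Simple sensible-heat mixing calculation for a direct air economizer.
# Temperatures are illustrative examples; real control systems also account
# for humidity, enthalpy and damper/fan constraints.

def outside_air_fraction(t_outside: float, t_return: float, t_target: float) -> float:
    """Fraction f of outside air in the supply mix such that
    f * t_outside + (1 - f) * t_return == t_target."""
    if t_outside == t_return:
        raise ValueError("Outside and return temperatures are equal; ratio is indeterminate.")
    f = (t_return - t_target) / (t_return - t_outside)
    return min(max(f, 0.0), 1.0)  # dampers cannot deliver less than 0% or more than 100%

# Example: 2 degC outside air, 35 degC return/exhaust air, 22 degC target supply
print(outside_air_fraction(t_outside=2.0, t_return=35.0, t_target=22.0))  # ~0.39
```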
Findings from the survey show that free air cooling economization projects continue to gain traction, with indirect free air cooling being slightly more popular than direct air. In our survey, 84% said that at least some of their customers are deploying indirect air cooling (74% for direct air). Only 16% of participants said that none of their customers are deploying indirect free air cooling (26% for direct air), as shown in the figure below.
The data suggests that there is more momentum behind direct free air cooling in North America than in other parts of the world. Among North American respondents, 70% indicated that some of their customers are deploying direct air cooling (compared with 63% indirect air). As shown in the figure below, this was not the case in Europe or Asia-Pacific, where suppliers reported that more customers were deploying indirect air. This perhaps could be linked to the fact that internet giants represent a bigger data center market share in North America than in other parts of the world — internet giants are known to favor direct free air cooling when deploying at scale.
The continued pressure to increase cost-efficiency, as well as the rising awareness and interest in environmental impact, is likely to continue driving uptake of free air cooling. Compared with traditional compressor-based cooling systems, free air cooling requires less upfront capital investment and involves lower operational expenses, while having a lower environmental impact (e.g., no refrigerants, low embedded carbon and a higher proportion of recyclable components).
Yet, some issues hampering free air cooling uptake will likely continue in the short term. These include the upfront retrofit investment required for existing facilities; humidity and air quality constraints (which are less of a problem for indirect air cooling); lack of reliable weather models in some areas (and the potential impact of climate change); and restrictive service level agreements, particularly in the colocation sector.
Moreover, a lack of understanding of the ASHRAE standards, and of clarity around IT equipment needs, is driving some operators to design to the most conservative (lowest temperature) requirement, particularly when hosting legacy or mixed IT systems. The opportunity to take advantage of free air cooling is missed as a result, due to the perceived need to adopt lower operating temperatures.
Going forward, at least in Europe, this problem might be partially addressed by the introduction of the new European EcoDesign legislation for servers and online storage devices, which will take effect from March 2020. The new legislation will require IT manufacturers to declare the operating condition classes and thermal performance of their equipment. This, in turn, will help enterprise data centers better optimize their operations by segregating IT equipment based on ambient operating requirements.
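In practice, that segregation can start with something as simple as grouping assets by their declared class and assigning each group to a zone with a matching setpoint. A minimal sketch follows, in which the asset inventory, class assignments and zone setpoints are all invented for illustration (class labels follow ASHRAE’s A1-A4 naming):

```python
# Illustrative grouping of IT assets by declared ASHRAE class.
# The asset inventory and zone setpoints below are hypothetical.
from collections import defaultdict

assets = [
    {"name": "legacy-storage-01", "ashrae_class": "A1"},
    {"name": "gpu-node-02",       "ashrae_class": "A2"},
    {"name": "web-node-17",       "ashrae_class": "A3"},
    {"name": "web-node-18",       "ashrae_class": "A3"},
]

# Assumed supply-air setpoint per zone, in degrees Celsius
zone_setpoint_c = {"A1": 22, "A2": 24, "A3": 27, "A4": 30}

by_class = defaultdict(list)
for asset in assets:
    by_class[asset["ashrae_class"]].append(asset["name"])

for ashrae_class in sorted(by_class):
    names = ", ".join(by_class[ashrae_class])
    print(f"Class {ashrae_class} zone ({zone_setpoint_c[ashrae_class]} degC supply): {names}")
```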
The full report Uptime Institute data center supply-side survey 2019 is available to members of the Uptime Institute Network. You can become a member or request guest access by looking here or contacting any member of the Uptime Institute team.
The Evolving Data Center Management Maturity Model, A Quick Update
by Rhonda Ascierto
Uptime Institute has long argued that, although it may take many years, the long-term trend is toward a high level of automation in the data center, covering many functions that most managers currently would not trust to machines or outside programmers.

Recent advances in artificial intelligence (AI) have made this seem more likely. (For more on data center AI, see our report Very smart data centers: How artificial intelligence will power operational decisions.)
Our data center management maturity model shows this long-term evolution.
In our model, we have mapped different levels of operating efficiency to different stages of deployment of data center infrastructure management (DCIM) software. We encourage any manager who is looking to buy DCIM, or who has already implemented the software and seeks expanded features or functions, to consider both short- and long-term automation goals.
Today, most DCIM deployments fall into Level 2 or Level 3 of the model. A growing number of organizations are targeting Level 3 by integrating DCIM data with IT, cloud service and other non-facility data, as discussed in the report Data center management software and services: Effective selection and deployment (co-authored with Andy Lawrence).
The advent of AI-driven, cloud-based services will, we believe, drive greater efficiencies and, when deployed in combination with on-premises DCIM software, enable more data centers to reach Level 4 (and, over time, Level 5).
Although procurement decisions today may be only minimally affected by current automation needs, a later move toward greater automation should be considered, especially in terms of vendor choice/lock-in and integration.
Integration capabilities, as well as the use and integration of AI (including AI-driven cloud services), are important factors in both the overall strategic decision to deploy DCIM and the choice of a particular supplier/platform.
The full report Data center management software and services: Effective selection and deployment is available to members of the Uptime Institute Network here.
DCIM as a Hub: Integrations Make all the difference
by Rhonda Ascierto
Today, the role that the physical data center plays in software-defined data centers, particularly facility design and operational management, is often overlooked. However, this is likely to change.
As more networking and compute becomes virtualized and flexible, so too must data center resources, in order to achieve maximum agility and efficiency. To virtualize only IT and networking resources is to optimize only the top layers of the stack; the supply of underlying physical data center resources — power, cooling, space — must also be tightly coupled to IT demand and resources, and automated accordingly.
This is where data center infrastructure management (DCIM) software comes into play. Leading DCIM platforms enable not just the operational management of data centers, but also the automation of key resources, such as power and cooling. For dynamic resource management, integration with IT and other non-facility data is key.
By integrating DCIM, organizations can tightly couple demand for the virtualized and logical resources (IT and networking) with the supply of physical facility resources (power, cooling and space). Doing so enables cost efficiencies and reduces the risk of service interruption due to under-provisioning.
Integrating DCIM also enables more informed decision-making around best-execution venues (internally and for colocation customers), taking into account the cost and availability of IT, connectivity and data center resources.
While integration is typically a “phase two” strategy (i.e., following the full deployment of a DCIM suite), integration goals should be established early on. The figure below is a simplified view of some of the data sources from systems spanning the IT stack that DCIM can integrate with. Uptime Institute Intelligence’s report Data center management software and services: Effective selection and deployment provides a better understanding of what is required.
Which processes are likely to require multisystem integration? Here are some examples, followed by a brief illustrative sketch of the first:
Monitoring capacity across clouds and on-premises data centers (enterprise and colocation)
Possible software integrations: Cloud service application programming interfaces (APIs) for cloud monitoring, virtual machine (VM) management, DCIM suite, IT service management (ITSM).
Adjusting or moving workloads according to availability or energy costs/reliability, or to reduce risk during maintenance
Possible software integrations: DCIM suite, VM management, cloud service APIs for cloud monitoring and hybrid cloud management, ITSM/IT asset management, maintenance management, service catalog.
Colocation portal providing key data to customers
Possible software integrations: service level agreement (SLA) management, customer relationship management (CRM), DCIM, ITSM/IT asset management, interconnection management.
Data center service-based costing (real-time, chargeback)
Possible software integrations: CRM, service management, financial management, DCIM power monitoring, VM/IT resource use. Also useful for carbon/energy tracking/reporting.
Cloud-based resiliency/disaster recovery
Possible software integrations: DCIM, IT monitoring, workload management, capacity management, storage management, VM management, cloud service APIs for cloud monitoring and hybrid cloud management, disaster recovery/backup.
Unified incident/problem management
Possible software integrations: DCIM, ITSM, maintenance management, work-order system.
Identifying and eliminating underused/comatose servers
Possible software integrations: DCIM monitoring, ITSM utilization, IT asset/capacity management, VM management.
End-to-end financial planning
Possible software integrations: Financial planning, DCIM capacity planning.
Automated services (provisioning, colocation customer onboarding, audits, etc.)
Possible software integrations: DCIM monitoring, CRM, financial management/planning, IT asset management.
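As a minimal illustration of the first example above (capacity monitoring across clouds and on-premises data centers), the sketch below pulls rack power readings from a hypothetical DCIM REST endpoint and vCPU usage from a hypothetical cloud monitoring API, then reports combined headroom. The endpoints, JSON fields and thresholds are assumptions, not references to any particular product.

```python
# Hypothetical integration sketch: combine DCIM power data with cloud usage data.
# The URLs, JSON fields and thresholds are invented for illustration; real DCIM
# and cloud APIs differ and normally require authentication.
import requests

DCIM_API = "https://dcim.example.com/api/racks"        # hypothetical endpoint
CLOUD_API = "https://cloud.example.com/api/vm-usage"   # hypothetical endpoint

def fetch_json(url: str) -> list:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

def capacity_report() -> None:
    racks = fetch_json(DCIM_API)     # e.g., [{"rack": "A01", "kw_used": 6.2, "kw_capacity": 10.0}, ...]
    regions = fetch_json(CLOUD_API)  # e.g., [{"region": "eu-west", "vcpus_used": 1800, "vcpus_limit": 2500}, ...]

    for rack in racks:
        headroom = rack["kw_capacity"] - rack["kw_used"]
        flag = "LOW" if headroom < 0.1 * rack["kw_capacity"] else "ok"
        print(f"Rack {rack['rack']}: {headroom:.1f} kW headroom [{flag}]")

    for region in regions:
        used_pct = 100.0 * region["vcpus_used"] / region["vcpus_limit"]
        print(f"Cloud {region['region']}: {used_pct:.0f}% of vCPU quota used")

if __name__ == "__main__":
    capacity_report()
```

The same pattern of normalizing, joining and acting on data from different systems extends to the other examples in the list, whether the action is moving a workload, opening a work order or updating a customer portal.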
We are seeing more organizations, including large enterprises and colos, invest in DCIM integrations for higher levels of visibility with a goal of end-to-end automation.
Ultimately, by integrating DCIM with IT and other systems, organizations can more effectively plan for data center capacity investments. Using DCIM to optimize the use of existing facilities could also mean enterprises and colos may need fewer or smaller facilities in the future.
The full report Data center management software and services: Effective selection and deployment is available to members of the Uptime Institute Network here.
IT Outages in the Airline Industry, A New Report by the GAO
by Kevin Heslin
Uptime Institute’s Annual outage analysis, published early this year, called attention to the persistent problem of IT service and data center outages. Coupled with our annual survey data on outages, the analysis explains, to a degree, why investments to date have not greatly reduced the outage problem — at least from an end-to-end service view.
Gathering outage data is a challenge: there is no centralized database of outage reports in any country (that we are aware of) and, short of mandatory rules, there probably won’t be. Uptime Institute’s outage analysis relied on reports in the media, which skews the findings, and on survey data, which has its own biases. Other initiatives have similar limitations.
The US government also struggles to get an accurate accounting of data center/IT outages, even in closely watched industries with a public profile. The US Government Accountability Office (GAO) recently issued a report (GAO-19-514) in which it documents 34 IT outages from 2015 through 2017 that affected 11 of the 12 selected (domestic US) airlines included in the report. The GAO believes that about 85% of the outages resulted in some flight delays or cancellations and 14% caused a ground stop of several hours or more. Relatedly, Uptime Institute identified 10 major outages affecting the airline industry worldwide in the period since January 2016.
The Uptime Institute data is drawn from media reports and other more direct sources. It is not expected to be comprehensive. Many, many outages are kept as quiet as possible and the parties involved do their best to downplay the impact. The media-based approach provides insights, but probably understates the extent of the outage problem — at least in the global airline industry.
Government data is not complete either. The GAO explicitly notes many circumstances in which information about airline IT outages is unavailable to it and other agencies, except in unusual cases. These circumstances might involve smaller airlines and airports that don’t get attention. The GAO also notes that delays and cancellations can have multiple causes, which can reduce the number of instances in which an IT outage is blamed. The GAO’s illustration below provides examples of potential IT outage effects.
The report further notes: “No government data were available to identify IT outages or determine how many flights or passengers were affected by such outages. Similarly, the report does not describe the remedies given to passengers or their costs.” We do know, of course, that some airlines — Delta and United are two examples — have faced significant outage-related financial consequences.
Consumer complaints stemming from IT outages accounted for less than one percent of all complaints received by the US Department of Transportation from 2015 through June 2018, according to agency officials. These complaints raised concerns similar to those resulting from more common causes of flight disruption, such as weather. It is likely that all these incidents bring reputation costs to airlines that are greater than the operational costs the incidents incur.
The GAO did not have a mandate to determine the causes of the outages it identified. The report describes possible causes in general terms. These include aging and legacy systems, incompatible systems, complexity, inter-dependencies, and a transition to third-party and cloud systems. Other issues included hardware failures, software outages or slowdowns, power or telecommunications failures, and network connectivity problems.
The GAO said, “Representatives from six airlines, an IT expert, and four other aviation industry stakeholders pointed to a variety of factors that could contribute to an outage or magnify the effect of an IT disruption. These factors ranged from under-investment in IT systems after years of poor airline profitability, increasing requirements on aging systems or systems not designed to work together, and the introduction of new customer-oriented platforms and services.” All of this is hardly breaking news to industry professionals, and many of these issues have been discussed in Uptime Institute meetings and in our 2016 Airline outages FAQ.
The report cites prevention efforts that reflect similarly standard themes, with five airlines moving to hybrid models (spreading workloads and risk, in theory) and two improving connectivity by using multiple telecommunications network providers. Stakeholders interviewed by the GAO mentioned contingency planning, recovery strategies and routine system testing; the use of artificial intelligence (although it is not clear for what functions); and outage drills as means for avoiding and minimizing system disruptions. (The Uptime Institute Digital Infrastructure Resiliency Assessment helps organizations better understand where their strengths and weaknesses lie.)
In short, the GAO was able to throw some light on a known problem but was not able to generate a complete record of outages in the US airline industry, provide an estimate of direct or indirect costs, explain their severity and impact, or pinpoint their causes. As a result, each airline is on its own to determine whether it will investigate outages, identify causes or invest in remedies. There is little information sharing; Uptime Institute’s Abnormal Incident Reporting System examines causes for data center-specific events, but it is not industry specific and would not capture many network or IT-related events. Although there have been some calls for greater sharing, within industries and beyond, there is little sign that most operators are willing to openly discuss causes and failures owing to the dangers of further reputation damage, lawsuits and exploitation by competition.
Access to our complete annual outage reports, data center survey results, Abnormal Incident Reporting data, research on energy efficiency in the data center and a wealth of other topics is available to members of the Uptime Institute Network. Want to know more about this organization? Check out the complete benefits and request a trial membership in the community here.