Cogeneration powers South Africa’s first Tier III Certified data center

MTN’s new facility makes use of Kyoto Wheel

By Olu Soluade, Robin Sloan, Willem Weber, and Philip Young


Figure 1. MTN Centurion Site

MTN’s new data center in Centurion, Gauteng, South Africa, includes a 500-square-meter (m2) space to support MTN’s existing Pretoria Switch. MTN provides cellular telecommunications services, hosted data space, and operations offices via a network of regional switches. The Centurion Switch data center is a specialist regional center serving a portion of Gauteng, the smallest but most densely populated province of South Africa. The operational Centurion Switch Data Center provides energy-efficient and innovative service to the MTN regional network (see Figure 1).

As part of the project, MTN earned Uptime Institute Tier III Design and Facility Certifications and secured carbon-credit application approval from South Africa’s Department of Energy. Among other measures, MTN deployed Novec 1230 fire-suppression gas to gain carbon credits under the United Nations Framework Convention on Climate Change (UNFCCC). MTN Centurion is the first Uptime Institute Tier III Certified Design and Facility in South Africa. In addition, the facility became the first in South Africa to make use of the Kyoto Wheel to help it achieve its low-PUE and energy-efficiency operations goals.

A modular design accommodates the 500-m2 white space and provides the auxiliary services and functions needed to ensure the data center meets MTN’s standards and specifications.

Space was also allocated for the future installation of:

  • Radio mast
  • RF room
  • Tri-generation plant
  • Solar systems
  • Wind banks

Electrical Services

The building is divided into 250 m2 of transmission space and 250 m2 of data space. Both spaces were designed to the following specifications.

  • Data cabinets at 6 kilowatts (kW)/cabinet
  • Transmission cabinets at 2.25 kW/cabinet
  • Maximum of 12 cabinets per row (per-row totals are worked out after this list)
  • Primary backup power from a rotary UPS running on biodiesel
  • In-row dc PDUs (15 kW)
  • In-row ac PDUs (40 kW)
  • Utility supply from Tshwane (CAI applied for and received an 8-megavolt-ampere connection)
  • 25 percent of all energy consumed to be generated from on-site renewable resources
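As a quick arithmetic check of the design figures above (the per-row totals are derived here, not stated in the article):

\[
\text{Data row: } 12 \times 6\ \text{kW} = 72\ \text{kW}
\qquad
\text{Transmission row: } 12 \times 2.25\ \text{kW} = 27\ \text{kW}
\]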

Figure 2. External chiller plant

 

Heating, Ventilation, and Air Conditioning

A specific client requirement was to build a facility that is completely off-grid. As a result, the design team conducted extensive research and investigated various types of refrigeration plants to determine which system would be the most efficient and cost effective.

The final technologies for the main areas include (see Figures 2-5):

  1. Air-cooled chillers
  2. Kyoto Wheel in main switch room
  3. Chilled water down-blow air handling units in other rooms
  4. Hot Aisle containment

The data center was initially designed as a Tier IV facility, but the requirement for autonomous control led management to target Tier III instead. However, the final plans incorporate many features that might be found in a Fault Tolerant facility.

Tables 1 and 2 describe the facility’s electrical load in great detail.

Green Technologies


Figure 3. Stainless steel-clad chilled water pipework.

The horizontal mounting of the Kyoto Wheel coil at MTN Centurion (see Figures 5a-b) is one of a kind. The company paid strict attention to installation details and dedicated great effort to the seamless architectural integration of the technology.
MTN chose the Kyoto Wheel (an enthalpy wheel) to transfer energy between hot return air from the data center and outdoor air because the indirect heat exchange between hot return air and cooler outdoor air offers:

  • Free cooling/heat recovery
  • Reduced load on major plant
  • Lower running costs
  • Low risk of dust transfer
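As a rough illustration of the free-cooling benefit listed above, the sensible heat recovered by a rotary heat-exchange wheel can be estimated from its effectiveness, the air mass flow, and the indoor-outdoor temperature difference (the figures below are assumed for the example and are not MTN measurements):

\[
Q = \varepsilon\,\dot{m}\,c_p\,(T_{\text{return}} - T_{\text{outdoor}})
\approx 0.75 \times 10\ \tfrac{\text{kg}}{\text{s}} \times 1.005\ \tfrac{\text{kJ}}{\text{kg·K}} \times 12\ \text{K}
\approx 90\ \text{kW}
\]

of cooling delivered without running the chillers.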


    Figure 4. Ducted return air from cabinets in data space

Although the use of an enthalpy wheel in South Africa is rare (MTN Centurion is one of only two installations known to the authors), Southern African temperature conditions are very well suited to the use of air-side economizers. Nonetheless, the technology has not been widely accepted in South Africa because of:

  • Aversion to technologies untested in the African market
  • Risk mitigation associated with dust ingress to the data center
  • Historically low data center operating temperatures (older equipment)
  • Historically low local energy costs

A tri-generation plant is another green measure for the Centurion Switch; it is designed to meet the base electrical load of the switch.


Figure 5a. Kyoto Wheel installation


Figure 5b. Kyoto Wheel installation

MTN first employed a tri-generation plant at its head office about four years ago (see Figures 6-8).

The data center also incorporates low-power, high-efficiency lighting, which is controlled by occupancy sensors and photosensors (see Figure 9).

Design Challenges


Table 1. Phase 1 Building: 950 W/data cabinet and 1,950 W/switch rack, at 12 cabinets/row; 600-kW maximum per floor

MTN Centurion Switch experienced several challenges during design and construction and ultimately applied solutions that can be used on future projects:

  • The original Kyoto Wheel software was developed for the Northern Hemisphere. For this project, several changes were incorporated into the software for the Southern Hemisphere.
  • Dust handling in Africa differs from the rest of the world. Heavy dust requires heavy washable pre-filters and filtration media with high dust-holding capacity.


    Table 2. Phase 1 Building SF: 2,050 W/data cabinet at 12 cabinets/row; 800-kW maximum per floor

The design team also identified three steps to encourage further use of airside economizers in South Africa:

  • Increased education to inform operators about the benefits of higher operating temperatures
  • Increased publicity to increase awareness of air-side economizers
  • Better explanations to promote understanding of dust risks and solutions

Innovation

Many features incorporated in the MTN facility are tried-and-true data center solutions. However, in addition to the enthalpy wheel, MTN employed modularity and distributed PDU technology for the first time in this project.

In addition, although enthalpy wheels are widely used in HVAC design, they are rarely applied at this scale and in this configuration. The use of this system in this application, with the addition of the chilled water coils and water spray, was a first within the MTN network and a first in South Africa.

Conclusion

MTN tirelessly pursues energy efficiency and innovation in all its data center designs. The MTN Centurion site is the first Tier III Certified Constructed Facility in South Africa and the first for MTN.

The future provisions for tri-generation, photovoltaic, and wind installations all promise to increase the sustainability of this facility.

Figure 6. Tri-generation plant room at MTN Head Office

Figure 7. Tri-generation gas engines at MTN Head Office

 

Figure 8. Tri-generation schematics at MTN Head Office

Figure 9. Typical light installation with occupancy sensing

Figure 10. Data rack installation

Figure 11. Power control panels

Figure 12. External chilled water plant equipment



Olu Soluade started AOS Consulting Engineers in 2008. He holds a Master’s degree in Industrial Engineering and a BSc (Hons) degree, Second Class Upper, in Mechanical Engineering. He is a professional engineer and professional construction project manager with 21 years of experience in the profession.

 

Robin Sloan is Building Services Manager at AOS Consulting Engineers. He is a mechanical engineer with 7 years of experience in education, healthcare, commercial, residential, retail, and transportation building projects. His core competencies include project management; design of railway infrastructure, education, commercial, and health-care projects from concept through to hand-over; HVAC systems; mechanical and natural ventilation; drainage; pipework services (gas, water, and compressed air); control systems; and thermal modelling software.

Willem Weber is Senior Manager: Technical Infrastructure for MTN South Africa, the largest cellular operator in Africa. Mr. Weber was responsible for the initiation and development of the first methane tri-generation plant in South Africa, the first CSP cooling system using Fresnel technology, and the first Uptime Institute Tier III Certified Design and Constructed Facility in South Africa, which utilizes thermal energy wheel technology for cooling and tri-generation.


 

Philip Young is Building Services Manager at AOS Consulting Engineers and a professional electronic and mechanical engineer registered with ECSA (P Eng) with 10 years’ experience. Previously he was a project manager and engineer at WSP Group Africa (Pty) Ltd. Mr. Young is involved in design, feasibility studies, multidisciplinary technical and financial evaluations, building management systems, and renewable energy.

Russia’s First Tier IV Certification of Design Documents

Next Step: Preparing for Facility Certification

By Alexey Karpov

Technopark-Mordovia is one of the most significant projects in the Republic of Mordovia (see Figure 1). It is a mini-city that includes research organizations, industry facilities, business centers, exhibition centers, schools, a residential village, and service facilities. One of the key parts of the project is a data center (Technopark Data Center) intended to provide information, computing, and telecommunication services and resources to residents of Technopark-Mordovia, public authorities, business enterprises of the region, and the country as a whole. The data processing complex will accommodate institutions primarily engaged in software development, as well as companies whose activities are connected with the information environment and the creation of information resources and databases using modern technologies.


Figure 1. Map of Mordovia

The data center offers colocation and hosting services, hardware maintenance, infrastructure as a service (IaaS) through terminal access via open and secure channels, and access to groupware software based on a SaaS model. As a result, Technopark Data Center will minimize residents’ costs to conduct research, manage general construction and design projects, and interact with consumers in the early stages of production through outsourcing of information and telecommunication functions and collective use of expensive software and hardware complexes. Mordovia created and helped fund the project to help enterprises develop and promote innovative products and technologies. About 30 leading science and technology centers cooperate with Technopark-Mordovia, conduct research, and introduce new and innovative technologies, products, and materials into production because of the support of the Technopark Data Center (see Figure 2).


Figures 2a-b. Renderings of the Technopark Data Center show both elevated and street-level views.

Why Design Certification?
Technopark Data Center is the largest and most powerful computing center in Mordovia. Its designers understood that the facility would eventually serve many of the government’s most significant social programs. In addition, the data center would also be used to test and run Electronic Government programs, which are currently in development. According to Alexey Romanov, Director of Gosinform, the state operator of Technopark-Mordovia, “Our plan is to attract several groups of developers to become residents. They will use the computing center as a testing ground for developing programs such as Safe City, medical services for citizens, etc. Therefore, we are obliged to provide the doctors with round the clock online access to clinical records, as well as provide the traffic police with the same access level to the management programs of the transport network in the region.”

To meet these requirements, Technoserv followed Uptime Institute’s requirements for engineering infrastructure (Data Center Site Infrastructure Tier Standard: Topology). As a result, all engineering systems are designed to fully meet the Uptime Institute Tier IV Certification of Design Documents requirements for redundancy, physical separation, and maintenance of equipment and distribution lines (see Figures 3 and 4).

Figure 3. One-line diagram shows Technopark Data Center’s redundant power paths.


Figure 4. Technopark Data Center’s processing area.

Meeting these requirements enables Mordovia to achieve significant savings, as the Technopark Data Center anchors an overall data center plan that relies on lower-reliability regional centers. Though not Tier Certified by Uptime Institute, these regional data centers are built with redundant components, which reduces capital costs. Meanwhile, the central data center provides backup in case one of the regional data centers experiences downtime.

The Technopark Data Center is the core of all IT services in Mordovia. The regional data centers are like “access terminals” in this environment, so the government reasoned that it was not necessary to build them to meet high reliability requirements.

The Data Center Specification
The Technopark Data Center is a 1,900-kW facility that can house about 110 racks, with average consumption of 8-9 kW per rack. Power is supplied from four independent sources: two independent feeds from the city’s electricity system and diesel generator sets with 2N redundancy.

Main characteristics:
(See Table 1)

The data center building is a multi-story structure. Servers occupy the first floor: computing resources are placed in three areas, and various types of IT equipment (basic computing, telecommunications, and storage systems) are placed in different rooms. The administrative block and call-center are on the second floor.

Chillers, pumping stations, chilled water storage tanks, and UPS batteries, etc. are located in the basement and technical floors. Transformers and diesel generators are located in a separate area adjoining the data center. Diesel fuel tanks are located in two deepened areas at opposite sides of the building.

The data center design includes several energy-saving technologies, which enables the facility to be very energy efficient by Russian standards (PUE <1.45). For example, the cooling system includes a free-cooling mode, and all power and cooling equipment operate in modes intended to provide maximum efficiency. Other energy efficiency details include:

  • Computing equipment is installed in a Cold Aisle/Hot Aisle configuration, with containment of the Hot Aisles. In-row cooling further improves energy efficiency.
  • The cooling system utilizes efficient chillers with screw compressors and water-cooled condensers. The dry cooling towers installed on the roof refrigerate the condensers of the chillers in the summer. In the winter, these cooling towers help provide free cooling. Calculations for the design of the cooling system and air conditioning were performed according to ASHRAE standards.
  • All elements of the engineered systems, as well as the systems themselves, are integrated into a single BMS. This BMS controls all the necessary functions of the equipment and interconnected subsystems and quickly localizes faults and limits the consequences of emergencies. Technoserv utilizes a distributed architecture in which each component has a dedicated controller that feeds information back to a single BMS. If the BMS servers fail, the individual controllers maintain autonomous control of the facility (a minimal sketch of this fallback pattern follows this list). The BMS also collects and processes exhaustive amounts of information about equipment, issues reports, and archives data. A control room is provided at the facility for operators, where they can monitor the operation of all elements of the engineering infrastructure.
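The article does not describe Technoserv’s BMS software, so the following is only a minimal sketch of the distributed-controller idea, in Python, with hypothetical names and setpoints:

```python
class LocalController:
    """Dedicated per-equipment controller that keeps regulating if the BMS fails."""

    def __init__(self, name, default_setpoint_c):
        self.name = name
        self.setpoint_c = default_setpoint_c   # held locally, survives BMS loss

    def sync_with_bms(self, bms_setpoint_c=None):
        # Normal operation: the central BMS pushes a coordinated setpoint.
        # If the BMS server is offline (None), keep the last known value.
        if bms_setpoint_c is not None:
            self.setpoint_c = bms_setpoint_c

    def control_step(self, measured_c):
        # Simple proportional loop computed locally, so cooling control
        # continues autonomously even without the BMS server.
        error = measured_c - self.setpoint_c
        return max(0.0, min(1.0, 0.5 + 0.2 * error))  # demand signal, 0..1


chiller = LocalController("CH-01", default_setpoint_c=10.0)
chiller.sync_with_bms(None)            # BMS server down: no new setpoint arrives
print(chiller.control_step(11.5))      # local loop still produces a demand of 0.8
```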
Table 1. The Technopark Data Center is designed to be Fault Tolerant. Plans are being made to begin the Tier Certification for Constructed Facility.

From a security standpoint, the data center is organized into three access levels:

  • Green areas provide open admission for users and to the showroom.
  • Blue areas are restricted to Technopark Data Center residents performing their own IT projects.
  • Red areas are open only to data center staff.

Three independent fiberoptic lines, each having a capacity of 10 Gbits per second, ensure uninterrupted and high capacity data transmission to users of Technopark Data Center’s network infrastructure. Russia’s key backbone operators (Rostelecom, Transtelekom, and Megaphone) were selected as Technopark Data Center’s telecom partners because of their well-connected and powerful infrastructure in Russia.

The data center also includes a monitoring and dispatching system. The system is based on three software products: EMC Ionix (monitoring the availability of all components of the IT infrastructure), EMC APG (accumulation of statistics and performance analysis), and VMware vCenter Operations Enterprise (intelligent performance and capacity monitoring of objects in VMware virtual environments), plus integration modules specially designed by Technoserv.

Challenges

Figure 5. Inside a data hall.

As noted previously, the data center was designed to achieve the highest levels of reliability. There are some data centers in Russia that perform critical national tasks, but none of those facilities require the highest levels of reliability. This reality made the task seem more daunting to everyone who worked on it. Technoserv had to do something that had never been done in Russia and do so in a limited time. Technoserv managed to accomplish this feat in less than two years.

During the Uptime Institute’s Design Certification process, Technoserv stayed in close contact with Uptime Institute subject matter experts. As a result, Technoserv was able to develop solutions as problems emerged. The company is also proud of the qualifications of Technoserv specialists, who have extensive experience in designing and building data centers and who provided the basis for the successful completion of this project.

The technical challenge was also significant. Meeting Tier IV Certification of Design Documents requirements can require a large number of redundant elements, close coordination of mechanical and electrical systems, and testing to demonstrate that emergencies can be addressed without human intervention or damage to IT equipment.

It was necessary to account for all developments in the space and then properly develop BMS hardware that would meet these potential challenges. In addition, the automation system had to work with no loss of functionality in the event of a BMS fault. Design and implementation of the BMS algorithms demanded the involvement of Technoserv’s automation division and almost 6 months of hard work.

It was important to limit the noise from the engineering equipment, as the data center is located in a residential area. Noise insulation measures required examination of the normative and regulatory documents. Knowledge of local codes was key!

Lessons Learned
Technoserv also learned again that there are no minor details in a high-tech data center. For example, a topcoat applied to the floor during construction caused the metal supports of the raised floor to oxidize rapidly. Only after numerous measurements and tests did Technoserv find that an additive in the coating had entered into an electrochemical reaction with the metal supports, forming sulfuric acid and creating an electric potential on the pedestals of the raised floor.

The data center is currently operational. Technoserv plans to complete the Tier IV Certification of Constructed Facility process.

Alexey Karpov is head of the Data Center Construction Department at Technoserv. With more than 10 years of experience in designing and building data centers, Mr. Karpov is an Accredited Tier Designer, Certified Data Centre Design Professional, and Certified Data Centre Management Professional. The VTB Bank data center, recognized as the largest infrastructure project in Russia in 2010, and the data center for Bashneft are two large-scale projects completed under his guidance. Technoserv, Russia’s largest system integrator, was founded in 1992. Technoserv installs, develops, and outsources IT infrastructure and develops communications, engineering, and information security systems as well as power systems and application platforms. According to RA Expert, a leading Russian analytical agency, Technoserv is a leader in providing IT services in Russia. Business volumes confirm the company’s leadership in the Russian IT market; total revenues for the entire Technoserv group of companies exceeded 43 billion rubles in fiscal year 2012.

Executive Perspectives on the Colocation and Wholesale Markets

An interview with CenturyLink’s David Meredith and Drew Leonard

By Matt Stansberry

Through our survey data and interactions with global Network members, Uptime Institute has noted large enterprise companies that have gone from running their own data centers exclusively to augmenting with some form of outsourced infrastructure. Does this match your experience? Do you see large enterprises extending their data centers into third-party sites when they might not have been doing so three to five years ago?

David Meredith: Absolutely, we definitely see that trend. There is a migration path for enterprises, and it starts with making the decision to move to colocation. Over time, we see these companies develop roadmaps where they look to move to more automation and a consumption-based approach to workload management; we call it the stairway to the cloud. It is a hybrid approach where enterprises may have some colocation, some managed services, some public cloud, and some private cloud.

What do you think is driving this data center outsourcing trend?

David Meredith: There has never been a better time to grow a business, to exercise entrepreneurship at a grand or small scale, because everything needed to enable a growing business is available as-a-service. So you can focus on what makes a winner in your space and core competencies, and then outsource everything else that’s not core to your specific business. Infrastructure supported by the data center extends that concept.

Companies need agility, the ability to scale more quickly, to put capital to the highest and best use.

The data center business continues to require more sophistication as it relates to cooling, energy efficiency, change management, certifications, and standards. Enterprises don’t need to be expert on how to run and operate a data center because that distracts from being the best in the world at their core products and services. That’s a full-time job in and of itself, so it makes sense that data center-as-a-service continues to grow at a double-digit rate.

Drew Leonard: Today, if you look at IT budgets, they’re typically not growing. But the IT department, the CIOs and CTOs, they’re all expected to be more nimble and to play a bigger part in the growth of the company, not just figuring out how to reduce cost. So, outsourcing the colocation and other components allows them to be more nimble. But, it also gives them quicker speed to market and extends their geographic reach and ability to get into multiple markets.

If you’re going to manage and maintain your own data center—if you’re going to keep it up to the specs of where the commercial data centers are today—there’s a lot of training and maintenance that goes into that.

Do you see regional differences in how data center services are procured around the globe?

David Meredith: Yes, we do see cultural differences. One of the things we’re driving now is having a much wider range of flexibility on the densities we are able to accommodate. In Asia, customers are still looking for lower density solutions, whereas in North America, we have more demand for very high density solutions.

Drew Leonard: Carrier density and diversity are much more common in North America, and it’s becoming more mature in Europe. I’d say it’s different in Asia because of the regulated environment with regards to telcos. There are simple things, like David said, that’s very true; the densities in Asia right now are still a little bit lower as people move out of their existing data centers which traditionally are a lot lower density than the new commercial-grade type of colocation facilities.

When we speak with enterprise operations staff, they are tasked with either procuring colocation services or managing a data center remotely through a third party; they have had to do a lot of on-the-job training. Many have never been in this position before and do not have a lot of experience around the procurement side or third-party vendor management. What are the skill sets people try to develop to shift to a broker/manager of these kinds of services?

David Meredith: Financial modeling is important in understanding the true total cost of ownership (TCO), as well as understanding what exactly you’re getting. Not all data centers are created equal, and sometimes it’s hard for buyers to discern the quality level that went into one building versus another building. What are the points of differentiation there?

Also, what are going to be the incremental costs from a TCO perspective if you go with a cheaper solution? Digging down to that next level is pretty important. For example, how much distribution is included in your price quote and what are the non-recurring charges associated with the service?

Drew Leonard: Some customers are making decisions based purely on price and not looking at the historical background of the companies. Enterprises should look at the overall performance over a period of time and look back at that historical representation over a variety of different situations and circumstances. Are those providers maintaining all of their facilities to 100% uptime?

David Meredith: Building on that, don’t just look at the facility itself. You really have to look at the people and the processes that are managing the facilities. If there is a problem, it often comes down to human error. You want to have a provider with a very robust set of repeatable processes that meet or extend industry standards. Industries like financial services, health care, or government are attuned to this process. What will keep that data center up 100% of the time is having very good change management processes, so someone doesn’t make a mistake or cause a problem. You have to ask: What is the experience level of the people that are running the data center, what are the processes they’re following? That can be almost as important, if not more so, than evaluating the facility itself.

This seems like a decent segue to discuss your organization’s commitment to Tier Certification. Why is CenturyLink pursuing Tier Certification, and how is Certification impacting your conversations with customers?

CenturyLink executives (left to right) Joel Stone, Drew Leonard, and Ash Mathur accept the plaque for CenturyLink’s new Tier III Certified Facility in Toronto from Uptime Institute Chief Operating Officer Julian Kudritzki.

David Meredith: CenturyLink invests heavily in our data center capabilities, and we’ve made a decision in terms of our positioning in the marketplace to be on the higher end of the quality spectrum. Also, CenturyLink is a big contractor to the government. We have a very significant financial services practice. So, standards are important, quality is critical, and we believe that the Tier Certification process is a way to clearly reflect a commitment to that quality.

We’re making the investments, so we think Tier Certification is a great fit for what we’re already doing. We have 100% uptime SLAs, and we put the resources behind that to make it something we can really stand behind.

Drew Leonard: I see more and more in RFPs—companies don’t want a facility that’s just Concurrently Maintainable. Customers are starting to look for official Tier III Certification. So, Tier Certification is increasingly important to the customers that are coming to us and the opportunities to even be considered as a data center provider for large enterprise companies. Having the Tier Certification foil is extremely important.

We’re making that commitment.

For us, it’s just the next step. We don’t want to have to explain ourselves. We want to be able to say that we are Uptime Institute Tier III Certified at the Design and Constructed Facility levels and that we’re executing on that plan. Then, our operations teams back it up with the day-to-day processes that they put in place to keep our facilities running.

What are some of the common mistakes enterprises get into when they first start entering these colocation relationships?

David Meredith: We’re seeing people focus on one number for cost. Then they’re paying more overall because they’ve only focused on one metric. Companies are pushing their price per kilowatt lower, but then they’re charging all sorts of add-on fees and other charges on top. You have to look at the entire cost and look at exactly what you’re getting when you’re comparing to make sure you’re getting an apples-to-apples comparison across the options, both in terms of all costs as well as exactly what you’re getting for what you’re spending. CenturyLink provides transparent pricing, and we don’t like to nickel and dime our customers. We tend to package more into the base package than our competitors.

Drew Leonard: Migration is always a key piece, along with adding services, equipment turnover, or refresh. There is also staffing growth. Companies have a very hard time predicting their growth and having a scalable growth plan. When enterprises look at the future, they’re not able to clearly predict that path of growth. Follow-on costs may get overlooked in a long-term view when they’re trying to make this short-term decision.

Do you see any resource efficiency implications in this outsourcing trend?

David Meredith: For the enterprise, one analogy relates to energy efficiency for automobiles. You can buy a highly efficient vehicle, but if you’re slamming on the gas pedal and slamming on the brakes, that’s not a very fuel efficient way to drive and operate the car.

CenturyLink is focused on efficiency every day—we’re trying to figure out how to squeeze that next improvement in efficiency out of the data center in terms of power usage and operating efficiency.

To extend the automobile analogy, if you really want to be energy efficient, you can carpool to get to work each day. Similarly, when you start to migrate services to the cloud, essentially you’re carpooling in the data center. You want to have a colocation provider with a flexible set of product offerings that can move into the cloud when needed. It’s great to have contractual flexibility to shift your spend from colocation to cloud over time and do it all in the same footprint.

Do customers demand transparency on energy usage and resource efficiency from your data centers? If so, how do you meet those demands, and how does CenturyLink compare to other colocation organizations in this regard?

Drew Leonard: Yes, CenturyLink customers tend to be very sophisticated consumers of data center services. For example, we have a large financial services practice, and many of these customers like to be informed on the
bleeding-edge developments in terms of data center efficiency. CenturyLink works with customers to audit what they are doing and suggest improvements based on their specific requirements. We offer an option for metered pricing. Our recently announced modular data center deployments and IO.OS software from the IO partnership can be a differentiator for customers. Our engineering teams have been utilizing a variety of approaches to improve energy efficiency across our 57 data center footprint with positive results.

Where do you see the marketplace going in three years?

David Meredith: Each year, we see more colocation purchases from the service provider segment or what I call “X-as-a-Service” companies. Many of these companies are born in the cloud, and they need data center space to enable the end service that they provide for the enterprise. We invite and welcome service providers into our data centers as colocation customers because they help to strengthen our ecosystems and provide services that are just a cross-connect away from our enterprise customers.

We encourage our enterprise customers to be informed purchasers of managed services and to ask the right questions to understand what data centers are underpinning the managed solutions that they buy.

Drew Leonard: That’s right; we even launched a service called ClientConnect which acts like a dating service to help our thousands of customers more easily connect with service providers in our data center ecosystems.



Matt Stansberry

Matt Stansberry is director of Content and Publications for the Uptime Institute and also serves as program director for the Uptime Institute Symposium, an annual spring event that brings together 1,500 stakeholders in enterprise IT, data center facilities, and corporate real estate to deal with the critical issues surrounding enterprise computing. He was formerly editorial director for Tech Target’s Data Center and Virtualization media group, and was managing editor of Today’s Facility Manager magazine. He has reported on the convergence of IT and Facilities for more than a decade.

 

 


David Meredith

As senior vice president and global general manager at CenturyLink Technology Solutions, David Meredith oversees 57 data centers and related services across North America, Europe, and Asia. Mr. Meredith’s team manages the ongoing expansion of the CenturyLink data center footprint, which involves several new buildout projects at any given time. Mr. Meredith’s global Operations and Facilities teams include several hundred members with over 15 years average data center experience and many certifications, which help them manage to a 100% uptime service level agreement (SLA) standard.

The data center teams also support the CenturyLink Cloud Platform and a large managed services customer base. From the sales perspective, the team has recently added a new global vice president of Sales from another large colocation provider and is actively on-boarding new colocation channel partners as well as launching a new real estate broker relations team to help drive sales.

Drew Leonard

Drew Leonard has more than 18 years in the telecom and data center industry. As vice president of Colocation Product Management for CenturyLink, Mr. Leonard is responsible for enhancing colocation services, growing the business through new client and market opportunities, and ensuring that customers receive the most current and cost effective solutions. Prior to joining CenturyLink, he was director of Product Marketing at Switch and Data Facilities, and director of Marketing at PAIX. As a seasoned product and marketing executive for these data center and Internet exchange providers, Mr. Leonard’s primary focus was developing detailed strategic marketing plans leveraging market and revenue opportunity through market sizing. Mr. Leonard has continued to specialize in market sizing, market share analysis, strategic planning, market-based pricing, product development, channel marketing, and sales development. He has a Bachelor of Science degree from the University of California.

A Holistic Approach to Reducing Cost and Resource Consumption

Data center operators need to move beyond PUE and address the underlying factors driving poor IT efficiency.

By Matt Stansberry and Julian Kudritzki, with Scott Killian

Since the early 2000s, when the public and IT practitioners began to understand the financial and environmental repercussions of IT resource consumption, the data center industry has focused obsessively and successfully on improving the efficiency of data center facility infrastructure. Unfortunately, we have been focused on just the tip of the iceberg: the most visible but smallest piece of the IT efficiency opportunity.

At the second Uptime Institute Symposium in 2007, Amory Lovins of the Rocky Mountain Institute stood on stage with Uptime Institute Founder Ken Brill and called on IT innovators and government agencies to improve server compute utilization, power supplies, and the efficiency of the software code itself.

But those calls to action fell on deaf ears, leaving power usage effectiveness (PUE) as the last vestige of the heady days when data center energy was top of mind for industry executives, regulators, and legislators. PUE is an effective engineering ratio that data center facilities teams can use to capture baseline data and track the results of efficiency improvements to mechanical and electrical infrastructure. It is also useful for design teams comparing equipment or topology-level solutions. But as industry adoption of PUE has expanded, the metric is increasingly being misused as a methodology to cut costs and prove stewardship of corporate and/or environmental resources.
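For reference, PUE is simply the ratio of total facility energy to IT equipment energy:

\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
\]

so a site drawing 1,700 kW in total to support a 1,000-kW IT load, for example, operates at a PUE of 1.70 (the load figures are illustrative, not survey data).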


Figure 1. 82% of Senior Execs are tracking PUE and reporting those findings to their management. Source: Uptime Institute Data Center Industry Survey 2014

Feedback from the Uptime Institute Network around the world confirms Uptime Institute’s field experience that enterprise IT executives are overly focused on PUE. According to the Uptime Institute’s Annual Data Center Industry Survey, conducted January-April 2014, the vast majority of IT executives (82%) track PUE and report that metric to their corporate management. By focusing on PUE, IT executives are spending effort and capital for diminishing returns and ignoring the underlying drivers of poor IT efficiency.

For nearly a decade, Uptime Institute has recommended that enterprise IT executives take a holistic approach to significantly reducing the cost and resource consumption of compute infrastructure.

Ken Brill identified the following as the primary culprits of poor IT efficiency as early as 2007:

  • Poor demand and capacity planning within and across functions (business, IT, facilities)
  • Significant failings in asset management (6% average server utilization, 56% facility utilization)
  • Boards, CEOs, and CFOs not holding CIOs accountable for critical data center facilities’ CapEx and data center operational efficiency

Perhaps the industry was not ready to hear this governance message, and the economics did not motivate a broad response. Additionally, the furious pace at which data centers were being built distracted from the ongoing cost of IT service delivery. Since then, operational costs have continued to escalate as a result of insufficient attention being paid to the true cost of operations.

Rising energy, equipment, and construction costs and increased government scrutiny are compelling a more mature management model that identifies and rewards improvements to the most glaring IT inefficiencies. At the same time, the primary challenges facing IT organizations are unchanged from almost 10 years ago. Select leading enterprises have taken it upon themselves to address these challenges. But the industry lacks a coherent model and method that can be shared and adopted for the full benefit of the industry.

A solution developed by and for the IT industry will be more functional and impactful than a coarse adaptation of other industries’ efficiency programs (manufacturing and mining have been suggested as potential models) or government intervention.

In this document, Uptime Institute presents a meaningful justification for unifying the disparate disciplines and efforts under a holistic plan to radically reduce IT cost and resource consumption.

Multidisciplinary Approach Includes Siting, Design, IT, Procurement, Operations, and Executive Leadership

Historically, data center facilities management has driven IT energy efficiency. According to the Uptime Institute’s Annual Data Center Industry Survey (2011-2014), less than 20% of companies report that their IT departments pay the data center power bill, and the vast majority of companies allocate this cost to the facilities or real estate budgets. This lopsided financial arrangement fosters unaccountable IT growth, inaccurate planning, and waste (see Figure 2).


Figure 2. Less than 20% of companies report that their IT departments pay the data center power bill, and the vast majority of companies allocate this cost to the facilities or real estate budgets. Source: Uptime Institute Data Center Industry Survey 2012

The key to success for enterprises pursuing IT efficiency is to create a multidisciplinary energy management plan (owned by senior executive leadership) that includes the following:

Executive commitment to sustainable results

  • A formal reporting relationship between IT and data center facilities management with a chargeback model that takes into account procurement and operations/maintenance costs
  • Key performance indicators (KPIs) and mandated reporting for power, water, and carbon utilization
  • A culture of continuous improvement with incentives and recognition for staff efforts
  • Cost modeling of efficiency improvements for presentation to senior management
  • Optimization of resource efficiency through ongoing management and operations
  • Computer room management: rigorous airflow management and no bypass airflow
  • Testing, documenting, and improving IT hardware utilization
  • IT asset management: consolidating, decommissioning, and recycling obsolete hardware
  • Managing software and hardware life cycles from procurement to disposal

Effective investment in efficiency during site planning and design phase of the data center

  • Site-level considerations: utility sourcing, ambient conditions, building materials, and effective land use
  • Design and topology that match business demands with maximum efficiency
  • Effective monitoring and control systems
  • Phased buildouts that scale to deployment cycles

Executive Commitment to Sustainable Results

Any IT efficiency initiative is going to be short-lived and less effective without executive authority to drive the changes across the organization. For both one-time and sustained savings, executive leadership must address the management challenges inherent in any process improvement. Many companies are unable to effectively hold IT accountable for inefficiencies because financial responsibility for energy costs lies instead with facilities management.

Uptime Institute has challenged the industry for years to restructure company financial reporting so that IT has direct responsibility for its own energy and data center costs. Unfortunately, there has been very little movement toward that kind of arrangement, and industry-wide chargeback models have been flimsy, disregarded, or nonexistent.


Figure 3. Average PUE decreased dramatically from 2007-2011, but efficiencies have been harder to find since then. Source: Uptime Institute Data Center Industry Survey 2014

Perhaps we’ve been approaching this issue from the wrong angle. Instead of moving the data center’s financial responsibilities over to IT, some organizations are moving the entire Facilities team and costs wholesale into a single combined department.

In one example, a global financial firm with 22 data centers across 7 time zones recently merged its Facilities team into its overall IT infrastructure organization and achieved the following results:

  • Stability: an integrated team that provides single entity accountability and continuous improvement
  • Energy Efficiency: holistic approach to energy from chips to chillers
  • Capacity: design and planning much more closely aligned with IT requirements

This globally integrated organization with single-point ownership and accountability established firm-wide standards for data center design and operation and deployed an advanced tool set that integrates facilities with IT.

This kind of cohesion is necessary for a firm to conduct effective cost modeling, implement tools like DCIM, and overcome cultural barriers associated with a new IT efficiency program.

Executive leadership should consider the following when launching a formal energy management program:

  • Formal documentation of responsibility, reporting, strategy, and program implementation
  • Cost modeling and reporting on operating expenses, power cost, carbon cost per VM (virtual machine), and chargeback implementation (a simple illustration follows this list)
  • KPIs and targets: power, water, carbon emissions/offsets, hardware utilization, and cost reduction
  • DCIM implementation: dashboard that displays all KPIs and drivers that leadership deems important
    for managing to business objectives
  • Incentives and recognition for staff
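The specific chargeback formula is up to each organization; purely as a hypothetical illustration (the function name, rates, and figures below are invented, not Uptime Institute guidance), a per-VM monthly charge that rolls up power, carbon, and allocated operations cost might look like this in Python:

```python
def monthly_vm_chargeback(it_kwh, pue, power_rate_per_kwh,
                          carbon_kg_per_kwh, carbon_price_per_tonne,
                          allocated_opex):
    """Roll power, carbon, and allocated operations cost into one per-VM charge."""
    facility_kwh = it_kwh * pue                          # gross up IT energy by PUE
    power_cost = facility_kwh * power_rate_per_kwh
    carbon_cost = (facility_kwh * carbon_kg_per_kwh / 1000.0) * carbon_price_per_tonne
    return round(power_cost + carbon_cost + allocated_opex, 2)


# Example: a VM drawing 90 kWh of IT energy per month in a PUE 1.7 facility
charge = monthly_vm_chargeback(it_kwh=90, pue=1.7, power_rate_per_kwh=0.12,
                               carbon_kg_per_kwh=0.5, carbon_price_per_tonne=10.0,
                               allocated_opex=25.0)
print(charge)   # about 44.12 per month in this made-up example
```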

Operational Efficiency

Regardless of an organization’s data center design topology, there are substantial areas in facility and IT management where low-cost improvements will reap financial and organizational rewards. On the facilities management side, Uptime Institute has written extensively about the simple fixes that prevent bypass airflow, such as ensuring Cold Aisle/Hot Aisle layout in data centers, installing blanking panels in racks, and sealing openings in the raised floor.


Figure 4. Kaiser Permanente won a Brill Award for Efficient IT in 2014 for improving operational efficiency across its legacy facilities.

In a 2004 study, Uptime Institute reported that the cooling capacity of the units found operating in a large sample of data centers was 2.6 times what was required to meet the IT requirements—well beyond any reasonable level of redundant capacity. In addition, an average of only 40% of the cooling air supplied to the data centers studied was used for cooling IT equipment. The remaining 60% was effectively wasted capacity, required only because of mismanaged airflow.

More recent industry data shows that the average ratio of operating nameplate cooling capacity to IT requirement has increased from 2.6 to 3.9. Disturbingly, this trend is going in the wrong direction.
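To make those ratios concrete (the 1,000-kW load is an illustrative figure, not survey data): a room with a 1,000-kW IT load at the 2004 ratio would have about

\[
1{,}000\ \text{kW} \times 2.6 = 2{,}600\ \text{kW}
\]

of nameplate cooling running, versus roughly 3,900 kW at today’s ratio of 3.9; and if only 40% of the supplied air actually cools IT equipment, the remaining 60% of the air being moved (and the fan energy behind it) is doing no useful work.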

Uptime Institute has published a comprehensive, 29-step guide to data center cooling best practices to help data center managers take greater advantage of the energy savings opportunities available while providing improved cooling of IT systems: Implementing Data Center Cooling Best Practices 

Health-care giant Kaiser Permanente recently deployed many of those steps across four legacy data centers in its portfolio, saving approximately US$10.5 million in electrical utility costs and averting 52,879 metric tons of carbon dioxide (CO2). Kaiser Permanente won a Brill Award for Efficient IT in 2014 for its leadership in this area (see Figure 4).

According to Uptime Institute’s 2014 survey data, a large percentage of companies are tackling the issues around inefficient cooling (see Figure 5). Unfortunately, there is not a similar level of adoption for IT operations efficiency.


Figure 5. According to Uptime Institute’s 2014 survey data, a large percentage of companies are tackling the issues around inefficient cooling.

The Sleeping Giant: Comatose IT Hardware

Wasteful, or comatose, servers hide in plain sight in even the most sophisticated IT organizations. These servers, abandoned by application owners and users but still racked and running, represent a triple threat in terms of energy waste—squandering power at the plug, wasting data center facility capacity, and incurring software licensing and hardware maintenance costs.

Uptime Institute has maintained that an estimated 15-20% of servers in data centers are obsolete, outdated, or unused, and that remains true today.

The problem is likely more widespread than previously reported. According to Uptime Institute research, only 15% of respondents believe their server populations include 10% or more comatose machines. Yet, nearly half (45%) of survey respondents have no scheduled auditing to identify and remove unused machines.

Uptime Institute launched the Server Roundup contest in October 2011 to raise awareness about the removal and recycling of comatose and obsolete IT equipment and reduce data center energy use. Uptime Institute invited companies around the globe to help address and solve this problem by participating in the Server Roundup.

The financial firm Barclays removed nearly 10,000 servers in 2013; these machines directly consumed an estimated 2.5 megawatts (MW) of power. Left on the wire, the power bill would be approximately US$4.5 million higher than it is today. Installed together, these servers would fill 588 server racks. Barclays also saved approximately US$1.3 million on legacy hardware maintenance costs, reduced the firm’s carbon footprint, and freed up more than 20,000 network ports and 3,000 SAN ports as a result of this initiative (see Figure 6).
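As a rough sanity check on that figure (the electricity rate is our assumption, not Barclays data): 2.5 MW running continuously is

\[
2.5\ \text{MW} \times 8{,}760\ \text{h/yr} \approx 21{,}900\ \text{MWh/yr},
\]

which at a blended rate of about US$0.20/kWh comes to roughly US$4.4 million a year, consistent with the reported US$4.5 million.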

Barclays was a Server Roundup winner in 2012 as well, removing 5,515 obsolete servers, with power savings of 3 MW, and US$3.4 million annualized savings for power, and a further US$800,000 savings in hardware maintenance.


Figure 6. The Server Roundup sheds light on a serious topic in a humorous way. Barclays saved over $US10 million in two years of dedicated server decommissioning.

In two years, Barclays has removed nearly 15,000 servers and saved over US$10 million. Server Roundup overwhelmingly proves that disciplined hardware decommissioning can provide a significant financial impact. Yet, despite these huge savings and intangible benefits to the overall IT organization, many firms are not applying the same level of diligence and discipline to a server decommissioning plan, as noted previously.

This is the crux of the data center efficiency challenge ahead—convincing more organizations of the massive return on investment in addressing IT instead of relentlessly pursuing physical infrastructure efficiency.

Organizations need to hold IT operations teams accountable to root out inefficiencies, of which comatose servers are only the most egregious example.

Other systemic IT inefficiencies include:

  • Neglected application portfolios with outdated, duplicate, or abandoned software programs
  • Low-risk activities and test and development applications consuming high-resiliency, resource-intensive capacity
  • Server hugging—not deploying workloads to solutions with highly efficient, shared infrastructure
  • Fragile legacy software applications requiring old, inefficient, outdated hardware—and often duplicate IT hardware installations—to maintain availability

But, in order to address any of these systemic problems, companies need executive leadership to commit and to take a more activist role than it has previously assumed.

Resource Efficiency in the Design Phase

Some progress has been made, as the majority of current data center designs are now being engineered toward systems efficiency. However, enterprises around the globe operate legacy data centers, and these existing sites by far present the biggest opportunity for improvement and financial return on efficiency investment.

That said, IT organizations should apply the following guidelines to resource efficiency in the design phase:

Take a phased approach, rather than building out vast expanses of white space at once and running rooms for years with very little IT gear. Find a way to shrink the capital project cycle; create a repeatable, scalable model.

Implement an operations strategy in the pre-design, design, and construction phases to improve operating performance (see Start with the End in Mind).

Define operating conditions that approach the limits of IT equipment thermal guidelines and exploit ambient conditions to reduce cooling load.

Data center owners should pursue resource efficiency in all capital projects, within the constraints of their business demands. The vast majority of companies will not be able to achieve the ultra-low PUEs of web-scale data center operators. Nor should they sacrifice business resiliency or cost effectiveness in pursuit of those kinds of designs—given that the opportunities to achieve energy and cost savings in the operations (rather than through design) are massive.

The lesson often overlooked when evaluating web-scale data centers is that IT in these organizations is closely aligned with the power and cooling topology. The web-scale companies have an IT architecture that allows low equipment-level redundancy and a homogeneous IT environment conducive to custom, highly utilized servers. These efficiency opportunities are not available to many enterprises. However, most enterprises can emulate, if not the actual design, then the concept of designing to match the IT need. Approaches include phasing, varied Tier data centers (e.g., test and development and low-criticality functions can live in Tier I and II rooms, while business-critical activity is in Tier III and IV rooms), and increased asset utilization.

Conclusion

Senior executives understand the importance of reporting and influencing IT energy efficiency, and yet they are currently using inappropriate tools and metrics for the job. The misguided focus on infrastructure masks, and distracts them from addressing, the real systemic inefficiencies in most enterprise organizations.

The data center design community should be proud of its accomplishments in improving power and cooling infrastructure efficiency, yet the biggest opportunities and savings can only be achieved with an integrated and multi-disciplined operations and management team. Any forthcoming gains in efficiency will depend on documenting data center cost and performance, communicating that data in business terms to finance and other senior management within the company, and getting the hardware and software disciplines to take up the mantle of pursuing efficient IT on a holistic basis.

There is increasing pressure for the data center industry to address efficiency in a systematic manner, as more government entities and municipalities are contemplating green IT and energy mandates.

In the 1930s, the movie industry neutralized a patchwork of onerous state and local censorship efforts (and averted the threat of federal action) by developing and adopting its own set of rules: the Motion Picture Production Code. These rules, often called the Hays Code, evolved into the MPAA film ratings system used today, a form of voluntary self-governance that has helped the industry to successfully avoid regulatory interference for decades.

Uptime Institute will continue to produce research, guidelines, and assessment models to assist the industry in self-governance and continuous improvement. Uptime Institute will soon release supplemental papers on relevant topics such as effective reporting and chargebacks.

Additional Resources

Implementing data center cooling best practices

Server Decommissioning as a Discipline

2014 Uptime Institute Data Center Industry Survey Results

Start With the End in Mind

Putting DCIM to work for you


The High Cost of Chasing Lower PUEs

In 2007, Uptime Institute surveyed its Network members (a user group of large data center owners and operators) and found an average PUE of 2.50. By 2011, the average PUE reported in Uptime Institute’s data center industry survey had improved to 1.89.

So how did the industry make those initial improvements?

A lot of these efforts were simple fixes that prevented bypass airflow, such as ensuring Cold Aisle/Hot Aisle arrangement in data centers, installing blanking panels in racks, and sealing cutouts. Many facilities teams appear to have done what they can to improve existing data center efficiency, short of making huge capital improvements.

From 2011 to today, the average self-reported PUE has only improved from 1.89 to 1.70. The biggest infrastructure efficiency gains happened 5 years ago, and further improvements will require significant investment and effort, with increasingly diminishing returns.
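To put those survey averages in concrete terms, the short calculation below shows the facility overhead implied by each PUE for a hypothetical 1-MW IT load. This is illustrative arithmetic only, not figures from the survey.

```python
# Illustrative arithmetic only (not figures from the survey): the facility
# overhead implied by a given PUE for a hypothetical 1-MW IT load.

def facility_overhead_kw(pue: float, it_load_kw: float) -> float:
    """Power consumed by cooling, power distribution, lighting, etc."""
    return (pue - 1.0) * it_load_kw

it_load_kw = 1000.0  # hypothetical 1-MW IT load

for pue in (2.50, 1.89, 1.70):
    print(f"PUE {pue:.2f}: {facility_overhead_kw(pue, it_load_kw):,.0f} kW overhead")

# PUE 2.50: 1,500 kW overhead
# PUE 1.89: 890 kW overhead
# PUE 1.70: 700 kW overhead
# The 2007-2011 drop removed ~610 kW of overhead per MW of IT load;
# the 2011-to-today drop removes only ~190 kW more.
```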

In a 2010 interview, Christian Belady, architect of the PUE metric, said, “The job is never done, but if you focus on improving in one area very long you’ll start to get diminishing returns. You have to be conscious of the cost pie, always be conscious of where the bulk of the costs are.”

But executives are pressing for more. Further investments in technologies and design approaches may provide negative financial payback and do nothing to address the systemic IT efficiency problems.


What Happens If Nothing Happens?

In some regions, energy costs are predicted to increase by 40% by 2020. Most organizations cannot afford such a dramatic increase to the largest operating cost of the data center.

For finance, information services, and other industries, IT is the largest energy consumer in the company. Corporate sustainability teams have achieved meaningful gains in other parts of the company but seek a meaningful way to approach IT.

China is considering a government categorization of data centers based upon physical footprint. Any resulting legislation will ignore the defining business, performance, and resource consumption characteristics of a data center.

In South Africa, the carbon tax has reshaped the data center operations cost structure for large IT operators and visibly impacted the bottom line.

Government agencies will step in to fill the void and create a formula- or metric-based system for demanding efficiency improvement, which will not take into account an enterprise’s business and operating objectives. For example, the U.S. House of Representatives recently passed a bill (HR 2126) that would mandate new energy efficiency standards in all federal data centers.


 

Matt Stansberry is director of Content and Publications for the Uptime Institute and also serves as program director for the Uptime Institute Symposium, an annual spring event that brings together 1,500 stakeholders in enterprise IT, data center facilities, and corporate real estate to deal with the critical issues surrounding enterprise computing. He was formerly editorial director for TechTarget's Data Center and Virtualization media group and was managing editor of Today's Facility Manager magazine. He has reported on the convergence of IT and Facilities for more than a decade.

 

Julian Kudritzki joined the Uptime Institute in 2004 and currently serves as COO. He is responsible for the global proliferation of Uptime Institute standards. He has supported the founding of Uptime Institute offices in numerous regions, including Brasil, Russia, and North Asia. He has collaborated on the development of numerous Uptime Institute publications, education programs, and unique initiatives such as Server Roundup and FORCSS. He is based in Seattle, WA.

 

 

 

Scott Killian joined the Uptime Institute in 2014 and currently serves as VP for the Efficient IT Program. He surveys the industry for current practices and develops new products to facilitate industry adoption of best practices. Mr. Killian directly delivers consulting at the site management, reporting, and governance levels. He is based in Virginia.

Prior to joining Uptime Institute, Mr. Killian led AOL's holistic resource consumption initiative, which resulted in AOL winning two Uptime Institute Server Roundups for decommissioning more than 18,000 servers and reducing operating expenses by more than US$6 million. In addition, AOL received three awards in the Green Enterprise IT (GEIT) program. AOL accomplished all this in the context of a five-year plan developed by Mr. Killian to optimize data center resources, which saved ~US$17 million annually.

Want to stay informed on this topic? Provide us with your contact information and we will add you to our Efficient IT communications group.

McKesson Retrofits Economizers in Live DC Environment

Pharmaceutical installs economizers in a live site to improve energy efficiency

By Wayne Everett, Dean Scharffenberg, and Coleman Jones

McKesson Corporation, the oldest and largest health-care services company in the United States, plays an integral role in meeting the nation’s health-care needs. McKesson is the largest pharmaceutical distributor in North America, delivering one-third of all medications used in the U.S. and Canada every day. In addition, McKesson Technology Services provides 52% of U.S. hospitals with secure data storage. With revenues in excess of US$122 billion annually, McKesson ranks 15th on the Fortune 500 list.

The McKesson facility is a purpose-built data center originally constructed in 1985. It includes 34,000 square feet (ft2) of raised floor space on two floors; an IT load of 1,690 kilowatts (kW) occupies 69% of the raised floor. Until the retrofit, 30-ton direct expansion (DX) air-cooled air handling units (AHUs) provided cooling.

The addition of a 2,000-ft2 economizer on the first floor (see Figure 1) and a new penthouse structure (see Figure 2) to house an economizer on the second floor will reduce the data center’s cooling requirement drastically when ambient temperatures allow. Currently the outside air economizer is on line with a total savings of 550 kW.
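Annualizing that 550-kW figure requires assumptions the article does not state. As a rough illustration only, assuming the savings hold year-round and a hypothetical utility rate of US$0.10/kWh:

```python
# Illustrative only: the article reports a 550-kW reduction with the economizer
# on line. The annual figures below assume the savings hold year-round and a
# hypothetical utility rate, neither of which the article states.

SAVINGS_KW = 550.0
HOURS_PER_YEAR = 8760
ASSUMED_RATE_USD_PER_KWH = 0.10  # hypothetical

annual_kwh = SAVINGS_KW * HOURS_PER_YEAR
annual_cost_avoided = annual_kwh * ASSUMED_RATE_USD_PER_KWH

print(f"{annual_kwh / 1e6:.1f} GWh per year")             # ~4.8 GWh
print(f"US${annual_cost_avoided:,.0f} per year avoided")  # ~US$482,000 at the assumed rate
```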

Figure 1. Roof deck of the first floor economizer

McKesson believes that leadership in distribution and technology is achieved by continual improvement, and it continuously works to lower health-care costs for its customers. The goal of lowering costs while maintaining the best product for consumers was apparent in McKesson's decision to retrofit its primary data center to use outside air for primary cooling.

Figure 2. Roof of the completed penthouse.

The reason for retrofitting the facility was simple: McKesson wanted to use outside air in lieu of AHUs, thus reducing the power load. The construction plan was complex due to the strict zero downtime requirement of the fully operational facility. To ensure construction proceeded as planned, McKesson employed the expertise and experience of general contractor Rudolph and Sletten. The solution devised and implemented by McKesson and the Rudolph and Sletten team enables the McKesson facility to exhaust hot air from the Hot Aisles and introduce cool outside air under the raised floor. In addition, Rudolph and Sletten:

• Relocated the condensing units (CUs)

• Retrofitted existing AHUs

• Built the new first-floor economizer

• Installed controls and performed functional testing

Figure 3. Airflow in McKesson's data center

The design for introducing cool outside air into the first floor computer room required removing 90 feet of the existing outside wall. Eleven condensing units populated a concrete pad on the other side of this wall. A new 2,000-ft2 economizer room was constructed on this pad, and the condensing units (CUs) were relocated to the roof of this new economizer room.

The economizer room includes an upper warm air plenum that connects to the Hot Aisle plenum of the computer room. A dozen exhaust fans penetrating the economizer roof expel warm air from the plenum area. Cool outside air is drawn through a louvered wall into a filter bank and a fan wall consisting of 17 variable speed fans. The fans blow air under the raised floor into the Cold Aisles. Nine mixing dampers between the warm air plenum and the nine outside air intake dampers allow for warming of cold outside air. The AHUs in the first floor room do not run when outside air is being used for cooling. Each AHU has a return damper that closes to preclude backflow through the AHU when outside air cooling is in use (see Figure 3).

The design for introducing cool outside air to the second floor called for air to be ducted from a new penthouse area into the AHU return, using the AHU fans to blow the air under the raised floor. Some of the AHU condensing units resided on the roof. These were moved to a new higher roof and filtered air intakes with dampers were mounted on the old roof. A new 18,000 ft2 penthouse with louvered outside walls on three sides was constructed. This provided a double roof over the second floor critical computer equipment. Warm air from the second floor Hot Aisle plenum area is ducted through the original and new roofs to exhaust fans on the new upper roof. Each AHU has its own outside air intake and mixing dampers and these dampers are controlled in tandem to maintain the desired discharge temperature at each AHU.

Economizer Construction
The second floor modifications began first. To begin construction of the new roof, Rudolph and Sletten bolted and then welded steel stub columns to the tops of the existing building columns. With the rainy season approaching, the roofing was strategically cut to allow the new steel stub columns to fit down onto the existing building. Exhaust fans and welding blankets, as well as temporary plastic structures in the computer rooms, were set in place to ensure that neither construction debris nor smoke would interfere with the servers running just feet from the construction. Phase 1 ended with the new steel penetrations sealed and the roof watertight for the winter.

Once the weather began to improve, Phase 2 commenced. Several days of crane picks positioned the new structural steel for the penthouse. Steel was raised over the existing structure, directly above the server rooms and running condensing units, and placed with precision. Perimeter steel tied directly to the Phase 1 stub columns, while interior steel connected to the existing penthouse structure.

Figure 4. The first floor economizer is just outside the plastic structures in the exterior wall of a data hall.

On the first floor, work began on a 2,000-ft2 economizer. Rudolph and Sletten’s crew excavated and placed large concrete footings to accept new structural steel columns. The new footings tied directly into the existing footing of the building. As soon as the concrete met its compressive strength requirements, the new steel was installed inches away from the data room walls (see Figure 4).

Phased CU Relocation
Next, roofs needed to be installed on the new penthouse and on the first floor economizer. This presented a unique obstacle: if the roof deck were completed, heat from the condensing units would be trapped under the roof area, resulting in AHU shutdowns, and there would be no access to systematically shut down the CUs and relocate them onto the new rooftop. The solution was to leave out the center portion of the new roof (see Figure 5). This allowed heat to escape and provided an access point for the cranes to move the CUs to the new roof area. Once the leave-out bay was constructed, McKesson started relocating CUs to the new penthouse roof (see Figure 6).

Figure 5. Creative construction sequencing made it possible to build new economizers and a penthouse without risking damage to the data center and while maintaining cooling to an operating facility.

Building the economizer on the first floor posed some of the same challenges, as there were condensing units under that roof area. Roof decking had to be placed in three phases. McKesson installed a third of the roof decking and then relocated some of the CUs from under the structure to the new roof of the first floor structure. Once this was successful, the second and third phases of roof decking and CU relocation commenced.

Figure 6. Cranes were needed at several points in the program, first to move steel into place and later to relocate CUs.

Retrofitting AHUs
With the penthouse and first floor economizers well under way, it was time to retrofit the AHUs. Every AHU on the second floor had to be retrofitted with new ductwork to provide either outside or return air. Ductwork was fitted with dampers to modulate between outside air and return air, or to mix the two when the outside air is too cold. Approximately 30 new penetrations in the old roof were needed to supply the AHUs with outside air. The new roof penetrations housed the new ductwork for the AHUs as well as structural steel to stiffen the roof and support the ductwork (see Figure 7).

Figure 7. Ductwork was the last stage in bringing the second floor economizer on line.

The second floor computer room retrofit was performed in sections with a temporary anti-static fire resistant poly barrier installed to maintain the server area during construction. Once the temporary barriers were installed and the raised flooring protected, the ceiling could be removed to start fitting up the new ductwork.

The phased demolition of the roofing began at the same time. Several high-powered magnets holding welding blankets tight against the roof facilitated welding of the new structure at the roof penetration. These steps ensured all debris, welding slag, and unwanted materials would not fall into the second floor server room. Ductwork for supply air to the AHUs stopped at the old roof, while the new exhaust went up to the new penthouse roof.

The First Floor Economizer
The first floor air economizer presented its own unique set of challenges. In order to get outside air to the data center, the 2,000-ft2 first floor economizer was built before being connected to the existing data center. Once connected, outside air would be pulled in through a fan wall and pushed under the raised floor to provide cooling to the servers. Newly installed exhaust fans would draw exhaust from the Hot Aisles out through the roof of the mechanical room. The economizer was built on the same location as a CU farm that held the 30-ton condensers for the data center’s first floor. The interior buildout began once these CUs were moved to the roof of the ground floor economizer and started operation.

A temporary perimeter wall erected inside the data center where the two buildings adjoin protected equipment and provided a dustproof barrier. The temporary wall was constructed of a 2-hour fire-rated board with anti-static fire retardant attached on the data center side. There was approximately 6 ft between the servers and the existing wall and approximately 2 ft between the equipment and the temporary wall. McKesson and Rudolph and Sletten planned ways to remove the temporary wall to allow equipment replacement in the event of a failure. In addition, staff monitored differential pressure between the data center and the construction area to ensure the data center stayed at positive pressure to keep dust from entering the facility. A professional service cleaned the data center daily and weekly to maintain clean room conditions.

Once temporary containment was in place, deconstruction commenced. The plasterboard was removed to expose the perimeter precast concrete wall. Over 1,000 ft2 of precast concrete perimeter wall was removed, and new structural steel was welded in place to support the opening. After the steel was in place, sections of the precast concrete wall were carefully saw cut inside the first floor economizer and removed.

New construction began as soon as all these preparations were complete. A new wall inside the data center, made of metal stud framing and insulated foam board, now separates the two areas. A new sandwich panel system separates the outside air intake from the exhaust. Dampers line the ceiling, between the outside air intake and exhaust, to allow for mixing and re-use of air. An expansion joint capable of moving up to 10 inches, to protect against any potential seismic event, lines the connecting walls and rooftop of the first floor economizer.

Controls
Outside air cooling on the second floor is provided by modulating the outside air supply damper and the warm-air plenum mixing damper of each AHU to supply a specific temperature under the floor when outside air temperatures are cool enough for this mode of cooling. Temperature sensors located throughout the penthouse air intake area enable and disable the second floor outside air cooling system. Each AHU controls its own discharge temperature as each AHU has its own outside air supply damper and mixing damper.

Fixed-speed AHU fans move the supply air, and each AHU's own controls manage its compressors. On initiation of outside air cooling, each AHU's outside air damper opens wide and its mixing damper closes.

As outside air is cooler than warm plenum return air, the AHU controls shut off its compressors. When outside air cooling mode is initiated, four exhaust fans in each of the two outside air cooled rooms on the second floor are enabled. These variable speed exhaust fans modulate their speed according to the differential pressure between the computer room and the office area lobby space. If outside air temperature decreases and becomes too cool, the AHU mixing damper partially opens while the outside air supply damper partially closes. When the outside air temperature in the air intake area rises to a value too high for supply air, the outside air supply damper closes fully and the mixing damper opens wide. The AHU return temperature senses the higher temperature of the plenum air and starts its compressors. The exhaust fans are shut down and their exhaust dampers closed as the AHUs transition to recirculation cooling mode.

The first floor outside air cooling regime does not use the AHU fans to supply air under the floor but instead employs a fan wall. The control of the fan wall, exhaust fan gallery, outside air supply dampers, and mixing dampers is split into thirds. Each third of the system is independently controlled. When the outside air temperature is low enough to initiate cooling, the supply damper opens wide and the mixing damper remains closed. The supply fan speed is controlled by variable frequency drives (VFDs), with a third of the fans acting in concert in response to a raised floor differential pressure signal. Exhaust fan speed is controlled by the differential pressure between the computer room and the lobby, again by thirds. When outside air cooling is initiated, and after a time delay to allow outside air cooling to take effect, a signal is sent to each AHU to stop its blower fan and compressors. A signal is also sent to close the AHU return damper to preclude backflow through the shutdown AHU. As the outside air temperature decreases, the supply damper and mixing damper are modulated to maintain a minimum supply temperature to the raised floor. When the temperature of the outside air rises to an unacceptable level, the AHU return dampers are opened and the AHU units restart in sequence. Warm air is drawn through the AHUs, which initiates compressor cooling. The fan wall and exhaust fans are then sequenced off, and the outside air supply and mixing dampers close.

The controls consist of two programmable logic controllers, one for the first floor and one for the second floor controls.
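The sketch below illustrates, in simplified form, the kind of sequence described above for one of the three independently controlled sections of the first floor system. The setpoints, deadbands, and gain are illustrative placeholders, not values from McKesson's PLC program.

```python
# A simplified sketch of the first floor economizer sequence described above,
# for one of the three independently controlled sections. Setpoints, deadbands,
# and gains are illustrative placeholders; the real logic runs on a
# programmable logic controller.

ECONOMIZER_ENABLE_F = 68.0   # hypothetical outside air enable temperature
ECONOMIZER_DISABLE_F = 75.0  # hypothetical lockout temperature
MIN_SUPPLY_F = 60.0          # hypothetical minimum raised-floor supply temperature
FLOOR_DP_SETPOINT = 0.05     # hypothetical raised-floor differential pressure, in. w.c.

def economizer_enabled(outside_air_f: float, currently_on: bool) -> bool:
    """Enable/disable outside air cooling with a deadband to prevent short cycling."""
    if outside_air_f <= ECONOMIZER_ENABLE_F:
        return True
    if outside_air_f >= ECONOMIZER_DISABLE_F:
        return False
    return currently_on

def damper_positions(outside_air_f: float) -> tuple[float, float]:
    """Return (supply damper, mixing damper) positions, 0.0 closed to 1.0 open."""
    if outside_air_f >= MIN_SUPPLY_F:
        return 1.0, 0.0  # supply wide open, mixing closed
    # Colder outside air: open the mixing damper and close the supply damper
    # proportionally to hold the minimum supply temperature (crude linear blend).
    mix = min(0.8, (MIN_SUPPLY_F - outside_air_f) / 20.0)
    return 1.0 - mix, mix

def fan_wall_speed(floor_dp: float, current_speed: float, gain: float = 2.0) -> float:
    """Trim the VFD speed of this section's fans toward the raised-floor dP setpoint."""
    error = FLOOR_DP_SETPOINT - floor_dp
    return max(0.2, min(1.0, current_speed + gain * error))
```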

Fire Protection of New Areas
Both the new penthouse addition and the ground floor economizer addition are protected by pre-action dry pipe sprinkler systems. Two sets of VESDA (very early smoke detection apparatus) systems monitor each room for smoke. The first system monitors for smoke within the room. The second samples air at the louvers of the first floor and penthouse air intakes. If the VESDA units at the louvers sense increased smoke, they signal the interior VESDA units to raise their smoke alarm thresholds by the same value. This precludes a false actuation of the sprinkler system due to fires outside the economizer or penthouse area.
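A minimal sketch of that threshold-offset interlock, with a purely hypothetical baseline value, might look like this:

```python
# A minimal sketch of the interlock described above. The baseline threshold
# is hypothetical; real VESDA detectors are configured by the fire
# protection engineer in obscuration (%/m).

BASE_ALARM_THRESHOLD = 0.08  # %/m obscuration, illustrative only

def interior_alarm(interior_reading: float, louver_reading: float) -> bool:
    """Alarm only on smoke beyond what the intake louver detector already sees."""
    effective_threshold = BASE_ALARM_THRESHOLD + louver_reading
    return interior_reading > effective_threshold
```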

Prior to the outside air upgrade the building sprinkler system was supplied directly from the street. The addition of the penthouse required the installation of a building fire pump to boost water pressure. The new 30 horsepower electric fire pump is fed directly from a utility transformer.

Conclusion
The McKesson Data Center Outside Air Project achieved its goal of reducing the required power load and is currently running 550 kW below historic levels for this time of year. The collaborative construction plan between McKesson and Rudolph and Sletten ensured zero downtime and minimal disruption to the 24/7 operational mission critical center. The retrofitted system will continue to reduce power loads and achieve significant cost savings.


 

Wayne Everett

With 25 years of experience in electrical and data center construction, Wayne Everett is team lead of the Data Center Facilities Engineering Department at McKesson. He oversees design and strategic planning, facilities maintenance, infrastructure upgrades, construction, and special projects for McKesson’s critical hosting and networking facilities. His team consists of engineers, data center facilities technicians, electricians, and HVAC mechanics. Prior to coming to McKesson, he worked as a project manager for electrical contractors in the Atlanta area, where he managed a wide range of electrical construction projects including critical systems. He also holds an electrical contractor license in the state of Georgia. Mr. Everett recently celebrated 10 years with McKesson.

 

Dean Scharffenberg

Dean Scharffenberg, a 19-year veteran of McKesson, serves as the senior director of Data Center Engineering and Automation. He provides data center analysis for McKesson's acquisitions and enterprise data center strategy/design. Mr. Scharffenberg's primary responsibilities include oversight of data center facilities, enterprise storage, data center networks, server engineering, and computer systems installation. As a mechanical engineer, he has a passion for engineering with special interest in fluid dynamics, electrical distribution, virtualization, and capacity planning.

 

 

 

Coleman Jones

As a project manager for general contractor Rudolph and Sletten, Coleman Jones is responsible for managing all aspects of the construction project. His primary duties include negotiating and administering contracts, supervising project team members, monitoring job costs and schedules, and working closely with the architect and the owner to ensure the project is completed on time and within budget. A graduate of the California State University Chico Construction Management program, Mr. Jones is a LEED Accredited Professional with seven years' experience working on critical systems for technology and health-care clients.

Avoid Target Fixation in the Data Center

Removing heat is more effective than adding cooling

By Steve Press, with Pitt Turner, IV

According to Wikipedia, the term target fixation was used in World War II fighter-bomber pilot training to describe why pilots would sometimes fly into targets during a strafing or bombing run. Since then, others have adopted the term. The phenomenon is most commonly associated with scenarios in which the observer is in control of a high-speed vehicle or other mode of transportation. In such cases, the operators become so focused on an observed object that their awareness of hazards or obstacles diminishes.

Motorcyclists know about the danger of target fixation. According to both psychologists and riding experts, focusing so intently on a pothole or any other object may lead to bad choices. It is better, they say, to focus on your ultimate goal. In fact, one motorcycle website advises riders, “…once you are in trouble, use target fixation to save your skin. Don’t look at the oncoming truck/tree/pothole; figure out where you would rather be and fixate on that instead.” Data center operators can experience something like target fixation when they find themselves continually adding cooling to raised floor spaces without ever solving the underlying problem. They can arrive at better solutions by focusing on the ultimate goal rather than on intermediate steps.

A Hot Aisle/Cold Aisle solution that focuses on pushing more cold air through the raised floor is often defeated by bypass airflow.

Good airflow management is a first step in reducing energy use in a data center.

We have been in raised floor environments that had hot spots just a few feet away from very cold spaces. The industry usually tries to address this problem by adding yet more cooling and being creative about air distribution. While other factors are examined, we always wind up adding more cooling capacity.

But, you know what? Data centers still have a cooling problem. In 2004, an industry study found that our computer rooms had nearly 2.6 times the cooling capacity warranted by IT loads. And while the industry has focused on solving this cooling problem, a recent industry study reports that the ratio of cooling capacity to IT load is now 3.9.

Could it be that the data center industry has target fixation on cooling? What if we focused on removing heat rather than adding cooling?

Hot Aisle containment is a way to effectively remove hot air without mixing.

Science and physics instructors teach that you can’t add anything to cool an object; to cool an object, you have to remove heat. Computers generate a watt of heat for every watt they use to operate. Unless you remove that heat, temperatures will increase, potentially damaging critical IT components. Focusing on removing the heat is a subtle but significantly different approach from adding cooling equipment.
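A rough sensible-heat estimate illustrates the scale of the task. This rule-of-thumb calculation is an illustration added here, not from the Kaiser Permanente team: at sea level, the airflow needed to carry away a given heat load at a given air temperature rise is roughly CFM ≈ 3.16 × watts / ΔT(°F).

```python
# Rule-of-thumb sensible-heat arithmetic (an illustration, not from the article):
# at sea level, CFM ≈ 3.16 x watts / deltaT(°F).

def required_cfm(it_load_watts: float, delta_t_f: float) -> float:
    """Airflow needed to remove a heat load at a given air temperature rise."""
    return 3.16 * it_load_watts / delta_t_f

# A hypothetical 5-kW cabinet with a 20°F rise across the servers:
print(round(required_cfm(5000, 20)), "CFM")  # ~790 CFM
```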

Like other data center teams, the Kaiser Permanente Data Center Solutions team had been frustrated by cooling problems. For years, we had been trying to make sure we put enough cool air into the computer rooms. Common solutions addressed cubic feet per minute under the floor, static pressure through the perforated tiles, and moving cold air to the top of 42U computer racks. Although important, these tasks resulted from our fixation on augmenting cooling rather than removing heat. It occurred to us that our conceptual framework might be limiting our efforts, and the language that we used reinforced that framework.

We decided we would no longer think of the problem as a cooling problem. We decided we were removing heat, concluding that after removing all the heat generated by the computer equipment from a room, the only air left would be cool, unheated air. That realization was exciting. A simple change in wording opened up all kinds of possibilities. There are lots of ways to remove heat besides adding cool air.

Ducted exhaust only maximizes heat removal.

Suddenly, we were no longer putting all our mental effort into dreaming up ways to distribute cold air to the data center’s computing equipment. Certainly, the room would still require enough perforated tiles and cool air supply openings for air with less heat to flow, but no longer would we focus on increasingly elaborate cool air distribution methods.

Computer equipment automatically pulls in surrounding air. If that surrounding air contains less heat, the equipment doesn’t have to work as hard to stay cool. The air with less heat would pick up heated air from the computer equipment and carry it on its way to our cooling units. Sounds simple, right?

The authors believe that Hot Aisle pods can be the basis of a very efficient heat rejection system.

In this particular case, we were just thinking of air as the working medium to move heat from one location to another; we would use it to blow the hot air around, so to speak.

Another way of moving heat is to use cool outdoor air to push heated air off the floor, eliminating the need to create your own cool air. And, there are more traditional approaches, where air passes through a heat transfer system, transferring its heat into another medium such as water in a refrigeration system. All of these methods eject heat from the space.

We didn’t know if it would work at first. We understood the overall computer room cooling cycle: the computer room air handling (CRAH) units cool air, the cool air gets distributed to computer rooms and blows hot air off the equipment and back to the CRAH units, and the CRAH units’ cooling coils transfer the heat to the chiller plant.

But we'd shifted our focus to removing heat rather than cooling air; we'd decided to look at the problem differently. Armed with the new wireless temperature sensor technology available from various manufacturers, we were able to capture temperature data throughout our computer rooms for a very small investment. This allowed us to visualize the changes we were making and trend performance data.
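As an illustration only (the article does not describe the software used), summarizing sensor readings against the ASHRAE recommended inlet range might look something like this:

```python
# Illustrative only; the article does not describe its monitoring software.
# Sensor names and readings are made up to show the idea: average recent inlet
# temperatures per sensor and flag anything outside the ASHRAE recommended range.

from statistics import mean

ASHRAE_LOW_F, ASHRAE_HIGH_F = 64.4, 80.6  # 18-27°C recommended inlet range

readings_f = {  # sensor id -> recent inlet temperature samples (°F)
    "row3-rack07-top": [78.9, 81.2, 82.0],
    "row3-rack07-mid": [74.1, 74.8, 75.0],
    "row5-rack02-top": [66.3, 65.9, 66.5],
}

for sensor, temps in readings_f.items():
    avg = mean(temps)
    status = "OK" if ASHRAE_LOW_F <= avg <= ASHRAE_HIGH_F else "OUT OF RANGE"
    print(f"{sensor}: {avg:.1f}°F {status}")
```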

Cold Aisle containment.

Our team became obsessed with getting heat out of the equipment racks and back to the cooling coils so it could be taken out of the building, using the appropriate technology at each site (e.g., air-side economizers, water-side economizers, mechanical refrigeration). We improved our use of blanking plates and sealed penetrations under racks and at the edges of the rooms. We looked at containment methods: Cold Aisle containment first, then Hot Aisle containment, and even ducted cabinet returns. All these efforts focused on moving heat away from computer equipment and back to our heating, ventilation, and air conditioning equipment. Making sure cool air was available became a lower priority.

To our surprise, we became so successful at removing heat from our computer rooms that the cold spots got even colder. As a result, we reduced our use of perforated tiles and turned off some cooling units, which is where the real energy savings came in. We not only turned off cooling units, but also increased the load on each unit, further improving its efficiency. The overall efficiency of the room started to compound quickly.

We started to see dramatic increases in efficiencies. These showed up in the power usage effectiveness (PUE) numbers. While we recognized that PUE only measures facility overhead and not the overall efficiency of the IT processes, reducing facility overhead was the goal, so PUE was one way to measure success.

To measure the success in a more granular way, we developed the Computer Room Functional Efficiency (CRFE) metric, which won an Uptime Institute GEIT award in 2011 (http://bit.ly/1oxnTY8). We used this metric to demonstrate the effects of our new methods on our environment and to prove that we had seen significant improvement.

This air divider is a homemade unit to separate Hot and Cold Aisles and prevent bypass air. Air is stupid; you have to force it to do what you want.

Floor penetration seals are an essential but often overlooked step.

In the first Kaiser Permanente data center in which we implemented what have become our best practices, we started out with 90 CRAH units in a 65,000-square-foot (ft2) raised floor space. The total UPS load was 3,595 kilowatts (kW), and the cooling capacity factor (CCF) was 2.92. After piloting our new methods, we were able to shut down 41 CRAH units, which helped reduce the CCF to 1.7 and realize a sustained energy reduction of just under 10%, or 400 kW. Facility operators can easily project savings to their own environments by multiplying 400 kW by 8,760 hours per year and by their own electrical utility rate. At US$0.10/kilowatt-hour, this would be roughly US$350,000 annually.
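Spelled out, that projection is simply sustained kW savings times hours per year times the utility rate; the short calculation below reproduces the figure.

```python
# The projection described above, spelled out.
savings_kw = 400.0
hours_per_year = 8760
rate_usd_per_kwh = 0.10  # the article's example rate

annual_savings = savings_kw * hours_per_year * rate_usd_per_kwh
print(f"US${annual_savings:,.0f} per year")  # US$350,400, i.e., roughly US$350,000
```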

In addition, our efficiency increased from 67% to 87%, as measured using Kaiser Permanente’s CRFE methodology, and we were still able to maintain stable air supplies to the computer equipment within ASHRAE recommended ranges. The payback on the project to make the airflow management improvements and reduce the number of CRAH units running in the first computer room was just under 6 months. Kaiser Permanente also received a nice energy incentive rebate from the local utility, which was slightly less than 25% of the material implementation cost for the project.

Under floor PDU Plenaform – No leak is too small; find the hole and plug it!

Perforated tile placement is key.

As an added benefit, we also reduced maintenance costs. The raised floor environment is much easier to maintain because of the reduced heat in the environment. When making changes or additions, we just have to worry about removing the heat for the change and making sure an additional air supply is available; the cooling part takes care of itself. There is also a lot less equipment running on the raised floor that needs to be maintained, and we have increased redundancy.

Kaiser Permanente has now rolled this thinking and training out across its entire portfolio of six national data centers and is realizing tremendous benefits. We’re still obsessed with heat removal. We’re still creating new best practices in airflow management, and we’re still taking care of those cold spots. A simple change in how we thought about computer room air has paid dividends. We’re focused on what we want to achieve, not what we’re trying to avoid. We found a path around the pothole.


 

Steve Press has been a data center professional for more than 30 years and is the leader of Kaiser Permanente's (KP) National Data Center Solutions group, responsible for hyper-critical facility operations, data center engineering, strategic direction, and IT environmental sustainability, delivering real-time health care to more than 8.7 million health-care members. Mr. Press came to KP after working in Bank of America's international critical facilities team for more than 20 years. He is a Certified Energy Manager (CEM) and an Accredited Tier Specialist (ATS). Mr. Press has a tremendous passion for greening KP's data centers and IT environments. This has led to several significant recognitions for KP, including Uptime Institute's GEIT award for Facilities Innovation in 2011.

 

Pitt Turner IV, PE, is the executive director emeritus of Uptime Institute, LLC. Since 1993 he has been a senior consultant and principal for Uptime Institute Professional Services (UIPS), previously known as ComputerSite Engineering, Inc.

Before joining the Institute, Mr. Turner was with Pacific Bell, now AT&T, for more than 20 years in its corporate real estate group, where he held a wide variety of leadership positions in real estate property management, major construction projects, capital project justification, and capital budget management. He has a BS in Mechanical Engineering from UC Davis and an MBA with honors in Management from Golden Gate University in San Francisco. He is a registered professional engineer in several states. He travels extensively and is a school-trained chef.