Meeting Data Center Design Challenges in Santiago, Chile
Uptime Institute Accredited Tier Designers address particulates and seismic activity in the Chilean capital.
By Panagiotis Lazaridis, ATD, and Jan Carlos Sens, ATD
The city of Santiago, Chile, is a challenging place to site a data center. It has a history of strong earthquakes, it is close to dozens of active volcanoes, and the surrounding Andes Mountains trap air particulates and VOCs, so the air is heavily polluted. On the other hand, its moderate climate brings relatively low temperatures practically every day of the year, with averages ranging from 15°C to 25°C, a highly favorable environment for an air conditioning energy recovery system.
In 2011, Sonda, a South American IT provider, decided to tackle Santiago’s difficult building environment by constructing a data center that would meet Uptime Institute Tier III criteria, withstand earthquakes and achieve a PUE between 1.25 and 1.50 by making use of the city’s potential for free cooling.
Other features of the data center include:
Total area of 6,500 square meters (m2)
Six 250-m2 IT rooms, to be installed in three phases
Installed load of 600 kilowatts (kW) per room (density of 2.4 kW/m2, 6 kW/rack)
A minimum of 48 hours of diesel storage for the generators
A minimum of 48 hours of potable water storage for replenishing the evaporative coolers.
In addition, the site would include a well capable of supplying all the water consumed in the data center, making the data center self-sufficient with regard to makeup water for the cooling system.
Project designers began by evaluating a direct free-cooling solution for the site. That idea was discarded after the design team analyzed the risks posed by the local environment, particularly the particulates and VOCs, and the low humidity level. Santiago sits in a valley created by the Andes ridge and the Pre-cordillera ridge, which causes thermal inversion throughout the year, especially in the winter.
Based on recommendations from ASHRAE TC 9.9 – Mission Critical Facilities, Technology Spaces and Electronic Equipment and ASHRAE – Gaseous and Particulate Contamination Guidelines for Data Centers, an indirect free-cooling system replaced the direct free-cooling system. The air conditioning system was designed using chillers with a centrifugal compressor and water condenser. Dual coil CRAC units were used for energy recovery.
The new design met the challenges posed by local temperatures that can drop as low as -6.0°C, which could freeze the water in the piping, cooling towers and thermal storage tank. As a consequence, the design team chose evaporative coolers instead of cooling towers, minimizing the volume of water exposed to low temperatures. In addition, an ethylene glycol and water solution prevents freezing in the closed circuits and reduces fouling sludge in the free-cooling coil, piping and chillers. The racks work with return ductwork, enabling cold aisles at 25°C and return air above 35°C and maximizing the hours of free cooling.
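The number of free-cooling hours available follows from comparing the outdoor wet-bulb temperature with the fluid temperature the indirect loop must deliver to the CRAC free-cooling coil. The sketch below is a simplified illustration of that comparison, not the designers' actual model; the approach temperatures and the hourly weather profile are assumed values.

```python
# Rough estimate of annual free-cooling hours for an indirect system
# (evaporative cooler + glycol loop + dual-coil CRAC).
# All numbers below are illustrative assumptions, not project data.

SUPPLY_AIR_C = 25.0        # cold-aisle supply temperature
COIL_APPROACH_C = 5.0      # assumed air-to-fluid approach at the CRAC free-cooling coil
COOLER_APPROACH_C = 4.0    # assumed wet-bulb approach of the evaporative cooler

def free_cooling_possible(wet_bulb_c: float) -> bool:
    """Free cooling works when the loop fluid can be made cold enough
    to cool the return air down to the supply setpoint."""
    loop_supply_c = wet_bulb_c + COOLER_APPROACH_C
    return loop_supply_c + COIL_APPROACH_C <= SUPPLY_AIR_C

def annual_free_cooling_hours(hourly_wet_bulb_c):
    return sum(1 for twb in hourly_wet_bulb_c if free_cooling_possible(twb))

# Example with a fake flat profile; a real study would use 8,760 hourly values.
print(annual_free_cooling_hours([12.0] * 8760))   # -> 8760 with these assumptions
```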
The evaporative cooler is sized for the total capacity of the central chiller plant (CAG), while also accounting for the transient flows of the CAG plus free cooling. The total thermal load reaching the cooler is at most equal to the maximum load of the CAG: with the CAG at full load, the energy dissipated by the evaporative cooler corresponds to the thermal load of the IT rooms plus the work of the compressors, and this total rejected heat defines the capacity of the evaporative cooler. When free cooling carries the load, the portion of heat corresponding to compressor work is smaller, and the more efficient the free cooling, the lower it becomes, so the total heat rejected is always lower.
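That sizing logic reduces to a short calculation: rejected heat equals the IT load plus whatever compressor work is still needed. The figures below (six 600-kW rooms and a COP of 6) are illustrative assumptions used only to show the arithmetic, not the project's equipment data.

```python
def heat_rejected_kw(it_load_kw: float, free_cooling_fraction: float, chiller_cop: float) -> float:
    """Heat the evaporative cooler must reject.

    free_cooling_fraction: share of the IT load removed without running compressors (0..1).
    chiller_cop: assumed coefficient of performance of the chillers on the mechanical share.
    """
    mechanical_load_kw = it_load_kw * (1.0 - free_cooling_fraction)
    compressor_work_kw = mechanical_load_kw / chiller_cop
    return it_load_kw + compressor_work_kw

it_load = 6 * 600.0                           # six IT rooms at 600 kW each
print(heat_rejected_kw(it_load, 0.0, 6.0))    # full mechanical cooling: 4,200 kW rejected
print(heat_rejected_kw(it_load, 1.0, 6.0))    # full free cooling: 3,600 kW (IT load only)
```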
The automation system was designed to take into account the performance curves of the chillers (NPLV) and the evaporative coolers. The number of machines in operation is always determined by matching the load to the equipment staging that yields the best operating point, and the automation system can even activate a backup machine to reach the best energy-efficiency point.
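A staging routine of this kind can be sketched as a search over how many units to run, picking the count whose part-load efficiency gives the lowest input power. The part-load curve and capacities below are made-up placeholders, not the actual NPLV data for the installed chillers.

```python
# Minimal chiller-staging sketch: choose how many identical units to run
# so that total input power is minimized. Assumed, not project, data.

UNIT_CAPACITY_KW = 1200.0          # cooling capacity per chiller (assumed)
INSTALLED_UNITS = 4                # includes one backup machine

def kw_per_kw_cooling(load_fraction: float) -> float:
    """Placeholder part-load efficiency curve (input kW per kW of cooling)."""
    # Fictitious curve: best efficiency around 70-80% load.
    return 0.16 + 0.10 * (load_fraction - 0.75) ** 2

def best_staging(cooling_load_kw: float):
    best = None
    for n in range(1, INSTALLED_UNITS + 1):
        fraction = cooling_load_kw / (n * UNIT_CAPACITY_KW)
        if fraction > 1.0:                 # this many units cannot carry the load
            continue
        power = cooling_load_kw * kw_per_kw_cooling(fraction)
        if best is None or power < best[1]:
            best = (n, power)
    return best                            # (units to run, total input kW)

print(best_staging(1800.0))                # -> (2, 288.0) with these assumptions
```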
The free cooling also reduces the consumption of makeup water, since the volume of evaporated water directly relates to the thermal load dissipated by the evaporative cooler.
The heat load is also lower because there is no need to dissipate heat generated by the chiller compressors, so less water has to evaporate. Under extreme low-temperature conditions (wet-bulb temperature below 2.0°C), it is possible to switch off the evaporative cooler’s recirculation pump so that it operates as a dry cooler.
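The link between rejected heat and water use is simple latent-heat arithmetic: roughly 2,450 kJ evaporates one kilogram of water, and blowdown adds a further share that depends on the cycles of concentration. The sketch below uses these textbook figures as assumptions to show the relationship; it is not the project's water model.

```python
LATENT_HEAT_KJ_PER_KG = 2450.0     # approximate latent heat of vaporization of water

def makeup_water_lph(heat_rejected_kw: float, wet_bulb_c: float,
                     cycles_of_concentration: float = 4.0) -> float:
    """Rough makeup-water demand in liters per hour.

    Below a 2.0 C wet bulb the recirculation pump is off (dry-cooler mode),
    so evaporation, and hence makeup, drops to zero.
    """
    if wet_bulb_c < 2.0:
        return 0.0
    evaporation_kg_per_h = heat_rejected_kw * 3600.0 / LATENT_HEAT_KJ_PER_KG
    blowdown_factor = cycles_of_concentration / (cycles_of_concentration - 1.0)
    return evaporation_kg_per_h * blowdown_factor   # 1 kg of water is about 1 liter

print(makeup_water_lph(4200.0, 18.0))   # full mechanical cooling on a mild day
print(makeup_water_lph(3600.0, 10.0))   # full free cooling: less heat, less water
print(makeup_water_lph(3600.0, 1.0))    # cold day, dry-cooler mode: 0.0
```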
The table below shows the overall energy and water performance of the data center, including PUE and WUE values for the year. Carrier’s HAP 4.50 software was used both for the thermal load calculation and for the energy simulation. The data were then transferred to Excel spreadsheets, where chiller, cooler and pump data were introduced, as well as the energy losses in transformers and UPS, according to the installation.
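Once hourly IT energy, facility overhead energy and water volumes are in a spreadsheet, the annual PUE and WUE fall out of two ratios. The snippet below shows that arithmetic on made-up hourly lists; the values are assumptions, not the project's simulation results.

```python
def annual_pue_wue(it_kwh, facility_overhead_kwh, water_liters):
    """PUE = total facility energy / IT energy; WUE = liters of water / IT kWh."""
    it_total = sum(it_kwh)
    facility_total = it_total + sum(facility_overhead_kwh)   # overhead = cooling, UPS/transformer losses, etc.
    pue = facility_total / it_total
    wue = sum(water_liters) / it_total
    return pue, wue

# Tiny illustrative example (three "hours" instead of 8,760):
print(annual_pue_wue([3600, 3600, 3600], [900, 1200, 1050], [8000, 0, 7500]))
```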
In addition, the use of indirect free cooling eliminated the risk of particulate and VOC contamination in the data center, as well as the problems associated with low air humidity, saving on filtration, fan power and humidification water costs while reducing the risk to the IT equipment.
Even in the presence of volcanic ash, only the fresh-air unit (which can be turned off) and the water exposed to the outside environment, i.e., the basin and the external components of the evaporative cooler’s heat exchanger, are affected.
Table. Simulation-calculated PUE
The concentration of particulates can be controlled by increasing the blowdown flow of the cooler, so the cooler ends up acting as an air scrubber.
The data center building was designed with advanced earthquake-protection technology. It was built as an independent block sitting on foundations with a vibration damping system, and all piping and penetrations are equipped with flexible vibration-damping elements.
Overall, the project met the challenges of building a data center in a region full of hazards, with a high incidence of earthquakes and high levels of environmental contamination, yet one that also offers low operating costs and a reliable utility power supply in an urban area.
Commissioning on this project has been concluded, with integrated testing underway in May. Sonda Quilicura received Tier III Certification of Constructed Facility in the spring of 2013.
Panagiotis Lazaridis is a mechanical engineer who graduated from Faculdade de Engenharia Industrial (FEI) in 1985. He is director of L&M Engenharia, an engineering company specializing in the design of electrical installations, hydraulic installations and air conditioning systems for office buildings, industrial plants and mission-critical areas. He has participated in projects for numerous clients from the banking, manufacturing and technology sectors, including EDS, Barclays, Alstom, IBM and Dell.
Jan Carlos Sens is a mechanical engineer who graduated from Faculdade Armando Álvares Penteado in 1985 and completed postgraduate studies in Complements of Thermodynamics Applied to Processes at Universidade de São Paulo (USP). He is an engineering manager for mechanical installations and ventilation and air conditioning systems. He has participated in mission-critical data center projects as well as in industrial, hydroelectric, nuclear and thermoelectric plants and other technical work. His data center clients have included Vivo, Ativas, T-System, Uol, Banco do Brasil, Caixa, CIPD PB and Barclays.
Building a data center facilities management team from scratch
The Dream Job
By Fred Dickerman
Editor’s note: Mr. Dickerman’s feature on the challenges of starting a Facilities Team breaks new and unexpected ground, as Mr. Dickerman adapts the new FORCSS methodology to help resolve a staffing question in a hypothetical case. The Uptime Institute did not anticipate this use of FORCSS as it developed the new methodology. In fact, records of the two Charrettes of industry stakeholders do not include any discussion of staffing levels.
Nonetheless, the Uptime Institute and the authors of the FORCSS document are gratified by Mr. Dickerman’s imaginative, if hypothetical, application of FORCSS, recognizing that the success of FORCSS as a tool depends on individuals like Mr. Dickerman finding it an easy-to-use tool to prioritize IT deployments.
It is Monday morning, and Pat, newly promoted to the position of Data Center program manager, is flying to the construction site of the company’s new data center. Last Friday the CIO of the company said to Pat, “I have good news and bad news! The good news is that based on the great job you did with the FORCSS™ analysis for our company’s five-year data center strategy, we have decided to promote you. You are going to be responsible for the operation of the new Tier III data center we’re building so we can consolidate all our IT into one facility.”
Naturally Pat wondered, “What’s the bad news?”
The CIO continued, “Since this is the first data center we’re going to own, we don’t actually have a Facility Management team for you to manage. You’re going to have to create the team starting from scratch. And the team has to be in place within four months to help commission the new data center and accept the facility from the contractor. Fly out to the site on Monday, and come back to me in two weeks with a draft of your plans for the new Facility Management team.”
Of course, some program managers, informed that they will be creating a Facility Management (FM) team from scratch, might view that as more good news than bad.
Experienced facility managers will immediately recognize that Pat’s challenge is much bigger than just hiring a new FM team. The “People” component is certainly part of what needs to be done in the next four months; however, operating the new data center will also require establishing relationships with service vendors, utilities and suppliers. Creating a maintenance plan with tasks and schedules is essential, and each maintenance task, whether preventive, predictive or reactive, will require a written procedure for the operators to follow. In addition, the data center will need a complete set of operating policies and rules. While all this is being created, Pat (and the FM team) will need to monitor the construction of the new facility and participate in the commissioning, acceptance and certifications of the site. Finally, the FM team will want to establish a good working relationship with the team’s “customers,” the IT personnel who will be installing and operating IT equipment in the data center.
The Mission
Pat’s first task is to decide on a mission statement for the new FM team. Employees sometimes think of a mission statement as an enterprise-level declaration of the goals and objectives of a company, developed to have something to put on the first page of the annual report but having little relevance to operations. But all employees of an enterprise, and certainly all the managers, should have mission statements of their own, with three key elements:
What am I supposed to do (goals and objectives)?
Who am I supposed to do it for (clients/stakeholders)?
How am I going to be measured (measures of value)?
In Pat’s case, the mission statement might start out quite simply:
Goals:
Operate the data center with 100% safety and 100% availability. [In our story, Pat’s data center is a critical facility, with a consequent requirement for a commitment to 100% availability. An organization with several sites and a resilient overall architecture might, in theory, accept an objective of less than 100% availability, but it is hard to imagine any facility manager going to a CIO and saying “I’m committed to 95% uptime for this data center!”]
Install and activate IT equipment as it migrates into the data center, on time and within budget.
Manage the facility within an approved budget.
Clients/stakeholders:
The CIO
The company’s IT departments and IT users
Stakeholders – The rest of the company, vendors, the company’s customers, shareholders.
Measures of value:
Safety and availability records – no incidents, no events
IT fit-out scheduled dates versus actual dates
Facility budget versus actuals.
Of course, when Pat presents a mission statement to the CIO some additional objectives might be added:
A PUE target
A renewable energy target to support the company’s environmental sustainability goals
Certification targets for the team to achieve Uptime Institute, ISO or other certifications within specified time frames.
Creating the Structure, and Structuring the Team
Once management has approved the mission of the new department, Pat can begin to develop the strategies required to reach the objectives. From this point forward all decisions will be based on the technical and business conditions specific to the new corporate data center, so Pat will need to understand those conditions. In addition to the obvious review of the design and equipment selections for the new data center, Pat will engage in in-depth discussions with all the IT user groups to understand their migration plans, capacity growth forecasts and the criticality of the applications that will be hosted in the data center.
Pat will also need to understand the exostructure–the external resources and factors that can either support or constrain the FM team or that pose a risk to the operation of the data center. [Note that IT departments sometimes refer to cloud-based services and other external IT resources as exostructure, but that is not how Pat’s company views it].
To understand the exostructure, Pat will need to interview key equipment vendors, local utilities and service providers to determine their capabilities, strengths and weaknesses and look at all the physical constraints and risks in the area. Finally, since almost all the decisions will result in some level of expenditure of company funds, Pat will want to carefully document everything learned during those investigations and the subsequent strategic decisions for inclusion in the budgeting process for the new department.
To develop strategies in a relatively simple manner, Pat might consider starting with three focus areas:
People
Materials and Methods (M&M)
Policies and Procedures (P&P)
For each focus area, Pat must identify critical decision points; factors influencing each decision can be listed, perhaps in a simple spreadsheet or decision matrix as shown in Figure 1.
Figure 1. Pat’s decision matrix
As Pat investigates the infrastructure and the exostructure specific to the new data center, the critical factors for each item can be replaced with real world data–costs, conditions, existing regulations or standards and so on. Then the decisions column can be filled in with Pat’s recommendations on each item. During the planning sessions with the CIO, the approval of those decisions can be documented.
The most significant decisions will require separate documentation, to explain the costs, risks and benefits of a particular issue (e.g. using FTEs or a service vendor to provide operations coverage after hours and on weekends or investing in a large inventory of spare parts on site).
Pat is comfortable with the FORCSS methodology and wants to make FORCSS a standard decision tool within the company. The more times the methodology is used in the decision process, the better. So, here’s how Pat might apply FORCSS to present one of the key decisions that needs to be made, justified and documented: whether to use FTEs or a service contractor for operational staffing of the new data center (see Figure 2).
Figure 2. Pat adapted FORCSS as a methodology to choose between a service vendor and in-house staffing.
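As a rough companion to Figure 2, the snippet below shows how a weighted comparison of the two staffing options could be tabulated. The factor weights and scores are invented placeholders for illustration only; they do not reproduce Uptime Institute's published FORCSS scales or Pat's actual analysis.

```python
# Generic weighted-scoring sketch for comparing two staffing options.
# Factor weights and 1-5 scores are illustrative placeholders only.

factors = {
    # factor: (weight, score for in-house FTEs, score for service contractor)
    "Financial":       (0.25, 3, 4),
    "Opportunity":     (0.15, 4, 3),
    "Risk":            (0.25, 4, 3),
    "Compliance":      (0.10, 4, 4),
    "Sustainability":  (0.10, 3, 3),
    "Service quality": (0.15, 4, 3),
}

def weighted_total(option_index: int) -> float:
    return sum(weight * scores[option_index]
               for weight, *scores in factors.values())

print("In-house FTEs:     ", round(weighted_total(0), 2))
print("Service contractor:", round(weighted_total(1), 2))
```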
The Facility Management Structure
After spending time going through the decision matrix and reviewing the FORCSS analyses of the major decisions with the CIO, Pat will be well positioned to assemble the facility management structure for the new data center.
The elements will include:
The team’s mission statement with clear objectives and measurements. One of the objectives will be an internal service level agreement, essentially a contract between the FM team and the IT department, committing to availability, response and communication levels.
A table of organization and responsibilities for the new department, including both internal positions and key vendors. For each internal position, there will be an associated job description; for each vendor, a scope description and service level agreement (see Figure 3).
Figure 3. Building a new facility requires strict delineation of roles.
A staffing plan that details hours of operation, shift structures and tasks to be self-performed and outsourced (a simple 24x7 coverage calculation is sketched after this list). As this is a new data center, the staffing plan will include a recruiting plan to find and hire the employees required.
A training plan with descriptions of each required training session and course, ranging from the safety overview session (SOS) given to every person who enters the data center to the individual development plans for full-time employees. Licenses, certifications or professional credentials required for any employee will be included in the training plan.
A list of internal and external policies that will need to be created for the data center. This will include policies for safety, work rules, human resources, security and access, environmental sustainability, purchasing and materials management, cleaning and rules of conduct for employees, visitors and vendors.
The vendor management plan, listing the vendors the FM team intends to contract with and ultimately detailing the scope of the vendor’s responsibilities and commitments, the service level agreement with that vendor, the vendor’s contacts and escalation, and the vendor’s freedom of action–tasks the vendor is allowed to do on a routine basis, tasks the vendor must ask permission to do and tasks the vendor must be supervised for.
A list of the plans and procedures for preventive, predictive and reactive maintenance that will need to be created over the next four months. The procedures themselves can be developed based on manufacturer’s maintenance recommendations, industry standards, the design of the data center (including Tier level) and the framework of policies and rules that will regulate the operation of the data center.
A description of the maintenance management system that the team will use to schedule, track and document preventive, predictive and reactive maintenance. This will include an acquisition plan to purchase the system and an implementation plan to place the system in operation, load the full list of maintenance tasks with associated tools, spares and consumables and a full set of operational procedures for using the system starting from Day 1.
And since this is a new data center, Pat will include a commissioning and acceptance plan that will detail the steps in the transition from a site under construction to a site in operation.
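As flagged in the staffing-plan item above, the headcount needed for around-the-clock coverage follows from simple arithmetic: a 24x7 post represents 8,760 coverage hours a year, while each full-time employee supplies far fewer productive hours once vacation, sick leave and training are netted out. The figures below are typical planning assumptions, not mandated values.

```python
# Back-of-the-envelope FTE count for continuous (24x7) coverage of one post.
# All inputs are typical planning assumptions; adjust to local labor rules.

HOURS_PER_YEAR = 24 * 365                 # 8,760 post-hours to cover
SCHEDULED_HOURS_PER_FTE = 40 * 52         # 2,080 scheduled hours per employee
LEAVE_TRAINING_HOURS = 280                # assumed vacation + sick + training time

def ftes_for_continuous_coverage(posts: int = 1) -> float:
    productive_hours = SCHEDULED_HOURS_PER_FTE - LEAVE_TRAINING_HOURS
    return posts * HOURS_PER_YEAR / productive_hours

print(round(ftes_for_continuous_coverage(1), 1))   # roughly 4.9 FTEs per 24x7 post
```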
The last element of the facility-structuring plan that Pat will develop is a budget, which will have both capital expense and operating expense forecasts. Capital expenses in the first year will be high, since tools and equipment will need to be purchased, along with initial stocks of spares and consumables. Since Pat has carefully documented all the decisions that combine to create the facility management structure for the new data center, the operating budget can be derived from those decisions.
With a decision matrix and the outlines of the facility-structuring plan in hand, Pat is ready to meet with the CIO and get approval for these decisions and strategies. And once management has approved the plan, the hard work of turning that plan into reality can begin.
Timing
How much involvement the FM team has in the design, construction and commissioning of the site will affect the facility’s availability and hence the ability of the team to meet its mission statement, especially in the first days and weeks after a facility is accepted and handed over. Many of us have had experience with design engineers and skilled contractors who have never actually operated a critical facility, and who include elements in the infrastructure that will make the operations team’s task more difficult. Examples can range from mundane–valves that can only be reached by ladder or scissor lift, no convenience outlets in critical spaces to plug in tools–to silly–security doors with the magnetic lock on the unsecure side of the door, roof drains at the high point of a roof section–to serious–obstructed fire corridors and poorly positioned smoke detectors. And, commissioning agents usually test for reliability, not operability, so many of these operational snags will be missed during commissioning. And, at many new sites, commissioning tests (and sometimes Uptime Institute Tier Certification of Constructed Facility demonstrations) are performed by the contractor with the commissioning agent watching.
The FM team that will actually operate the facility may only be observing, not actively participating. So, the first time the facilities engineers take “hands-on” control of the systems is when the facility is in production.
The first time the critical systems’ maintenance tasks are performed is when the facility is in production. And, the first time the FM team must respond to a system or component failure is when the facility is in production.
It is understandable that owners may be reluctant to pay salaries or fees to an operations team with “nothing to operate” (while the facility is in design and construction), but having the FM team on site early on will reduce risk of human error during the critical first months of operation, when the site is at its most vulnerable. In addition, FM team involvement in the selection and purchase negotiations for major infrastructure systems should result in lower total cost of ownership (by reducing operational costs) as well as better documentation and training packages (two items which construction contractors seldom include in purchase negotiations, but which make a big impact on the operability of the site).
Fred Dickerman is vice president, Data Center Operations for DataSpace. In this role, Mr. Dickerman oversees all data center facility operations for DataSpace, a colocation data center owner/operator in Moscow, Russian Federation. He has more than 30 years of experience in data center and mission-critical facility design, construction and operation. His project resume includes owner representation and construction management for over $1 billion (US) in facilities development, including 500,000 square feet of data center space and five million square feet of commercial properties. Prior to joining DataSpace, Mr. Dickerman was the VP of Engineering and Operations for a colocation data center development company in Silicon Valley.
Documenting Underfloor Conditions in an Operational Data Center
Excel is the basis of a surprisingly simple visual tool to document underfloor conditions
By Chad Beery, ATD
Engineers regularly employ pictures and drawings to communicate design ideas. They sketch on whiteboards during a meeting, talk with their hands and sometimes even sketch on the back of napkins. In fact, the deliverables produced by design engineers and consultants include drawings—although today these are typically computer-generated. And while it is true that engineers and design consultants rely on all sorts of images, it is equally true that sometimes only a photograph can capture and convey important details accurately. The composite underfloor picture is one such instance.
Take, for example, a large enterprise that has a 14,000-square-foot (ft2) computer room with a 30-inch raised floor. Chilled-water air-handling units fed from a piping loop in the raised floor provided cooling, with most of the piping having been installed in the early 1990s. As part of a room refresh, the enterprise engaged Peters, Tschantz & Associates to develop a plan to replace the piping without interrupting service to the active computer room.
Even with a very clear underfloor plenum, replacing piping without interrupting service in a computer room is a difficult task—demolition and welding present hazards to sensitive equipment, and floor space for rigging and staging is at a premium. In this case, extensive copper and fiber data cables, as well as power whips, running under the floor compounded the complexity. In many areas, the cabling had been abandoned in place as equipment was removed. Over time, the problem became worse, as more and more abandoned cables made it even less clear what was in service and what could be safely removed.
Peters, Tschantz & Associates began its design work with a field investigation to help it fully understand the existing conditions. We were able to document the existing piping and create a three-dimensional model (Revit) of the piping. We knew that understanding the location of the power and data cabling was crucial to completing this project successfully.
Simple observation told us that it would be very impractical, if not impossible, to document and present the underfloor condition with enough detail to accurately convey the complexity of the work. As an alternative, we developed a technique to create a composite photograph of the entire underfloor plenum (see Figure 1).
Figure 1. Overview of the entire facility pieced together using individual photos and Excel.
First, we established a labeling nomenclature for floor tiles (letters for columns, numbers for rows). Then we lifted the floor tiles one at a time, so that we could take a digital photograph of the underfloor plenum directly beneath each tile. In some cases, equipment such as PDUs, CRACs and IT cabinets on the tiles blocked our efforts. As the photographs were taken, we recorded their addresses on a dry erase board laid next to the open tile.
Post-processing work began as soon as the last of the photographs was taken. We renamed the photographs using the column-and-row nomenclature. We cropped all the photos to a 1:1 aspect ratio (see Figure 3) to match the 24-by-24-inch floor tile opening and then compressed the files.
Filter tools allow the user to select a portion of the floor for the composite picture, reducing file size and processing time.
Next, we constructed a grid matching the floor tile layout in Microsoft Excel, using cell borders to outline the room. A VBA macro written by our design team scans through a folder of pictures, reading their addresses from their file names and inserting them in the proper location in the grid. The result is a single file composed of over 2,500 individual pictures.
As can be expected, the file is quite large. To allow for higher-resolution photographs of smaller regions of the floor, the macro was enhanced to let the user filter for a specific location in the room instead of generating a picture of the entire room (see Figure 4), which helps us generate composite pictures of small areas of the floor quickly.
The individual grids were assembled digitally in an Excel filter.
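The firm's tool was a VBA macro inside Excel. As a rough illustration of the same workflow, the Python sketch below crops each photo to a square with Pillow and drops it into an Excel grid with openpyxl, with an optional row/column filter similar to the one described above. The tile-naming pattern, folder paths and sizing constants are assumptions for the example, not the actual macro.

```python
# Illustrative re-creation of the composite-floor workflow in Python
# (the original tool was an Excel VBA macro). Assumed file naming:
# "<column letters><row number>.jpg", e.g. "B12.jpg" for column B, row 12.

import os
import re
from PIL import Image as PILImage
from openpyxl import Workbook
from openpyxl.drawing.image import Image as XLImage

TILE_NAME = re.compile(r"^([A-Z]+)(\d+)\.jpg$", re.IGNORECASE)
TILE_PIXELS = 200          # displayed size of each tile photo (assumed)

def crop_square(src_path: str, dst_path: str) -> None:
    """Center-crop a photo to a 1:1 aspect ratio to match the 24-by-24-inch tile opening."""
    with PILImage.open(src_path) as img:
        side = min(img.size)
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img.crop((left, top, left + side, top + side)).resize((TILE_PIXELS, TILE_PIXELS)).save(dst_path)

def build_composite(photo_dir: str, out_xlsx: str, rows=None, cols=None) -> None:
    """Place each tile photo at its grid address; rows/cols optionally filter a sub-area."""
    wb = Workbook()
    ws = wb.active
    for name in sorted(os.listdir(photo_dir)):
        match = TILE_NAME.match(name)
        if not match:
            continue
        col, row = match.group(1).upper(), int(match.group(2))
        if (rows and row not in rows) or (cols and col not in cols):
            continue
        ws.row_dimensions[row].height = TILE_PIXELS * 0.75     # points vs pixels, approximate
        ws.column_dimensions[col].width = TILE_PIXELS / 7.0    # rough character-width conversion
        ws.add_image(XLImage(os.path.join(photo_dir, name)), f"{col}{row}")
    wb.save(out_xlsx)

# Example: composite of columns A-F, rows 1-20 only (hypothetical paths).
# build_composite("cropped_photos", "underfloor_composite.xlsx",
#                 rows=set(range(1, 21)), cols=set("ABCDEF"))
```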
Because of the size of the picture, the client wanted us to develop a method of identifying each floor tile on the composite photo, so we added a user-selected label-view function to the macro. These labels greatly ease referencing specific floor tiles in reports, meetings or telephone conversations (see Figure 5).
The final version included labels that described the grid locations.
Having an accurate record of underfloor conditions has been helpful to the engineers as they plan minor moves, adds and changes in the computer room, as well as for developing master-planning strategies for system replacement and upgrade. Concepts that seem practical on the surface (looking above the floor only) can be reviewed after looking deeper (under the floor).
The facilities staff has also found the composite photo useful for conveying the need for underfloor cleanup of abandoned cabling.
In one case, the photo was used to overcome a manager’s reluctance to approve a project for removal of underfloor cabling.
While photographs do not appear poised to take the place of traditional engineering drawings in the near future, this project is an example of how an innovative use of technology can provide great benefit to the designer, facility owner and installing contractor.
Chad Beery, PE, ATD, LEED AP, is one of the three ATDs at Peters, Tschantz & Associates, Inc. in Akron, OH. The firm’s 30+ employees provide MEP engineering services to a variety of industries, with a focus on the mission-critical and health-care sectors. Since joining the firm in 2007, Mr. Beery has been involved in many different project types. His mission-critical work has involved data center design, equipment replacements, CFD modeling and redundancy and capacity consulting. Another important part of his work has been numerous control system design projects, both new and retrofit. Enjoying hands-on work and seeing systems in action, Mr. Beery has also developed system commissioning skills on a number of projects ranging from data centers to schools to research facilities.