ATD Perspectives: From Mainframes to Modular Designs

The career paths of Christopher Johnston, Elie Siam, and Dennis Julian are very different, yet their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Despite significant changes in how IT services are delivered and in how business practices affect design, the challenge of meeting reliability goals remains much the same, complicated only by greater energy concerns and increased density. All three agree that Uptime Institute's Tiers and ATD programs have helped raise the level of practice and the quality of facilities worldwide. In this article, the last in a series of three, Integrated Design Group's Dennis Julian examines past practices in data center design and future trends, including modular designs.

Dennis Julian

Dennis Julian is principal – A/E Design Services at Integrated Design Group Inc. He is a Professional Engineer with more than 25 years of experience in multi-discipline project management and engineering, including management of architectural, mechanical, HVAC, plumbing, fire protection, and electrical engineering departments for data center, office, medical, financial, and high technology facilities. Mr. Julian has been involved in the design and engineering of more than 2 million ft2 of mission critical facilities, including Uptime Institute Tier IV Certified data centers in the Middle East. He has designed facilities for Digital Realty Trust, RBS/Citizens, State Street Bank, Orange LLC, Switch & Data (Equinix), Fidelity, Verizon, American Express, Massachusetts Information Technology Center (the state's main data center), Novartis, One Beacon Insurance, Hartford Hospital, Saint Vincent Hospital, University Hospital, and Southern New England Telephone.

Dennis, how did you get your start in the data center industry?

I started at Worcester Polytechnic Institute in Worcester and finished up nights at Northeastern University here in Boston. I got into the data center industry back in the 1980s doing mainframes for companies like John Hancock and American Express. I was working at Carlson Associates. Actually, at the beginning I was working for Aldrich, which was the contracting arm of Carlson. Later they changed all the names, so it became Carlson Associates and Carlson Contracting.

We did all kinds of work, including computer rooms. We did a lot of work with Digital Equipment doing VAX and minicomputer projects. We gradually moved into the PC and server worlds as computer rooms progressed from water-cooled, 400-hertz (Hz) power systems to what they are today, which is basically back to water-cooled equipment but at 120 or 400 volts (V). The industry kind of went the whole way around, from water-cooled to air-cooled and back to water-cooled equipment. The mainframes we do now are partly water-cooled.

After Carlson went through a few ownership changes, I ended up working for a large firm, Shooshanian Engineering Associates Inc. in Boston, where I did a larger variety of work but continued doing data center work. From there, I went to van Zelm Heywood & Shadford in Connecticut for a few years, and then to Carter & Burgess during the telecom boom. When the telecom boom crashed and Carter & Burgess closed their local office, I went to work for a local firm, Cubellis Associates Inc. They did a lot of retail work, but I was building their MEP practice. When I left them about 8½ years ago, I joined Integrated Design Group to get back into an integrated A/E firm doing mission critical work.

When I joined Integrated Design Group, it was already a couple of years old. As with any new company, it was hard to get major clients, but they had some luck and carried on some projects from Carlson. They were able to get nice work with some major firms, and we were able to continue that work. Then the market for mission critical took off, and we just started doing more and more of it. I'd say 90-95% of our work is for mission critical clients.

What was it like working in the mainframe environment?

Back in those days, it was very strict. The cooling systems were called precision cooling because many of the projects were based on the IBM spec. It was really the only spec out there in those days, so it was ±2° on cooling. The mainframes had internal chillers, so we brought chilled water to the mainframes in addition to the computer room air conditioners (CRACs) that cooled the room itself.

Also, the mainframes of the time were 400 Hz. They had their own motor-generator (MG) sets to convert the power from 60 to 400 Hz, which caused its own distribution issues. For instance, we had a lot of voltage drop due to the higher frequency of the power. So that made for different design challenges than those we see now, but watts per square foot were fairly low, even though the computers tended to be fairly large.
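As a rough illustration of why 400-Hz distribution was trickier, here is a minimal sketch of the standard feeder voltage-drop approximation, in which the reactive term grows linearly with frequency. The cable constants, feeder length, load current, and power factor are hypothetical placeholders chosen for illustration, not figures from any of the projects mentioned.

```python
import math

def feeder_voltage_drop(current_a, length_m, freq_hz,
                        r_ohm_per_km=0.19, l_mh_per_km=0.30, pf=0.9):
    """Approximate line-to-neutral voltage drop along a feeder.

    Uses the common approximation Vdrop ~ I * (R*cos(phi) + X*sin(phi)),
    where the reactance X = 2*pi*f*L scales linearly with frequency.
    The cable constants here are illustrative placeholders.
    """
    r = r_ohm_per_km * length_m / 1000.0                                # ohms
    x = 2 * math.pi * freq_hz * (l_mh_per_km / 1000.0) * length_m / 1000.0  # ohms
    phi = math.acos(pf)
    return current_a * (r * math.cos(phi) + x * math.sin(phi))

# Same feeder, same load: only the supply frequency changes.
drop_60 = feeder_voltage_drop(current_a=200, length_m=60, freq_hz=60)
drop_400 = feeder_voltage_drop(current_a=200, length_m=60, freq_hz=400)
print(f"Drop at 60 Hz:  {drop_60:.2f} V")
print(f"Drop at 400 Hz: {drop_400:.2f} V (reactance is ~6.7x higher)")
```

Under these assumptions the resistive part of the drop is unchanged, but the reactive part is several times larger at 400 Hz, which is consistent with the distribution headaches described above.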

There was a lot of printing in those days as well, and the printers tended to be part of the computer room, which was a fire hazard and caused a dust problem. So there were a number of different issues we had to deal with in those days.

What are some of the other differences with today’s hardware?

Today’s systems are obviously high capacity. Even so, the power side is much easier than the cooling side. I can go up a few wire sizes and provide power to a rack much more easily than I can provide more cooling to a rack.

Cooling with air is difficult because of the volumes of air involved, so we find ourselves going back to localized cooling using refrigerant gas, chilled water, or warm water. The density and heat capacity of water allow it to carry far more heat than the same volume of air, so it makes a lot more sense.
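As a back-of-the-envelope check on that point, this sketch compares the volumetric heat capacity of water and air using typical room-condition property values; the numbers are textbook approximations, not measurements from the interview.

```python
# Rough comparison of how much heat a given volume of water vs. air
# can carry per degree of temperature rise. Property values are typical
# figures at room conditions.

RHO_WATER = 997.0      # kg/m^3
CP_WATER = 4186.0      # J/(kg*K)
RHO_AIR = 1.2          # kg/m^3
CP_AIR = 1006.0        # J/(kg*K)

def heat_per_m3_per_kelvin(rho, cp):
    """Volumetric heat capacity: joules absorbed per cubic meter per 1 K rise."""
    return rho * cp

water = heat_per_m3_per_kelvin(RHO_WATER, CP_WATER)
air = heat_per_m3_per_kelvin(RHO_AIR, CP_AIR)
print(f"Water: {water / 1e6:.1f} MJ per m^3 per K")
print(f"Air:   {air / 1e3:.1f} kJ per m^3 per K")
print(f"Ratio: roughly {water / air:,.0f}x more heat per unit volume")
```

On these assumptions, a cubic meter of water absorbs on the order of 3,000-3,500 times the heat a cubic meter of air does for the same temperature rise, which is why localized liquid cooling scales more gracefully than pushing ever-larger air volumes.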

For energy efficiency, moving water is easier than moving large volumes of air. And with localized cooling such as in-rack coils or rear-door heat exchangers, we are able to use warm water, so we either run a chiller plant at 60°F instead of 44°F or run directly off a cooling tower using 80°F or 85°F water. The efficiencies are much higher, but now the HVAC systems are more complicated, a bigger part of the equation.

Can you tell me about some of your current projects?

We’re doing a project right now for Fidelity’s West data center in Omaha, NE. We’re just finishing construction and are ready to start testing.

It’s a combination of a large support building, a large mainframe wing, and another section for their Centercore product. Centercore is a modular data center that we designed and developed with them. This project starts at 1 MW of data center space but has the capacity to grow to 6 MW.

The project is very interesting because it is a mix of stick-built and modular construction. We’re using the local aquifer to do some cooling for us. We’re also using some free cooling and chilled beams, so it’s very energy efficient. It’s nestled into the side of a hill, so it has low visibility for security purposes and blends in with the landscape. It’s about 100,000 ft2 overall.

 

Fidelity’s Centercore Project

We’re designing mainly for about 6-7 kW per cabinet in the Centercore. In the mainframe wing it’s not as simple to calculate per cabinet, but that area is 1.2 MW.

Industry people tend to associate firms like Integrated Design Group with stick-built projects. How did this project come about?

Fidelity asked Integrated Design Group to develop an off-site constructed data center to address the limitations they saw in other offerings on the market. It needed to be flexible and non-proprietary and to function like any other stick-built data center. The proof of concept was a 500-kW Concurrently Maintainable and Fault Tolerant data center in North Carolina with 100% air-side economization. It is a complete stand-alone data center with connections back to the main facility only for water and communications.

The next generation was the Fidelity West data center. We located the power units on the lower level and the computer room on the upper level so it would match the new support building being built on the site. It is Concurrently Maintainable and Fault Tolerant and uses a pumped refrigerant cooling system that provides 100% refrigerant-side economization.

Fidelity wanted an off-site constructed system that was modular and could be relocated. Under this definition it could be treated as a piece of equipment rather than depreciated as real property, which could lead to substantial tax savings. The other goal was to avoid overbuilding facilities that would not be occupied for several years, if ever.

We think of Centercore as more of a delivery system than just a product. It can be customized to suit the customer’s requirements. Integrated Design Group has conceptualized units from 100 kW to multi-megawatt assemblies in any reliability configuration desired. Given the difficulty of predicting where the IT systems will go in terms of size and power requirements, a “Load on Demand” solution was desired.

When did you get your ATD? Do you remember your motivation?

December 2009. I was ATD number 147. At the time we were doing work in the Middle East, where there was a lot of emphasis on having Tier Certification. It was an opportune time to get accredited since ATD training had just started. Since then we have designed facilities in the Middle East, one of which (ITCC in Riyadh, Saudi Arabia) has already received Tier IV Certification of Design Documents. We anticipate testing it for Tier Certification of Constructed Facility early this summer. Several other projects are being readied for Tier Certification of Design Documents.

Are you still working internationally?

We have done several Tier III Certified data centers, for both Design Documents and Constructed Facility, for redIT in Mexico City. Well, it’s one facility with several data centers. And we’ve got several more under construction there. We’re still doing some work in Riyadh with Adel Rizk from Edarat, which is our partner over there. They do the project management and IT, so they serve as the local representation. We do design development and documents. Then we attend the testing and do a little construction administration as well.

Is the ATD an important credential to you?

Yes. It helped bring us credibility when we teamed up with Edarat. We can say, “The designer is not only American-based but also an ATD.” When I see some of the designs that are out there, I can see why customers want to see that accreditation as third-party verification that the designer is qualified.

What changes do you see in the near future?

Energy efficiency is going to stay at the forefront. Electrically, we are probably going to see more high-voltage distribution.

I think more people are going to colos or big data centers because they don’t want to run their own data centers; it’s just a different expertise. There is a shortage of competent building engineers and operators, so it is easier to go somewhere that can leverage that expertise across many clients.

You will see more medium voltage closer to the data center and more high voltage in the U.S. I think you will see more 480/277-V distribution in the data center. Some of the bigger colo and internet players are using 277 V. It isn’t mainstream, but I think it will be.

And I think we are going to move to more compressor-less cooling, which is going to require wider temperature and humidity ranges inside the data centers. As the servers and their warranties allow that to happen, eventually we’ll get operators comfortable with those wider ranges. Then we’ll be able to do a lot more cooling without compressors, whether with rear-door heat exchangers, in-rack cooling, or water cooling directly on the chip in a water-cooled cabinet.
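To make the compressor-less idea concrete, here is a minimal sketch of the kind of feasibility check involved, assuming a cooling tower can produce water near the outdoor wet-bulb temperature plus an approach, with a heat exchanger adding a further approach before the water reaches rear-door or in-rack coils. The approach values and temperatures are illustrative assumptions, not design figures from the interview.

```python
def compressorless_ok(wet_bulb_f, required_supply_f,
                      tower_approach_f=7.0, hx_approach_f=3.0):
    """Simplified check for compressor-less (water-side economizer) operation.

    Assumes tower water leaves at roughly wet-bulb plus a tower approach,
    and a heat exchanger adds a further approach before the water reaches
    rear-door or in-rack coils. Approach values are illustrative assumptions.
    """
    achievable_supply_f = wet_bulb_f + tower_approach_f + hx_approach_f
    return achievable_supply_f <= required_supply_f

# With an 80°F supply-water requirement, a 70°F wet-bulb day still qualifies,
# while a traditional 45°F chilled-water requirement would not.
print(compressorless_ok(wet_bulb_f=70, required_supply_f=80))  # True
print(compressorless_ok(wet_bulb_f=70, required_supply_f=45))  # False
```

The point of the sketch is simply that raising the allowable supply-water temperature from traditional chilled-water levels into the 80°F range dramatically widens the range of outdoor conditions under which the compressors can stay off.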

Kevin Heslin

This article was written by Kevin Heslin. Kevin Heslin is senior editor at the Uptime Institute. He served as an editor at New York Construction News, Sutton Publishing, the IESNA, and BNP Media, where he founded Mission Critical, the leading publication dedicated to data center and backup power professionals. In addition, Heslin served as communications manager at the Lighting Research Center of Rensselaer Polytechnic Institute. He earned a B.A. in Journalism from Fordham University in 1981 and a B.S. in Technical Communications from Rensselaer Polytechnic Institute in 2000.
