In this keynote video from the 2014 Uptime Institute Symposium, Andy Lawrence, Research Director at 451 Research, provides an overview of what a data center might look like in 2020 and beyond. The presentation covers how various new technologies might be combined with new data center services, along with extrapolated improvements in processing, storage, and energy management.
The data center in 2020 and beyond — by Matt Stansberry, July 24, 2014
At Symposium 2013, Erik Ko of Twitter, Hewlett-Packard’s Ken Jackson, and James Pryor of Regions Bank discussed the experience of selecting and implementing DCIM solutions from the end user perspective. The panel was moderated by Uptime Institute’s Kevin Heslin.
Earlier that week, Symposium 2013 attendees heard presentations that focused on the potential for DCIM and some of the barriers to further adoption from 451 Research’s Andy Lawrence and Uptime Institute’s Matt Stansberry. These presentations conveyed a great deal of generalized information.
In this session, the three panelists shared information about the decision to implement DCIM, organizational goals, product selection, and implementation. Twitter, Hewlett-Packard, and Regions Bank are very diverse organizations, in different industries, serving different customers, and with dissimilar IT needs. Yet each organization had to make similar judgments about costs, benefits, and implementation times to find the DCIM solution that fit it best.
It turns out that it is hard to generalize too much about DCIM procurement and implementation, as each organization will have different goals, different pressure points, different needs, and different resources to bring to bear.
This almost 45-minute presentation is worth watching in its entirety as the panelists examine the different paths they took to reach functioning DCIM implementations and what effort is required on an ongoing basis.
Putting DCIM to Work for You — by Kevin Heslin, July 23, 2014
The experiences of Christopher Johnston, Elie Siam, and Dennis Julian are very different. Yet, their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Somehow, though, despite significant changes in how IT services are delivered and business practices affect design, the challenge of meeting reliability goals remains very much the same, complicated only by greater energy concerns and increased density. All three men agree that Uptime Institute’s Tiers and ATD programs have helped raise the level of practice worldwide and the quality of facilities. In this article, the last of a series of three, Integrated Design Group’s Dennis Julian examines past practices in data center design and future trends, including modular designs.
Dennis Julian
Dennis Julian is principal – A/E Design Services at Integrated Design Group Inc. He is a Professional Engineer with more than 25 years’ experience in multi-discipline project management and engineering, including management of architectural, mechanical, HVAC, plumbing, fire protection, and electrical engineering departments for data center, office, medical, financial, and high technology facilities. Mr. Julian has been involved in the design and engineering of over 2 million ft2 of mission critical facilities, including Uptime Institute Tier IV Certified data centers in the Middle East. He has designed facilities for Digital Realty Trust, RBS/Citizens, State Street Bank, Orange LLC, Switch & Data (Equinix), Fidelity, Verizon, American Express, Massachusetts Information Technology Center (the state’s main data center), Novartis, One Beacon Insurance, Hartford Hospital, Saint Vincent Hospital, University Hospital, and Southern New England Telephone.
Dennis, please tell me how you got your start in the data center industry?
I started at Worcester Polytechnic Institute in Worcester and finished up nights at Northeastern University here in Boston. I got into the data center industry back in the 1980s doing mainframes for companies like John Hancock and American Express. I was working at Carlson Associates. Actually at the beginning, I was working for Aldrich, which was a (contractor) part of Carlson. Later they changed all the names to Carlson, so it became Carlson Associates and Carlson Contracting.
We did all kinds of work, including computer rooms. We did a lot of work with Digital Equipment doing VAX and minicomputer projects. We gradually moved into the PC and server worlds as computer rooms progressed from being water-cooled, 400-hertz (Hz) power systems to what they are today, which is basically back to water cooled but at 120 or 400 volts (V). The industry kind of went the whole way around, from water-cooled to air-cooled back to water-cooled equipment. The mainframes we do now are partly water cooled.
After Carlson went through a few ownership changes, I ended up working for a large firm—Shooshanian Engineering Associates Inc. in Boston—where I did a larger variety of work but continued doing data center work. From there, I went to van Zelm HeyWood & Shadford in Connecticut for a few years, and then to Carter and Burgess during the telecom boom. When the telecom boom crashed and Carter and Burgess closed their local office, I went to work for a local firm, Cubellis Associates Inc. They did a lot of retail work, but I was building their MEP practice. And, when I left them about 8-1/2 years ago, I joined Integrated Design Group to get back into an integrated AE group doing mission critical work.
When I joined Integrated Design Group, it was already a couple of years old. As with any new company, it’s hard to get major clients. But they had luck. They carried on some projects from Carlson. They were able to get nice work with some major firms, and we were able to just continue that work. Then the market for mission critical took off, and we just started doing more and more mission critical. I’d say 90-95% of our work is for mission critical clients.
What was it like working in the mainframe environment?
Back in those days, it was very strict. The cooling systems were called precision cooling because many of the projects were based on the IBM spec. It was really the only spec out in those days, so it was ± 2° on cooling. The mainframes had internal chillers, so we brought chilled water to the mainframes in addition to the CRACs that cooled the room itself.
Also, the mainframes of the time were 400 Hz. They had their own MG (motor-generator) sets to convert the power from 60 to 400 Hz, which caused its own issues with regard to distribution. For instance, we had a lot of voltage drop due to the higher frequency of the power. So that made for different design challenges than those we see now, but watts per square foot were fairly low, even though the computers tended to be fairly large.
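The voltage-drop problem Julian describes follows directly from the physics: a conductor’s inductive reactance scales linearly with frequency (X_L = 2πfL), so the same feeder has nearly seven times the reactance at 400 Hz that it has at 60 Hz. A minimal sketch of that relationship (the feeder inductance value is purely illustrative, not from the article):

```python
import math

def inductive_reactance(freq_hz: float, inductance_h: float) -> float:
    """Inductive reactance X_L = 2 * pi * f * L, in ohms."""
    return 2 * math.pi * freq_hz * inductance_h

# Assumed feeder inductance for illustration only (~0.25 mH).
L_FEEDER = 0.25e-3  # henries

x_60 = inductive_reactance(60, L_FEEDER)    # reactance at utility frequency
x_400 = inductive_reactance(400, L_FEEDER)  # reactance at mainframe frequency

print(f"X_L at 60 Hz:  {x_60:.4f} ohms")
print(f"X_L at 400 Hz: {x_400:.4f} ohms")
print(f"Ratio: {x_400 / x_60:.2f}x")  # 400/60, about 6.67x
```

For the same current, the reactive component of voltage drop grows by that same ratio, which is why 400 Hz distribution runs had to be kept short or oversized.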
There was a lot of printing in those days as well, and the printers tended to be part of the computer room, which was a fire hazard and caused a dust problem. So there were a number of different issues we had to deal with in those days.
What are some of the other differences with today’s hardware?
Today’s systems are obviously high capacity. Even so, the power side is much easier than the cooling side. I can go up a few wire sizes and provide power to a rack much more easily than I can provide more cooling to a rack.
Cooling with air is difficult because of the volumes of air required, so we find ourselves going back to localized cooling using refrigerant gas, chilled water, or warm water. The density and specific heat of water allow it to reject far more heat than the same volume of air, so it makes a lot more sense.
For energy efficiency, moving water is easier than moving large volumes of air. And with localized cooling, in-rack, or rear-door heat exchangers, we are able to use warm water so we either use a chiller system at 60°F instead of 44°F water or we run it directly off a cooling tower using 80°F or 85°F water. The efficiencies are much higher, but now you have made the HVAC systems more complicated, a bigger part of the equation.
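The advantage of water over air comes down to simple arithmetic: the sensible heat a moving fluid carries is Q = ρ · c_p · V̇ · ΔT, and water’s volumetric heat capacity (ρ·c_p ≈ 4.2 MJ/m³·K) is on the order of 3,500 times that of air (≈ 1.2 kJ/m³·K). A back-of-the-envelope comparison, using typical textbook fluid properties and an assumed flow and temperature rise:

```python
# Sensible heat removal: Q = rho * cp * flow * delta_T

# Typical fluid properties near room temperature (textbook values).
RHO_AIR, CP_AIR = 1.2, 1005.0        # kg/m^3, J/(kg*K)
RHO_WATER, CP_WATER = 998.0, 4186.0  # kg/m^3, J/(kg*K)

def heat_removed_kw(rho: float, cp: float,
                    flow_m3_s: float, delta_t_k: float) -> float:
    """Sensible heat carried away by the fluid, in kilowatts."""
    return rho * cp * flow_m3_s * delta_t_k / 1000.0

FLOW = 0.01  # m^3/s of fluid (assumed for the example)
DT = 10.0    # K temperature rise across the heat exchanger (assumed)

q_air = heat_removed_kw(RHO_AIR, CP_AIR, FLOW, DT)
q_water = heat_removed_kw(RHO_WATER, CP_WATER, FLOW, DT)

print(f"Air:   {q_air:.2f} kW")   # roughly 0.12 kW
print(f"Water: {q_water:.0f} kW") # roughly 418 kW
print(f"Water/air ratio: {q_water / q_air:.0f}x")
```

The same volume flow of water removes thousands of times more heat than air at the same temperature rise, which is why in-rack and rear-door water loops scale to high densities that air cannot reach practically.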
Can you tell me about some of your current projects?
We’re doing a project right now for Fidelity’s West data center, which is located in Omaha, NE. We’re just finishing construction and are ready to do testing.
It’s a combination of a large support building, a large mainframe wing, and another section for their Centercore project. The Centercore product is a modular data center, which we designed and developed with them. This project starts at 1 MW of data center space but has the capacity to grow to 6 MW.
The project is very interesting because it is a mix of stick built and modular. We’re using the local aquifer to do some cooling for us. We’re also using some free cooling and chilled water beams, so it’s very energy efficient. It’s nestled into the side of a hill, so it is low visibility for security purposes and blends in with the landscape. It’s about 100,000 ft2 overall.
Fidelity’s Centercore Project
We’re designing mainly for about 6-7 kW per cabinet in the Centercore. In the mainframe it’s not as simple to calculate, but the area is 1.2 MW.
Industry people tend to associate firms like Integrated Design Group with stick-built projects. How did this project come about?
Fidelity asked Integrated Design Group to develop an off-site constructed data center to address the limitations they saw in other offerings on the market. It needed to be flexible, non-proprietary and function like any other stick-built data center. The proof of concept was a 500-kW Concurrently Maintainable and Fault Tolerant data center in North Carolina with 100% air-side economizer. It is a complete stand-alone data center with connections back to the main facility only for water and communications.
The next generation was the Fidelity West data center. We located the power units on the lower level and the computer room on the upper level so it would match the new support building being built on the site. It is Concurrently Maintainable and Fault Tolerant and uses a pumped refrigerant cooling system that provides 100% refrigerant-side economization.
Fidelity wanted an off-site constructed system that was modular and had the ability to be relocated. Under this definition it could be treated as a piece of equipment and would not need to be depreciated as a piece of real property. This could lead to substantial tax savings. The other goal was to prevent overbuilding facilities that would not be occupied for several years, if ever.
We think of Centercore as more of a delivery system and not just a product. It can be customized to suit the customer’s requirements. ID has conceptualized units from 100 kW to multi-megawatt assemblies in any reliability configuration desired. Given the difficulty in predicting where the IT systems will go in terms of size and power requirements, a “Load on Demand” solution was desired.
When did you get your ATD? Do you remember your motivation?
December 2009. I was ATD number 147. At the time we were doing work in the Middle East, where there was a lot of emphasis on having Tier Certification. It was an opportune time to get accredited since ATD training had just started. Since then we have designed facilities in the Middle East, one of which (ITCC in Riyadh, Saudi Arabia) has already received Tier IV Certification of Design Documents. We anticipate testing it early this summer for Constructed Facility. Several other projects are being readied for Tier Certification of Design Documents.
Are you still working internationally?
We have done several Tier III Certified Design Documents and Constructed Facility data centers for redIT in Mexico City. Well, it’s one facility with several data centers. And we’ve got several more under construction there. We’re still doing some work in Riyadh with Adel Rizk from Edarat, which is our partner over there. They do the project management and IT, so they become the local representation. We do design development and documents. Then we attend the test and do a little construction administration as well.
Is the ATD an important credential to you?
Yes. It helped bring us credibility when we teamed up with Edarat. We can say, “The designer is not only American-based but also an ATD.” When I see some of the designs that are out there I can see why customers want to see that accreditation, as a third-party verification that the designer is qualified.
What changes do you see in the near future?
Energy efficiency is going to stay at the forefront. Electrically, we are probably going to see more high-voltage distribution.
I think more people are going to colos or big data centers because they don’t want to run their own data centers; it is just a different expertise. There is a shortage of competent building engineers and operators, so it is easier to go somewhere that can leverage that expertise among many clients.
You will see more medium voltage closer to the data center. You will see more high voltage in the U.S. I think you will see more 480/277-V distribution in the data center. Some of the bigger colo players and internet players are using 277 V. It isn’t mainstream, but I think it will be.
And, I think we are going to go to more compressor-less cooling, which is going to require wider temperature and humidity ranges inside the data centers. As the servers and their warranties allow that to happen, eventually we’ll get operators to feel comfortable with those wider ranges. Then, we’ll be able to do a lot more cooling without compressors, either with rear-door or in-rack heat exchangers, or directly on the chip in a water-cooled cabinet.
Kevin Heslin
This article was written by Kevin Heslin. Kevin Heslin is senior editor at the Uptime Institute. He served as an editor at New York Construction News, Sutton Publishing, the IESNA, and BNP Media, where he founded Mission Critical, the leading publication dedicated to data center and backup power professionals. In addition, Heslin served as communications manager at the Lighting Research Center of Rensselaer Polytechnic Institute. He earned a B.A. in Journalism from Fordham University in 1981 and a B.S. in Technical Communications from Rensselaer Polytechnic Institute in 2000.
ATD Perspectives From Mainframes to Modular Designs — by Kevin Heslin, July 23, 2014
These interviews with Uptime Institute Network directors Rob Costa, Sylvie Le Roy, and Mozart Mello highlight the qualifications of the Uptime Institute staff working with Network members across the globe and also portray the opportunities for growth and need for the Network in different business environments. National borders do not limit business operations to a single nation or even a handful of nations. Enterprises follow business opportunities wherever they find them and feel that they can make a profit. Hence, it follows that IT operations also transcend national borders and that the data centers required to support IT can be sited in any country that offers strong Internet connections, reliable electrical infrastructure, sufficient water and other advantages deemed essential to an enterprise’s 24 x 7 operation.
Nonetheless, it has long been true that the Uptime Institute Network has been rooted in North America, probably because most of the original members primarily maintained data center operations in the U.S.
Soon, this will no longer be the case. As Thomas L. Friedman, author of The World is Flat, made clear in a 2010 Symposium keynote, global connectivity is erasing the importance of national borders and time zones so that no enterprise and no individual is unaffected by worldwide competition.
The Uptime Institute Network, which supports many enterprise activities that span the globe, must itself adapt to the changing environment to meet its members’ needs. To that end, Uptime Institute has expanded the Network with groups in Europe, Middle East and Africa (EMEA); Latin America (LATAM); and Asia Pacific (APAC).
Following, then, are interviews with Rob Costa, Director, Network-North America; Sylvie Le Roy, Director, Network-EMEA; and Mozart Mello, Managing Director, Latin America, representing the Latin American Network.
Costa became director of the North American Network in July 2013. Le Roy became director of the EMEA Network in February 2013. The EMEA and Latin American Networks were founded in 2011 and 2012, respectively. The APAC Network also convened for the first time in 2012.
In these brief interviews, Costa, Le Roy, and Mello will talk about their experiences in Network leadership, the challenges facing Network members, and the plans to address the globalized mission of IT.
Rob Costa joined the Uptime Institute as Director, Network-North America in May 2013. He is responsible for all activities related to the Network’s management, including overseeing content development for the conferences, membership retention and growth, and program development. Prior to that he was principal of Data Center Management Consulting, which provided on-site consulting services.
Mr. Costa developed an extensive body of experience in senior IT management at the Boeing Company, where he worked for almost 38 years. His focus at Boeing was on improving data center availability, reliability, consolidation, and cost efficiency. Mr. Costa and his team achieved over 7 years of continuous data center uptime through solid processes and communication between IT and Facilities.
Sylvie Le Roy is Director, Network-EMEA of the Uptime Institute. She joined the Uptime Institute recently after spending 12 years at Interxion, a well-known colocation provider, where she was customer service director.
Sylvie Le Roy’s role with the Uptime Institute is to promote and develop the Network in EMEA, maximizing opportunities for network members as well as educating them on various topics with a focus on data center operations, innovation and sustainability. She seeks to hold conferences in a variety of countries to include the major stakeholders of that country and show members how each country works in respect to the data center industry.
Mozart Mello is the Uptime Institute Managing Director, Latin America. He joined Uptime Institute with experience in both IT and Facilities, and in many aspects of design, build, operations and migration processes.
Mr. Mello was responsible for the startup of Uptime Institute Brasil in May of 2011. In 2012, he started the Network Brasil, an independent and self-reliant knowledge community exclusively for data center owners and operators.
Just prior to joining Uptime Institute Brasil, Mr. Mello was the Senior Project Manager for CPMBraxis in Brasil. He was responsible for managing the design and construction of Data Center Tamboré of Vivo, a 50,000-ft2 data center that received Uptime Institute Tier III Certification of Constructed Facility. He also managed the team responsible for designing the new Data Center Campinas of Santander Bank.
Mr. Mello has a degree in Electrical Engineering from Maua Engineering University, a specialization in Systems from Mackenzie University, and an MBA in Business Strategy from Fundação Getúlio Vargas.
Welcome Aboard: New Captain at the Helm Rob, you just joined the Uptime Institute as Director, Network-North America. I know you have been attending Network meetings, assessing programs and familiarizing yourself with Network operations. Still, as the former Network Principal from the Boeing Company, you have also seen a lot of familiar faces. Is the proper greeting welcome aboard or welcome back? Perhaps I should just ask you to introduce yourself to the entire operations community that benefits from the Network.
Thank you very much for the opportunity. I retired from Boeing in August 2011 after 37 years. I spent the last 20 managing Boeing Defense’s enterprise data centers.
In the beginning of those 20 years, I managed the primary two of those data centers, which were located in the Puget Sound area (Washington State). And not too long after that we consolidated many of the smaller server rooms into the two main facilities located in Bellevue and Kent, WA.
And then the years passed, and we eventually merged with McDonnell Douglas, and were part of many other smaller M&As. Each of those companies came with their own data centers, so we had a major program to bring those data centers into the Boeing family under a single point of contact. I wound up managing all the Boeing data centers throughout the U.S. and two small server rooms, one in Amsterdam and one in Tokyo.
In my last few years with Boeing, we undertook a program called Data Center Modernization, which set up a strategy to merge all the data centers into three new data centers across the U.S.
We completed the first phase of that program by migrating into a data center in the Phoenix area, and the second site is currently underway. That in a nutshell is my background with Boeing. On May 1 of this year I joined the Network and am excited to be here.
Have you had any previous experience with the Uptime Institute Network?
In 1996 Boeing joined the Uptime Institute Network. At that time it was the Uninterruptible Uptime Users Group (UUUG) with Ken Brill. And Boeing has been a member since 1996. I was a principal representing Boeing to the Network for probably about my last 10 years at Boeing. We hosted several meetings at our Boeing sites and found membership very valuable in managing enterprise data centers.
I think it is important to note that you had retired from Boeing in 2011. What made the director position so interesting that you came back to the Network?
I guess the main reason I came back to work was that I had attended so many conferences and watched all the previous directors at work that I always thought it would be interesting to be able to interact with all the members and be able to provide some value to data centers across the U.S. So, when that opportunity came to me, it was very attractive. It was something I just wanted to do.
What do you see as the Network’s strength? Can you say how you used the Network at Boeing?
We used the Network a lot. One of the biggest values we found was the ability to use email queries when we were trying to improve our own sites and processes. We’d hit decision points along the way and wonder what other members were doing with regard to a specific issue, and the Network gave us the ability to query the members directly.
We would form a series of questions, which Network staff would send to the entire membership of more than 70 enterprises. So we would send out a questionnaire we developed and the members responded. Not every member responded to every question, but we’d get a good percentage return on our questions. The responses from other members helped us formulate our decisions on where we wanted to go, and it was all information from members who were also running enterprise data centers.
I bet we probably did that 15-20 times through my career at Boeing. Beyond the queries, we also got a lot of value from attending the conferences and building relationships. When I was with Boeing, I’d develop relationships with other Network members and would look forward to seeing them at Network conferences a couple of times a year.
Because of those relationships, well, there is no resistance to picking up the telephone or sending an email about a specific issue or asking how another member might attack it. I’m not talking about the email queries; I’m talking about just picking up the phone.
In the Network, you develop those relationships, and some of them become very strong. Then you have very good discussion on the phone regarding specific issues. It’s one of the great values of the Network and you can utilize it to the fullest extent on an ongoing, even constant, basis.
What kinds of issues would you ask about in an email query?
The most recent one that comes to mind is that Boeing changed its strategy in regard to owning/operating its own data centers to moving into leased facilities, which generated many questions.
What we wanted to know is were there any members who went through the same process, and, if so, what was their decision making process and how did that turn out for them.
So we formed our questions around that topic and the Network staff sent it out. We got very good responses that guided us in that decision, and Boeing went forward with that change.
How has the Network changed?
Now that I am a member of the Network again, only from a different perspective, many of the companies are the same. They were members when I attended conferences as a member. But the faces have changed. The new people bring some new ideas with them. The Network is always getting refreshed, even though many companies have been there for many years. In many ways, the challenges addressed by the Network are continual: planning, continuous improvement, mitigating risk. The folks that are members are bringing their own ideas, so members are always getting a new view of facility operations, maintenance, etc.
Also IT involvement seems to be growing. I remember going to meetings and it would be all Facilities people with one or two IT people among them. Now it seems that the population of the IT side of the house is growing within the membership.
Can you share some of the Network’s plans for the future?
I think the main vision over the next three years is to grow the Network and infuse new ideas. We’re at about 66 member companies and would like to grow that to about 100 in three years. Bringing new companies on board means new facility managers with fresh ideas about how to operate major enterprise data centers. We’re also thinking about mixing the organization up so that members are meeting new people and making new relationships.
Customer Service Provides a Great Perspective for First EMEA Network Director
Sylvie, please tell me about your career and how it prepared you for your current role.
I joined the Uptime Institute in February 2013, and it started quite well because we had our first EMEA Network conference in Frankfurt and that was my first opportunity to meet with all the members. I used to work for Interxion, which is one of the main colocation providers in Europe.
I spent 12 years there, starting on the help desk and working my way up to customer service manager for the group. So I know quite a bit about data centers now. I was dealing with crisis management for all 33 sites in Europe.
I am very much in tune with what data centers need now, what they might need in the future and what the issues are.
Did you have any exposure to the Network before you became Director, Network-EMEA?
Interxion has been a Network member since 1997 (Interxion was a founding member of the EMEA Network). I used to work closely with Phil Collerton, who was the VP Operations at Interxion (now Uptime Institute’s Managing Director, EMEA) and Lex Coors, CEO there. We used to be the trio during crisis management, so I was very well aware of the Uptime Network before I joined.
What made the position at the Uptime Institute Network interesting to you?
Seeing the industry from another angle: from just the colocation point of view to all the different data center facilities that exist, what they are about, and all the different technologies that are used. I think it’s also interesting that the Uptime Institute Network is the original forum for the data center industry. For me, learning and exchanging information among data center experts is the best way to do it.
What are the most important values of the Network? What do you see as its strengths?
Well, the Network in EMEA is a little different than in the U.S. I think our strength is that EMEA is not quite as developed, so we have a little more leeway as to how we want to develop it.
In EMEA, the members are, on average, in a higher position, so they have more decision-making power. They can influence how things are done in their own companies, and they can even have more impact on other members from either smaller or larger companies. It can work both ways.
Why has it developed that way?
I think the Uptime Institute Network in Europe, at least for me at Interxion, is perceived as prestigious. And in 2008, it was really the only body that was looking after all data center issues and best practices. Since then, there have been other organizations and various lobbying groups that have been formed in Europe. I think the Uptime Institute is still considered the original group and what we say matters.
Have you had time to establish goals for the Network or what direction you might go?
Yes, I think priority number one is to increase membership, so that allows us to develop more topics. There will always be M&E topics, but I’d like to see it go to another level.
Two, the Network could cover the IT part, the cloud and the diffusion of data centers and IT budgets.
And, three, in the future I would like to give more attention to the financial aspects of anything to do with data centers. Basically I don’t want it to be just M&E, but expand to really having anything to do with the data center because it is all interlinked. It’s all what our members are thinking about, especially when they are at a responsible level. So it’s not just how it works, but it’s how can we save more money doing this, and if we do things in a certain way what does it mean for the future. Data centers are a fast-moving environment, not necessarily the building itself, but the technologies that go with it. It is important to keep your finger on the pulse really.
Getting the Latin American Network Launched
Mozart, please tell me a little about your background.
I’m the Managing Director of Latin America and Director of its Network. I joined Uptime Institute with experience in both IT and Facilities and in many aspects of the design, build, operations, and migration processes. I had been at the Fall Network meeting in San Antonio in 2011, before the first meeting in Brasil in 2012. That was very important for understanding the methodology and process of the Network and maintaining the standard of Network meetings in Brasil/LATAM.
In May 2011, I helped start up Uptime Institute Brasil, which enabled us to introduce the full complement of Uptime Institute services to Brasil including Tier Certification, consulting and training. As director, I represent and lead the commercial interests of Uptime Institute Brasil in Latin America. In 2012, we started the LATAM Network Brasil.
As Managing Director in LATAM, what is your relationship to the Network?
I have the mission of spreading knowledge of the concept and benefits of the Network in LATAM and building a Network group that includes different industries before hiring a Network Director for LATAM. To that end, I’m responsible for planning the LATAM Network with Uptime Institute COO Julian Kudritzki.
Please tell me about the early development of the LATAM Network.
I have 25 years of experience working with data centers, so I have a good relationship with the most important clients and users of data centers in Brasil. This experience helped us start the LATAM Network in March 2012. We held our first Network meeting in São Paulo, Brasil, in 2012, with our three founding members: Bradesco Bank, Itau-Unibanco Bank and Petrobras.
This meeting was very important even though we had only members from Brasil, because our clients could evaluate the real benefits of being a member of the Network.
The Brasilian members also participated in the North American Network’s fall meeting in Atlanta in 2012.
The experience of the Brasilian members at the Fall Network meeting in the U.S., where they shared experiences with members from different cultures and discussed the growth of the data center industry in South America, encouraged us to invite members from other South American countries to join the LATAM Network. So, at our 2013 Network meeting in Brasil, we had six Network members, including our first member from Chile. Now we have members from banks, oil/gas, data center services providers, and internet/mobile telephone services.
Growing from a Brasilian meeting to a Latin American meeting gave us the opportunity to better understand the data center industry in all of Latin America, along with its different cultures, difficulties, and plans for the future.
What Network services have been offered to LATAM members?
We offer all the Uptime Institute Network services: AIRs, papers, information exchange among members, webinars, and two conferences, one meeting in Brasil and either the North America or EMEA fall Network meeting.
Which activities seem to best meet the information needs of LATAM members?
One of the most important is the ability to consult other members about news or implementation questions for new projects in Brasil, and then use those recommendations and experiences to avoid mistakes and achieve better results.
Another important benefit is the ability to participate at the North America or EMEA Network fall meeting, exchanging experiences with people from other countries and visiting different data centers for the Network data center tour.
How do you expect the Network to grow in coming years?
The growth in all areas will accelerate as members make use of the benefits. I believe that in the long term we will see the growth of a global Network.
The Uptime Institute has many global clients, and Network globalization will help data centers reduce effort and cost while improving operations.
This article was written by Kevin Heslin. Mr. Heslin is senior editor at the Uptime Institute. He served as an editor at New York Construction News, Sutton Publishing, the IESNA, and BNP Media, where he founded Mission Critical, the leading publication dedicated to data center and backup power professionals. In addition, Heslin served as communications manager at the Lighting Research Center of Rensselaer Polytechnic Institute. He earned the B.A. in Journalism from Fordham University in 1981 and a B.S. in Technical Communications from Rensselaer Polytechnic Institute in 2000.
David Fullagar, Director of Content Delivery Architecture at Netflix, presents at Uptime Institute Symposium 2014. In his presentation, Fullagar discusses the hardware design and open source software components of Netflix Open Connect, the custom-designed content delivery network that enables Netflix to handle its massive video-streaming demands, and explains how these designs are well-suited to other high-volume media providers as well.
Industry Perspective: Three Accredited Tier Designers (ATDs) discuss 25 years of data center evolution
The experiences of Christopher Johnston, Elie Siam, and Dennis Julian are very different. Yet, their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Somehow, though, despite significant changes in how IT services are delivered and business practices affect design, the challenge of meeting reliability goals remains very much the same, complicated only by greater energy concerns and increased density. All three men agree that Uptime Institute’s Tiers and ATD programs have helped raise the level of practice worldwide and the quality of facilities. In this installment, Elie Siam of Pierre Dammous & Partners examines worldwide trends with a perspective honed in the Uptime Institute’s EMEA region. This is the second of three ATD interviews from the May 2014 Issue of the Uptime Institute Journal.
Mathworks data hall, a Pierre Dammous & Partners project.
Elie Siam graduated in 1993 from the American University of Beirut and joined Pierre Dammous & Partners (PDP) in 1994 as a design engineer. PDP is an engineering consulting firm with offices in Beirut, Riyadh, and Abu Dhabi, dedicated to supplying quality services in mechanical, electrical, plumbing, fire, and telecommunications systems. Mr. Siam went on to lead the team resolving mission-critical facilities problems and became a recognized local authority in the electrical engineering field. He is an active contributor to Lebanese Standards and a Building Services Instructor at Lebanon’s Notre Dame University. He has special qualifications in telecommunications and data networks and has established high-performing systems for large banks (Credit Libanais, SGBL, SGBJ, Fransabank, CSC, BBAC, LCB, BLOM), telecom companies (IDM, STC), universities (LAU, Balamand), and leading media companies (Annahar, VTR, Bloomberg).
Elie, please tell me about yourself.
I have been based in Beirut since 1993, when I graduated from the American University of Beirut as an electrical engineer. I started work at Pierre Dammous & Partners (PDP) in 1994, so just over 20 years now.
A substantial part of our work at PDP is related to telecom and power infrastructure.
When PDP started with server rooms and then data centers, I decided to get an ATD accreditation, due to my exposure to networks.
We have done a lot of data center projects, not all of them Tier Certified. The issue is that the data center market in Lebanon is small, but clients are becoming more aware. Consequently, we are doing more Tier designs. Tier III is the most commonly requested, mostly from banking institutions.
The central bank of Lebanon has asked qualifying banks to go to Tier III, and now the market is picking up.
We are also starting to do design and construction management for data centers outside Lebanon, in Dubai (Bloomberg Currency House), Amman (Societé Générale de Banque – Jordanie), and Larnaca (CSC Bank). We also did large colocation data centers for a major telecom company in Riyadh and Dammam.
Which project are you most proud of?
Well, I’ve worked on many kinds of projects, including hospitality, health care, and data centers. My last data center project was for a Saudi Arabian telecom company, and that was a 25-MW facility to house 1,200 server racks. It was awarded Tier III Certification of Design Documents.
Our responsibility was to do the design up to construction drawings, but we were not involved in the construction phase. The client was going to go for a Tier Certification of Constructed Facility, but, again, we are not involved in this process. It is a very nice, large-scale project.
The project is composed of two parts. The first part is a power plant building that can generate 25 MW of redundant continuous power, and the second part is the data center building.
Is the bulk of your work in Beirut?
By number of projects, yes. By the number of racks, no. One project in Saudi Arabia or the Gulf countries could be like 50 projects in Beirut. When you design a project in the Gulf Countries, it’s for 1,200 racks, and when you design one in Beirut, it’s for about 25 racks.
What drives the larger projects to Saudi Arabia?
Money, and the population. The population of Lebanon is about 4 million. Saudi Arabia has about 25-30 million people. It is also fiscally larger. I’m not sure of the numbers, but Saudi Arabia has something like US$60 billion in terms of annual budget surplus. Lebanon has an annual budget deficit of US$4.5 billion.
Tourism in Lebanon, one of the main pillars of the economy, is tending to zero, due to political issues in Syria. Consequently, we have economic issues affecting growth.
Is it difficult for a Lebanese company to get projects in Saudi Arabia?
Yes, it can be difficult. You have to know people there, and you have to be qualified. But being qualified is not enough; if you can’t prove it, you won’t get work, so you have to be known to be qualified to win jobs.
We have a branch office in Dubai for Dubai and UAE projects, and also for some projects in Saudi Arabia. We’ve done many projects in Jeddah and Riyadh, so people know us there.
Also, we are partly owned by an American company called Ted Jacob Engineering Group, which owns 25% of PDP. This ownership facilitates the way we can get introduced to projects in the Gulf because our partner is well known in the region.
Tell me about some of your current projects.
We are working on three new data centers, each above 200 kW. Two of them belong to Fransabank (main site and business continuity site) while the third belongs to CSC bank. All of them will be going for Tier III Certification. We’re also working on several other smaller scale data center projects.
One of the major data centers currently under construction in Lebanon is for Audi Bank, which is designed and executed by Schneider Electric. The second project is for another banking institution called Credit Libanais. It is 95% complete, as of April 2014. We are the designers. We also worked as integrators and BIM (building information modeling) engineers and did the testing and commissioning. This is a 120-kW data center.
The Credit Libanais facility has the following features:
Chilled water-cooled white space. The chilled water system provides higher efficiencies than DX (direct expansion) systems and avoids heavy use of CFC (chlorofluorocarbons) derivatives.
The chilled water system uses a high supply water temperature (10°C versus 6°C for standard systems), which significantly increases efficiency and reduces running costs.
Operation at higher water temperature reduces the need to provide humidification in the computing space. That saves energy.
The chiller compressors and pumps are provided with electronically commutated (EC) technology, allowing the direct current motors to modulate from 0-100% to optimize energy consumption based on actual demand.
A special variable primary chilled water system does not require secondary chilled water pumps, which further reduces energy expenditures.
Computer room air conditioning units are provided with variable speed EC fans modulating from 0-100% to reduce energy consumption.
Cold air containment prevents mixing of cold and return hot air, which further enhances the overall system efficiency.
The fresh air for the data center is centrally pre-treated with an energy recovery unit through thermal exchange with office space exhaust.
Associated office space is air conditioned by a VRV (variable refrigerant flow) system for its high efficiency and lower energy expenditures. The system interfaces with the BMS (building management system) for scheduling and centralized parameterization to avoid operation during unoccupied periods.
The latest VRV system technology provides cooling to the office space with a unit COP (coefficient of performance) greater than 4 using R-410A refrigerant.
Recirculated air from offices ventilates the UPS/battery room through transfer fans, which reduces the amount of treated fresh air.
Modular UPS systems adjust capacity to actual IT loads. The efficiency of the UPS system is 95.5% at 25% load and 96% at 100% load.
T5 fluorescent lamps with low-loss optics also save energy. The lighting is switched by a KNX lighting control system that includes automatic motion sensors and centralized parameterization and scheduling to avoid operation during unoccupied periods.
The BMS integrates all subsystems, either directly or via SNMP (Simple Network Management Protocol), KNX, and data center infrastructure management (DCIM) controls. The system allows overall insight into the operations of the data center, monitoring all energy expenditures, faults, and alerts.
DCIM optimizes operations and increases overall efficiency, enabling the facility to operate at a lower PUE (power usage effectiveness).
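As a rough illustration of the PUE metric referenced above, PUE is simply total facility power divided by IT power. The breakdown below uses hypothetical overhead figures for a 120-kW IT load; they are illustrative assumptions, not measured values from the Credit Libanais facility.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical breakdown (illustrative numbers only):
# 120 kW IT + 45 kW cooling + 10 kW power-chain losses + 5 kW lighting/misc.
total_kw = 120 + 45 + 10 + 5
print(round(pue(total_kw, 120), 2))  # 1.5
```

Efficiency measures like those listed above shrink the non-IT share of the numerator, driving PUE toward its ideal value of 1.0.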
What does Credit Libanais plan to do with the facility?
Credit Libanais is one of the top banks in Beirut. They have about 70 branches. They are constructing a new 32-story headquarters building. This will be the main data center of the bank. The data center is in a basement floor. It is about 450 m2 with 120 kW of net IT load. The data center will handle all the functions of the bank. An additional 350-m2 space hosts a company called Credit Card Management (CCM). CCM has also a dedicated server room within the Credit Libanais data center.
At first, Credit Libanais did not want to engage in Tier Certification because they do not provide colocation services, but they changed their minds. In March, I requested a proposal from the Uptime Institute, which I brought back to the bank. The proposal includes full support, including Tier III Certification of the Design, Constructed Facility, and Operations.
Can you describe the cooling system of the Credit Libanais data center in more detail?
The critical environment is water cooled, and water cooled at relatively high temperatures. Normally, water-cooled systems for buildings supply water at around 6°C; this data center uses a 10°C chilled water temperature, which greatly increases efficiency and reduces cost.
Since the chilled water temperature is not too low, there is no need to provide large-scale humidification, because moisture will not condense out of the air as much as it does at lower chilled water temperatures. That substantially saves energy.
The chillers also have variable speed compressors and variable chilled water pumps, and the fans of the chillers and the fans of the CRACs all have EC variable speed fans so that you can permanently adjust the speed to exactly the amount of capacity you would like to have and are not working at either 100 or 0%.
What’s the value of being an ATD?
I believe that you would need to have substantial experience before you go for the accreditation because if you do not have experience you would not benefit. But, it is very useful for people who have experience in mechanical and electrical engineering and even more useful for those who have more experience in data centers.
You could get experience from working on clients’ projects, but you need the accreditation to know how things should be done in a data center and what should not be overlooked. To do that you need a methodology, and the ATD gives you that.
ATD gives you a methodology that eliminates forgetting things or overlooking things that could lead to failure.
When you go through the training they teach you the methodology to check each and every system so that they are Concurrently Maintainable or Fault Tolerant, as required by the client.
This article was written by Kevin Heslin, senior editor at the Uptime Institute.
The data center in 2020 and beyond
In this keynote video from 2014 Uptime Institute Symposium, Andy Lawrence, Research Director at 451 Research, provides an overview of what a data center might look like in 2020 and beyond. This presentation covers how all the various new technologies might be combined with new data center services, along with extrapolated improvements in processing, storage and energy management.
Putting DCIM to Work for You
At Symposium 2013, Erik Ko of Twitter, Hewlett-Packard’s Ken Jackson, and James Pryor of Regions Bank discussed the experience of selecting and implementing DCIM solutions from the end user perspective. The panel was moderated by Uptime Institute’s Kevin Heslin.
Earlier that week, Symposium 2013 attendees heard presentations that focused on the potential for DCIM and some of the barriers to further adoption from 451 Research’s Andy Lawrence and Uptime Institute’s Matt Stansberry. These presentations conveyed a great deal of generalized information.
In this session, the three panelists shared information about the decision to implement DCIM, organizational goals, product selection, and implementation. Twitter, Hewlett-Packard, and Regions Bank are very diverse organizations, in different industries, serving different customers, and with dissimilar IT needs. Yet each organization had to weigh costs, benefits, and implementation times to find which DCIM solution fit it best.
It turns out that it is hard to generalize too much about DCIM procurement and implementation, as each organization will have different goals, different pressure points, different needs, and different resources to bring to bear.
This almost 45-minute presentation is worth watching in its entirety as the panelists examine the different paths they took to reach functioning DCIM implementations and what effort is required on an ongoing basis.
ATD Perspectives From Mainframes to Modular Designs
The experiences of Christopher Johnston, Elie Siam, and Dennis Julian are very different. Yet, their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Somehow, though, despite significant changes in how IT services are delivered and business practices affect design, the challenge of meeting reliability goals remains very much the same, complicated only by greater energy concerns and increased density. All three men agree that Uptime Institute’s Tiers and ATD programs have helped raise the level of practice worldwide and the quality of facilities. In this article, the last of a series of three, Integrated Design Group’s Dennis Julian examines past practices in data center design and future trends, including modular designs.
Dennis Julian
Dennis Julian is principal – A/E Design Services at Integrated Design Group Inc. He is a Professional Engineer with more than 25 years’ experience in multi-discipline project management and engineering, including management of architectural, mechanical, HVAC, plumbing, fire protection, and electrical engineering departments for data center, office, medical, financial, and high-technology facilities. Mr. Julian has been involved in the design and engineering of over 2 million ft2 of mission-critical facilities, including Uptime Institute Tier IV Certified data centers in the Middle East. He has designed facilities for Digital Realty Trust, RBS/Citizens, State Street Bank, Orange LLC, Switch & Data (Equinix), Fidelity, Verizon, American Express, Massachusetts Information Technology Center (the state’s main data center), Novartis, One Beacon Insurance, Hartford Hospital, Saint Vincent Hospital, University Hospital, and Southern New England Telephone.
Dennis, please tell me how you got your start in the data center industry?
I started at Worcester Polytechnic Institute in Worcester and finished up nights at Northeastern University here in Boston. I got into the data center industry back in the 1980s doing mainframes for companies like John Hancock and American Express. I was working at Carlson Associates. Actually, at the beginning I was working for Aldrich, which was a contracting arm of Carlson. Later they changed all the names to Carlson, so it became Carlson Associates and Carlson Contracting.
We did all kinds of work, including computer rooms. We did a lot of work with Digital Equipment doing VAX and minicomputer projects. We gradually moved into the PC and server worlds as computer rooms progressed from being water-cooled, 400-hertz (Hz) power systems to what they are today, which is basically back to water cooled but at 120 or 400 volts (V). The industry kind of went the whole way around, from water-cooled to air-cooled back to water-cooled equipment. The mainframes we do now are partly water cooled.
After Carlson went through a few ownership changes, I ended up working for a large firm—Shooshanian Engineering Associates Inc. in Boston—where I did a larger variety of work but continued doing data center work. From there, I went to van Zelm Heywood & Shadford in Connecticut for a few years, and then to Carter and Burgess during the telecom boom. When the telecom boom crashed and Carter and Burgess closed their local office, I went to work for a local firm, Cubellis Associates Inc. They did a lot of retail work, but I was building their MEP practice. When I left them about 8-1/2 years ago, I joined Integrated Design Group to get back into an integrated AE group doing mission-critical work.
When I joined Integrated Design Group, it was already a couple of years old. As with any new company, it’s hard to get major clients. But they had luck. They carried on some projects from Carlson. They were able to get nice work with some major firms, and we were able to just continue that work. Then the market for mission critical took off, and we just started doing more and more mission critical. I’d say 90-95% of our work is for mission critical clients.
What was it like working in the mainframe environment?
Back in those days, it was very strict. The cooling systems were called precision cooling because many of the projects were based on the IBM spec. It was really the only spec out in those days, so it was ± 2° on cooling. The mainframes had internal chillers, so we brought chilled water to the mainframes in addition to the CRACs that cooled the room itself.
Also, the mainframes of the time were 400 Hz. They had their own MG (motor-generator) sets to convert the power from 60 to 400 Hz, which caused its own issues with regard to distribution. For instance, we had a lot of voltage drop due to the higher frequency of the power. So that made for different design challenges than those we see now, but watts per square foot were fairly low, even though the computers tended to be fairly large.
There was a lot of printing in those days as well, and the printers tended to be part of the computer room, which was a fire hazard and caused a dust problem. So there were a number of different issues we had to deal with in those days.
What are some of the other differences with today’s hardware?
Today’s systems obviously have higher capacity. Even so, the power side is much easier than the cooling side. I can go up a few wire sizes and provide power to a rack much more easily than I can provide more cooling to a rack.
Cooling with air is difficult because of the volumes of air required, so we find ourselves going back to localized cooling using refrigerant gas, chilled water, or warm water. The high volumetric heat capacity of water allows it to reject far more heat than the same volume of air, so it makes a lot more sense.
For energy efficiency, moving water is easier than moving large volumes of air. And with localized cooling, in-rack, or rear-door heat exchangers, we are able to use warm water so we either use a chiller system at 60°F instead of 44°F water or we run it directly off a cooling tower using 80°F or 85°F water. The efficiencies are much higher, but now you have made the HVAC systems more complicated, a bigger part of the equation.
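The advantage of water over air that Julian describes can be sketched with a back-of-the-envelope comparison. The fluid properties below are nominal room-temperature values assumed for illustration, not figures from any specific facility.

```python
# Volumetric heat capacity (density x specific heat) determines how much
# heat a given volume of coolant carries per degree of temperature rise.
# Nominal room-temperature property values (assumed, not site-specific):
water_density, water_cp = 997.0, 4186.0  # kg/m^3, J/(kg*K)
air_density, air_cp = 1.2, 1005.0        # kg/m^3, J/(kg*K)

water_vol_heat = water_density * water_cp  # ~4.17 MJ/(m^3*K)
air_vol_heat = air_density * air_cp        # ~1.21 kJ/(m^3*K)

ratio = water_vol_heat / air_vol_heat
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

That several-thousandfold gap is why piping warm water to an in-rack or rear-door heat exchanger can displace enormous volumes of moved air.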
Can you tell me about some of your current projects?
We’re doing a project right now for Fidelity’s West data center in Omaha, NE; we’re just finishing construction and are ready to begin testing.
It’s a combination of a large support building, a large mainframe wing, and another section for their Centercore project. The Centercore product is a modular data center, which we designed and developed with them. This project starts at 1 MW of data center space but has the capacity to grow to 6 MW.
The project is very interesting because it is a mix of stick built and modular. We’re using the local aquifer to do some cooling for us. We’re also using some free cooling and chilled water beams, so it’s very energy efficient. It’s nestled into the side of a hill, so it is low visibility for security purposes and blends in with the landscape. It’s about 100,000 ft2 overall.
Fidelity’s Centercore Project
We’re designing mainly for about 6-7 kW per cabinet in the Centercore. In the mainframe wing it’s not as simple to calculate, but that area is designed for 1.2 MW.
Industry people tend to associate firms like Integrated Design Group with stick-built projects. How did this project come about?
Fidelity asked Integrated Design Group to develop an off-site constructed data center to address the limitations they saw in other offerings on the market. It needed to be flexible, non-proprietary and function like any other stick-built data center. The proof of concept was a 500-kW Concurrently Maintainable and Fault Tolerant data center in North Carolina with 100% air-side economizer. It is a complete stand-alone data center with connections back to the main facility only for water and communications.
The next generation was the Fidelity West data center. We located the power units on the lower level and the computer room on the upper level so it would match the new support building being built on the site. It is Concurrently Maintainable and Fault Tolerant and uses a pumped refrigerant cooling system that provides 100% refrigerant-side economization.
Fidelity wanted an off-site constructed system that was modular and had the ability to be relocated. Under this definition it could be treated as a piece of equipment and would not need to be depreciated as a piece of real property. This could lead to substantial tax savings. The other goal was to prevent overbuilding facilities that would not be occupied for several years, if ever.
We think of Centercore as more of a delivery system than just a product. It can be customized to suit the customer’s requirements. ID has conceptualized units from 100 kW to multi-megawatt assemblies in any reliability configuration desired. Given the difficulty in predicting where the IT systems will go in terms of size and power requirements, a “Load on Demand” solution was desired.
When did you get your ATD? Do you remember your motivation?
December 2009. I was ATD number 147. At the time we were doing work in the Middle East, where there was a lot of emphasis on having Tier Certification, so it was an opportune time to get accredited since ATD training had just started. Since then we have designed facilities in the Middle East, one of which (ITCC in Riyadh, Saudi Arabia) has already received Tier IV Certification of Design Documents. We anticipate testing it early this summer for Constructed Facility. Several other projects are being readied for Tier Certification of Design Documents.
Are you still working internationally?
We have done several Tier III Certified Design Documents and Constructed Facility data centers for redIT in Mexico City. Well, it’s one facility with several data centers. And we’ve got several more under construction there. We’re still doing some work in Riyadh with Adel Rizk from Edarat, which is our partner over there. They do the project management and IT, so they become the local representation. We do design development and documents. Then we attend the test and do a little construction administration as well.
Is the ATD an important credential to you?
Yes. It helped bring us credibility when we teamed up with Edarat. We can say, “The designer is not only American-based but also an ATD.” When I see some of the designs that are out there I can see why customers want to see that accreditation, as a third-party verification that the designer is qualified.
What changes do you see in the near future?
Energy efficiency is going to stay at the forefront. Electrically, we are probably going to see more high-voltage distribution.
I think more people are going to colos or big data centers because they don’t want to run their own data centers, because it is just a different expertise. There is a shortage of competent building engineers and operators so it is easier to go somewhere they can leverage that among many clients.
You will see more medium voltage closer to the data center. You will see more high voltage in the U.S. I think you will see more 480/277-V distribution in the data center. Some of the bigger colo players and internet players are using 277 V. It isn’t mainstream, but I think it will be.
And, I think we are going to go to more compressor-less cooling, which is going to require wider temperature and humidity ranges inside the data centers. As the servers allow that to happen and the warranties allow that to happen, eventually we’ll get the operators to feel comfortable with those wider ranges. Then, we’ll be able to do a lot more cooling without compressors, either with rear doors, in-rack, or directly on a chip in a water-cooled cabinet.
Kevin Heslin
Kevin Heslin is senior editor at the Uptime Institute.
Uptime Institute Network Activities Around the Globe
No longer “the best kept secret,” the Uptime Institute Network has gone international
These interviews with Uptime Institute Network directors Rob Costa, Sylvie Le Roy, and Mozart Mello highlight the qualifications of the Uptime Institute staff working with Network members across the globe and also portray the opportunities for growth and need for the Network in different business environments. National borders do not limit business operations to a single nation or even a handful of nations. Enterprises follow business opportunities wherever they find them and feel that they can make a profit. Hence, it follows that IT operations also transcend national borders and that the data centers required to support IT can be sited in any country that offers strong Internet connections, reliable electrical infrastructure, sufficient water and other advantages deemed essential to an enterprise’s 24 x 7 operation.
Nonetheless, it has long been true that the Uptime Institute Network has been rooted in North America, probably because most of the original members primarily maintained data center operations in the U.S.
Soon, this will no longer be the case. As Thomas L. Friedman, author of The World is Flat, made clear in a 2010 Symposium keynote, global connectivity is erasing the importance of national borders and time zones so that no enterprise and no individual is unaffected by worldwide competition.
The Uptime Institute Network, which supports many enterprise activities that span the globe, must itself adapt to the changing environment to meet its members’ needs. To that end, Uptime Institute has expanded the Network with groups in Europe, Middle East and Africa (EMEA); Latin America (LATAM); and Asia Pacific (APAC).
Following, then, are interviews with Rob Costa, Director, Network-North America; Sylvie Le Roy, Director, Network-EMEA; and Mozart Mello, Managing Director, Latin America, representing the Latin American Network.
Costa became director of the North American Network in May 2013. Le Roy became director of the EMEA Network in February 2013. The EMEA and Latin American Networks were founded in 2011 and 2012, respectively. The APAC Network also convened for the first time in 2012.
In these brief interviews, Costa, Le Roy, and Mello will talk about their experiences in Network leadership, the challenges facing Network members, and the plans to address the globalized mission of IT.
Rob Costa joined the Uptime Institute as Director, Network-North America in May 2013. He is responsible for all activities related to its management, including overseeing content development for the conferences, membership retention and growth, and program development. Prior to that he was principal of Data Center Management Consulting, which provided on-site consulting services.
Mr. Costa developed an extensive body of experience in senior IT management at the Boeing Company, where he worked for almost 38 years. His focus was on improving data center availability, reliability, consolidation, and cost efficiency. Mr. Costa and his team achieved over 7 years of continuous data center uptime through solid processes and communication between IT and Facilities.
Sylvie Le Roy is Director, Network-EMEA of the Uptime Institute. She joined the Uptime Institute recently after spending 12 years at Interxion, a well-known colocation provider, where she was customer service director.
Sylvie Le Roy’s role with the Uptime Institute is to promote and develop the Network in EMEA, maximizing opportunities for network members as well as educating them on various topics with a focus on data center operations, innovation and sustainability. She seeks to hold conferences in a variety of countries to include the major stakeholders of that country and show members how each country works in respect to the data center industry.
Mozart Mello is the Uptime Institute Managing Director, Latin America. He joined Uptime Institute with experience in both IT and Facilities, and in many aspects of design, build, operations and migration processes.
Mr. Mello was responsible for the startup of Uptime Institute Brasil in May of 2011. In 2012, he started the Network Brasil, an independent and self-reliant knowledge community exclusively for data center owners and operators.
Just prior to joining Uptime Institute Brasil, Mr. Mello was the Senior Project Manager for CPMBraxis in Brasil. He was responsible for managing the design and construction of Data Center Tamboré of Vivo, a 50,000-ft2 data center that received Uptime Institute Tier III Certification of Constructed Facility. He also managed the team responsible for designing the new Data Center Campinas of Santander Bank.
Mr. Mello holds a degree in Electrical Engineering from Maua Engineering University, a specialization in Systems from Mackenzie University, and an MBA in Business Strategy from Fundação Getúlio Vargas.
Welcome Aboard: New Captain at the Helm
Rob, you just joined the Uptime Institute as Director, Network-North America. I know you have been attending Network meetings, assessing programs and familiarizing yourself with Network operations. Still, as the former Network Principal from the Boeing Company, you have also seen a lot of familiar faces. Is the proper greeting welcome aboard or welcome back? Perhaps I should just ask you to introduce yourself to the entire operations community that benefits from the Network.
Thank you very much for the opportunity. I retired from Boeing in August 2011 after 37 years. I spent the last 20 managing Boeing Defense’s enterprise data centers.
In the beginning of those 20 years, I managed the primary two of those data centers, which were located in the Puget Sound area (Washington State). And not too long after that we consolidated many of the smaller server rooms into the two main facilities located in Bellevue and Kent, WA.
And then the years passed, and we eventually merged with McDonnell Douglas, and were part of many other smaller M&As. Each of those companies came with their own data centers, so we had a major program to bring those data centers into the Boeing family under a single point of contact. I wound up managing all the Boeing data centers throughout the U.S. and two small server rooms, one in Amsterdam and one in Tokyo.
In my last few years with Boeing, we undertook a program called Data Center Modernization, which set up a strategy to merge all the data centers into three new data centers across the U.S.
We completed the first phase of that program by migrating into a data center in the Phoenix area, and the second site is currently underway. That in a nutshell is my background with Boeing. On May 1 of this year I joined the Network and am excited to be here.
Have you had any previous experience with the Uptime Institute Network?
In 1996 Boeing joined the Uptime Institute Network. At that time it was the Uninterruptible Uptime Users Group (UUUG) with Ken Brill. And Boeing has been a member since 1996. I was a principal representing Boeing to the Network for probably about my last 10 years at Boeing. We hosted several meetings at our Boeing sites and found membership very valuable in managing enterprise data centers.
I think it is important to note that you had retired from Boeing in 2011. What made the director position so interesting that you came back to the Network?
I guess the main reason I came back to work was that I had attended so many conferences and watched all the previous directors at work that I always thought it would be interesting to be able to interact with all the members and be able to provide some value to data centers across the U.S. So, when that opportunity came to me, it was very attractive. It was something I just wanted to do.
What do you see as the Network’s strength? Can you say how you used the Network at Boeing?
We used the Network a lot. One of the biggest values we found was the ability to use email queries when we were trying to improve our own sites and processes. We'd hit decision points along the way and always wonder what other members were doing with regard to a specific issue, and the ability to query the members gave us that insight.
We would form a series of questions, which Network staff would send to the entire membership of more than 70 enterprises. So we would send out a questionnaire we developed and the members responded. Not every member responded to every question, but we’d get a good percentage return on our questions. The responses from other members helped us formulate our decisions on where we wanted to go, and it was all information from members who were also running enterprise data centers.
I bet we probably did that 15-20 times during my career at Boeing. Beyond the queries, we also got a lot of value from attending the conferences and building relationships. When I was with Boeing, I'd develop relationships with other Network members and would look forward to seeing them at Network conferences a couple of times a year.
Because of those relationships, well, there is no resistance to picking up the telephone or sending an email about a specific issue or asking how another member might attack it. I’m not talking about the email queries; I’m talking about just picking up the phone.
In the Network, you develop those relationships, and some of them become very strong. Then you have very good discussion on the phone regarding specific issues. It’s one of the great values of the Network and you can utilize it to the fullest extent on an ongoing, even constant, basis.
What kinds of issues would you ask about in an email query?
The most recent one that comes to mind is that Boeing changed its strategy in regard to owning/operating its own data centers to moving into leased facilities, which generated many questions.
We wanted to know whether any members had gone through the same process and, if so, what their decision-making process was and how it turned out for them.
So we formed our questions around that topic and the Network staff sent it out. We got very good responses that guided us in that decision, and Boeing went forward with that change.
How has the Network changed?
Now that I am part of the Network again, though from a different perspective, I see that many of the companies are the same. They were members when I attended conferences as a member, but the faces have changed, and the new people bring new ideas with them. The Network is always getting refreshed, even though many companies have been there for many years. In many ways, the challenges addressed by the Network are continual: planning, continuous improvement, mitigating risk. Because members bring their own ideas, the membership is always getting a new view of facility operations, maintenance, and more.
Also IT involvement seems to be growing. I remember going to meetings and it would be all Facilities people with one or two IT people among them. Now it seems that the population of the IT side of the house is growing within the membership.
Can you share some of the Network’s plans for the future?
I think the main vision over the next three years is to grow the Network to infuse new ideas. We're at about 66 member companies and would like to grow that to about 100 in three years. Among the main reasons to grow the Network is to bring on new companies, and with them new facility managers with fresh ideas about how to operate major enterprise data centers. We're also thinking about mixing the organization up so that members are meeting new people and making new relationships.
Customer Service Provides a Great Perspective for First EMEA Network Director
Sylvie, please tell me about your career and how it prepared you for your current role.
I joined the Uptime Institute in February 2013, and it started quite well because we had our first EMEA Network conference in Frankfurt and that was my first opportunity to meet with all the members. I used to work for Interxion, which is one of the main colocation providers in Europe.
I spent 12 years there, starting on the help desk and working my way up to customer service manager for the group. So I know quite a bit about data centers now. I was dealing with crisis management for all 33 sites in Europe.
I am very much in tune with what data centers need now, what they might need in the future and what the issues are.
Did you have any exposure to the Network before you became Director, Network-EMEA?
Interxion has been a Network member since 1997 (Interxion was a founding member of the EMEA Network). I used to work closely with Phil Collerton, who was the VP Operations at Interxion (now Uptime Institute’s Managing Director, EMEA) and Lex Coors, CEO there. We used to be the trio during crisis management, so I was very well aware of the Uptime Network before I joined.
What made the position at the Uptime Institute Network interesting to you?
Seeing the industry from another angle: moving from just the colocation point of view to all the different data center facilities that exist, what they are about, and all the different technologies that are used. I think it's also interesting that the Uptime Institute Network is the original forum for the data center industry. For me, learning and exchanging information among data center experts is the best way to do it.
What are the most important values of the Network? What do you see as its strengths?
Well, the Network in EMEA is a little different than in the U.S. I think our strength is that EMEA is not quite as developed, so we have a little more leeway as to how we want to develop it.
In EMEA, the members are, on average, in higher positions, so they have more decision-making power. They can influence how things are done in their own companies, or they can have more impact on other members from either smaller or larger companies. It can work both ways.
Why has it developed that way?
I think the Uptime Institute Network in Europe, at least for me at Interxion, is perceived as prestigious. And in 2008, it was really the only body that was looking after all data center issues and best practices. Since then, there have been other organizations and various lobbying groups that have been formed in Europe. I think the Uptime Institute is still considered the original group and what we say matters.
Have you had time to establish goals for the Network or what direction you might go?
Yes, I think priority number one is to increase membership, so that allows us to develop more topics. There will always be M&E topics, but I’d like to see it go to another level.
Two, the Network could cover the IT part, the cloud and the diffusion of data centers and IT budgets.
And, three, in the future I would like to give more attention to the financial aspects of anything to do with data centers. Basically, I don't want it to be just M&E, but to expand to anything to do with the data center, because it is all interlinked. That is what our members are thinking about, especially when they are at a responsible level. So it's not just how it works, but how can we save more money doing this, and if we do things in a certain way, what does it mean for the future. Data centers are a fast-moving environment, not necessarily the building itself, but the technologies that go with it. It is important to keep your finger on the pulse.
Getting the Latin American Network Launched
Mozart, please tell me a little about your background.
I’m the Managing Director of Latin America and Director of its Network. I joined Uptime Institute with experience in both IT and Facilities and in many aspects of the design, build, operations and migration processes. I had been at the Fall Network meeting in San Antonio in 2011 before the first meeting in Brasil in 2012. It was very important to understand the methodology and process of the Network and to maintain the standard of Network meetings at Brasil/LATAM.
In May 2012, I helped start up Uptime Institute Brasil, which enabled us to introduce the full complement of Uptime Institute services to Brasil including Tier Certification, consulting and training. As director, I represent and lead the commercial interests of Uptime Institute Brasil in Latin America. In 2012, we started the LATAM Network Brasil.
As Managing Director in LATAM, what is your relationship to the Network?
I have the mission of spreading knowledge of the concept and benefits of the Network in LATAM and building a Network group that includes different industries before hiring a Network Director for LATAM. To that end, I'm responsible for planning the LATAM Network with Uptime Institute COO Julian Kudritzki.
Please tell me about the early development of the LATAM Network.
I have 25 years of experience working with data centers, so I have a good relationship with the most important clients and users of data centers in Brasil. This experience helped us start the LATAM Network in March 2012. We held our first Network meeting in São Paulo, Brasil, in 2012, with our three founding members: Bradesco Bank, Itau-Unibanco Bank and Petrobras.
This meeting was very important even though we had only members from Brasil, because our clients could evaluate the real benefits of being a member of the Network.
The Brasilian members had already participated in the North American Network's fall meeting in Atlanta in 2012.
That experience at the Fall Network meeting in the U.S., where they shared experiences with members from different cultures and discussed the growth of the data center industry in South America, encouraged us to invite members from other South American countries to join the LATAM Network. So, at our 2013 Network meeting in Brasil, we had six Network members, including our first member from Chile. Now we have members from banks, oil/gas, data center service providers, and internet/mobile telephone services.
Growing from a Brasilian meeting to a Latin American meeting gave us the opportunity to better understand the data center industry in all of Latin America, and understand different cultures, difficulties and plans for future.
What Network services have been offered to LATAM members?
We offer all the Uptime Institute Network services, such as AIRs, papers, the exchange of information and questions with members, webinars, and two conferences: one meeting in Brasil and either the North American or EMEA fall Network meeting.
Which activities seem to best meet the information needs of LATAM members?
One of the most important is the ability to consult other members about news or implementation questions for new projects in Brasil, and then to use those recommendations and experiences to avoid mistakes and achieve better results.
Another important benefit is the ability to participate at the North America or EMEA Network fall meeting, exchanging experiences with people from other countries and visiting different data centers for the Network data center tour.
How do you expect the Network to grow in coming years?
The growth in all areas will be faster as members make use of the benefits. I believe that in the long term we will see the growth of a global Network.
The Uptime Institute has many global clients, and Network globalization will help data centers reduce effort and cost and achieve better operations.
This article was written by Kevin Heslin, senior editor at the Uptime Institute.
Designing Netflix’s Content Delivery Network
By Matt Stansberry. David Fullagar, Director of Content Delivery Architecture at Netflix, presents at Uptime Institute Symposium 2014. In his presentation, Fullagar discusses the hardware design and open source software components of Netflix Open Connect, the custom-designed content delivery network that enables Netflix to handle its massive video-streaming demands, and explains how these designs are well-suited to other high-volume media providers as well.
ATD Interview: Elie Siam, Pierre Dammous & Partners
Industry Perspective: Three Accredited Tier Designers (ATDs) discuss 25 years of data center evolution
The backgrounds of Christopher Johnston, Elie Siam, and Dennis Julian are very different. Yet their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Despite significant changes in how IT services are delivered and how business practices affect design, the challenge of meeting reliability goals remains very much the same, complicated only by greater energy concerns and increased density. All three men agree that Uptime Institute's Tiers and ATD programs have helped raise the level of practice worldwide and the quality of facilities. In this installment, Elie Siam of Pierre Dammous & Partners examines worldwide trends with a perspective honed in the Uptime Institute's EMEA region. This is the second of three ATD interviews from the May 2014 Issue of the Uptime Institute Journal.
Mathworks data hall, a Pierre Dammous & Partners project.
Elie Siam graduated in 1993 from the American University of Beirut and joined Pierre Dammous & Partners (PDP) in 1994 as a design engineer. PDP is an engineering consulting firm with offices in Beirut, Riyadh, and Abu Dhabi. The firm is dedicated to the supply of quality services in mechanical, electrical, plumbing, fire, and telecommunications systems. Following multiple successes, Mr. Siam came to lead the team responsible for resolving mission-critical facilities problems and has become a local authority in electrical engineering. He is an active contributor to Lebanese Standards and a Building Services instructor at Lebanon's Notre Dame University. He has special qualifications in telecommunications and data networks and has established high-performing systems for large banks (Credit Libanais, SGBL, SGBJ, Fransabank, CSC, BBAC, LCB, BLOM), telecom companies (IDM, STC), universities (LAU, Balamand), and leading media companies (Annahar, VTR, Bloomberg).
Elie, please tell me about yourself.
I have been headquartered in Beirut and worked here since 1993, when I graduated from the American University of Beirut as an electrical engineer and started work at Pierre Dammous & Partners (PDP) in 1994, so just over 20 years now.
A substantial part of our work at PDP is related to telecom and power infrastructure.
When PDP started with server rooms and then data centers, I decided to get an ATD accreditation, due to my exposure to networks.
We have done a lot of data center projects, not all of them Tier Certified. The issue is that the data center market in Lebanon is small, but clients are becoming more aware. Consequently, we are doing more Tier designs. Tier III is the most commonly requested, mostly from banking institutions.
The central bank of Lebanon has asked for qualifying banks to go to Tier III and now the market is picking up.
We are also starting to do design and construction management for data centers outside Lebanon, such as in Dubai (Bloomberg Currency House), Amman (Societé Générale de Banque – Jordanie), and Larnaca (CSC bank). We also did large colocation data centers for a major telecom company in Riyadh and Dammam.
Which project are you most proud of?
Well, I’ve worked on many kinds of projects, including hospitality, health care, and data centers. My last data center project was for a Saudi Arabian telecom company, and that was a 25-MW facility to house 1,200 server racks. It was awarded Tier III Certification of Design Documents.
Our responsibility was to do the design up to construction drawings, but we were not involved in the construction phase. The client was going to go for a Tier Certification of Constructed Facility, but, again, we are not involved in this process. It is a very nice, large-scale project.
The project is composed of two parts. The first part is a power plant building that can generate 25 MW of redundant continuous power, and the second part is the data center building.
Is the bulk of your work in Beirut?
By number of projects, yes. By the number of racks, no. One project in Saudi Arabia or the Gulf countries could be like 50 projects in Beirut. When you design a project in the Gulf Countries, it’s for 1,200 racks, and when you design one in Beirut, it’s for about 25 racks.
What drives the larger projects to Saudi Arabia?
Money, and the population. The population of Lebanon is about 4 million. Saudi Arabia has about 25-30 million people. It is also fiscally larger. I’m not sure of the numbers, but Saudi Arabia has something like US$60 billion in terms of annual budget surplus. Lebanon has an annual budget deficit of US$4.5 billion.
Tourism in Lebanon, one of the main pillars of the economy, is tending to zero, due to political issues in Syria. Consequently, we have economic issues affecting growth.
Is it difficult for a Lebanese company to get projects in Saudi Arabia?
Yes, it can be difficult. You have to know people there. You have to be qualified, and you have to be known to be qualified. You can be qualified, but if you can't prove it you won't get work, so you have to be known to be qualified in order to win jobs.
We have a branch office in Dubai for Dubai and UAE projects and also for some projects in Saudi Arabia as well. We‘ve done many projects in Jeddah and Riyadh, so people know us there.
Also, we are partly owned by an American company called Ted Jacob Engineering Group, which owns 25% of PDP. This ownership facilitates the way we can get introduced to projects in the Gulf because our partner is well known in the region.
Tell me about some of your current projects.
We are working on three new data centers, each above 200 kW. Two of them belong to Fransabank (main site and business continuity site) while the third belongs to CSC bank. All of them will be going for Tier III Certification. We’re also working on several other smaller scale data center projects.
One of the major data centers currently in construction in Lebanon is for Audi Bank, which is designed and executed by Schneider Electric. The second project is for another banking institution called Credit Libanais. It is 95% complete, as of April 2014. We are the designers. We also worked as integrators and BIM (building information modeling) engineers and did the testing and commissioning. This is a 120-kW data center.
The Credit Libanais facility has the following features:
What does Credit Libanais plan to do with the facility?
Credit Libanais is one of the top banks in Beirut. They have about 70 branches. They are constructing a new 32-story headquarters building. This will be the main data center of the bank. The data center is in a basement floor. It is about 450 m2 with 120 kW of net IT load. The data center will handle all the functions of the bank. An additional 350-m2 space hosts a company called Credit Card Management (CCM). CCM has also a dedicated server room within the Credit Libanais data center.
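As a quick back-of-the-envelope check (illustrative only, using the figures quoted above), the net IT power density of the Credit Libanais data hall works out to roughly 267 W per square meter:

```python
# Hypothetical sanity check of the figures quoted above:
# 120 kW of net IT load spread over a 450-m2 data center floor.
it_load_w = 120_000   # net IT load in watts
area_m2 = 450         # data center floor area in square meters

density_w_per_m2 = it_load_w / area_m2
print(f"{density_w_per_m2:.0f} W/m2")  # 267 W/m2
```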
At first Credit Libanais did not want to engage in Tier Certification because they do not provide colocation services, but they changed their minds. In March, I queried the Uptime Institute for a proposal, which I brought back to the bank. The proposal includes full support including Tier III Certification of the Design, Constructed Facility, and Operations.
Can you describe the cooling system of the Credit Libanais data center in more detail?
The critical environment is water cooled, and at relatively high temperatures. Water-cooled systems for buildings normally supply water at around 6°C; this data center uses a 10°C chilled water temperature, which greatly increases efficiency and reduces cost.
Because the chilled water temperature is not too low, less moisture condenses out of the air than with colder chilled water, so there is less need for large-scale humidification, which substantially saves energy.
The chillers also have variable speed compressors and variable speed chilled water pumps, and the chiller fans and CRAC fans are all variable speed EC fans, so the speed can be continually adjusted to exactly the capacity required rather than running at either 100% or 0%.
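A rough sketch of why variable speed fans pay off (this is the general fan affinity law, not a figure from the project): airflow scales roughly linearly with fan speed, while fan power scales with roughly the cube of fan speed, so trimming speed to match load cuts power disproportionately.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity laws (approximate): airflow scales linearly with
    fan speed, while fan power scales with the cube of fan speed."""
    return speed_fraction ** 3

# An EC fan trimmed to 70% speed moves ~70% of the air
# but draws only about a third of full power.
print(f"{fan_power_fraction(0.7):.2f}")  # 0.34
```

This cube-law behavior is why continuously matching fan speed to the actual cooling load, rather than cycling between 0% and 100%, saves so much energy.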
What’s the value of being an ATD?
I believe that you would need to have substantial experience before you go for the accreditation because if you do not have experience you would not benefit. But, it is very useful for people who have experience in mechanical and electrical engineering and even more useful for those who have more experience in data centers.
You could get experience from working on clients' projects, but you need the accreditation to know how things should be done in a data center and what must not be overlooked. To do that you need a methodology, and the ATD gives you that.
It is a methodology that keeps you from forgetting or overlooking things that could lead to failure.
When you go through the training they teach you the methodology to check each and every system so that they are Concurrently Maintainable or Fault Tolerant, as required by the client.
This article was written by Kevin Heslin, senior editor at the Uptime Institute.