Lessons Learned from the Tier Certification of an Operational Data Center

Telecom company Entel achieves the first two Tier III Certification of Constructed Facility awards in Chile

Empresa Nacional de Telecomunicaciones S.A. (Entel) is the largest telecommunications company in Chile. The company reported US$3.03 billion in annual revenue in December 2012 and an EBITDA margin of around 40%. In Chile, Entel holds a leading position in traditional data services and continues to grow through the integration of IT services, drawing on significant experience in the field.

The company also offers mobile and wireline services (including data center and IT services, Internet, local telephony, long distance and related services) and call center operations in Peru. Its standardized service offering to all companies is based on cloud computing.

To deliver these services, Entel has developed the largest mobile and fixed-wire network in Chile, with 70 gigabits per second (Gbps) of capacity. The company serves 9.3 million mobile customers and offers an MPLS-IP network with wide national coverage and quality of service (QoS), carrying 8.8 Gbps of peak traffic in 2011. The company also provides international network connectivity (internet peak traffic of 17.7 Gbps in 2011).

Entel is also a large data center infrastructure provider, with more than 13,800 square meters (m2) in seven interconnected data centers (see Figure 1):

  • Ciudad de Los Valles
  • Longovilo (three facilities)
  • Amunátegui
  • Ñuñoa
  • Pedro de Valdivia

These facilities host more than 7,000 servers and 2,000 managed databases. Entel currently plans to add 4,000 m2 in two steps at its Ciudad de los Valles facilities. As part of its IT services, Entel also provides an operational continuity service to more than 80,000 PCs and POS terminals countrywide. Its service platform and processes are modeled on the ITIL framework, and Entel has SAS-70/II and COPC certifications.

Entel also provides processing services, virtualization, on-demand computing, SAP outsourcing and other services. Entel has seen rapid growth in demand for IaaS platforms, which it meets with robust offerings across multiple platforms (iSeries, pSeries, Sun Solaris and x86) and different tiers of storage. Finally, the company has a 38% market share of managed services to large financial service institutions.

Entel has defined its corporate strategy to enable it to continue to lead the traditional telco market as well as increase coverage and capacity by deploying a fiber optic access network (GPON). The company also wants to increase its market share in O/S, data center, and virtual and on-demand services and expand to other countries in the region, leveraging its experience in Chile.

Entel chose to invest in its own data centers for three reasons, two of which related directly to competitive advantage:

  • Investing in its own infrastructure would help Entel develop its commercial offering to corporate clients in Chile. Entel believed that having control over its basic infrastructure would enable it to guarantee the operational continuity of customers.
  • Entel also wanted to be able to add white space whenever it felt more capacity was needed. It believes this flexibility helped it win additional large deals.
  • The country does not have enough personnel with the experience to manage and coordinate facilities like Entel’s. The company finds it a challenge to coordinate and supervise subcontractors responsible for various systems.

Figure 1. General Scope of Entel’s Operations

As part of this process, Entel determined that it would work with the Uptime Institute to earn Tier Certification for Constructed Facility at its Ciudad de Los Valles Data Center. As a result, Entel learned several important lessons about its operations and also decided to obtain the Uptime Institute’s Tier Certification of Operational Sustainability.

Project Master Plan

The initial phase of the project included the construction of 2,000 m2 of floor space for servers. A second phase was recently inaugurated that added another 2,000 m2 of floor space for additional equipment. The complete project is intended to reach a total of 8,000 m2 distributed across four buildings, each with two floors of 1,000 m2, and a total of 26,000 m2 of building space. (The general layout of the data centers is shown in Figure 2.)


Figure 2. General layout of the Entel facilities

The data centers had to meet four key requirements:

1. It had to house the most demanding technical and IT equipment: 1,500 kilograms (kg)/m2 in IT spaces and more than 3,000 kg/m2 in battery spaces, with special support for heavier equipment such as transformers, diesel generators, and chillers.

2. It had to achieve Uptime Institute Tier III requirements, meaning that the design includes two independent and separate power and cooling distribution paths, each able to support 100% of the capacity to allow Concurrent Maintainability, so that all critical elements can be replaced or maintained without impact on service.

3. The building had to have sufficient capacities to meet high electrical and cooling demand (see Figure 3).

4. Service areas such as UPS and meet-me rooms had to be outside of the server rooms.

The structural design of the facility also had to address the threat of earthquakes.
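The load ratings in requirement 1 reduce to simple arithmetic: divide equipment mass by its footprint and compare against the rated floor load. A minimal sketch of that check, where the rack and battery masses are illustrative assumptions rather than Entel's actual inventory:

```python
# Hedged sketch: verifying equipment against floor-load ratings such as the
# 1,500 kg/m2 (IT spaces) and 3,000 kg/m2 (battery spaces) figures cited above.
# The equipment masses and footprints below are illustrative assumptions.

def floor_load_kg_per_m2(mass_kg: float, footprint_m2: float) -> float:
    """Distributed load of one piece of equipment over its footprint."""
    return mass_kg / footprint_m2

# A heavily loaded 42U rack: ~900 kg on a 0.6 m x 1.2 m footprint
rack = floor_load_kg_per_m2(900, 0.6 * 1.2)
print(f"Rack: {rack:.0f} kg/m2, within 1,500 kg/m2 IT limit: {rack <= 1500}")

# A battery string: ~2,800 kg concentrated on 1.0 m2 needs the heavier rating
battery = floor_load_kg_per_m2(2800, 1.0)
print(f"Battery: {battery:.0f} kg/m2, needs battery-space rating: {battery > 1500}")
```

The same division explains why transformers, diesel generators, and chillers needed special structural support: their mass concentrates on small footprints.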

Entel decided to certify its data centers to meet its commercial commitments. Due to contractual agreements, Entel had to certify the infrastructure design and its facilities with an internationally recognized institution. In addition, Entel wanted to validate its design and construction for audit processes and as a way to differentiate itself from competitors.


Figure 3. Comparison of two of Entel’s facilities.

The first step was to choose a standard to follow. In the end, Entel decided to certify Ciudad de Los Valles according to the Uptime Institute’s Tier Standards because the Institute is the most recognized body in the Chilean market and customers were already requesting it. As a result, Entel’s facility was the first in Chile to earn Tier Certification of Constructed Facility.

Preparation

Ciudad de Los Valles Data Center is a multi-tenant data center that Entel uses to provide housing and hosting services, so its customers also had to be directly involved in project planning, and their authorization and approvals were important. At the time the Tier Certification of Design Documents began, the facility was in production, with important parts of its server rooms at 100% capacity. Modifications to the infrastructure had to be made without service disruptions or incidents.

Tier III Certification of Constructed Facility testing had to be done at 100% of the electrical and air conditioning approved design load, so coordination was extremely challenging.

As part of the design review, the Uptime Institute consultants recommended some important improvements:

  • Separate common electrical feeds from the electrical generators
  • Additional breakers to isolate bus bars serving critical loads
  • Separate water tanks (chilled water tanks from building tanks)
  • Redundant electrical feeders for air handling units

Additionally, it was essential that all the servers and communication equipment have redundant electrical feeders to avoid incidents during performance tests.

Risks and Mitigation

Recognizing that Tier Certification testing would be challenging, Entel developed an approval process to help it meet what it considered four main challenges:

  • Meeting the Tier Certification timeline with a range of stakeholders to inform and coordinate
  • Preventing incidents due to activities necessary to test the infrastructure
  • Avoiding delays caused by coordination problems and obtaining approvals and permissions to modify the infrastructure
  • The possible existence of unidentified/undocumented single-corded servers and telecom equipment

The approval process included a high-level committee of commercial and technical executives to ensure communications. In addition, every technical activity had to be approved in advance, including a detailed work plan, timing, responsibilities, checkpoints, mitigation measures and backtracking procedures. For the most critical activities, the plan was also presented to key customers.

The Certification process proved to be an excellent opportunity to improve procedures and test them. The process also made it possible to do on-the-job training and check the data center facilities at full load.

In addition, Entel completed the Tier Certification process of both design and facility without experiencing any incidents. However, changes made to meet Uptime Institute Tier III requirements led Entel to exceed its project budget. In addition, unidentified/undocumented single-corded equipment delayed work and the eventual completion of the project. In order to proceed, Entel had to visually check each rack in the eight data halls to identify the single-corded loads. Once the loads had been identified, Entel installed static transfer switches (STS) to protect the affected racks, which required coordinating the disconnection of the affected equipment and its reconnection to the associated STS.
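The rack survey Entel performed amounts to scanning an equipment inventory for any load with fewer than two independent power feeds. A minimal sketch of that kind of check, where the data model, device names, and racks are hypothetical illustrations rather than Entel's actual records:

```python
# Hedged sketch of an inventory check for single-corded loads: flag any
# device with fewer than two independent power feeds so it can be moved
# behind a static transfer switch (STS) before infrastructure testing.
# All device names, racks, and feed labels are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    rack: str
    feeds: list = field(default_factory=list)  # e.g. ["A"] or ["A", "B"]

def single_corded(inventory):
    """Return devices that would drop during single-path maintenance."""
    return [d for d in inventory if len(set(d.feeds)) < 2]

inventory = [
    Device("db-srv-01", "hall1-rack07", ["A", "B"]),
    Device("legacy-sw-3", "hall2-rack12", ["A"]),   # single-corded
    Device("tape-lib-1", "hall5-rack02", ["B"]),    # single-corded
]

for d in single_corded(inventory):
    print(f"{d.rack}: {d.name} -> connect via STS before testing")
```

Keeping such an inventory current is exactly the "good server room equipment inventories" lesson Entel draws below.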

Lessons Learned

Entel learned a number of important lessons as a result of Tier Certification of two data centers.

The most important conclusion is that it is absolutely possible to Tier Certify an in-service data center with good planning and validation, approval and execution procedures. At the Ciudad de Los Valles 1 Data Center, Entel learned the importance of having a good understanding of the Tier Certification process scope, timing and testing, and the necessity of having good server room equipment inventories.

As a result of the Tier Certification of Ciudad de Los Valles 2 Data Center, Entel can attest that it is easier to certify a new data center when construction, commissioning and certification all take place in order before the facility goes live.

Next Steps

Entel’s next goal is to obtain the Uptime Institute’s Operational Sustainability Certification. For a service provider like Entel, robust infrastructure is not enough:

  • A good operation regime is as important as the design, construction and testing
  • It’s critical to have detailed maintenance plans and testing of the infrastructure equipment
  • Entel wants its own people to control operations
  • Entel needs well trained people

Entel is preparing for this Certification by adapting its staffing plan, maintenance procedures, training and other programs to meet Uptime Institute requirements.

In addition to its other firsts, having earned Tier III Certification of Constructed Facility twice over, Entel seeks to be the first data center to achieve Tier III Gold Certification.


Mr. Juan Miguel Durán, electrical engineer, joined Entel Chile in September 2011. He has more than 20 years of experience, with strong knowledge of data centers, switching and telecommunication services.

He is responsible for facilities operation and maintenance for the Ciudad de los Valles Tier III Certified Data Centers. Mr. Durán is also responsible for Entel’s Longovilo Data Centers.

He participated on the certification process team and has strong experience in data center operations, including the negotiation and implementation of long-term housing contracts according to international standards. He also has deep experience in planning and designing mission critical data center infrastructure projects, including tender formulation, design, project evaluation and management for different companies.

The data center in 2020 and beyond

In this keynote video from the 2014 Uptime Institute Symposium, Andy Lawrence, Research Director at 451 Research, provides an overview of what a data center might look like in 2020 and beyond. The presentation covers how various new technologies might be combined with new data center services, along with extrapolated improvements in processing, storage and energy management.

Putting DCIM to Work for You

At Symposium 2013, Erik Ko of Twitter, Hewlett-Packard’s Ken Jackson, and James Pryor of Regions Bank discussed the experience of selecting and implementing DCIM solutions from the end user perspective. The panel was moderated by Uptime Institute’s Kevin Heslin.

Earlier that week, Symposium 2013 attendees heard presentations that focused on the potential for DCIM and some of the barriers to further adoption from 451 Research’s Andy Lawrence and Uptime Institute’s Matt Stansberry. These presentations conveyed a great deal of generalized information.

In this session, the three panelists shared information about the decision to implement DCIM, organizational goals, product selection, and implementation. Twitter, Hewlett-Packard, and Regions Bank are very diverse organizations, in different industries, serving different customers, and with dissimilar IT needs. Yet each of the organizations had to make relative judgments about cost, benefits, and implementation times to find which DCIM solution fit it best.

It turns out that it is hard to generalize too much about DCIM procurement and implementation, as each organization will have different goals, different pressure points, different needs, and different resources to bring to bear.

This almost 45-minute presentation is worth watching in its entirety as the panelists examine the different paths they took to reach functioning DCIM implementations and what effort is required on an ongoing basis.

 

ATD Perspectives: From Mainframes to Modular Designs

The experiences of Christopher Johnston, Elie Siam, and Dennis Julian are very different. Yet, their experiences as Accredited Tier Designers (ATDs) all highlight the pace of change in the industry. Somehow, though, despite significant changes in how IT services are delivered and business practices affect design, the challenge of meeting reliability goals remains very much the same, complicated only by greater energy concerns and increased density. All three men agree that Uptime Institute’s Tiers and ATD programs have helped raise the level of practice worldwide and the quality of facilities. In this article, the last of a series of three, Integrated Design Group’s Dennis Julian examines past practices in data center design and future trends, including modular designs.

Dennis Julian

Dennis Julian is principal – A/E Design Services at Integrated Design Group Inc. He is a Professional Engineer with more than 25 years of experience in multi-discipline project management and engineering, including management of architectural, mechanical, HVAC, plumbing, fire protection, and electrical engineering departments for data center, office, medical, financial, and high technology facilities. Mr. Julian has been involved in the design and engineering of over 2 million square feet (ft2) of mission critical facilities, including Uptime Institute Tier IV Certified data centers in the Middle East. He has designed facilities for Digital Realty Trust, RBS/Citizens, State Street Bank, Orange LLC, Switch & Data (Equinix), Fidelity, Verizon, American Express, Massachusetts Information Technology Center (the state’s main data center), Novartis, One Beacon Insurance, Hartford Hospital, Saint Vincent Hospital, University Hospital, and Southern New England Telephone.

Dennis, please tell me how you got your start in the data center industry?

I started at Worcester Polytechnic Institute in Worcester and finished up nights at Northeastern University here in Boston. I got into the data center industry back in the 1980s doing mainframes for companies like John Hancock and American Express. I was working at Carlson Associates. Actually at the beginning, I was working for Aldrich, which was a (contractor) part of Carlson. Later they changed all the names to Carlson, so it became Carlson Associates and Carlson Contracting.

We did all kinds of work, including computer rooms. We did a lot of work with Digital Equipment doing VAX and minicomputer projects. We gradually moved into the PC and server worlds as computer rooms progressed from being water-cooled, 400-hertz (Hz) power systems to what they are today, which is basically back to water cooled but at 120 or 400 volts (V). The industry kind of went the whole way around, from water-cooled to air-cooled back to water-cooled equipment. The mainframes we do now are partly water cooled.

After Carlson went through a few ownership changes, I ended up working for a large firm—Shooshanian Engineering Associates Inc. in Boston—where I did a larger variety of work but continued doing data center work. From there, I went to van Zelm HeyWood & Shadford in Connecticut for a few years, and then to Carter and Burgess during the telecom boom. When the telecom boom crashed and Carter and Burgess closed their local office, I went to work for a local firm, Cubellis Associates Inc. They did a lot of retail work, but I was building their MEP practice. And, when I left them about 8-1/2 years ago, I joined Integrated Design Group to get back into an integrated AE group doing mission critical work.

When I joined Integrated Design Group, it was already a couple of years old. As with any new company, it’s hard to get major clients. But they had luck. They carried on some projects from Carlson. They were able to get nice work with some major firms, and we were able to just continue that work. Then the market for mission critical took off, and we just started doing more and more mission critical. I’d say 90-95% of our work is for mission critical clients.

What was it like working in the mainframe environment?

Back in those days, it was very strict. The cooling systems were called precision cooling because many of the projects were based on the IBM spec. It was really the only spec out in those days, so it was ± 2° on cooling. The mainframes had internal chillers, so we brought chilled water to the mainframes in addition to the CRACs that cooled the room itself.

Also, the mainframes of the time were 400 Hz. They had their own MG (motor-generator) sets to convert the power from 60 to 400 Hz, which caused its own issues with regard to distribution. For instance, we had a lot of voltage drop due to the higher frequency of the power. So that made for different design challenges than those we see now, but watts per square foot were fairly low, even though the computers tended to be fairly large.
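The voltage-drop problem Mr. Julian describes follows from inductive reactance scaling linearly with frequency (X = 2πfL), so the same feeder drops roughly 6.7 times more reactive voltage at 400 Hz than at 60 Hz. A rough illustration, where the feeder inductance and load current are assumed values, not figures from any specific project:

```python
# Why 400 Hz distribution suffered higher voltage drop: a feeder's
# inductive reactance X = 2*pi*f*L grows linearly with frequency, so the
# same cable run drops ~6.7x more reactive voltage at 400 Hz than at 60 Hz.
# The feeder inductance and load current below are illustrative assumptions.

import math

def inductive_reactance(freq_hz: float, inductance_h: float) -> float:
    """Reactance in ohms of an inductance at a given frequency."""
    return 2 * math.pi * freq_hz * inductance_h

L_feeder = 50e-6   # henries, a modest cable run (assumed)
I_load = 100.0     # amps (assumed)

for f in (60, 400):
    x = inductive_reactance(f, L_feeder)
    print(f"{f} Hz: X = {x:.3f} ohm, reactive drop ~ {x * I_load:.1f} V")
```

This is why 400 Hz systems kept the MG sets close to the load: every extra meter of feeder cost disproportionately more voltage.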

There was a lot of printing in those days as well, and the printers tended to be part of the computer room, which was a fire hazard and caused a dust problem. So there were a number of different issues we had to deal with in those days.

What are some of the other differences with today’s hardware?

Today’s systems are obviously higher capacity. Even so, the power side is much easier than the cooling side. I can go up a few wire sizes and provide power to a rack much more easily than I can provide more cooling to a rack.

Cooling with air is difficult because of the volumes of air involved, so we find ourselves going back to localized cooling using refrigerant gas, chilled water, or warm water. The volumetric heat capacity of water allows it to reject far more heat than the same volume of air, so it makes a lot more sense.

For energy efficiency, moving water is easier than moving large volumes of air. And with localized cooling, in-rack, or rear-door heat exchangers, we are able to use warm water so we either use a chiller system at 60°F instead of 44°F water or we run it directly off a cooling tower using 80°F or 85°F water. The efficiencies are much higher, but now you have made the HVAC systems more complicated, a bigger part of the equation.
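The water-versus-air advantage can be made concrete with textbook property values: per unit volume and per degree of temperature rise, water absorbs on the order of 3,500 times more heat than air. A quick back-of-the-envelope check, using standard approximate properties at room conditions:

```python
# Back-of-the-envelope comparison of heat carried per unit volume by water
# vs. air, using standard approximate properties at room conditions.
# This illustrates why localized water cooling beats moving large air volumes.

water = {"density": 998.0, "cp": 4186.0}   # kg/m3, J/(kg*K)
air   = {"density": 1.2,   "cp": 1005.0}   # kg/m3, J/(kg*K)

def heat_per_m3_per_kelvin(fluid):
    """Joules absorbed by 1 m3 of fluid per 1 K temperature rise."""
    return fluid["density"] * fluid["cp"]

ratio = heat_per_m3_per_kelvin(water) / heat_per_m3_per_kelvin(air)
print(f"Water carries ~{ratio:.0f}x more heat per m3 per kelvin than air")
```

The same arithmetic underlies the warm-water designs mentioned above: because each liter of water carries so much heat, even 80°F to 85°F tower water has plenty of capacity.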

Can you tell me about some of your current projects?

We’re doing a project right now for Fidelity’s West data center, which is located in Omaha, NE. We’re just finishing construction and are ready to do testing.

It’s a combination of a large support building, a large mainframe wing, and another section for their Centercore project. The Centercore product is a modular data center, which we designed and developed with them. This project starts at 1 MW of data center space but has the capacity to grow to 6 MW.

The project is very interesting because it is a mix of stick built and modular. We’re using the local aquifer to do some cooling for us. We’re also using some free cooling and chilled water beams, so it’s very energy efficient. It’s nestled into the side of a hill, so it is low visibility for security purposes and blends in with the landscape. It’s about 100,000 ft2 overall.

 


Fidelity’s Centercore Project

We’re designing mainly for about 6-7 kW per cabinet in the Centercore. In the mainframe wing it’s not as simple to calculate, but the area is sized for 1.2 MW.

Industry people tend to associate firms like Integrated Design Group with stick-built projects. How did this project come about?

Fidelity asked Integrated Design Group to develop an off-site constructed data center to address the limitations they saw in other offerings on the market. It needed to be flexible, non-proprietary and function like any other stick-built data center. The proof of concept was a 500-kW Concurrently Maintainable and Fault Tolerant data center in North Carolina with 100% air-side economizer. It is a complete stand-alone data center with connections back to the main facility only for water and communications.

The next generation was the Fidelity West data center. We located the power units on the lower level and the computer room on the upper level so it would match the new support building being built on the site. It is Concurrently Maintainable and Fault Tolerant and uses a pumped refrigerant cooling system that provides 100% refrigerant-side economization.

Fidelity wanted an off-site constructed system that was modular and had the ability to be relocated. Under this definition it could be treated as a piece of equipment and would not need to be depreciated as a piece of real property. This could lead to substantial tax savings.  The other goal was to prevent overbuilding facilities that would not be occupied for several years, if ever.

We think of Centercore as more of a delivery system than just a product. It can be customized to suit the customer’s requirements. ID has conceptualized units from 100 kW to multi-megawatt assemblies in any reliability configuration desired. Given the difficulty of predicting where the IT systems will go in terms of size and power requirements, a “Load on Demand” solution was desired.

When did you get your ATD? Do you remember your motivation?

December 2009. I was ATD number 147. At the time we were doing work in the Middle East, where there was a lot of emphasis on having Tier Certification. It was an opportune time to get accredited since ATD training had just started. Since then we have designed facilities in the Middle East, one of which (ITCC in Riyadh, Saudi Arabia) has already received Tier IV Certification of Design Documents. We anticipate testing it early this summer for Constructed Facility. Several other projects are being readied for Tier Certification of Design Documents.

Are you still working internationally?

We have done several Tier III Certified Design Documents and Constructed Facility data centers for redIT in Mexico City. Well, it’s one facility with several data centers. And we’ve got several more under construction there. We’re still doing some work in Riyadh with Adel Rizk from Edarat, which is our partner over there. They do the project management and IT, so they become the local representation. We do design development and documents. Then we attend the test and do a little construction administration as well.

Is the ATD an important credential to you?

Yes. It helped bring us credibility when we teamed up with Edarat. We can say, “The designer is not only American-based but also an ATD.” When I see some of the designs that are out there I can see why customers want to see that accreditation, as a third-party verification that the designer is qualified.

What changes do you see in the near future?

Energy efficiency is going to stay at the forefront. Electrically, we are probably going to see more high-voltage distribution.

I think more people are going to colos or big data centers because they don’t want to run their own data centers, because it is just a different expertise. There is a shortage of competent building engineers and operators so it is easier to go somewhere they can leverage that among many clients.

You will see more medium voltage closer to the data center. You will see more high voltage in the U.S. I think you will see more 480/277-V distribution in the data center. Some of the bigger colo players and internet players are using 277 V. It isn’t mainstream, but I think it will be.

And, I think we are going to go to more compressor-less cooling, which is going to require wider temperature and humidity ranges inside the data centers. As the servers and their warranties allow that to happen, eventually we’ll get the operators to feel comfortable with those wider ranges. Then, we’ll be able to do a lot more cooling without compressors, either with rear-door heat exchangers, in-rack cooling, or directly on a chip in a water-cooled cabinet.

Kevin Heslin

This article was written by Kevin Heslin, senior editor at the Uptime Institute. He served as an editor at New York Construction News, Sutton Publishing, the IESNA, and BNP Media, where he founded Mission Critical, the leading publication dedicated to data center and backup power professionals. In addition, Heslin served as communications manager at the Lighting Research Center of Rensselaer Polytechnic Institute. He earned a B.A. in Journalism from Fordham University in 1981 and a B.S. in Technical Communications from Rensselaer Polytechnic Institute in 2000.

 

Uptime Institute Network Activities Around the Globe

No longer “the best kept secret,” the Uptime Institute Network has gone international

These interviews with Uptime Institute Network directors Rob Costa, Sylvie Le Roy, and Mozart Mello highlight the qualifications of the Uptime Institute staff working with Network members across the globe and also portray the opportunities for growth and need for the Network in different business environments. National borders do not limit business operations to a single nation or even a handful of nations. Enterprises follow business opportunities wherever they find them and feel that they can make a profit. Hence, it follows that IT operations also transcend national borders and that the data centers required to support IT can be sited in any country that offers strong Internet connections, reliable electrical infrastructure, sufficient water and other advantages deemed essential to an enterprise’s 24 x 7 operation.

Nonetheless, it has long been true that the Uptime Institute Network has been rooted in North America, probably because most of the original members primarily maintained data center operations in the U.S.

Soon, this will no longer be the case. As Thomas L. Friedman, author of The World is Flat, made clear in a 2010 Symposium keynote, global connectivity is erasing the importance of national borders and time zones so that no enterprise and no individual is unaffected by worldwide competition.

The Uptime Institute Network, which supports many enterprise activities that span the globe, must itself adapt to the changing environment to meet its members’ needs. To that end, Uptime Institute has expanded the Network with groups in Europe, Middle East and Africa (EMEA); Latin America (LATAM); and Asia Pacific (APAC).

Following, then, are interviews with Rob Costa, Director, Network-North America; Sylvie Le Roy, Director, Network-EMEA; and Mozart Mello, Managing Director, Latin America, representing the Latin American Network.

Costa became director of the North American Network in May 2013. Le Roy became director of the EMEA Network in February 2013. The EMEA and Latin American Networks were founded in 2011 and 2012, respectively. The APAC Network also convened for the first time in 2012.

In these brief interviews, Costa, Le Roy, and Mello will talk about their experiences in Network leadership, the challenges facing Network members, and the plans to address the globalized mission of IT.

Rob Costa joined the Uptime Institute as Director, Network-North America in May 2013. He is responsible for all activities related to its management, including overseeing content development for the conferences, membership retention and growth, and program development. Prior to that he was principal of Data Center Management Consulting, which provided on-site consulting services.

Mr. Costa developed an extensive body of experience in senior IT management at the Boeing Company, where he worked for almost 38 years. His focus at Boeing was on the improvement of data center availability, reliability, consolidation and cost efficiency. Mr. Costa and his team achieved over 7 years of continuous data center uptime through solid processes and communication between IT and Facilities.

Sylvie Le Roy is Director, Network-EMEA of the Uptime Institute. She joined the Uptime Institute recently after spending 12 years at Interxion, a well-known colocation provider, where she was customer service director.

Sylvie Le Roy’s role with the Uptime Institute is to promote and develop the Network in EMEA, maximizing opportunities for network members as well as educating them on various topics with a focus on data center operations, innovation and sustainability. She seeks to hold conferences in a variety of countries to include the major stakeholders of that country and show members how each country works in respect to the data center industry.

Mozart Mello is the Uptime Institute Managing Director, Latin America. He joined Uptime Institute with experience in both IT and Facilities, and in many aspects of design, build, operations and migration processes.

Mr. Mello was responsible for the startup of Uptime Institute Brasil in May of 2011. In 2012, he started the Network Brasil, an independent and self-reliant knowledge community exclusively for data center owners and operators.

Just prior to joining Uptime Institute Brasil, Mr. Mello was the Senior Project Manager for CPMBraxis in Brasil. He was responsible for managing the design and construction of Data Center Tamboré of Vivo, a 50,000-ft2 data center that received Uptime Institute Tier III Certification of Constructed Facility. He also managed the team responsible for designing the new Data Center Campinas of Santander Bank.

Mr. Mello has a degree in Electrical Engineering from Maua Engineering University, a specialization in Systems from Mackenzie University, and an MBA in Business Strategy from Fundação Getúlio Vargas.

Welcome Aboard: New Captain at the Helm
Rob, you just joined the Uptime Institute as Director, Network-North America. I know you have been attending Network meetings, assessing programs and familiarizing yourself with Network operations. Still, as the former Network Principal from the Boeing Company, you have also seen a lot of familiar faces. Is the proper greeting welcome aboard or welcome back? Perhaps I should just ask you to introduce yourself to the entire operations community that benefits from the Network.

Thank you very much for the opportunity. I retired from Boeing in August 2011 after 37 years. I spent the last 20 managing Boeing Defense’s enterprise data centers.

In the beginning of those 20 years, I managed the primary two of those data centers, which were located in the Puget Sound area (Washington State). And not too long after that we consolidated many of the smaller server rooms into the two main facilities located in Bellevue and Kent, WA.

And then the years passed, and we eventually merged with McDonnell Douglas, and were part of many other smaller M&As. Each of those companies came with their own data centers, so we had a major program to bring those data centers into the Boeing family under a single point of contact. I wound up managing all the Boeing data centers throughout the U.S. and two small server rooms, one in Amsterdam and one in Tokyo.

In my last few years with Boeing, we undertook a program called Data Center Modernization, which set up a strategy to merge all the data centers into three new data centers across the U.S.

We completed the first phase of that program by migrating into a data center in the Phoenix area, and the second site is currently underway. That in a nutshell is my background with Boeing. On May 1 of this year I joined the Network and am excited to be here.

Have you had any previous experience with the Uptime Institute Network?

In 1996 Boeing joined the Uptime Institute Network. At that time it was the Uninterruptible Uptime Users Group (UUUG) with Ken Brill. And Boeing has been a member since 1996. I was a principal representing Boeing to the Network for probably about my last 10 years at Boeing. We hosted several meetings at our Boeing sites and found membership very valuable in managing enterprise data centers.

I think it is important to note that you had retired from Boeing in 2009. What made the director position so interesting that you came back to the Network?

I guess the main reason I came back to work was that I had attended so many conferences and watched all the previous directors at work that I always thought it would be interesting to be able to interact with all the members and be able to provide some value to data centers across the U.S. So, when that opportunity came to me, it was very attractive. It was something I just wanted to do.

What do you see as the Network’s strength? Can you say how you used the Network at Boeing?

We used the Network a lot. One of the biggest values we found was the ability to use email queries when we were trying to improve our own sites and processes. We'd hit decision points along the way and always wonder what other members were doing with regard to a specific issue, and the email queries let us ask the members directly.

We would form a series of questions, which Network staff would send to the entire membership of more than 70 enterprises. So we would send out a questionnaire we developed and the members responded. Not every member responded to every question, but we’d get a good percentage return on our questions. The responses from other members helped us formulate our decisions on where we wanted to go, and it was all information from members who were also running enterprise data centers.

I bet we probably did that 15-20 times over my career at Boeing. Beyond the queries, we also got a lot of value from attending the conferences and building relationships. When I was with Boeing, I'd develop relationships with other Network members and would look forward to seeing them at Network conferences a couple of times a year.

Because of those relationships, well, there is no resistance to picking up the telephone or sending an email about a specific issue or asking how another member might attack it. I’m not talking about the email queries; I’m talking about just picking up the phone.

In the Network, you develop those relationships, and some of them become very strong. Then you have very good discussion on the phone regarding specific issues. It’s one of the great values of the Network and you can utilize it to the fullest extent on an ongoing, even constant, basis.

What kinds of issues would you ask about in an email query?

The most recent one that comes to mind is when Boeing changed its strategy from owning and operating its own data centers to moving into leased facilities, which generated many questions.

What we wanted to know was whether any members had gone through the same process and, if so, what their decision-making process was and how it turned out for them.

So we formed our questions around that topic and the Network staff sent it out. We got very good responses that guided us in that decision, and Boeing went forward with that change.

How has the Network changed?

Now that I am part of the Network again, though from a different perspective, many of the companies are the same. They were members when I attended conferences as a member, but the faces have changed. The new people bring new ideas with them, so the Network is always getting refreshed, even though many companies have been there for many years. In many ways, the challenges addressed by the Network are continual: planning, continuous improvement, mitigating risk. The members bring their own ideas, so everyone is always getting a new view of facility operations, maintenance and more.

Also IT involvement seems to be growing. I remember going to meetings and it would be all Facilities people with one or two IT people among them. Now it seems that the population of the IT side of the house is growing within the membership.

Can you share some of the Network’s plans for the future?

I think the main vision over the next three years is to grow the Network to infuse new ideas. We're at about 66 companies and would like to grow that to about 100 in three years. Among the main reasons to grow the Network is to bring new companies on board, and with them new facility managers with fresh ideas about how to operate major enterprise data centers. We're also thinking about mixing the organization up so that members are meeting new people and making new relationships.

Customer Service Provides a Great Perspective for First EMEA Network Director

Sylvie, please tell me about your career and how it prepared you for your current role.

I joined the Uptime Institute in February 2013, and it started quite well because we had our first EMEA Network conference in Frankfurt and that was my first opportunity to meet with all the members. I used to work for Interxion, which is one of the main colocation providers in Europe.

I spent 12 years there, starting on the help desk and working my way up to customer service manager for the group. So I know quite a bit about data centers now. I was dealing with crisis management for all 33 sites in Europe.

I am very much in tune with what data centers need now, what they might need in the future and what the issues are.

Did you have any exposure to the Network before you became Director, Network-EMEA?

Interxion has been a Network member since 1997 (Interxion was a founding member of the EMEA Network). I used to work closely with Phil Collerton, who was the VP Operations at Interxion (now Uptime Institute’s Managing Director, EMEA) and Lex Coors, CEO there. We used to be the trio during crisis management, so I was very well aware of the Uptime Network before I joined.

What made the position at the Uptime Institute Network interesting to you?

Seeing the industry from another angle: moving from just the colocation point of view to all the different data center facilities that exist, what each is about and all the different technologies they use. I think it's also interesting that the Uptime Institute Network is the original forum for the data center industry. For me, learning and exchanging information among data center experts is the best way to do it.

What are the most important values of the Network? What do you see as its strengths?

Well, the Network in EMEA is a little different than in the U.S. I think our strength is that EMEA is not quite as developed, so we have a little more leeway as to how we want to develop it.

In EMEA, the members are, on average, in higher positions, so they have more decision-making power. They can have more influence over how things are done in their own companies, and they can even have more impact on other members from either smaller or larger companies. It can work both ways.

Why has it developed that way?

I think the Uptime Institute Network in Europe, at least for me at Interxion, is perceived as prestigious. And in 2008, it was really the only body that was looking after all data center issues and best practices. Since then, there have been other organizations and various lobbying groups that have been formed in Europe. I think the Uptime Institute is still considered the original group and what we say matters.

Have you had time to establish goals for the Network or what direction you might go?

Yes, I think priority number one is to increase membership, which would allow us to develop more topics. There will always be M&E topics, but I'd like to see the Network go to another level.

Two, the Network could cover the IT part, the cloud and the diffusion of data centers and IT budgets.

And, three, in the future I would like to give more attention to the financial aspects of anything to do with data centers. Basically, I don't want it to be just M&E, but to expand to anything to do with the data center, because it is all interlinked. It's all what our members are thinking about, especially when they are at a responsible level. So it's not just how it works, but how can we save more money doing this, and if we do things in a certain way, what does it mean for the future. Data centers are a fast-moving environment, not necessarily the building itself, but the technologies that go with it. It is important to keep your finger on the pulse really.

Getting the Latin American Network Launched

Mozart, please tell me a little about your background.

I’m the Managing Director of Latin America and Director of its Network. I joined Uptime Institute with experience in both IT and Facilities and in many aspects of the design, build, operations and migration processes. I had been at the Fall Network meeting in San Antonio in 2011 before the first meeting in Brasil in 2012. It was very important to understand the methodology and process of the Network and to maintain the standard of Network meetings at Brasil/LATAM.

In May 2012, I helped start up Uptime Institute Brasil, which enabled us to introduce the full complement of Uptime Institute services to Brasil including Tier Certification, consulting and training. As director, I represent and lead the commercial interests of Uptime Institute Brasil in Latin America. In 2012, we started the LATAM Network Brasil.

As Managing Director in LATAM, what is your relationship to the Network?

I have the mission of spreading knowledge of the concept and benefits of the Network in LATAM and building a Network group that includes different industries before hiring a Network Director for LATAM. To that end, I'm responsible for planning for the LATAM Network with Uptime Institute COO Julian Kudritzki.

Please tell me about the early development of the LATAM Network.

I have 25 years of experience working with data centers, so I have a good relationship with the most important clients and users of data centers in Brasil. This experience helped us start the LATAM Network in March 2012. We held our first Network meeting in São Paulo, Brasil, in 2012, with our three founding members: Bradesco Bank, Itau-Unibanco Bank and Petrobras.

This meeting was very important even though we had only members from Brasil, because our clients could evaluate the real benefits of being a member of the Network.

The Brasilian members had already participated in the North American Network's fall meeting in Atlanta in 2012.

The experience of the Brasilian members at the fall Network meeting in the U.S., where they shared experiences with members from different cultures and discussed the growth of the data center industry in South America, encouraged us to invite members from other South American countries to join the LATAM Network. So, at our 2013 Network meeting in Brasil, we had six Network members, including our first member from Chile. Now we have members from banking, oil and gas, data center services and internet/mobile telephone services.

Growing from a Brasilian meeting to a Latin American meeting gave us the opportunity to better understand the data center industry in all of Latin America, along with its different cultures, difficulties and plans for the future.

What Network services have been offered to LATAM members?

We offer all the Uptime Institute Network services, including AIRs, papers, information exchange and member queries, webinars, and two conferences: one meeting in Brasil and either the North America or EMEA fall Network meeting.

Which activities seem to best meet the information needs of LATAM members?

One of the most important is the ability to consult other members on news or implementation questions for new projects in Brasil, and then to use those recommendations and experiences to avoid mistakes and achieve better results.

Another important benefit is the ability to participate in the North America or EMEA fall Network meeting, exchanging experiences with people from other countries and visiting different data centers on the Network data center tour.

How do you expect the Network to grow in coming years?

The growth in all areas will be faster as members make use of the benefits. I believe that in the long term we will see the growth of a global Network.

The Uptime Institute has many global clients, and Network globalization will help data centers reduce effort and cost while improving operations.


This article was written by Kevin Heslin. Mr. Heslin is senior editor at the Uptime Institute. He served as an editor at New York Construction News, Sutton Publishing, the IESNA, and BNP Media, where he founded Mission Critical, the leading publication dedicated to data center and backup power professionals. In addition, Heslin served as communications manager at the Lighting Research Center of Rensselaer Polytechnic Institute. He earned a B.A. in Journalism from Fordham University in 1981 and a B.S. in Technical Communications from Rensselaer Polytechnic Institute in 2000.

Designing Netflix’s Content Delivery Network

David Fullagar, Director of Content Delivery Architecture at Netflix, presents at Uptime Institute Symposium 2014. In his presentation, Fullagar discusses the hardware design and open source software components of Netflix Open Connect, the custom-designed content delivery network that enables Netflix to handle its massive video-streaming demands, and explains how these designs are well-suited to other high-volume media providers as well.