Ahead of the current™
My Topology Is Better Than Your Topology!
Posted on: October 29th, 2012 by Bill

Nothing is as blind, or discriminates as sharply, as personal opinion.  When you assemble a group of engineers or owners, each will speak passionately about how their system is superior to the others, either in design or nuanced detail.  And that passion is typically fueled by their history, habits and worries.  What most don’t recognize is that the lens each protagonist views their system through is colored by their experiences: a desire not to repeat past mistakes or unanticipated failures, or an attempt to rationalize and value-engineer their way out of a bad budget or business case.

What happens is topology and cost creep, where nuance, detail and pathway are piled atop each other in search of the system that is resistant to every conceivable circumstance.  This is admirable in intent, but typically a massive failure in cost and complexity.  What really matters is the system availability over time.  As we discussed with maintenance in the previous blog, simpler tends to be better.  The rub here is that ultimate system size has a large impact on the topology that you select.

Before we get to system architecture, let’s talk about the parts that we typically use, namely switchgear, generators and transformers.  When you look at the industry over the past ten years, a few habits begin to emerge:

  • Main switchboards are evolving to match substation transformer and generator sizing, in the 2,000 kW to 3,150 kW range.
  • Arc flash hazard mitigation is forcing some changes to systems via the reduction of energy present in any given breaker cubicle.
  • Data centers are getting physically smaller on a per-unit load basis when viewed by kW/compute cycle and kW/TB (and a great friend of mine reminded me that it’s going to be kW/PB, or petabyte, as the new normal where storage is king).
  • Servers are sexy, but storage pays the bills.  You have to put all that data somewhere.
  • Data center IT environments are usually isolated by load group and consist of several rooms or single-user facilities of up to 10,000 SF of raised floor each, with a few notable exceptions in the large-scale colocation, social media, content and search communities.  Smaller rooms allow for compartmentalization of systems, and this lends itself to the smaller “footprint” noted above.
  • The number of large, one-of-a-kind facilities is plummeting.  Organizations tend to find what works and then stick with it.

Let’s be honest.  In the past ten years, the wholesale and retail data center operators and the search/media/social networking folks have built the lion’s share of the facilities on the planet.  This has driven a couple of fascinating phenomena.  First is the de facto industry standardization on the 2N topology in wholesale facilities, and the N+1 standard in most larger data centers, save for transaction processing.  Those who disagree might have forgotten that the first system prescribed for Tier IV by The Uptime Institute was system-plus-system: 2N or 2N+1.  Second, with the amount of data center space built to this wholesale 2N standard, there are now millions of hours of operating time on this 2N architecture over tens of millions of square feet of space.  And the performance data, excluding human factors, appears to favor wholesale’s business model.

As a rule of thumb, for the larger critical power loads (those exceeding 5 MW, the old-school break between low-voltage and medium-voltage systems), a design might utilize multiple paralleled systems, with looped distribution and utility services that appear more “grid-like” in topology.  In this case, several utility, generator and UPS services collaborate to supply a given load and may offer multiple connections to loads throughout the facility.  At that scale, this “grid” or “looped” topology is likely to be far more cost effective and operationally efficient than the more “siloed” 2 MW segregated rooms and load blocks.  Both of those choices are mapped to a business need that exceeds your typical 1,100 kW to 1,500 kW (110 to 150 W/SF) requirement in that 10,000 SF space.  The key to larger builds is that there is a known and disciplined connection between this type of system architecture and the business it supports.
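
If you want to sanity-check that rule of thumb, the density arithmetic is simple.  Here’s a minimal sketch in Python (my own illustration; the numbers are just the ones quoted above):

```python
# Back-of-the-envelope density check: a 1,100-1,500 kW critical load
# spread across a 10,000 SF raised-floor room is 110-150 W/SF.
def watts_per_sf(load_kw: float, area_sf: float) -> float:
    """Convert a critical load in kW over a floor area in SF to W/SF."""
    return load_kw * 1000.0 / area_sf

for load_kw in (1100, 1500):
    print(f"{load_kw} kW over 10,000 SF = {watts_per_sf(load_kw, 10_000):.0f} W/SF")
# prints 110 W/SF and 150 W/SF
```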

Moving back to the wholesale data center model and its “2 MW” system: these builds are nearly exclusively 2N or N+1.  One interesting point is how we define 2N.  For these smaller systems, isn’t it simply 2N = N+1, or 1+N?  Sometimes it is, often it isn’t.  To slice through the semantics, and to borrow from the ANSI/BICSI 002 data center standard (see the short sketch after this list):

  • 2N is a fully redundant architecture consisting of two equally sized systems, each capable of carrying the total system load.
  • N+1 implies an N system capable of carrying the load, with a single additional system able to replace any one of the systems comprising the “N” capacity in case of maintenance or failure.  When “N” = “1” and there are two systems, you then have 2N.
  • 1+N implies either 2N or N+1, depending on system size.  For this reason, the term is not commonly used in the industry standards, the Uptime Institute Tier ratings or the ANSI/BICSI 002 Class definitions.
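
To make the module-counting concrete, here is a minimal sketch (my own illustration, not language from any standard) of why “1+N” is ambiguous: counted in modules, N+1 only collapses into 2N when N equals 1.

```python
# Label an architecture of N required modules plus spare modules.
# Illustrative only; real ratings also weigh distribution paths, etc.
def classify(n_modules: int, spare_modules: int) -> str:
    if spare_modules >= n_modules:
        return "2N: every required module has a full twin"
    if spare_modules == 1:
        return "N+1: one spare covers any single module outage"
    return f"N+{spare_modules}"

print(classify(1, 1))  # one 2,000 kW system plus one twin  -> 2N
print(classify(4, 1))  # four 500 kW modules plus one spare -> N+1
```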

The ability of a system to respond to failure or maintenance modes of operation is what determines its Tier or Class rating.  Recalling ANSI/BICSI 002, where performance-based criteria were first introduced for electrical and mechanical topologies, Class III allows for a single failure or maintenance event to reduce the system to an “N” state.  The wholesale data center industry has evolved to this Tier or Class III model, where the critical power architecture is 2N, allowing for complete redundancy from the UPS input switchboards to the PDU output, while using N+1 in the mechanical and supporting systems.  This did not happen by accident.
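
A crude way to express that Class III capacity test in code (again my own illustration; real ratings also weigh distribution paths, controls and maintenance isolation, not just capacity):

```python
# After any single system is taken out (failure or maintenance), the
# remaining capacity must still cover the full load, i.e. the plant
# degrades to "N", not below it.
def degrades_to_n_or_better(system_kw: list, load_kw: float) -> bool:
    return all(
        sum(system_kw) - lost >= load_kw  # lose one, re-check capacity
        for lost in system_kw
    )

print(degrades_to_n_or_better([2000.0, 2000.0], 2000.0))  # 2N  -> True
print(degrades_to_n_or_better([500.0] * 5, 2000.0))       # N+1 -> True
print(degrades_to_n_or_better([1000.0, 1000.0], 2000.0))  # N   -> False
```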

When you examine the wholesale data center business model, what it introduced to the critical infrastructure building business was a direct connection between the capital cost to build and the business itself.  Wholesale data center operation, being essentially a high-tech real estate business, carries real estate’s sensibilities of cost discipline and functionality.

When examining how to achieve cost effectiveness, one typically chooses the cheapest part, with the caveat in the data center business that the part also needs to be reliable.  That’s why we see 2,000 or 2,500 kW generators, 3,000A or 4,000A switchboards, 750 kVA UPS modules and Trane IntelliPaks everywhere.

While solid reliability at the component level is mandatory, it’s how those parts are procured, arranged and connected that results in system availability at the lowest cost.
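
And to put a number on why the 2N habit persists, here is the textbook parallel-redundancy math in a minimal sketch (assumed availabilities for illustration, not field data, and it ignores the human factors and common-mode failures noted above):

```python
# Two independent, fully rated sides fail together only if both are
# down at the same time, so the unavailability is squared.
def available_2n(a_side: float) -> float:
    return 1.0 - (1.0 - a_side) ** 2

a = 0.999  # assume each complete power path is 99.9% available
print(f"single path: {a:.4%}")                # 99.9000%
print(f"2N pair:     {available_2n(a):.4%}")  # 99.9999%
```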

