What Are the Key Elements to Data Center Development?
Posted on: September 10th, 2014 by Bill

I’ve been offline for a while, working on a large project for the firm.  The great news is that it’s closed, and we’re fortunate to have a new client for both Rosendin and BladeRoom.  It will make our Twitter and social media outlets soon enough, so now I can return to a bit more normal pace of play.  A couple of months ago, Diane Alber called me up and offered to make me look 15 years younger.  How can anyone turn down an offer like that?  I couldn’t, and I will let you judge the quality of her work.

In exchange, she suggested a few topics as a guest blogger.  This is going up on her site first, with my post trailing by about a week.  Blogs typically discuss the nuances of what UPS system to buy or what is a better widget to employ in your data center, and I will confess that the Data Center Guru blog is no different.  So, I wanted to step back and speak on what really makes a data center successful.

The two key elements of data center development are: availability, and meeting and beating target benchmarks that are dead-matched to your business and goals.

Heresy!   No doctoral thesis on game theory and the mathematics of reliability!  No discourse on differing battery technologies!  No boring diatribe on something technical!  What is the world coming to?  Sheer madness.

The fact is, a data center is nothing more than a factory that makes information.  Virtualization?  Nothing more than industrial optimization, albeit one you can’t touch.  PUE?  Merely using less energy, the way skylights did in the old days.  Compaction?  Process reengineering.

And the purpose of the information factory is to make information continuously at the lowest cost per transaction or activity, just like any factory-made product.

When we embarked on the ANSI/BICSI 002 Data Center Standard in the early 2000s, one of the glaring needs that had to be met was a performance-based set of metrics for the data center.  This would be for all four aspects of the data center – the facility, network, platform and application.

In all cases, we should cease to care about the constituent pieces of the ecosystem and start worrying about the end-to-end ecosystem.  What we all worry about for data is continuity – the continuous flow of data or the protection of stored data.  While most facility solutions focus on the particular engineer’s or owner’s design, the key thing to remember is that we have redundancy built into all the major layers (facility, network, application and platform).  Redundancy is there for the sole reason that things break, fail, don’t load right, somebody digs up your fiber with a backhoe, and so on.  In short, redundancy is there for when things must be maintained or do fail.
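
To make that concrete, here is a minimal sketch of the end-to-end view, with invented availability figures (none of these numbers come from the standard or from any real site): redundancy only helps the layer it sits in, and the weakest layer caps the whole chain.

```python
# Minimal sketch: end-to-end availability across the four layers.
# All availability figures below are invented for illustration.

def redundant(a: float, n: int) -> float:
    """Availability of n identical parallel components (any one suffices)."""
    return 1 - (1 - a) ** n

def serial(*layers: float) -> float:
    """Availability of layers in series (every layer must be up)."""
    result = 1.0
    for a in layers:
        result *= a
    return result

facility = redundant(0.999, 2)   # e.g. dual power/cooling paths
network  = redundant(0.995, 2)   # e.g. dual carriers
platform = redundant(0.99, 3)    # e.g. three servers behind a load balancer
app      = 0.999                 # a single application instance

print(f"End-to-end availability: {serial(facility, network, platform, app):.6f}")
# Hardening one layer far past the others buys almost nothing;
# the weakest layer sets the ceiling for the whole ecosystem.
```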

Business-wise, executive leadership speaks to the output.  Technical-wise, stakeholders speak to the components.  There is a vast difference.  The truth is that what really matters, and what most of you are compensated on, is continuous operation, not the level of system availability in the facility, network, platform or application.  Redundancy is simply the manifestation of the failure analysis and business recovery model you are choosing to operate.

Back in the early 2000s, I was commissioned to review the data processing facilities for a major high-tech company.  Like many firms at the time, they were data center space-rich but infrastructure-poor.  Upon review, they had a series of solid Tier II facilities and one Tier III facility.  The goal was to study the facility piece of the enterprise, to determine the work required to bring six live data centers to a Tier III stance.  Yep, all six, retrofitted live, to the tune of more than $110M.

So, we asked the basic question, “How much space do you need?”  The reply: “For this program, not as much as we have.”  Ok, now we’re getting somewhere.  We hadn’t asked about availability yet.  Our observation was that this client had good application and platform diversity, older but well-maintained MEP systems and plants, and a serviceable network.  We simply asked whether they could achieve the same result by hardening and enhancing the network, which had the added benefit of turbocharging their disaster recovery program.  In short, the failover would be more DR-ish, carried across the network, rather than forcing each site to stand alone.  After a three-month period to mull it over, the answer was yes.  That day, we coined the term Enterprise Reliability (ask Jeff Davis).  While we went home without a contract for the design, we earned a life-long client.  And we saved them $95M in the process.

The contrast today is the large-scale social media, entertainment, portal, e-commerce and search firms (did I miss anyone?), which approach things differently from many because they operate processing, storage and networks at enormous scale.  When an IT program grows to that scale, the failure analysis allows for portions of data centers, even entire sites or regions, to fail without compromising SLAs to their customers.  What’s occurred is that all systems are now treated as collaborating and integrated toward the end result: application availability.
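
A back-of-the-envelope sketch of that scale argument, with a hypothetical region count, per-region availability and “k of n” requirement (all invented, not from any particular provider): once the application only needs a fraction of its regions up to meet its SLA, losing a site stops being an availability event.

```python
# Hypothetical sketch: probability the service meets its SLA when only
# `needed` of `n_regions` regions must be up. All figures are invented.
from math import comb

def service_availability(n_regions: int, needed: int, region_avail: float) -> float:
    """Probability that at least `needed` of `n_regions` regions are up."""
    return sum(
        comb(n_regions, k) * region_avail**k * (1 - region_avail)**(n_regions - k)
        for k in range(needed, n_regions + 1)
    )

# Six regions, any three sufficient to carry the load, each region only 99%
# available on its own: the service still sees roughly 0.9999999.
print(f"{service_availability(6, 3, 0.99):.7f}")
```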

Once you plow through what your internal and external SLAs need to be, the benchmarking, sell and price targeting are then brought to bear.  In the industrial case, you first have to decide on the product, then how you are going to fulfill it.  Here’s the rub.  Unlike 15 years ago, when the only options were build-to-suit/operate-your-own and a limited amount of managed services, today there are a host of options.  Heck, 15 years ago, SaaS was mistaken for a Swedish car or a British Tier 1 commando unit.  What may occur in the benchmarking phase is that the facility, software, network or platform solution takes a sharp turn from the initial assumptions and makes you reconsider how you are going to achieve your solution.  What should not change is the cost and sell baselines or the SLA, only how you get there.

You might run down to see my friends Chris Crosby or Jim Smith for a facility solution.  You might call any of the cloud service providers for an IaaS solution on the virtual side of the business.  Or VMware might hop in and tune up your platforms, saving power, space and hardware.  You get the picture.  There are many tools, and the trick is which one you pick up to solve your problem to your specs.

The benchmarks are the levels at which your business expects to execute, profit and pay for services.  You start with the business model you’ve undertaken, then you work through the templates on benchmarks to narrow the cost/unit, cost/transaction and the operating carry.  The trick is not to get too enamored with your first approach, as it’s likely to change.  Remember Sal Tessio’s line at the conclusion of The Godfather, “It was never personal, it was just business.”  But also remember that when he said it, he was being driven off to be executed because he did not stay loyal to the family business.  That is the result of misalignment between goals, requirements and expectations.
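
As a taste of those templates, here is a toy cost-per-transaction calculation.  Every figure is invented for illustration; the point is the shape of the math, annualized capital plus operating carry divided by transaction volume, not the numbers themselves.

```python
# Toy benchmark template: annualized capital plus operating carry, divided by
# transaction volume. All figures are invented; plug in your own numbers.

def cost_per_transaction(capex: float, life_years: float,
                         annual_opex: float, annual_transactions: float) -> float:
    """Straight-line annualized capex plus annual opex, per transaction."""
    return (capex / life_years + annual_opex) / annual_transactions

# Two hypothetical paths to the same SLA and the same sell price.
build_own  = cost_per_transaction(60_000_000, 15, 8_000_000, 2_000_000_000)
colo_cloud = cost_per_transaction(5_000_000, 5, 14_000_000, 2_000_000_000)
print(f"Build/operate: ${build_own:.4f} per transaction")   # $0.0060
print(f"Colo + cloud:  ${colo_cloud:.4f} per transaction")  # $0.0075
# The sell baseline and the SLA stay fixed; only the path that beats them changes.
```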

We’ll get into the benchmark builds in the next blog.

Salute’ a tutti!
