Case Study: How to build a datacentre in six months for $800,000
Home healthcare firm built datacentre that saves on space, power, cooling and IT effort.
By Joanne Cummings, Network World | Network World US | Published: 01:00, 07 January 2008
For years, Robert Wakefield and Dameon Rustin lived with the problems of keeping Snelling Staffing Service’s old, poorly designed datacentre up and running. Not only were the intricate cable runs and varied server makes and models difficult to keep straight, but the building itself tended to compound their management headaches.
“Our 15-ton air-conditioning unit was water-cooled, but the building [management] didn’t clean the cooling tower very often,” says Wakefield, vice president of IT at Snelling Staffing and Intrepid USA, a home healthcare firm also owned by Snelling’s parent firm, Patriarch Partners. “Muck would get in, clog up our strainers and shut down the AC unit to our datacentre. That was a big problem.”
In addition, the building owners would not give Snelling the OK to put in a diesel backup generator to power the datacentre. “Let’s just say they weren’t very helpful,” says Wakefield, who spoke about his datacentre project at the recent Network World IT Roadmap Conference and Expo in Dallas.
Things began to change quickly once Patriarch bought Intrepid in 2006. Wakefield and Rustin, Snelling’s director of technology, were charged with building a brand-new datacentre that would not only solve Snelling’s current problems, but also house Intrepid’s datacentre and be ready to support any future growth.
“We had to build expandability into it because Patriarch is a private investment firm, and their goal is to buy more companies and roll them in,” Wakefield says. “We were told to give ourselves about 100 percent growth room.”
The downside? They needed to do all that with a budget of $800,000 and a window of only six months. “It was a challenge,” Wakefield says.
But it was a challenge they met head-on. Today, Snelling and Intrepid’s new 1,100-square-foot datacentre in Dallas efficiently houses a variety of equipment, including:
- A total of 137 servers (45 for Intrepid and 92 for Snelling), 37 of which are new Sun Fire X-series Unix servers with dual dual-core AMD Opteron processors.
- Three EMC storage systems, including an EMC CX400, a CX3-20 iSCSI system and an old SC4500, as well as a Quantum tape library.
- A variety of networking components, including shared virus scanners and Web surfing control appliances.
- A Liebert 100kVA uninterruptible power supply (UPS).
- Two 10-ton and one 15-ton Emerson glycol-based AC units.
And even with all of that, Wakefield says he still has room to add nine more server racks.
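The published figures invite a quick back-of-envelope check that the UPS and AC units comfortably cover the server load. A minimal sketch follows; the article gives only the UPS rating and AC tonnage, so the power factor and per-server draw here are illustrative assumptions, not figures from Snelling:

```python
# Back-of-envelope sanity check on the published power and cooling figures.
# The article supplies only the UPS rating (100 kVA) and AC capacity
# (two 10-ton + one 15-ton units); everything else is an assumption.

UPS_KVA = 100
POWER_FACTOR = 0.9          # assumed typical power factor for server loads
SERVERS = 137
WATTS_PER_SERVER = 400      # assumed average draw per server

ups_kw = UPS_KVA * POWER_FACTOR                  # usable UPS capacity in kW
it_load_kw = SERVERS * WATTS_PER_SERVER / 1000   # estimated IT load in kW

# 1 ton of refrigeration removes roughly 3.517 kW of heat
cooling_kw = (2 * 10 + 1 * 15) * 3.517

print(f"UPS capacity:     {ups_kw:.0f} kW")
print(f"Est. IT load:     {it_load_kw:.1f} kW")
print(f"Cooling capacity: {cooling_kw:.0f} kW")
```

Under these assumptions the estimated load sits well inside both the UPS and cooling capacity, consistent with Wakefield’s claim of room left to grow.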
Wakefield and Rustin first visited several datacentres to get an idea of what could and could not be done. They also looked at a number of different locations before deciding in January on the Dallas building. Then, the real planning began.
“Once we had the dimensions, everything else came from that,” Wakefield says. He and Rustin drew up 10 different floor plans and began calculating how many servers they’d need, and how much cabinet space. At that point, requirements began to fall into place. “High-density became a requirement; virtualisation became a requirement,” he says.
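The cabinet-space arithmetic Wakefield describes can be sketched in a few lines. This is a hypothetical illustration, not Snelling’s actual worksheet: the rack size, usable units per rack, and 1U-per-server average are all assumptions.

```python
import math

# Rough cabinet-count arithmetic like the planning step described above.
# Rack capacity and per-server rack units are assumptions, not from the article.

RACK_UNITS = 42        # assumed standard 42U cabinets
USABLE_UNITS = 36      # assumed headroom left for switches, PDUs and cabling
UNITS_PER_SERVER = 1   # assumed 1U servers on average

def racks_needed(servers: int, growth: float = 0.0) -> int:
    """Cabinets required for `servers`, scaled by fractional growth room."""
    total_servers = math.ceil(servers * (1 + growth))
    return math.ceil(total_servers * UNITS_PER_SERVER / USABLE_UNITS)

# 137 current servers with the 100 percent growth room Patriarch asked for
print(racks_needed(137, growth=1.0))  # → 8
```

Working back from the floor dimensions to a rack count like this is what turns a square-footage figure into concrete density and virtualisation requirements.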