
Oct 10, 2019 | 1:13 p.m. | Open Mic


3 lessons about disaster preparedness learned from the MG&E fire

At 7:30 a.m. on July 19, my team at 5NINES was preparing for a normal Friday. A few staff were already in the office, some were planning to work from home, and some were about to head in.

At about 7:40 a.m., all of that changed when two fires broke out at MG&E substations on the isthmus. When MG&E cut power to the entire isthmus at 8:10 a.m. to allow firefighters to put out the blaze, work ground to a halt at many downtown businesses.

The scale of the power outage was unprecedented. For many businesses located in the affected area, it meant thousands of dollars in lost productivity and revenue.

But at 5NINES, it was the beginning of a very exciting and rewarding day. Our data center is located in the Network222 building on West Washington Avenue. When it lost power, our data center immediately switched to backup power. None of our customers that rely on our core data services — including co-location, cloud infrastructure, web hosting, managed services, and fiber internet — experienced an outage.

We monitored the data center and the diesel generator in the basement of the building all day, preparing for an outage that could have lasted for several days. It was an intense day, but I’m very proud of my team. We learned three valuable lessons from the experience:

  • Make the investment. In the almost two decades that 5NINES has housed its data center in the Network222 building, we had never witnessed a complete power outage there. That's not just good luck; the building is connected to a redundant power grid that also serves the Capitol and the square, which makes it a strategic location for a data center because the chances of a commercial power outage are slim. But just because you've never lost power in 20 years doesn't mean it won't happen, and if you're an IT services provider like we are, you can't afford to let your data center go down. The result would be reputational damage and lost revenue. So even though the chances of ever using the generator we installed were extraordinarily slim, we still spent millions of dollars buying it and maintaining it regularly. The MG&E fire validated our investment.
  • Nail down your vendor contracts. Nothing about disaster preparation and business continuity planning is a set-it-and-forget-it deal. A large diesel generator like ours requires regular maintenance and testing, and in a prolonged outage you need to make sure you don't run out of fuel. We had a fuel vendor lined up, but we didn't know exactly what our consumption rate would be, because we had never been able to simulate the full electrical load of the data center during any of our tests. During the outage, we monitored fuel consumption closely and were prepared to do whatever it took to keep the generator supplied; we would have driven to the gas station to fill portable canisters if it came to that. We also realized that our fuel-delivery plan was not clearly defined: we weren't sure how often deliveries would happen or how much fuel we were guaranteed. So we are taking the opportunity now to iron out our contract with our fuel supplier and get a guaranteed delivery schedule, so we won't run out in a future emergency; a rough runtime calculation after this list shows why that schedule matters. We're also considering arranging for a backup fuel provider, in case the next crisis also impacts our primary vendor's ability to deliver.
  • Have a rock-solid crisis-communication plan. During a crisis, your team needs to be able to make decisions quickly, and the key to fast decision-making is clear communication. We have invested in alert systems that notify every team member through multiple channels (email, text, phone call, etc.) when there's a crisis. Following the guidelines in that plan, we opened a communication bridge through our Voice-over-IP system so that all company leaders could stay on a conference call throughout the day and make decisions together, no matter where they were.
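
As a purely illustrative sketch of the arithmetic behind that fuel planning, here is how you might estimate generator runtime and pick a reorder point. The tank capacity, burn rate, and delivery lead time below are hypothetical assumptions, not 5NINES figures.

```python
# Rough generator-fuel planning sketch.
# All numbers are hypothetical assumptions, not actual 5NINES figures.

TANK_CAPACITY_GAL = 500        # assumed usable diesel in the tank
BURN_RATE_GPH = 18             # assumed gallons per hour at full data center load
DELIVERY_LEAD_TIME_HR = 6      # assumed time from placing an order to fuel arriving
SAFETY_MARGIN_HR = 4           # extra buffer in case the delivery slips

def hours_of_runtime(gallons: float, burn_rate_gph: float) -> float:
    """Hours the generator can run on the fuel currently in the tank."""
    return gallons / burn_rate_gph

def order_fuel_now(gallons_remaining: float) -> bool:
    """True once remaining runtime no longer covers delivery lead time plus margin."""
    remaining_hr = hours_of_runtime(gallons_remaining, BURN_RATE_GPH)
    return remaining_hr <= DELIVERY_LEAD_TIME_HR + SAFETY_MARGIN_HR

if __name__ == "__main__":
    print(f"Full tank runtime: {hours_of_runtime(TANK_CAPACITY_GAL, BURN_RATE_GPH):.1f} hours")
    # With these assumptions, a refill should be ordered once ~180 gallons remain.
    for gallons in (500, 300, 180, 150):
        print(f"{gallons:>3} gal left -> order now? {order_fuel_now(gallons)}")
```

The point of a guaranteed delivery schedule is that it fixes the lead-time number in a calculation like this; plugging in a burn rate measured during a real outage then turns "we think we have enough fuel" into a concrete reorder point.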

We’ve rehearsed our crisis-response plan over and over, and when it came time to respond to a real crisis, our preparation showed.

It can be difficult to prioritize disaster preparation. State-of-the-art backup power systems are expensive to install and maintain, and crisis-response plans take time and effort to rehearse and refine. But our customers rely on us for 24/7 access to their data, no matter what, and we take that seriously. We’re proud of every investment we’ve made, and plan to continue to do what it takes to guarantee that our customers never experience an outage.

Todd Streicher is president and CEO of 5NINES, a cloud service and solutions provider with headquarters in Madison. With more than 25 years of experience in the information technology sector, he has worked with businesses of all sizes, including Fortune 500 companies. Prior to founding 5NINES in 2001, Streicher served as a service management expert for the University of Wisconsin–Madison, Compuware Corporation, and Telephone & Data Systems. In addition, he managed global infrastructure projects for Philips Electronics while living in the Netherlands from 1998 to 2001.

