Do You Have a Failover Plan?

I thought it was funny when a tweet from Alex Payne (aka @al3x) came through this afternoon. Alex is something along the lines of the Big Daddy Architect at Twitter. The tweet said that power was out at Twitter HQ and that they had failed over to abacuses.

[Screenshot of Alex Payne's tweet]

That’s not really funny, actually.

In my time as a contractor for a random alphabet-soup government agency, we regularly ran "hotsite" drills, where a core team would disappear to Chicago or New Jersey, somewhere offsite in a different geographic region, to rehearse disaster recovery.

After 9/11, companies like JP Morgan that had decentralized their operations recovered from the World Trade Center attacks much more quickly than those that had not. Maybe those that had not were small businesses.

Which reminds me of the day email died at the Wall Street Journal…

We’ve been through a fair bit ourselves at b5media. It was bad when our service provider, very early on and before funding, allowed a power surge to fry our servers. It was a “death to our enemies” moment when another power-related failure occurred two weeks later. Our question: Why the heck is there even a hint of a power failure in a data center?

Sadly, that question was never answered before we moved to LogicWorks after taking funding.

But this is not the point.

As a small business, what are you doing to mitigate catastrophic loss? Are you relying on simple backups? Are you shipping data offsite in case you need to do a data recovery? What happens if your data center is in NYC and another terrorist attack takes out your systems?

What do you do? Is it in your plans?

If all else fails, there are always abacuses.

2 Replies to “Do You Have a Failover Plan?”

  1. I’m the IT guy at my company (small place, about 35 employees), and I have finally gotten all of the business and engineering servers virtualized, so backing up is a matter of taking a differential snapshot of the VMs at either location and sending it to a RAIDed backup server at the other location. That’s nightly. Once a week, I also swap out one of the RAID1 mirror drives and take it offsite to try to avoid catastrophic loss of data. It’s not quite as automated as I would like, but it works.

  2. Fun stuff… At a different government alphabet-soup agency, I helped put together two unit COOPs and periodically ran those drills. Frickin’ stressful, but even if the building and its occupants aren’t destroyed, it’s still likely that something out of your control will bring you down: a block-wide power outage, an ISP outage, a flaky update. In both cases we had warm sites, not hot ones, a liability justified by budget and usage.

    For my personal stuff, I have local images and remote cloud data backup. I’m not entirely pleased with my solution, but as long as I have access to a web browser I’m mostly in business.

Comments are closed.