The trick with any recovery is to expect the unexpected.
By Steve Pitcher
It was Friday morning at ABC Widget Manufacturing Company, and things were business as usual. The machines were humming, and the accountants were accounting. Everyone was typing away at their green-screen terminals with thoughts of a wonderful weekend ahead.
That was until someone noticed water seeping out from under the server room door. Water was to be expected only if there was a fire in the computer room. The most flammable things in there were the stacks and stacks of green bar paper that had been collecting since 1978. You see, at ABC, they pride themselves on always being able to locate a paper copy of a transaction. If they bought a part for their widget machine in 1978 and it broke, someone would be able to track down the original purchase order and order that same part again. A handy system.
The old green bar paper, arguably, was what absorbed most of the cooling from the 15-year-old water-cooled air conditioning unit. Since the little Power Systems 720 sitting in the back ran the whole business, there was plenty of room to stack all that paper. Owen (the guy who changed the LTO backup tapes) passed the 720 three or four times a day on his way through the computer room's unsecured back door to have a smoke. One of those times, a little piece of ash flicked onto the green bar and started a fire.
Unfortunately, there were no smoke-activated fire alarms in the computer room, only old-school fire sprinklers. So when the sprinklers kicked in and the noisy little 720 stopped, you'd have thought someone would have noticed the eerie silence.
Fast-forward to a week later. The server room has been dried out, and the 720 has been replaced with another little 720 from the used marketplace plus a brand-new second-hand external 3580 tape drive. Luckily, ABC was diligent in doing regular full-system saves every three years (sarcasm intended). The Go Save option 21 tape was their ace in the hole for disaster recovery. That's all you need, right?
Unfortunately, they had no Licensed Internal Code (LIC) DVD to boot the new system.
Fast-forward another 48 hours. The IBM i 7.1 LIC DVD has been found in Owen's basement, where he keeps the backup tapes.
Fast-forward yet another 24 hours. Bare-metal restores take a whole lot of time when you never do them. Lots of manual research and researching manuals. Plus calls to anyone in the address book who's ever done a bare-metal restore.
Fast-forward 24 more hours. Ready to load the full-system-save tape. More manual research and researching manuals. Finally, the system is back up to where it was a little under three years ago. Too bad the LIC and the OS are at different PTF levels. The LIC is vanilla 7.1, yet the OS is at Technology Refresh 8. Looks like they're going to have to burn a Technology Refresh 8 resave of the I-BASE DVD and slip the LIC. Someone has to figure out the Entitled Systems Support (ESS) website and get past the entitlement issues with the new serial number. They could've gotten the media under the old serial number's entitlement, but nobody has ever logged into ESS and set themselves up.
Fast-forward 72 more hours. Owen is asleep and not answering his phone. Recovery is tiring. He's had a tough week. Too bad the backup tapes are in his basement. They'll have to wait until he comes in Monday.
Fast-forward to Monday. Time to restore from the nightly tape! Hang on! Every nightly backup saved only changed objects. They need Owen to go get all the tapes and explain what the heck tape goes next.
About this time, the salvaged remnants of the green bar paper are almost dry enough to flip through.
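If it's not obvious why "what tape goes next" matters: with a changed-object strategy, a recovery needs the last full-system save plus every nightly save taken after it, restored in order (assuming each nightly captured only the changes since the previous night, rather than everything since the last full save). Here's a rough sketch of that dependency, with made-up tape labels and dates rather than anything from a real media catalog:

```python
from datetime import date

# Hypothetical media catalog: one full-system save plus nightly
# changed-object saves. Labels and dates are invented for illustration.
full_save = {"label": "FULL_SAVE_21", "saved": date(2014, 6, 1)}

nightly_saves = [
    {"label": "NIGHTLY_0604", "saved": date(2014, 6, 4)},
    {"label": "NIGHTLY_0602", "saved": date(2014, 6, 2)},
    {"label": "NIGHTLY_0603", "saved": date(2014, 6, 3)},
]

def restore_chain(full, nightlies):
    """Return the tapes needed for recovery, in the order they must be restored."""
    # Start with the last full save, then apply every changed-object save
    # taken after it, oldest first. Anything older than the full save is useless.
    later = [n for n in nightlies if n["saved"] > full["saved"]]
    return [full] + sorted(later, key=lambda n: n["saved"])

for tape in restore_chain(full_save, nightly_saves):
    print(tape["label"], tape["saved"].isoformat())
```

The point isn't the code; it's that somebody other than Owen should be able to produce that list, in that order, without a trip to anyone's basement.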
Of course, this is an extreme example of a horrible situation brought on by carelessness, unpreparedness, and just plain bad luck. The trick with any recovery is to expect the unexpected. Everything will go wrong at least once, and you hope someone learns from it.
I'm no expert in disaster recovery, but I've seen my share of things that have gone wrong. I've witnessed everything from exploding propane generators to under-powered uninterruptible power supplies dropping systems hard. I've watched seasoned professionals accidentally pull live disk drives from running systems while replacing failed drives, and I've seen fibre lines cut with circular saws. I've seen core network switches plugged into outlets tied to light switches, so the network goes down when somebody turns off the server room lights. I've seen tarps and umbrellas and shingles and gutters inside server rooms to divert running water.
Yes.
I've seen gutters.
Gutters installed on racks.
Anything can and will happen because it repeatedly does, even to the best of them.
What doesn't happen often enough is controlled disaster recovery testing. Some companies do it. Not enough do. But those who do it regularly have it down to a science. It's part of the overall strategy. Mandated, even, so that everyone knows their role and is held accountable for its success.
One question I always hear when talking about disaster recovery is: How fast can you recover? It's simple. Assuming you have a disaster recovery plan, those who test often will recover much faster than those who seldom test or never test at all. You test until you know how long a failover should take when there are no problems. That's your normal. It could be two hours, or it could be two days, depending on the business and the type of disaster recovery system used.
With security being such a hot-button item nowadays, the breaches and fallout we tend to read about are not just security failures but also recovery failures. A major security failure can be partially remedied by a solid, tested recovery plan (though no recovery plan can resolve the legal issues). For instance, if a server were the target of malicious code that deleted or encrypted hundreds of thousands of files, you'd usually have a few questions:
1. Can we recover?
2. If yes, how fast can we recover?
3. How can we prevent this from happening again once we've recovered?
In this situation, question 3 arguably needs to be figured out before the recovery effort even starts. That means more unforeseen downtime.
I dealt with a similar situation a few years ago: a user's PC had encrypted the file shares on a critical Windows Server 2012 machine. Hundreds of thousands of files were unusable. Recovering the system wasn't a problem. Locating the user's PC and stopping it from encrypting those files again was a bigger priority, and determining that a user's PC was causing the problem took time as well. If I recall correctly, the better part of 10 hours was spent with all IT hands on deck before the infected PC was located and disconnected and the files were restored to a point 24 hours earlier. All user work done that day was lost. And that was a situation where the backup was readily available and tested and the right people were on hand to do the work. All it takes is one additional variable to throw a monkey wrench into a recovery plan.
Testing a static plan isn't enough either. We need to plan for the worst-case scenario by making recovery dependencies unavailable during a test. What if the tape drive in your cold-DR location isn't functional? How will you replace it? Remove it from the equation, and see how your team reacts to the problem.
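If any part of your test plan is scripted, the same idea can be baked into the drill: pick a dependency, declare it dead, and make the team walk the fallback path. A minimal sketch, with invented step and resource names rather than anything from a real runbook:

```python
# A bare-bones dependency-failure drill: mark one recovery resource as
# unavailable and confirm every step that needs it has a fallback.
# Steps, resources, and fallbacks here are invented for illustration.
recovery_plan = [
    {"step": "Restore LIC and OS from tape", "needs": "tape_drive", "fallback": "loaner drive from the vendor"},
    {"step": "Restore user libraries", "needs": "tape_drive", "fallback": "loaner drive from the vendor"},
    {"step": "Repoint users to the DR box", "needs": "dns", "fallback": "manual host table changes"},
]

def run_drill(plan, failed_resource):
    """Walk the plan as if one dependency were dead and report each step's status."""
    for item in plan:
        if item["needs"] == failed_resource:
            fallback = item.get("fallback")
            status = f"blocked, fall back to: {fallback}" if fallback else "blocked, NO FALLBACK"
            print(f"{item['step']}: {status}")
        else:
            print(f"{item['step']}: proceed as planned")

run_drill(recovery_plan, "tape_drive")
```

A spreadsheet does the same job; what matters is that the "what if it's broken" column gets filled in before the real disaster fills it in for you.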
Expect the unexpected. Test for the unexpected. Because disastrous situations seldom cooperate with a plan.