Timeline Of A Debacle, A Situation Report

Let’s recap the story so far.

Almost exactly a year ago, pressed for network storage space, we evaluated two solutions. One involved expanding the existing server’s capacity, the other involved putting our faith in new, unproven technology. Naturally we went with the unproven technology.

July of last year, we suffered the first crash of the Snap! server. Considering it a fluke, we soldiered on blithely.

Fall of last year, the monthly crashes began. Stress mounted.

Two months ago, we decided to go with the original “other” plan and ordered the replacement drives, only to discover that we couldn’t upgrade the original server without also destroying the operating system partition in the process, which would cost us several days of productivity.

A month ago, we came up with the final possible option: replacing the main server outright. The clever part was that we’d use the drives that couldn’t be safely used in the old server, thus getting our money’s worth out of ’em.

Today, the new server showed up.

Guess what? We’re still not out of the woods. You see, we can’t use the drives we purchased for the other server. Turns out, upon investigation, that you have to buy special “Hot Plug” drives for this kind of server. Lovely. That’s just effing wonderful.

My greatest fear right now is two-fold: One, that someone is going to wake up and realize what a mess I’ve caused and boot my ass out the door; two, that it’s not over yet, and even after we get the new drives in and the machine is running, something else will go horribly awry.

It’s a gift I have, this knack for calamity. Too bad I haven’t figured out how to turn my talent for disaster to my actual benefit, eh?

Comments

One response to “Timeline Of A Debacle, A Situation Report”

  1. celina

    It will all work out!! *hugs* And when it is all done, you can kick the snap! server for good.