It began as a simple series of tasks: Back up the server’s drive, back up the database, click “Install” on the vendor-provided patch update tool, wait for completion.
Since you’re reading this, you already know that it became un-simple.
I started at 9pm. The disk backup was the only part that went according to plan. Backing up the database resulted in an error, something to do with a broken full-text index. So I looked at the backup logs and discovered something even worse: The target drive for the backup files? Completely full.
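(For the record, the backup step boils down to something along these lines; the database and path names here are invented, and xp_fixeddrives is just the quick-and-dirty way to see how much room a drive actually has left.)

```sql
-- Names are invented for illustration; the real maintenance plan picks its own paths.
EXEC master..xp_fixeddrives;   -- free MB per drive letter, the quick way

BACKUP DATABASE [VendorDB]
TO DISK = N'E:\Backups\VendorDB_pre_patch.bak'
WITH INIT, STATS = 10;         -- fails immediately if the target drive is full
```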
Well, kudos to our maintenance plan and our error reporting system!
Some furious clearing of disk space later, I tried the backup again and ran straight into that full-text index problem. Research led me to a solution; with that implemented, I was able to make the backup. And that’s that, right?
Wrong!
The upgrade installer complained about not finding one of the two full-text indexes. Yes, the one I’d rebuilt. Forty minutes of poking, prodding, rebuilding, recreating, rebuilding, repopulating and cussing later, I was able to make that particular patch happy… only to bump into another patch with another error. This time it claimed that a particular column in the database wasn’t full-text indexed.
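If you’ve never had the pleasure, the rebuild-and-repopulate dance on SQL Server 2005 looks roughly like this (the catalog, table, and column names below are placeholders, not the vendor’s real ones):

```sql
-- Placeholder names throughout; substitute the vendor's actual catalog and table.
USE [VendorDB];
GO

-- Drop and rebuild the full-text catalog the installer was unhappy about.
ALTER FULLTEXT CATALOG [VendorCatalog] REBUILD;
GO

-- Force a full repopulation of the index on the affected table.
ALTER FULLTEXT INDEX ON dbo.ServiceTickets START FULL POPULATION;
GO

-- Poll until the catalog goes idle again (0 = idle).
SELECT FULLTEXTCATALOGPROPERTY('VendorCatalog', 'PopulateStatus') AS populate_status;
```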
I’ve come to hate SQL Server 2005 and full-text indexing, by the by. I know, you’re shocked.
I looked. Yes, indeed, that column was being full-text indexed. Bite me, updater! But alas, nothing I did could get past that update patch… and the patches simply must be applied in order, don’t you know?
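Checking that sort of thing is easy enough; either of these will tell you (again, the table and column names are stand-ins):

```sql
-- Hypothetical table/column names; returns 1 if the column is full-text indexed.
SELECT COLUMNPROPERTY(OBJECT_ID(N'dbo.ServiceTickets'),
                      N'Description',
                      'IsFulltextIndexed') AS is_ft_indexed;

-- Or list every full-text indexed column in the database from the catalog views.
SELECT OBJECT_NAME(fic.object_id) AS table_name,
       c.name                     AS column_name
FROM sys.fulltext_index_columns AS fic
JOIN sys.columns AS c
  ON c.object_id = fic.object_id
 AND c.column_id = fic.column_id
ORDER BY table_name, column_name;
```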
“Okay,” I thought, “maybe the product is in a usable state.”
You can guess the answer.
Sure, we could open up the company display, or look at time sheets, or activities… but the service board? Not so much. A giant error dialog full of cryptic SQL-looking gibberish appeared, and from what I was able to tease out of it, the problem was a view missing from the database. What’s more, it was a view that couldn’t exist yet. So, one patch creates a dependency on said view, and the view itself isn’t created until a later patch? Brilliant, guys!
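Confirming a missing view is a one-liner against the catalog (the view name below is invented; the real one was buried in that error dialog):

```sql
-- The view name is invented here; the real one came out of that error dialog.
SELECT name, create_date
FROM sys.views
WHERE name = N'vw_ServiceBoard';
-- Zero rows back: the view one patch depends on is only created by a later patch.
```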
At that point I was two hours into the job. I gave up and punted to vendor support, leaving them a pair of voice messages (being tired and frazzled, I left one key piece of information out of the first message) and waited for the call back.
And waited. And waited. And… yeah.
Finally I gave up and went to bed, with my phone handy in case vendor support did as I asked: “Call me any time!” Stress kept me awake well past 2am.
Stress got me right back out of bed at 7am, even though I was groggy and cranky. I checked email, looked at my phone, and found nothing and nothing. “Fine,” I thought, “I’ll open a ticket via email.”
Then I received a couple of alerts via email that key services on the machine in question had been restarted. How odd, since I hadn’t been on it yet… so I checked the event logs (Have I mentioned that I love Kaseya? Captured event logs rock.) and oh look! Someone had been working on my server… since 6am!
It was nice of them to call me… OH WAIT.
So by 8am the server was working, and the tech finally called me, but he only stayed on the line long enough for me to answer the question, “Is your server working now?” I couldn’t get any details out of him, nor could I ask why he hadn’t called me when he started work, so I wouldn’t have been left wondering whether anyone had even heard my message!
Oh well. All’s well that ends… something-something.