Y2K v. 2: Time for Triage
By Jim Seymour
Several issues back, I jumped up and down about the Y2K crisis barreling down on a largely blasé world. In that column I urged you to stick a pin in the right people at your company now, lest you come in to work on Monday morning, January 2, 2000, to find some very unpleasant surprises.
I want to follow up with some suggestions about what you can do at this relatively late date to push your company, large or small, toward the right decisions and right actions to minimize Y2K disasters. I also want to pass along some of the information I've been gathering on how the Y2K problem will affect your own PC and software, and what you can do about that. This time we'll look at survival strategies, circa mid-1998; look for the follow-up column "Y2K and Your PC," coming to this space soon.
I've watched reactions to the impending Y2K crunch go through five distinct phases. First there was classic, bury-your-head-in-the-sand denial: It ain't gonna happen.
Second, smart early-response companies developed and sometimes put into place comprehensive plans to test and fix everything having to do with Y2K date errors in their software and hardware well before the end of 1999.
Third, we moved into a period of shock and dismay as we discovered just how much of that old spaghetti code is still Out There in daily use, and just how large and daunting the problems really are.
Fourth, we went through the "don't fix it, dump it" stage, when companies decided they had to abandon that legacy code and the creaking boxes that ran it and move on to modern hardware and software in client/server architectures. That was a smart and ultimately cost-effective strategy while there was still time to design, implement, and test those new systems, and it did wonders for the likes of Baan, SAP, and others.
Today we have moved into the fifth stage, which I can only call triage. Companies are recognizing that unless their Y2K-fix programs are already well underway and they're starting to get into testing their new code, they haven't a chance of fixing everything in time. So they're engaging in battlefield triage, deciding which of their vital systems they're going to let go boom! on January 1, 2000, and which absolutely, inescapably must--and still can--be fixed by then.
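For anyone who hasn't looked at the failure up close, the bug behind all this triage is depressingly mundane: decades of programs stored the year as two digits to save space, so date arithmetic falls apart the moment 00 follows 99. Here is a minimal sketch of the pattern, written in C for readability (the real offenders are mostly COBOL), with the record and field names invented purely for illustration:

```c
/* A minimal sketch of the classic Y2K failure: a two-digit year field
 * makes anything dated 2000 look earlier than anything dated 1999. */
#include <stdio.h>

/* Legacy-style record: the year is stored as two digits (98, 99, 00...). */
struct invoice {
    int yy;            /* two-digit year the invoice was issued */
    int amount_due;
};

/* Naive age check that quietly assumes the century never rolls over. */
static int years_outstanding(int issued_yy, int current_yy)
{
    return current_yy - issued_yy;
}

int main(void)
{
    struct invoice inv = { 99, 1500 };   /* issued in 1999 */

    /* In December 1999 this says the invoice is 0 years old: fine. */
    printf("age in 99: %d\n", years_outstanding(inv.yy, 99));

    /* In January 2000, current_yy becomes 0 and the same arithmetic says
     * the invoice is -99 years old, as if issued 99 years in the future. */
    printf("age in 00: %d\n", years_outstanding(inv.yy, 0));
    return 0;
}
```

Multiply that one bad comparison by a few million lines of billing, scheduling, and interest-calculation code and you have the problem in a nutshell.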
Triage is a bloody, ugly process. Telling a manager that one or a dozen of the systems his troops rely on will probably go south in eighteen months, and that you've decided you can't even try to fix them by then, leads to very tense sessions. I know; clients have been calling me to come sit at the table while they tell managers this unpleasant truth.
We do this in tiers, first briefing top management on the progress of the triage process and the specifics of what we believe can (and hope will) be saved, as well as what cannot be salvaged. This is both good management--remember, you manage up as well as down--and also, frankly, a survival technique: Every manager who's told that some of the systems his people need will go awry in eighteen months immediately goes to his boss, and then to his boss's boss, to get the decision overturned. When he or she finally hears that the CEO and board have signed off on the decision, it helps calm things down a little--superficially, at least.
If you're offended by my use of the grisly battlefield term triage, you have my apologies. But I know of no other word that accurately describes the nature and temper of these decisions. Often the meetings in which we try to make the decisions take days, with a large number of restarts. We'll spend a day trying to whittle down a list of 275 programs to an agreed-upon target of, say, 90; and at the end of that long and wearying day, we'll find we've succumbed to so many special pleas to keep just this one, or just that one, that our list still includes 211 proposed survivors. It doesn't work that way, so the next morning (or continuing that night over bad take-out food and bitter coffee) we tackle the list again.
I describe this not to win your sympathy--I'm paid well for this work and do it by choice, so I deserve what I get--but to cast the die for the path ahead of you if you try to push your company toward a rational response to the Y2K challenges. You may become a hero in a couple of years, but you are not going to be seen as Mr. Nice Guy for the rest of this century.
If you see some movement in your company, but not much, toward anticipating and solving the Y2K dilemma, you're not alone. A recent report by Triaxis Research captures the situation today very well: At the 250 largest public U.S. companies, which together have total projected Y2K-fix expenditures of more than $30 billion, only 20 percent of that amount has been spent so far.
This is turning into the biggest endgame ever for U.S. businesses. Add in the cost and complexity of programming euro currency conversions--the EU has demanded a remarkably complex triangulation method for converting among the national currencies--and we are in big trouble, indeed. Even the consultants and others who will make out like bandits over the next frenzied year and a half to two years take little delight in the last-minute nature of so much of the work. It will be bloody and expensive, and we still won't get all the work done in time. Probably not even most of it.
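To give a flavor of that second headache: as I read the EU rules, you cannot convert directly from one national currency to another with a cross rate. You divide by the six-significant-figure rate for the source currency to get an amount in euros, round that intermediate figure to no fewer than three decimal places, and only then multiply by the rate for the target currency. Here is a rough sketch of the arithmetic; the rates and rounding choices are illustrative, not official:

```c
/* Sketch of the EU "triangulation" rule for converting an amount from
 * one national currency to another: go through the euro, never use a
 * direct cross rate. Rates below are illustrative, not official. */
#include <math.h>
#include <stdio.h>

/* Round a positive value to the given number of decimal places. */
static double round_places(double value, int places)
{
    double scale = pow(10.0, places);
    return floor(value * scale + 0.5) / scale;
}

/* rate_a, rate_b: national currency units per one euro,
 * quoted to six significant figures. */
static double triangulate(double amount_a, double rate_a, double rate_b)
{
    /* Step 1: divide by the source rate to get euros; the intermediate
     * amount may be rounded, but to no fewer than three decimals. */
    double euros = round_places(amount_a / rate_a, 3);

    /* Step 2: multiply by the target rate and round to the target
     * currency's smallest sub-unit (two decimals here). */
    return round_places(euros * rate_b, 2);
}

int main(void)
{
    double dem_per_eur = 1.95583;   /* illustrative rate */
    double frf_per_eur = 6.55957;   /* illustrative rate */

    printf("1000.00 DEM -> %.2f FRF\n",
           triangulate(1000.00, dem_per_eur, frf_per_eur));
    return 0;
}
```

Running it converts 1,000.00 deutsche marks into French francs by way of an intermediate euro figure; the point of the mandated rounding is that everyone doing the same conversion gets the same answer down to the centime.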
.....
And there are very few competent, experienced people available to help you fix the old systems. I've seen elderly COBOL programmers, who retired in the 1970s after making $32,000 at their peak, brought back in 1997 as $85,000-a-year contract employees. And last month a client hired a 27-year-old who was heading one of several Y2K teams at a competitor to run that client's Y2K teams--for a five-year, no-cut contract at $200,000 a year, plus a $50,000 signing bonus. He was a nervy guy and asked for the million bucks on a take-it-or-leave-it basis. And he got what he wanted.
Makes signing the POs for the new stuff a little easier, doesn't it?