Technology Stocks : TAVA Technologies (TAVA-NASDAQ)


To: Bill Wexler who wrote (22561), 8/21/1998 8:21:00 PM
From: Larry Voyles
 
Pardon my interjection here. I'd like to throw an issue at you. FYI, I'm currently long TAVA, but strictly as a momentum play. I doubt I'll be holding this stock this time next month. I don't fall in love with my stocks.

I'm doing Y2K evaluation and remediation for a small piece of a "major airline". I'd seriously like to know how to turn the evaluation and remediation of 250,000 lines of custom-written mixed-language code, JCL, several diverse data sources and other procedures into a simple maintenance issue. These systems interrelate and exchange data with many, many other systems that will also require remediation. In many, many cases, these systems are currently exchanging 2-byte years in the data.

There's just too much existing code and interrelated hardware and system dependencies that someone is going to have to look at, and there just aren't enough someones to go around. Plus, somebody has to write the "bridges" between the date-exchanging systems, since not all the systems will magically be upgraded instantaneously. We also have to keep the planes fueled up and full of peanuts and stale food, not to mention meeting the time-sensitive regulatory requirements of the FAA.
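For anyone wondering what one of those "bridges" actually does, here's a minimal sketch in Python (the real thing would be COBOL or similar, and the pivot year, function names, and error handling here are my own illustration, not this airline's code). It uses the common fixed-window technique: a pivot year decides which century a 2-digit year belongs to when data crosses between a remediated system and a legacy one.

```python
# Hypothetical bridge between a remediated (4-digit year) system and a
# legacy system still exchanging 2-digit years, using a fixed window.

PIVOT = 50  # assumption: 2-digit years 00-49 -> 20xx, 50-99 -> 19xx


def expand_year(yy: int) -> int:
    """Convert a 2-digit year from a legacy record to 4 digits."""
    if not 0 <= yy <= 99:
        raise ValueError(f"not a 2-digit year: {yy}")
    return 2000 + yy if yy < PIVOT else 1900 + yy


def truncate_year(yyyy: int) -> int:
    """Convert a 4-digit year back to 2 digits for the legacy side.
    Only safe inside the window the pivot defines (here 1950-2049)."""
    if not 1900 + PIVOT <= yyyy <= 1999 + PIVOT:
        raise ValueError(f"year {yyyy} falls outside the window")
    return yyyy % 100


print(expand_year(98))      # -> 1998
print(expand_year(2))       # -> 2002
print(truncate_year(2002))  # -> 2
```

The catch is that every system on either side of the bridge has to agree on the pivot, and the bridge itself is one more piece of code that has to be written, tested, and eventually retired.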

Warm bodies are very expensive nowadays, and we just can't seem to hire enough bodies with the smarts to do the job without all that annoying and time-consuming training and orientation. Remediation firms, such as ALYD and TAVA, are really good at spotting possible Y2K "boo-boos" in the code and data, but they don't know the business process and "motivations" behind the systems. So, we outsource the "data mapping" process and some of the brain-dead remediation to specialty firms. That's how such firms are making a mint off of us. Our need for outsourcing is increasing as we get further into the process.

That still leaves a bunch of code (like my 250,000 lines) that somebody is going to have to eyeball, not to mention the supporting databases and inter-system dependencies. We also have to test (and repair) the supposedly-remediated code that the specialty firms have sent back to us. Rarely is it 100% perfect. It's that final 1% that seems to take two-thirds of the effort. It's not a real brain-racking process, but it does take time and effort to make it happen.

Is there a simple solution that doesn't involve outsourcing?




To: Bill Wexler who wrote (22561), 8/23/1998 3:09:00 PM
From: Joe T
 
Here is my reply to your Message 5560537.

This is possibly boring and long-winded. It's pretty much at the level of what you learn in Programming 101, but it's the only way I know how to make my point. I am trying to explain why your following assertion is NOT true:

"The Y2K problem is a hoax designed to make a millennial crisis out of a relatively straightforward software/hardware upgrade and maintenance issue."

Changing one line of code in one program A can create problems for a dozen other programs that read in the data updated or created by program A. Let's call these dozen other programs B1 thru B12. If program A's output file contains some 'bad' data, such as a field that was updated incorrectly (due to faulty Y2K date logic), then B1 thru B12 all have the potential of having problems of their own as a result of the bad data. Programs B1 thru B12 could also each be creating or updating data that is read in by a dozen other programs. Let's call these 144 other programs C1 thru C144. Programs C1 thru C144 each have the potential for problems now, because they could be updating or creating bad data as a result of the bad data being fed into them from programs B1 thru B12. So in theory you have one bad line of code in program A with the potential to cause problems in 156 other programs (B1 thru B12 + C1 thru C144).
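The arithmetic above is easy to sketch. This tiny Python snippet just restates the paragraph's numbers; the fan-out factor of 12 is the post's illustration, not a measured figure:

```python
# One bad program (A) feeds 12 programs (B1-B12), each of which feeds
# 12 more (C1-C144). Count the programs potentially touched by one
# bad line of code in A.

FAN_OUT = 12

level_b = FAN_OUT            # B1..B12, read A's output directly
level_c = FAN_OUT * FAN_OUT  # C1..C144, once removed from A
total = level_b + level_c

print(total)  # -> 156
```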

It is possible that programs B1 thru B12 will not have any problems when the bad data from A goes through them, but then one of the C1 thru C144 programs could. The bad data from A passes through the B1 thru B12 programs without any negative effect, but it does not pass through the C1 thru C144 programs without problems. So say you now have a problem in program C120. Where did the problem originate? You know that B1 thru B12 updated or created the data going into C120, but you can't see any problems with those programs. So then you have to look at A and hope you can find the problem there.
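That hunt upstream is mechanical if you have (or can reconstruct) a map of which program feeds which; here's a toy sketch in Python with hypothetical program names, not anything from a real shop. It inverts a producer-to-consumer map and walks backward from the program where the bad data surfaced:

```python
# Trace every upstream program whose output could have contributed
# bad data to a given program. All names are hypothetical.

feeds_into = {            # producer -> programs that read its output
    "A": ["B1", "B2"],
    "B1": ["C1", "C2"],
    "B2": ["C3"],
}

# Invert the map: consumer -> its producers.
read_from = {}
for producer, consumers in feeds_into.items():
    for consumer in consumers:
        read_from.setdefault(consumer, []).append(producer)


def upstream(program):
    """All programs, however many hops removed, feeding `program`."""
    seen, stack = set(), [program]
    while stack:
        for p in read_from.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen


print(sorted(upstream("C3")))  # -> ['A', 'B2']
```

The real pain the post describes is that in a 1200-program shop this map usually exists only in the heads of people who have already left.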

Figuring out these relationships is tedious and time consuming. Looking for problems in a program is tedious and time consuming. The more changes you make, the more potential you have for problems that are 'once removed', 'twice removed' and 'three times removed'. Sometimes you have expert staff who have worked with the system for years and understand all these relationships, but more often than not those people have left for higher pay. You have tools to help you find these relationships, but they don't provide 100% coverage.

Normal maintenance might involve changing five or six programs and then testing to make sure everything still works OK. The above scenario is manageable in that situation. Y2K might require that you make changes to 300 out of 1200 programs in a system. As you add more changes, the complexity grows exponentially.
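A rough back-of-the-envelope comparison, reusing the fan-out of 12 from earlier in the post: this is pure illustration and overcounts, since in a real system the downstream sets of different changed programs overlap heavily.

```python
# Compare the downstream exposure of a routine 6-program change with a
# 300-program Y2K change, assuming each changed program feeds ~12
# others across two levels. Illustrative only; real graphs overlap.

FAN_OUT = 12
per_change = FAN_OUT + FAN_OUT ** 2  # direct + once-removed = 156

routine = 6 * per_change
y2k = 300 * per_change

print(routine)  # -> 936 potentially affected data paths
print(y2k)      # -> 46800
```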

I'm sorry, Mr. Wexler, but there is no way I will ever agree with you that what we are talking about here is a relatively straightforward software/hardware upgrade and maintenance issue.