Technology Stocks : Discuss Year 2000 Issues

To: Cheeky Kid who wrote (1667) - 5/5/1998 8:44:00 AM
From: otter
 
I'm one of those people who built business software systems using a 2 digit date. I and everybody else I knew did it for all the reasons you've heard. We didn't care very much about problems in the year 2000 because we knew then that the software systems we were creating would be obsolete well before that (a system built in 1965 would be 35 years old by then; one built in 1975 would be 25 years old). Moreover, in those rare instances when we needed to deal with dates so far in the future that they referred to a time in the new century, we dealt with it. This was in an era when the accepted wisdom, reinforced by experience, was that the average information system lasted roughly 3 years before a major overhaul. And at the time, that wisdom wasn't wrong.

I was one of the better mainframe assembler language programmers around (I wasn't bad at COBOL, either). The date conventions in the systems and databases we developed, and the mathematical and other processes built around them, were chosen in the context of the specific application's requirements - and they were relatively straightforward. We used:

A binary date, formatted as mmddyy, allowing it to fit into a 4 byte field.
A 'Julian date' of the format yyddd, stored as packed decimal, allowing that field to fit into a 3 byte field - very useful if we were going to perform mathematical computations on it or sort it (a sketch of this packing follows the list).
A 'Gregorian date' formatted as yymmdd (if we were going to sort the database on that field), or mmddyy if we were going to primarily display it. Sometimes we stripped the sign (positive or negative) from the field in order to store it as packed decimal in a 3 byte field.
I remember dealing with one system where, in the interest of saving space, we had a 1 digit year and a 1 digit month (months 10-12 were represented by the letters a, b, and c) - a system from the mid-60s.
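To make the packed decimal item above concrete, here is a small sketch in C (my own illustration - the real routines were assembler or COBOL, and the names here are made up) of how a yyddd date ends up in 3 bytes: one decimal digit per nibble, with a sign nibble at the end.

#include <stdio.h>
#include <stdint.h>

/* Illustration only: pack a two-digit-year Julian date (yyddd) into
   3 bytes of packed decimal - five digit nibbles plus a sign nibble,
   the way a mainframe COMP-3 field holds it. */
static void pack_yyddd(int yy, int ddd, uint8_t out[3])
{
    int d[5] = { yy / 10, yy % 10, ddd / 100, (ddd / 10) % 10, ddd % 10 };

    out[0] = (uint8_t)((d[0] << 4) | d[1]);
    out[1] = (uint8_t)((d[2] << 4) | d[3]);
    out[2] = (uint8_t)((d[4] << 4) | 0x0C);   /* 0xC marks a positive value */
}

int main(void)
{
    uint8_t buf[3];
    pack_yyddd(98, 125, buf);                 /* 5/5/1998 is day 125 of year 98 */
    printf("%02X %02X %02X\n", buf[0], buf[1], buf[2]);   /* prints 98 12 5C */
    return 0;
}

Because the digits collate left to right, the packed field sorts correctly and converts easily for arithmetic - which is why it was the format of choice for computation and sorting.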

We also created date handling subroutines (as did everybody else at the time) that performed standard mathematical and other processes, including the logic involved in handling dates from before the turn of the century (the 1800s).
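As a sketch of the kind of thing those subroutines did (again my own illustration in C, not the original code), the usual trick was to turn a full date into a serial day number so that differences and comparisons worked across century boundaries, including dates back into the 1800s. The formula below is the standard civil-calendar-to-day-number conversion.

#include <stdio.h>

/* Convert a full Gregorian date to a serial day number so that
   differences and comparisons work across century boundaries.
   Standard Fliegel & Van Flandern formula (integer arithmetic). */
static long day_number(int year, int month, int day)
{
    long a = (month - 14) / 12;               /* -1 for Jan/Feb, 0 otherwise */
    return (1461L * (year + 4800 + a)) / 4
         + (367L * (month - 2 - 12 * a)) / 12
         - (3L * ((year + 4900 + a) / 100)) / 4
         + day - 32075;
}

int main(void)
{
    /* Span from a pre-1900 date to a post-2000 date - the case the
       subroutines had to get right. */
    long days = day_number(2000, 3, 1) - day_number(1899, 3, 1);
    printf("%ld days\n", days);               /* 36890: 101 years incl. 25 leap days */
    return 0;
}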

The point of all of this is that (1) the routines we created were in fact relatively straightforward; (2) the processes were common and, while certainly not standardized, (3) easy to recognize and (4) easy to deal with - assuming the levels of competence with computer languages and systems that were required at the time. And, most importantly, all of it was done in the context of the requirements of the system and the use to which it would be put.

A monolithic solution approach (e.g. all date expressions should carry a 4 digit year) sounds convenient, and might be for many applications. It is, however, counter-intuitive to require that people always enter dates with a 4 digit year. Human nature being what it is, any expression of a date of the form 10/22/05 means what it means only in context - but in context it's most often obvious. If it is the birthdate of a person who is alive in the year 2002, then it's obvious to the viewer that the person was born in 1905. If, on the other hand, it's an amortization schedule, the safe assumption is that it refers to the year 2005. No value is added to either domain by requiring the entry, or the display, of the first two digits of a 4 digit year. In my opinion, it only becomes a hard requirement for those software systems whose date handling requirements span 100 or more years.
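In code, that context-dependent reading is just a windowing rule. The sketch below is mine, and the pivot values are examples only (the 2027 figure in the next paragraph is the same idea): expand a 2 digit year against a pivot chosen from what the field means.

#include <stdio.h>

/* Illustration of century windowing: expand a 2 digit year using a pivot
   chosen from the field's context. A year that would land after the pivot
   is pushed back into the previous century. */
static int expand_year(int yy, int pivot)
{
    int full = (pivot / 100) * 100 + yy;
    if (full > pivot)
        full -= 100;
    return full;
}

int main(void)
{
    /* "05" as the birthdate of someone alive in 2002 -> 1905 */
    printf("birthdate 05 -> %d\n", expand_year(5, 2002));
    /* "05" on an amortization schedule, windowed out to 2027 -> 2005 */
    printf("payment 05   -> %d\n", expand_year(5, 2027));
    return 0;
}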

Quicken's approach, given the context of the system, isn't necessarily wrong (it's based on current PC architectures that support date conventions until the year 2027). Requiring that all systems adopt a single format isn't, however, right - and might at the end of the day be the most costly and least easily adopted of all the available solutions.