Re: Vertex 2000 and Bigits (explained?)
I understand what Bemer is trying to do, but since I haven't read an explanation that I think a non-programmer can understand, I thought I'd give it a shot...
To understand how computers operate, one merely has to understand the terms "on" and "off" -- either a particular spot on a chip holds an electrical charge or it does not. If it is on, we represent that with a "1"; if off, a "0". Since there are only two numeric choices, this is called binary notation.
We should have learned somewhere in our lives that the computer term for a character (a letter, number, symbol, etc.) is a "byte". However, since computers think in binary, scientists had to find a way to represent each byte as a bunch of 0s and 1s. The big dilemma was just how many combinations would be required to represent all the characters we wanted computers to be able to recognize. The committee, which I presume included Bob Bemer, settled on a byte of 8 bits, which allows 256 possible patterns -- everything from "00000000" to "11111111". [Those who remember high school math know that 2 to the 8th power (2*2*2*2*2*2*2*2) = 256. (gg)]
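If you want to check that arithmetic for yourself, a few lines of Python (used here purely for illustration, nothing Bemer-specific) will do it:

print(2 ** 8)              # 256 -- the number of patterns 8 bits can hold
print(format(0, "08b"))    # 00000000, the smallest pattern
print(format(255, "08b"))  # 11111111, the largest pattern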
The next task was simply to make a two-column list. The first column consisted of all the characters scientists thought were important for the computer to know, and the second column gave each one a unique code number so there would never be any confusion. The result was the (A)merican (S)tandard (C)ode for (I)nformation (I)nterchange -- ASCII for short. (Strictly speaking, the original ASCII standard only used 7 of the byte's 8 bits, so it defined codes 0-127; the other 128 values were left for later extensions.)
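You can peek at that two-column list from any Python prompt, since the built-in ord() and chr() functions go straight from characters to their code numbers and back (again, just an illustration):

for ch in ["A", "a", "7", "$"]:
    print(ch, "->", ord(ch))   # each character and its ASCII code number
print(chr(65))                 # 'A' -- going from the code back to the character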
For whatever reason, the digits 0-9 were assigned the ASCII codes 48-57. If we translate those codes into binary, we get the range 00110000 to 00111001. Notice that the first four bits, "0011", stay constant, while the last four bits count from 0000 up to 1001 -- the digit itself. So when software sees a byte with that "0011" prefix (and a low half of 0000 through 1001), it knows the byte is a digit character.
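Here is that pattern made visible, along with the kind of prefix test a program could use to spot a digit byte (plain Python again; the is_digit_byte check is my own example, not part of any standard):

for digit in "0123456789":
    code = ord(digit)
    print(digit, code, format(code, "08b"))   # all ten patterns start with 0011

def is_digit_byte(b):
    # high four bits must be 0011 and the low four bits must be 0 through 9
    return (b >> 4) == 0b0011 and (b & 0x0F) <= 9

print(is_digit_byte(ord("7")))   # True
print(is_digit_byte(ord("A")))   # False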
So, now the computer knows it has found a digit. Herein lies the $600B question: let's say, for example, the computer is given the digits "9" and "7" together (in binary form, of course), which means the software is trying to do something with the decimal number "97". How can the computer be sure that "97" is, well, just plain old 97, as opposed to shorthand for the year "1997"? Bemer says "simple": come up with other four-bit prefixes (besides "0011") to designate the missing part of the date, i.e. one prefix for a digit from the 1800s, another for the 1900s, another for the 2000s, and so on. The result is that we still have only 8 bits in our byte, but we have created a new type of byte, which Bemer calls a "bigit".
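To make the idea concrete, here is a rough sketch of how such a repurposed prefix could work. The prefix values and the decode_year function below are made up for this example -- they are not Bemer's actual bigit assignments -- but they show how one byte can carry both the digit and the century:

# Illustrative sketch only: these prefix values are invented for this example,
# not Bemer's actual bigit assignments.
CENTURY_PREFIXES = {
    0b0011: None,   # ordinary ASCII digit -- carries no century information
    0b1010: 1800,   # hypothetical marker: "tens digit of an 18xx year"
    0b1011: 1900,   # hypothetical marker: "tens digit of a 19xx year"
    0b1100: 2000,   # hypothetical marker: "tens digit of a 20xx year"
}

def decode_year(high_byte, low_byte):
    # The low four bits of each byte still hold the digit value, exactly
    # as they do in plain ASCII ("9" is 00111001, so its low half is 1001).
    tens = high_byte & 0x0F
    ones = low_byte & 0x0F
    two_digit = tens * 10 + ones
    century = CENTURY_PREFIXES.get(high_byte >> 4)
    if century is None:
        return two_digit           # plain "97" -- still ambiguous
    return century + two_digit     # e.g. 1900 + 97 = 1997

print(decode_year(ord("9"), ord("7")))            # 97   (plain ASCII)
print(decode_year((0b1011 << 4) | 9, ord("7")))   # 1997 (1900s marker in the prefix)

Notice that the data doesn't get any bigger: the century information rides along in bits the digit wasn't using, which is why we still have only 8 bits per byte.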
So, what Bemer is trying to do is absolutely possible, not a bunch of computerese designed to fool people.
However, in order for it to work, sweeping modifications have to be made to the software that produces and runs object code: the compilers and runtime systems. Not only that, in order to accurately process data that represents dates, you have to modify the data itself (turn it into bigits) and then store it in its new form. As I've said before, the axiom in the industry is "don't mess with my data". If that's not bad enough, in order for anyone reading printed output to recognize a bigitized date, you have to use special fonts. For example, Bemer might print "97" meaning "1997" by putting a line under the "9".
Lastly, let's assume for the sake of argument that Bemer can convince IBM to modify its compilers, linkers, runtime systems, fonts, etc. to conform to his specifications. We are still left with the original assumption: that he can indeed identify a date in the first place. Considering the major Y2K tool vendors are still uncovering new convoluted schemes to manipulate dates, that's a pretty tall order, IMO. And time keeps ticking away...
- Jeff