Technology Stocks : Y2K (Year 2000) Stocks: An Investment Discussion


To: TEDennis who wrote (7051) 10/23/1997 2:26:00 PM
From: Gerald Underwood
 
Ted,

While I certainly am grateful for your educated reply, and I generally respect your technical opinion, I would have to say this:
If you, with your some 25 years of experience, can come up with several gotchas about Bemer's solution within a couple of hours, it seems logical to me that those gotchas would also have occurred to Mr. Bemer, with his 48-plus years of experience, at some point in the months he has been testing his concept. Using common logic, he would either have solved those gotchas before committing his software to beta in a market environment, or worked on them in silence until he solved them to his satisfaction. JMO.

Best Wishes,

Gerry



To: TEDennis who wrote (7051) 10/23/1997 2:53:00 PM
From: Jeffrey S. Mitchell
 
Re: Vertex 2000 and Bigits (explained?)

I understand what Bemer is trying to do, but since I haven't read an explanation that I think a non-programmer can understand, I thought I'd give it a shot...

To understand how computers operate, one merely has to understand the terms "on" and "off"-- either a particular spot on a chip has an electronic charge or it does not. If it is on, we represent that with a "1", if off, a "0". Since there are only two numeric choices, this is called binary notation.

We should have learned somewhere in our lives that the computer term for a character (a letter, number, symbol, etc.) is a "byte". However, since computers think in binary, scientists had to find a way to represent bytes as a bunch of 0s and 1s. The big dilemma was just how many combinations would be required to represent all the characters we wanted computers to be able to recognize. The committee, which I presume included Bob Bemer, decided on limiting things to 256 possible choices, which, in binary, meant from "00000000" to "11111111". Notice that there are 8 bits that make up a byte. [Those who remember high school math know that 2 to the 8th power (2*2*2*2*2*2*2*2) = 256. (gg)]

The next task was simply to make a two-column list. The first column consisted of all the characters scientists thought were important for the computer to know, and the second column consisted of a unique number from 0-255 so there would never be any confusion. The result was an (A)merican (S)tandard (C)ode for (I)nformation (I)nterchange -- ASCII for short.
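That two-column list is easy to reproduce today. As a quick illustration (plain Python using its built-in character tables, nothing specific to the 1960s committee work), each of the 256 codes maps to exactly one character and back:

```python
# Rebuild the two-column list described above: one column of codes
# 0-255, one column of the character each code stands for.
table = [(code, chr(code)) for code in range(256)]

assert len(table) == 2 ** 8                  # 8 bits per byte -> 256 rows
assert ord("A") == 65 and chr(65) == "A"     # each pairing is unambiguous
```

Because the mapping is one-to-one in both directions, there is "never any confusion", exactly as Jeff says.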

For whatever reason, the numbers 0-9 were assigned the ASCII codes 48-57. If we translate those codes into binary, we get the range from 00110000 to 00111001. Now, notice that the first four bits, "0011", remain constant. Therefore, when a computer sees that pattern, it automatically knows the byte is a digit.
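That "0011" observation is easy to verify in any language; here is a quick Python check (illustrative only):

```python
# Verify the claim above: ASCII assigns "0"-"9" the codes 48-57, and
# every one of those bytes begins with the same four bits, 0011.
codes = [ord(ch) for ch in "0123456789"]

assert codes == list(range(48, 58))
assert all(format(c, "08b").startswith("0011") for c in codes)
assert format(ord("0"), "08b") == "00110000"
assert format(ord("9"), "08b") == "00111001"
```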

So, now the computer knows it has found a number. Herein lies the $600B question: let's say, for example, the computer is given the digits "9" and "7" together (in binary form, of course), which means the software is trying to do something with the decimal number "97". How can the computer be sure that "97" is, well, just plain old 97, as opposed to shorthand for the year "1997"? Bemer says "simple": come up with other four-bit combinations (besides "0011") to designate the missing part of the date, i.e. a four-bit binary code to designate the 19th century, the 20th century, etc. The result is that we still have only 8 bits in our byte, but we have created a new type of byte, which Bemer calls a "bigit".

So, what Bemer is trying to do is absolutely possible, not a bunch of computerese designed to fool people.

However, in order for it to work, sweeping modifications have to be made to the software that processes object code: the compilers and runtime systems. Not only that, in order to accurately process data that represents dates, you have to modify it (make it a bigit) and then store it in its new state. As I've said before, the axiom in the industry is "don't mess with my data". If that's not bad enough, in order to avoid problems identifying printed data as a bigitized date, you have to use special fonts. For example, Bemer might specify "97" as meaning "1997" by putting a line under the number "9" (of "97").

Lastly, let's assume for sake of argument that Bemer can convince IBM to modify its compilers, linkers, runtime systems, fonts etc to conform to his specifications. We still are left with the original premise that he can indeed identify a date in the first place. Considering the major Y2K tool vendors are still uncovering new convoluted schemes to manipulate dates, that's a pretty tall order, IMO. And time keeps ticking away...

- Jeff



To: TEDennis who wrote (7051) 10/23/1997 11:03:00 PM
From: Hardware Heister
 
'Mr. Bemer's comments definitely oversimplified it, which is why I thought it necessary to post this. His solution will work on some very simple COBOL programs where there were entry level programmers who knew nothing but 'display' format dates. I think it will run into the gotchas noted above in the more typical programs encountered that have various date formats that must be handled.'

TEDennis: While I respect your opinion, I think you're incorrect in your analysis of Mr. Bemer's work. I don't think he's using Abends; I suspect he's going to use straight jumps. I also think he has enough experience in the programming world to understand the variety of internal representations and must be working on a way of dealing with this. I'm not going to guess as to how it is finally received, but I believe the reason he is keeping fairly quiet as to how it works is that he is in the process of applying for a patent.

I am guessing, as you are, but I am going to give him the benefit of the doubt. From reading his writing, he strikes me as quite intelligent, and much of his earlier work leapt barriers that other people prior to him hadn't been able to find a way around. If a unique way is going to be found to 'fix' this problem (even if it's only on IBM MVS systems: they pay the best anyway), it may well be Mr. Bemer's work.



To: TEDennis who wrote (7051) 10/26/1997 7:54:00 PM
From: tom rusnak
 
TED, re: your analysis of Vertex in post 7051:
Even though you offer the disclaimer that these are only your opinions, much of that is lost on the reader by the time they reach the end, or after your message has been repeated on several other threads. You have assumed Vertex is using a "jump" technique that inserts invalid opcodes into the program, and then you go on to describe the difficulties with such a method. But even if that is NOT what Vertex is doing, the opinions you put forward stay with the investors, potential investors, and prospective clients in search of a solution.

The invalid-opcode approach is a good technique when you limit the amount of code you overlay within the executing program to 2 bytes. That is certainly necessary if you don't know the type of instruction you are "hooking", as it could be a two-byte instruction. But if you recognize that you will only want to insert a jump on a Storage-to-Storage instruction, you have 6 bytes to utilize: plenty of room for a Load and BALR combination to take you elsewhere. There are also more extreme techniques, such as SVC subsystem screening for the LOAD, LINK, ATTACH, and XCTL supervisor calls, so that a product can gain control over an executable load module in main memory before the program begins execution. This is currently used by some date-simulation products to replace the Store Clock instruction at execution time. And there is this tantalizing tidbit from the BMR website: "If the computer hardware is smart enough to understand object code, cannot a program imitating it in many ways do the same? Software interpreters existed decades ago." And I'm sure there are tricks of the trade that you and I haven't seen before. Well, at least that I haven't seen, anyway.
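None of us outside BMR knows which technique Vertex actually uses, but the general idea of overlaying an instruction with a branch to a trampoline, which runs extra logic and then the saved original, can be modeled in a few lines. The following is a toy Python interpreter invented for illustration, not real S/370 code:

```python
def run(memory, state=None):
    """Execute toy instructions starting at address 0 until 'halt'."""
    state = state if state is not None else {"acc": 0, "log": []}
    pc = 0
    while True:
        op, *args = memory[pc]
        if op == "halt":
            return state
        elif op == "add":                  # do some work, fall through
            state["acc"] += args[0]
            pc += 1
        elif op == "branch":               # unconditional jump
            pc = args[0]
        elif op == "call":                 # run a hook handler, continue
            args[0](state)
            pc += 1
        else:                              # an unknown opcode would trap,
            raise ValueError(op)           # like the invalid-opcode abend

def hook(memory, addr, handler):
    """Overlay the instruction at addr with a branch to a trampoline."""
    saved = memory[addr]
    trampoline = len(memory)               # free space at the end
    memory[addr] = ("branch", trampoline)  # the in-place overlay
    memory.append(("call", handler))       # extra work first...
    memory.append(saved)                   # ...then the saved original,
    memory.append(("branch", addr + 1))    # then branch back

program = [("add", 40), ("add", 2), ("halt",)]
hook(program, 1, lambda s: s["log"].append("intercepted"))
result = run(program)
assert result["acc"] == 42 and result["log"] == ["intercepted"]
```

The hooked program still computes exactly what it did before; the hook merely gets control first, which is the property any non-invasive Y2K interception scheme would need.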

Now, I do know for a fact that Vertex 2000 does not insert invalid opcodes in the program to generate program checks. I simply asked them. Unlike our friends at IVXR, who made wild claims and haven't produced any technical details, the folks at BMR Software are quite approachable. I have emailed several queries off to info@bmrsoftware.com and have received back very prompt and very detailed responses. I have not yet ascertained the true technique used for 'jumping', but I have received a response telling me that they do not use the method you described.

As for what happens to the bigits when the data is PACKed: I assume that Vertex inserts a 'jump' before the PACK instruction, and then probably does some form of software emulation of the hardware instruction, so that it is able to reinsert the correct bigits when it subsequently unpacks the data.
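The PACK problem is easy to demonstrate. Roughly speaking, PACK keeps only the digit nibbles of a zoned-decimal field, plus the zone of the rightmost byte (which becomes the sign); every other zone nibble, which is exactly where a bigit's century tag would presumably live, is discarded. A rough Python model (simplified, and using ASCII bytes rather than EBCDIC):

```python
# Simplified model of PACK: digit nibbles of all bytes are kept, the
# zone of the rightmost byte becomes the sign nibble, and every other
# zone nibble is thrown away.
def pack(zoned: bytes) -> bytes:
    nibbles = [b & 0x0F for b in zoned]   # keep the digit halves
    nibbles.append(zoned[-1] >> 4)        # rightmost zone -> sign
    if len(nibbles) % 2:
        nibbles.insert(0, 0)              # pad to a whole byte
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

plain  = pack(b"\x39\x37")  # ASCII "97"
tagged = pack(b"\x49\x37")  # hypothetical bigit: a century-tagged "9"
assert plain == tagged == b"\x09\x73"  # the tag in the zone is lost
```

This is why the guess above is plausible: once a real PACK has executed, the tag is gone, so a product like Vertex would have to intercept the instruction and emulate it in software to restore the bigits at unpack time.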

Most of you reading this are unaware of this fact, but I know TED personally; in fact, I spent 4 years working for him directly, so this is certainly a case of the tail ever so delicately trying to wag the dog. I held you in the highest regard back then, and I still have the utmost respect for you and your opinions. However, I must say that I am a bit underwhelmed by your treatment of Mr. Bemer and his work. I agree that a 'cutesy' name such as Bigits doesn't exactly scream out Rocket Science and Doctorates; perhaps he just wanted some name recognition after all these years. Do you recall someone in your past naming a Trace Print Routine (TPR) module after his own initials? You did know that, didn't you? I know that you typically treat others with respect, and you've probably crossed over that grey area between the fun and games of FBN and the seriousness of Y2K, as well as the fragile nature of some investors who look single-handedly to you for 'techie' advice on investment decisions. Please be a bit more sensitive to the potential damage that you can cause by misinformation based on assumptions, and by belittling someone's efforts at helping to solve this tremendous problem.

I know that you are a purist who would like to see everyone expand everything to 4 digits. But I'm sure that budgetary constraints and timing constraints are going to cause a lot of folks to look for alternatives. Certainly any solution, including complete expansion, may have drawbacks, and by all means point out those drawbacks based on the facts. But please take off your FBN hat every now and again, be TEDennis, and treat others with the courtesy and respect that you normally would. Perhaps you may receive an unsolicited reply or two to some of your concerns.

As for my take on the solution, I'm still trying to come to terms with the non-'human readable' aspects of the data after it has been modified. I suppose this situation also arises for anyone advocating 'program encapsulation'. But I am a firm believer that solutions that do not require the source code are possible, are being developed, have been piloted, and will be used.

Respectfully yours, (as always)
wishing you and yours a white sedona xmas,

tom rusnak (lower case as always)