Technology Stocks : Discuss Year 2000 Issues


To: C.K. Houston who wrote (5408), 4/9/1999 8:10:00 PM
From: Cheeky Kid
 
More doom and gloom. And I don't read Westergaard's site anymore, as it is way too negative for my tastes.

ZDNET is very balanced and a great source of info.
zdnet.com

In my opinion.

BTW,
April was another critical date for Y2K; where are the problems?



To: C.K. Houston who wrote (5408), 4/9/1999 8:39:00 PM
From: flatsville
 
Cheryl--Excellent find and post. (eom)



To: C.K. Houston who wrote (5408), 4/12/1999 8:09:00 AM
From: J.L. Turner
 
Cheryl, I thought this response was interesting.

Steve Peltz responds to the secondary clock problem as proposed by Bruce Beach:

First, one quote out of order, because there's a problem with the
internal consistency of what you say:

>Wait a minute! This water works had 90,000 chips that need testing?
>And 1,000 that needed something done!!!!!

Which is around 1%, not the 25% you seem to be worried about...

And now, on to the rest:

>they do NOT understand the embedded processor SECONDARY clock issue.

No one is talking about what you call the "PRIMARY" clock (of which
there are often several...). There are several different types of Real
Time Clocks (anything that clicks over in units that can be related to
real seconds).

Some keep track of the Year/Month/Day/Hour/Minute/Second (and,
consequently, need to account for leap year in the logic!). Many of these
only use two digits for the year, and thus can confuse a BIOS that isn't
expecting it. The simple fix for a BIOS is to change it to treat all year
values below a certain cutoff as 20xx instead of 19xx. Currently, you
could even set that cutoff to 99, which means such a BIOS will be good
until 2099. If the BIOS isn't fixed, but passes on the 2 digits unchanged,
the OS can easily correct it in a similar fashion. The real problem is
with a BIOS that figures that if the year is 00, the RTC must not be
set, so silently reports that it is 1980 or some other default date.
An OS could detect that, but would have to get direct access to the
clock to do anything about it.
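
A windowing fix of that sort is only a few lines. A rough sketch in C
(the name full_year and the PIVOT value are just illustrations, not any
particular BIOS's code):

    /* Two-digit-year "windowing": years below the pivot are taken as
       20xx, the rest as 19xx.  With the pivot at 99, only a raw value
       of 99 maps to 1999 and everything else is 20xx, which keeps the
       scheme working until late in the 21st century. */
    #define PIVOT 99

    int full_year(int yy)        /* yy = raw 00..99 value from the RTC */
    {
        return (yy < PIVOT) ? 2000 + yy : 1900 + yy;
    }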

There are other schemes that could have Y2K dependencies, such as a
YYDDD value (with day-of-year). They still need leap-year logic, but it
is simpler.

Other clocks simply keep a second ticker. The size of the counter,
and the base date (i.e. what the value "0" means) can vary. The Mac,
for instance, has a 32-bit unsigned counter where 0 is 1/1/1904 (which
means that all of the dates it can represent can ignore the 100 and
400-year leap-year rules).
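(For scale: 2^32 seconds is 4,294,967,296 / 86,400, or about 49,710 days,
roughly 136 years, so a counter based at 1/1/1904 runs out early in 2040;
and since 2000 is divisible by 400, every fourth year between 1904 and
2040 really is a leap year, which is why the simple divide-by-4 rule
suffices over that range.)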

Even in embedded systems, it is usual to only read the RTC occasionally,
often only at power-on. You use another mechanism to keep track of the
current time, usually a millisecond counter (or even microsecond counter
if you need smaller increments) maintained either by hardware or software.
Sometimes that counter is used to generate a periodic interrupt that
the software uses to maintain the current time-of-day.
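
As a concrete sketch (in C, with made-up names; the interrupt hookup
itself is hardware-specific and omitted), that pattern is usually nothing
more than this:

    #include <stdint.h>

    /* Free-running millisecond counter maintained by a periodic timer
       interrupt.  After power-on initialization, nothing here ever
       touches the battery-backed RTC again. */
    static volatile uint32_t ms_ticks;

    void timer_isr(void)         /* invoked by hardware every 1 ms */
    {
        ms_ticks++;
    }

    uint32_t millis(void)        /* what the rest of the code reads */
    {
        /* On an 8- or 16-bit part this read would need interrupts
           masked to be atomic; omitted to keep the sketch short. */
        return ms_ticks;
    }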

>Many of these processors have either a built-in SECONDARY clock,
>or, as is the more usual case, associated with them in a second chip,
>a SECONDARY clock whose time and date is maintained by the first clock
>just as the wheels of old mechanical time pieces were governed

This is just not true. Such a clock is almost always independently clocked
and powered, so that it maintains the time while the power is off (which
is really its only function).

>Among the instructions executed may be that of maintaining a SECONDARY
>clock.

Only to read it (usually only on power-up, or periodically to sync up
the time), or to change it in order to set the current date/time.

>The answer is that these devices require VERY little power for their
>own function.
>The power might be maintained for a while in a built-in capacitor for
>such a low level function,
>but for a more reliable long term source many of the microprocessors
>have associated with them a TINY very LONG TERM battery that lasts for
>years.

And the ONLY thing the backup battery maintains is the current time,
and sometimes a small amount of non-volatile memory (although that is
more often done now with FLASH memory, which doesn't need any power to
maintain). If the microprocessor were required to "service" the clock,
then it wouldn't keep time while the power was off.

>The companies that sold them the EPROM
>programming equipment, also provided the users' engineers
>with libraries of programs and information about
>how they could combine the programs and modify them.
>This became know as the LADDER concept.
>You started with a number of steps and then you added
>another step on the ladder.

Deh? Such software libraries would be tied to the microprocessor and
devices connected to it, not based on the EPROM programmer. They're
usually distributed as source code, since they usually need to be
customized. I've never heard of anything called "LADDER", but I've only
done development on a couple of embedded systems.

>Someone else wanting to do much of what you had done,
>then could use what you had done
>and add another step on the ladder.
>The intial LADDERs were much like Truth Tables.

Sounds more like you're talking about logic gates, not programs. They
typically wouldn't care in the slightest what the time actually was, and
a roll-over of a counter from 99 to 00 wouldn't be a problem. That's if
they bothered to handle the roll-over at all. If it rolls over to A0
instead, then some things might get a bit confused, but it still wouldn't
be likely to crash anything.

>Later more programming capabilites were added in mnemonic form,
>and most recently greater levels of abstraction have been obtained
>by a hybrid relationship between the original LADDERs and
>programs like Visual BASIC.

Visual BASIC? In embedded processors? Blech!

>Even in the companies that designed the EPROM burning equipment to
>begin with, and no one really knew, or knows to this day,
>what is hidden back down the ladder or is in the EPROM or PROM program.

Very unlikely. Extremely unlikely. It is quite possible that what is in
a particular device, now, after being programmed, is no longer available,
but the information was certainly available as it was being designed. Most
likely, the source code was available, modified, compiled or assembled,
and burned. Nothing hidden there.

>Even if we don't use it ourselves, we don't know that someone else
>on some other rung of the ladder or in some earlier version level of
>the program didn't use it.

About the only thing that would access a clock like that, without being
intentional about it, would be a mass of initialization code. Such code
should have ABSOLUTELY NO PROBLEM regardless of what the date chip says.

Now, I will say that many embedded systems use a small OS, and that OS
may set up interrupts and try to keep the time available for programs
that want to know the time. It is even possible that such time-keeping
could crash the OS when the date rolls over, even if no program ever
tries to access the date/time functions. However, that is EXTREMELY
unlikely. I've written such time-maintenance code, and it is quite
straightforward. Normally, you'd just increment a counter (that either
starts at zero, or starts based on the current date/time) and leave
the interpretation of the current wall time until a program checks it.
EVEN IF the count wraps around (which is the Unix 2038 problem, not the
Y2K problem), the OS wouldn't crash.
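
A sketch of that "just increment a counter, interpret it later" approach
(C, hypothetical names; gmtime here stands in for whatever date-conversion
routine the OS actually provides):

    #include <stdint.h>
    #include <time.h>

    static volatile uint32_t seconds_since_boot;   /* bumped once a second */

    void one_second_tick(void)        /* periodic timer interrupt */
    {
        seconds_since_boot++;         /* a wrap here doesn't crash anything */
    }

    /* Calendar interpretation happens only when a program asks for it. */
    void get_wall_time(time_t boot_time, struct tm *out)
    {
        time_t now = boot_time + (time_t)seconds_since_boot;
        *out = *gmtime(&now);
    }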

>they set its time as a part of the manufacturing process.
>The time was set in accordance with some original date
>of design of the clock as programmed into the original code
>and updated in accordance with the current time of when and where it
>was manufactured.

Maybe, maybe not. Since the clock is not particularly accurate, there
needs to be a way to set it, so why bother setting it initially?

>Chips often control things based on intervals.
>Such as checking every so often (perhaps in milliseconds)
>as to whether they should check to see if it has sensed a train coming
>and should lower the gates, or that the train has passed and that it
>should raise the gates.

This is true. Interval clocks are very important.

>For this purpose it may, and very often does,
>use the SECONDARY internal clock to keep track of how much time has
>passed and whether or not it should check for the presence of a train.

Nope, accessing the clock is usually a slow process. You'd access either
an interval counter (e.g. in milliseconds) or have an interval timer
that goes off periodically and increments a memory location. Then you
compare those.

Depending on the word length, there can be a problem when such a value
wraps around. Treating it as an unsigned value and simply subtracting
the old value from the current value takes care of such problems. Such
a wrap-around can occur quite frequently; with a millisecond clock and
32-bit counter, it happens about 7 times a year.
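
In C, that unsigned-subtraction trick looks like this (a sketch; millis()
is the hypothetical free-running counter from the earlier sketch):

    #include <stdint.h>

    extern uint32_t millis(void);   /* free-running ms counter, wraps at 2^32 */

    /* Nonzero once interval_ms has passed since 'start'.  Because the
       subtraction is done on unsigned values, the result is correct
       modulo 2^32, so the roughly-every-49.7-days wrap of a 32-bit
       millisecond counter doesn't matter. */
    int interval_elapsed(uint32_t start, uint32_t interval_ms)
    {
        return (uint32_t)(millis() - start) >= interval_ms;
    }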

>No one knows what logic the engineer designing the gate control used,

That's true. He could have been an idiot, requested an alpha date/time
value, and computed the number of seconds from another alpha date/time
value stored earlier. If so, there will be a problem if that format
only has a 2-digit year. Of course, he probably blew the logic anyway,
so there will be a problem once a year, or once a month, or once every
Feb 29, so they schedule maintenance on the device regularly and the
problem mysteriously never occurs (because powering it down reset the
date/time chip, because they never installed a battery backup because
the device never needed to know what time it was).

>Something which will happen ONE TIME ONLY and that is when
>the SECONDARY clock flips over to zero zero for the year Y2K.

Only if the clock is stored with a year value, and only if the program
reads the hardware to keep track of the current time. What would normally
happen in such an instance (where an interval timer is being checked)
is that the timer will never go off - this will often lead to the device
becoming effectively hung. Any critical system ought to have a deadman
timer to reset the device if it becomes inoperative. Any GOOD deadman
code, in a device that is relying on timers to trigger critical events,
will do it by scheduling a timer event on the deadman, rather than simply
hitting it every time through the main loop. Resetting it, or doing a
power-cycle, will have it re-read the hardware clock. If it thinks it
is 1900, no big deal, as all relative times will be correct.
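
A sketch of that deadman arrangement (C; kick_watchdog and schedule_timer
are hypothetical stand-ins for whatever the hardware and OS actually
provide):

    /* Hypothetical services assumed to exist on the device/OS: */
    extern void kick_watchdog(void);                      /* pet the hardware watchdog */
    extern void schedule_timer(void (*fn)(void), unsigned ms);   /* run fn after ms */

    /* Instead of kicking the watchdog from the main loop, kick it from
       a scheduled timer event.  If the timer machinery stops firing
       (e.g. an interval timer that never goes off), the watchdog stops
       getting kicked and the hardware resets the device. */
    void deadman_event(void)
    {
        kick_watchdog();                      /* hold off the hardware reset */
        schedule_timer(deadman_event, 100);   /* re-arm for 100 ms from now */
    }

    void startup(void)
    {
        schedule_timer(deadman_event, 100);   /* arm it once at power-on */
    }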

Where that might cause a problem is if the device has a maintenance
schedule or some other date trigger built in, and that date is
entered/stored as an alpha string. In such a case, you might have
two results: entering a trigger date for Y2K or beyond will trigger
immediately, OR the trigger will never go off. Either could obviously
cause some problems, but that also will only happen in devices that are
explicitly date-dependent, so it won't be a hidden problem.
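
For example, here's roughly what that failure looks like if the trigger
date is stored as a two-digit-year string and compared lexically (a
sketch; the function name and the "YYMMDD" format are made up):

    #include <string.h>

    /* Trigger check on "YYMMDD" strings.  "000115" (15 Jan 2000) compares
       LESS than "991231", so depending on the direction of the test the
       trigger either fires immediately or never fires at all once the
       century rolls over. */
    int due_for_maintenance(const char *today, const char *trigger)
    {
        return strcmp(today, trigger) >= 0;   /* wrong across 1999 -> 2000 */
    }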

>The matter of leap year calculation in the Year 2000,
>is of some particular novel interest. The following are the IEEE:
>W2715 Rules for determining whether a year
>is a Leap Year
>(G3.4.3)
>(a)If the number of the year is divisible by 4
>then the year is a Leap Year, except
>(b)If the number of the year ends in 00
>then the year is not a leap year;
>however
>(c)If the number of the year ends in 00
>and the year is divisible by 400
>the year is a Leap Year.
>Most people are aware of rule a
>and most programmers should know rule b
>but most people and many programmers overlook rule c.

It actually takes a pretty dumb programmer to get (b) right and ignore
(c). Just using (a) gets us through 2099.
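
All three rules fit in one line of C (a textbook version, not anyone's
particular implementation):

    /* Rules (a), (b) and (c) together. */
    int is_leap(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }
    /* is_leap(2000) == 1 by rule (c); is_leap(1900) == 0 by rule (b). */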

Incidentally, the 400-year rule will be off by a day every 3200 years or
so. Much better than a 100-year rule would have been a 128 year rule,
which would have only required one additional adjustment every 86,400
years (a 100/400/3200 year rule also requires an 86,400 year adjustment).
csy2k 4-12-99
J.L.T.