Technology Stocks : Intel Corporation (INTC)
To: Martin Atkinson-Barr who wrote (21780)
Date: 5/14/1997 2:07:00 AM
From: Loren Konkus
Thanks for posting those patent numbers. I've been looking for those
ever since waking to find my #1 holding was suing my #2 holding.

In an attempt to understand this better, I fetched the abstracts
from the US Patent and Trademark Office web site, uspto.gov.

I haven't seen the details posted before, so here they are:

4,755,936

A cache memory unit is disclosed in which, in response to the
application of a write command, the write operation is performed in
two system clock cycles. During the first clock cycle, the data
signal group is stored in a temporary storage unit while a
determination is made if the address signal group associated with
the data signal group is present in the cache memory unit. When the
address signal group is present, the data signal group is stored in
the cache memory unit during the next application of a write
command to the cache memory unit. If a read command is applied to
the cache memory unit involving the data signal group stored in the
temporary storage unit, then this data signal group is transferred
to the central processing unit in response to the read command.
Instead of performing the storage into the cache memory unit as a
result of the next write command, the storage of the data signal
in the cache memory unit can occur during any free cycle.
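If it helps to picture that two-cycle write scheme, here's a toy model in Python. All the class and method names are my own invention, not the patent's:

```python
# Toy model of the patented two-cycle cache write: a write parks in a
# one-entry temporary store first, commits on the next write command
# (or any free cycle), and reads check the temporary store first so
# they never see stale data.

class WriteBufferedCache:
    def __init__(self):
        self.lines = {}          # address -> data, the cache proper
        self.pending = None      # (address, data) held in the temp store

    def write(self, addr, data):
        # Cycle one of the new write: commit whatever was pending...
        self._commit()
        # ...while the new write sits in the temporary storage unit.
        self.pending = (addr, data)

    def read(self, addr):
        # Forward from the temp store if the read hits the pending write.
        if self.pending and self.pending[0] == addr:
            return self.pending[1]
        return self.lines.get(addr)

    def free_cycle(self):
        # The abstract notes the commit can also happen in any free cycle.
        self._commit()

    def _commit(self):
        if self.pending:
            addr, data = self.pending
            self.lines[addr] = data
            self.pending = None
```

So a read right after a write returns the new data even though the cache array itself hasn't been updated yet.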

4,847,804

In a multi-processor unit data processing system, apparatus and
method are described for providing that only the most recent
version of any data signal group will be available for manipulation
by a requesting data processing unit. A "multiple" state for a data
signal group is defined by the presence of a particular data signal
group stored in the cache memory units of a plurality of data
processing units. The "multiple" state is associated with each copy
of a data signal group by control signals. When a data signal group
is changed by the local data processing unit, an "altered" state is
associated with the new data signal group. The simultaneous
presence of an "altered" state and "multiple" state is forbidden
and requires immediate response by the data processing system to
insure consistency among the data signal groups. In addition to
apparatus for identifying and storing the state of the data signal
groups, apparatus must be provided for communication of the
selected states to the data processing units.
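The key rule there is that "altered" and "multiple" may never coexist. A rough sketch of how that plays out, with structures and names of my own choosing rather than the patent's:

```python
# Each cached copy carries "multiple" (shared by other caches) and
# "altered" (locally modified) state bits. Writing a shared line first
# purges the other copies, so an altered line is never also multiple.

class Line:
    def __init__(self, data):
        self.data = data
        self.multiple = False   # copies exist in other caches
        self.altered = False    # locally modified

class Cache:
    def __init__(self, system):
        self.system = system    # shared list of all caches in the system
        self.lines = {}

def read(cache, addr, memory):
    if addr not in cache.lines:
        cache.lines[addr] = Line(memory[addr])
        holders = [c for c in cache.system if addr in c.lines]
        if len(holders) > 1:
            for c in holders:       # mark every copy as shared
                c.lines[addr].multiple = True
    return cache.lines[addr].data

def write(cache, addr, data):
    line = cache.lines[addr]
    if line.multiple:
        # "altered" and "multiple" may not coexist: purge other copies.
        for other in cache.system:
            if other is not cache and addr in other.lines:
                del other.lines[addr]
        line.multiple = False
    line.data = data
    line.altered = True
```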

5,091,845

The invention provides a system for controlling the storage of
information in a cache memory and features a processor to be
connected to a bus, the bus including information signal transfer
lines for transferring information signals and a cache control
signal transfer line for transferring a cache control signal
having a plurality of conditions, the processor including a
cache memory and a bus interface circuit connected to the cache
memory and for connection to the bus, the bus interface circuit
including: i. an information signal transfer circuit for
performing a read operation in which it receives information
signals from the information signal transfer lines, the
information signal transfer circuit transferring the received
information signals to the cache memory; and ii. a cache control
circuit connected to the cache memory and the information signal
transfer circuit and for connection to the cache control signal
transfer line for controlling whether the received information
is to be stored in the cache memory in response to the condition
of the cache control signal.

5,125,083

An operand processing unit delivers a specified address and at
least one read/write signal in response to an instruction being
a source or destination operand, and delivers the source operand
to an execution unit in response to completion of the
preprocessing. The execution unit receives the source operand,
executes it and delivers the resultant data to memory. A "write
queue" receives the write addresses of the destination operands
from the operand processing unit, stores the write addresses,
and delivers the stored preselected addresses to memory in
response to receiving the resultant data corresponding to the
preselected address. The address of the source operand is
compared to the write addresses stored in the write queue, and
the operand processing unit is stalled whenever at least one of
the write addresses in the write queue is equivalent to the
read address. Therefore, fetching of the operand is delayed
until the corresponding resultant data has been delivered by
the execution unit.
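In other words, a classic read-after-write hazard check. A minimal model of that write queue, with names I made up for illustration:

```python
# Destination addresses wait in a queue until their result data
# arrives, and an operand fetch stalls whenever its read address
# matches any queued write address.

from collections import deque

class WriteQueue:
    def __init__(self):
        self.pending = deque()   # write addresses awaiting result data

    def queue_write(self, addr):
        self.pending.append(addr)

    def result_ready(self, memory, data):
        # Execution unit delivered data: retire the oldest queued write.
        addr = self.pending.popleft()
        memory[addr] = data

    def must_stall(self, read_addr):
        # Stall the operand fetch on a read-after-write hazard.
        return read_addr in self.pending
```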

5,148,536

A load/store pipeline in a computer processor for loading data
to registers and storing data from the registers has a cache
memory within the pipeline for storing data. The pipeline
includes buffers which support multiple outstanding read request
misses. Data from out of the pipeline is obtained independently
of the operation of the pipeline, this data corresponding to the
request misses. The cache memory can then be filled with the data
that has been requested. The provision of a cache memory within
the pipeline, and the buffers for supporting the cache memory,
speed up loading operations for the computer processor.

5,179,673

A method and arrangement for producing a predicted subroutine
return address in response to entry of a subroutine return
instruction in a computer pipeline that has a ring pointer
counter and a ring buffer coupled to the ring pointer counter.
The ring pointer counter contains a ring pointer that is changed
when either a subroutine call instruction or return instruction
enters the computer pipeline. The ring buffer has buffer locations
which store a value present at its input into the buffer location
pointed to by the ring pointer when a subroutine call instruction
enters the pipeline. The ring buffer provides a value from the
buffer location pointed to by the ring pointer when a subroutine
return instruction enters the computer pipeline, this provided
value being the predicted subroutine return address.
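That's a return-address stack built as a ring. A small sketch of the mechanism; the size and names are my guesses, not the patent's:

```python
# Calls advance the ring pointer and store the return address at the
# slot it selects; returns read that slot back as the prediction and
# step the pointer back for the enclosing call.

class ReturnPredictor:
    def __init__(self, size=8):
        self.ring = [0] * size
        self.ptr = 0

    def on_call(self, return_addr):
        # Subroutine call entering the pipeline: push the return address.
        self.ptr = (self.ptr + 1) % len(self.ring)
        self.ring[self.ptr] = return_addr

    def on_return(self):
        # Subroutine return entering the pipeline: predict from the ring.
        predicted = self.ring[self.ptr]
        self.ptr = (self.ptr - 1) % len(self.ring)
        return predicted
```

Nested calls naturally unwind in the right order, which a single-entry predictor can't do.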

5,197,132

A register map having a free list of available physical locations
in a register file, a log containing a sequential listing of
logical registers changed during a predetermined number of cycles,
a back-up map associating the logical registers with corresponding
physical homes at a back-up point in a computer pipeline operation
and a predicted map associating the logical registers with
corresponding physical homes at a current point in the computer
pipeline operation. A set of valid bits is associated with the maps
to indicate whether a particular logical register is to be taken
from the back-up map or the predicted map indication of a
corresponding physical home. The valid bits can be "flash cleared"
in a single cycle to back-up the computer pipeline to the back-up
point during a trap event.
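A toy version of that rename map may make the "flash clear" trick clearer. Everything below (sizes, names, the free-list policy) is my own illustration:

```python
# Two maps per logical register: a committed "back-up" mapping and a
# speculative "predicted" one, selected by a per-register valid bit.
# Clearing all valid bits at once rolls the pipeline back to the
# back-up point, e.g. on a trap.

class RenameMap:
    def __init__(self, n_logical, n_physical):
        self.free = list(range(n_logical, n_physical))  # free physical regs
        self.backup = {r: r for r in range(n_logical)}  # committed mapping
        self.predicted = dict(self.backup)              # speculative mapping
        self.valid = [False] * n_logical                # use predicted map?

    def rename(self, logical):
        # Give the logical register a fresh physical home, speculatively.
        phys = self.free.pop()
        self.predicted[logical] = phys
        self.valid[logical] = True
        return phys

    def lookup(self, logical):
        m = self.predicted if self.valid[logical] else self.backup
        return m[logical]

    def flash_clear(self):
        # Trap: one-cycle rollback; every lookup reverts to the back-up map.
        self.valid = [False] * len(self.valid)
```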

5,394,529

A pipelined CPU executes instructions of variable length, and
references memory using various data widths. Macroinstruction
pipelining is employed (instead of microinstruction pipelining),
with queueing between units of the CPU to allow flexibility in
instruction execution times. A branch prediction method employs a
branch history table which records the taken vs. not-taken history
of branch opcodes recently used, and uses an empirical algorithm
to predict which way the next occurrence of this branch will go,
based upon the history table. The branch history table stores in
each entry a number of bits for each branch address, each bit
indicating "taken" or "not-taken" for one occurrence of the
branch. The table is indexed by branch address. A register stores
the empirical algorithm, and upon occurrence of a branch its
history is fetched from the table and used to select a location
in the register containing a prediction for this particular
pattern of branch history.
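That's what's now usually called a two-level predictor: per-branch history bits index a register of per-pattern predictions. A sketch, where the history length and the initial pattern contents are arbitrary choices of mine:

```python
# The table keeps a few taken/not-taken history bits per branch
# address; that history pattern selects a bit in the "empirical
# algorithm" register holding the prediction for that pattern.

HISTORY_BITS = 3

class BranchPredictor:
    def __init__(self):
        self.table = {}   # branch address -> recent history bits
        # One prediction per possible history pattern; seeding it with
        # "predict the majority direction" is an arbitrary starting point.
        self.pattern_reg = [bin(p).count('1') >= 2
                            for p in range(2 ** HISTORY_BITS)]

    def predict(self, addr):
        history = self.table.get(addr, 0)
        return self.pattern_reg[history]

    def update(self, addr, taken):
        # Shift the branch outcome into this branch's history bits.
        history = self.table.get(addr, 0)
        self.table[addr] = ((history << 1) | int(taken)) % (2 ** HISTORY_BITS)
```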

5,430,888

A load/store pipeline in a computer processor for loading data
to registers and storing data from the registers has a cache
memory within the pipeline for storing data. The pipeline
includes buffers which support multiple outstanding read request
misses. Data from out of the pipeline is obtained independently
of the operation of the pipeline, this data corresponding to the
request misses. The cache memory can then be filled with the
requested data. The provision of a cache memory within the
pipeline, and the buffers for supporting the cache memory, speed
up loading operations for the computer processor.

5,568,624

A high-performance CPU of the RISC (reduced instruction set computer) type
employs a standardized, fixed instruction size, and permits only
simplified memory access data width and addressing modes. The
instruction set is limited to register-to-register operations and
register load/store operations. Byte manipulation instructions,
included to permit use of previously-established data structures,
include the facility for doing in-register byte extract, insert
and masking, along with non-aligned load and store instructions.
The provision of load/locked and store/conditional instructions
permits the implementation of atomic byte writes. By providing a
conditional move instruction, many short branches can be eliminated
altogether. A conditional move instruction tests a register and
moves a second register to a third if the condition is met; this
function can be substituted for short branches and thus maintain
the sequentiality of the instruction stream.
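The branch-elimination point is easy to show. The `cmov` helper below is my own stand-in for the register-test-and-move instruction the abstract describes:

```python
# A conditional move tests one register and moves a second into a
# third if the condition holds, replacing a short branch with
# straight-line code.

def cmov(cond_reg, src_reg, dst_reg):
    # If the tested register is nonzero, move src into dst; else keep dst.
    return src_reg if cond_reg != 0 else dst_reg

# Branchy form:     if r1 != 0: r3 = r2
# Branchless form:  r3 = cmov(r1, r2, r3)
def max_branchless(a, b):
    return cmov(1 if a > b else 0, a, b)
```

The instruction stream stays sequential either way the condition goes, which is exactly the benefit claimed.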