Patent Summary 1116756

(12) Patent: (11) CA 1116756
(21) Application Number: 1116756
(54) French Title: CIRCUIT DE COMMANDE D'ANTEMEMOIRE
(54) English Title: CACHE MEMORY COMMAND CIRCUIT
Status: Term expired - post-grant
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 13/00 (2006.01)
  • G06F 13/16 (2006.01)
(72) Inventors:
  • RYAN, CHARLES P. (United States of America)
(73) Owners:
(71) Applicants:
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued: 1982-01-19
(22) Filed: 1978-12-12
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
861,228 (United States of America) 1977-12-16

Abstracts

English Abstract


52E2720
ABSTRACT
Apparatus and method for providing a buffer stage, or cache memory
command circuit, between a cache memory unit and a main memory unit. The
transfer of data between a main memory unit and a cache memory unit can be
complicated because the circuits utilized in the cache memory unit and/or
the main memory unit in effectuating the data transfer can be pre-empted.
In addition, the data transfers must be executed in sequential order.
According to the present invention, the transfer of data is divided
into two portions: a portion involving the cache memory unit and a portion
involving the main memory unit along with associated interface units. The
cache memory unit stores the data transfer commands and the associated data
in sequential order. The cache memory unit and the main memory and interface
units can execute their respective portions of the data transfer independently,
permitting overlapped instruction execution. The cache command buffer ensures
that the operations involving the two units of the data processing unit are
executed in sequence. When a data transfer has been completed, the cache
command circuit proceeds to the execution of the next data transfer in
sequence.

Claims

Note: The claims are shown in the official language in which they were submitted.


THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:

1. In association with a system interface unit and a cache memory unit of a data processing system, a cache memory command buffer unit for permitting overlapped data transfer of information signals comprising: a plurality of memory locations for storing said information signals being transferred to said system interface unit and to said cache memory; means coupled to said system interface unit, to said cache memory, and to said memory locations for storing said information signals into said memory locations; first means for extracting said information signals from said memory locations in a sequential order of storage in said memory locations for delivery to said cache memory unit of said data processing unit; and second means for extracting said information signals from said memory locations in said sequential storage order for delivery to said system interface unit of said data processing unit, wherein said first extracting means can operate independently of said second extracting means.

2. The cache memory command buffer unit of claim 1, wherein said information signals stored include memory read signals and memory write signals.

3. Memory buffer apparatus for sequentially controlling transfers of data groups to a cache memory unit and to a main memory unit in a data processing unit, wherein the improvement comprises: cache data group storage apparatus for storing into a plurality of storage locations, in response to first control signals from said data processing unit, data groups to be entered in said cache memory unit received thereby; main memory data group storage apparatus for temporarily storing into said storage locations, in response to second control signals from said data processing unit, data groups to be entered in said main memory unit received thereby; apparatus coupled to said cache data group storage apparatus and to said main memory data group storage apparatus for storing said data groups in a sequential order; and apparatus coupled to said storage locations for transferring stored data groups in said sequential order to said cache memory unit and to said memory unit in response to third control signals from said data processing unit.

4. A cache memory command buffer for a data processing system temporarily storing data groups being transferred to a cache memory unit and to a main memory unit, comprising: a first plurality of memory locations coupled to said cache memory unit and to said main memory unit for storing said data groups received from said data processing system; a memory stack unit coupled to said first plurality of memory locations and to said data processing system; said memory stack unit including a second plurality of memory locations for storing first memory location addresses of said data groups stored in said first plurality of memory locations; said memory stack unit stores each of said first memory location addresses in one of said second memory locations, each memory location address being stored in a predetermined sequence; and apparatus coupled to said second memory locations for sequentially addressing said second memory locations in response to control signals from said data processing unit, said apparatus addressing said one of said second memory locations to produce a data group transfer from said first memory location identified by said addressed second memory location, said data group transfer proceeding to said main memory and to said cache memory.

Description

Note: The descriptions are shown in the official language in which they were submitted.


BACKGROUND OF THE INVENTION
Field of the Invention
This invention relates generally to a cache memory unit utilized by
a data processing system and more particularly to a buffer stage between
the cache memory and the main memory unit.
Description of the Prior Art
It is known in the prior art to utilize a cache memory unit to provide
improved performance in a data processing unit. The performance of a data
processing unit is determined, at least in part, by the time required to
retrieve data from the system main memory unit. The period of time required
to retrieve data from the main memory can be minimized by implementing these
circuits in the technology currently providing the highest speed. Because of
the increasing memory requirements of modern data processing systems, this
partial solution can be unacceptably expensive. In addition, delays caused
by the physical distance between the central processing unit and the main
memory can be unacceptable.
Because of these and other considerations, it has been found that a
cache memory unit, associated with the central processing unit, provides a
satisfactory compromise for providing the central processing unit with the
requisite data availability. The cache memory unit is a high speed memory of
relatively modest proportions which is conveniently located in relation to
the central processing unit. The contents of the cache memory are selected
to be those for which there is a high probability that the central processing
unit will have an immediate requirement. To the extent that the algorithms
of the data processing system have transferred data required by the central
processing unit from the main memory to the cache memory unit prior to the
actual requirement by the central processing unit, the manipulation of data
by the data processing system can be efficiently accomplished.
However, the transfer of the data from the main memory to the cache
memory can be complicated. In the modern data processing system, an interface
unit, which can be referred to as a system interface unit, can be inter-
posed between -the main memory and the central processing unit. The system

52E2720
.
111675~i
interface unit is in effect a complex electronic switch controlling the
interchange of data between the main memory (which may comprise several
independent units), the central processing unit, and peripheral devices,
which may be utilized in entering data into or retrieving data from the
data processing unit. Thus the circuits in the system interface unit
necessary to process the data transfer between the main memory and the cache
memory may be unavailable, at least temporarily. Similarly, the central
processing unit may have initiated activity in the cache memory unit which
would similarly render the cache memory temporarily incapable of participating
in the data transfer.
In situations where the two units or resources in a data processing
system can be independently unavailable for data processing activity, such
as a data transfer, it is known in the prior art to provide circuitry,
which interrupts present activity of the required units or which prohibits
future activity of the two units according to predetermined priority
considerations, thereby freeing the resources or units of the data processing
system for execution of the data transfer. This type of resource
reservation can impact the overall efficiency of the data processing system
by delaying execution of certain data manipulations at the expense of other
types of manipulations.
It is also known in the prior art to provide circuitry to permit the
partial execution of a data transfer, a storing of the data at an intermediate
location, and then the completion of the execution at a later time,
i.e., when the system resource becomes available. Thus, a buffering between
the main memory unit and the cache memory unit can be accomplished,
permitting the two units to operate in a generally independent manner. This
type of data manipulation execution has the disadvantage that, after
completion, the succeeding data transfers are again limited, prior to
continuation of the sequence of data transfers, by the availability of each
resource necessary to the completion of the data transfer.
It is therefore an object of the present invention to provide improved
transfer of data between a main memory unit and a central processing unit
of a data processing system.

It is a further object of the present invention to provide improved
transfer of data between a main memory unit and a cache memory unit in a
data processing system.
It is still a further object of the present invention to provide a
buffer stage, associated with the cache memory unit, which controls the
transfer of information between the main memory unit and the cache memory unit.
It is a more particular object of the present invention to provide a
buffer stage between the cache memory and the system interface unit.
It is still another particular object of the present invention to
provide a buffer stage associated with the cache memory which permits
sequential execution of data transfer activity between the system interface
unit and the central processing unit.
It is yet another object of the present invention to provide a buffer
stage associated with the cache memory unit which permits sequential execution
of data transfer instructions stored in the buffer stage while permitting
execution of the activity involving the cache memory unit and the activity
involving the system interface unit to be completed independently for the
stored instructions.

SUMMARY OF THE INVENTION
The aforementioned and other objects are accomplished,
according to the present invention, by a cache memory command
buffer which includes a series of storage registers for storing
read and write data transfer commands and associated data,
apparatus for providing sequential execution of the portion of
a stored instruction involving the system interface unit,
apparatus for providing sequential execution of the portion of
the stored instruction involving the cache memory unit, and
apparatus for signaling the completion of a stored instruction.
The independent execution of the portion of the stored
instruction involving the system interface unit and the portion
of the instruction involving the cache memory permits
overlapped instruction execution. In addition, the complete
instruction will be executed in the sequential order received
by the cache memory command buffer.
In accordance with the present invention there is
provided, in association with a system interface unit and a
cache memory unit of a data processing system, a cache memory
command buffer unit for permitting overlapped data transfer
of information signals comprising: a plurality of memory
locations for storing said information signals being transferred
to said system interface unit and to said cache memory;
means coupled to said system interface unit, to said cache
memory, and to said memory locations for storing said information
signals into said memory locations; first means for extracting
said information signals from said memory locations in
a sequential order of storage in said memory locations for
delivery to said cache memory unit of said data processing
unit; and second means for extracting said information signals
from said memory locations in said sequential storage order
for delivery to said system interface unit of said data
processing unit, wherein said first extracting means can operate
independently of said second extracting means.
In accordance with the present invention there is
also provided memory buffer apparatus for sequentially
controlling transfers of data groups to a cache memory unit and
to a main memory unit in a data processing unit, wherein the
improvement comprises: cache data group storage apparatus for
storing into a plurality of storage locations, in response to
first control signals from said data processing unit, data
groups to be entered in said cache memory unit received thereby;
main memory data group storage apparatus for temporarily
storing into said storage locations, in response to second
control signals from said data processing unit, data groups to be
entered in said main memory unit received thereby; apparatus
coupled to said cache data group storage apparatus and to said
main memory data group storage apparatus for storing said data
groups in a sequential order; and apparatus coupled to said
storage locations for transferring stored data groups in said
sequential order to said cache memory unit and to said memory
unit in response to third control signals from said data
processing unit.
In accordance with the present invention there is
also provided a cache memory command buffer for a data
processing system temporarily storing data groups being
transferred to a cache memory unit and to a main memory unit,
comprising: a first plurality of memory locations coupled to said
cache memory unit and to said main memory unit for storing
said data groups received from said data processing system;
a memory stack unit coupled to said first plurality of memory
locations and to said data processing system; said memory stack
unit including a second plurality of memory locations for storing
first memory location addresses of said data groups stored
in said first plurality of memory locations; said memory stack
unit stores each of said first memory location addresses in
one of said second memory locations, each memory location
address being stored in a predetermined sequence; and apparatus
coupled to said second memory locations for sequentially
addressing said second memory locations in response to control
signals from said data processing unit, said apparatus
addressing said one of said second memory locations to produce a data
group transfer from said first memory location identified by
said addressed second memory location, said data group transfer
proceeding to said main memory and to said cache memory.
These and other features of the invention will be
understood upon reading of the following description along with
the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic block diagram of a data processing system
utilizing a cache memory unit.
Fig. 2 is a schematic diagram of the address format utilized by the
data processing system as organized for use in the cache memory unit.
Fig. 3 is a schematic block diagram of the cache memory storage unit
showing the general organizational structure.
Fig. 4 is a schematic diagram of the organization of the cache command
circuit storage locations according to the preferred embodiment.
Fig. 5A is a schematic diagram of the apparatus controlling the
operation of command circuit storage locations.
Fig. 5B is a schematic diagram of a possible stack memory configuration
for the cache command buffer circuit according to the preferred embodiment.

DESCRIPTION OF THE PREFERRED EMBODIMENT
Detailed Description of the Figures
Referring now to Figure 1, the general organization of a data processing
system utilizing a cache memory unit is shown. A central processing unit 50
is coupled to a cache memory unit 100 and to a system interface unit 60.
The system interface unit is coupled to memory unit 70. The central
processing unit 50, the memory unit 70, and the system interface unit 60 can
be comprised of a plurality of individual units, all appropriately coupled
and controlled for accurate execution of signal manipulation.
Referring next to Figure 2, the format of a data address, comprised of
24 binary bits of data, utilized by a data processing system is shown. The
first 15 most significant bits identify a page address of data. Each page
address of data is comprised of 512 data words. In the present embodiment each
word is composed of 40 binary data bits, this number being a matter of design
choice. Of the 512 data words identified by the remaining 9 binary bits of
each data page, each group of the next 7 binary bits of data is associated
with a location of groups of memory storage cells in the cache memory and is a
location address in the cache memory. That is, there are 128 memory locations
in the cache memory, and each location is identified with a combination of
binary bits in the second most significant bit assemblage. The least
significant bit assemblage of the address format, in the present embodiment,
is not utilized in identifying a word address in the cache memory unit. For
efficient exchange of data between the cache memory unit and the memory unit,
a block of four data words is transferred with each data transfer operation.
Because the data transfer occurs in blocks, there is no need to utilize the
least significant bits in identifying the transferred information to the
main memory. The four words comprising the block will, in normal data transfer,
always be present in any event. In the illustration in Fig. 2, the address
format begins at bit position zero. However, this is a matter of design
choice and other address formats can be utilized. Similarly, the address
format can contain additional information, such as parity or status designations,
when the address format is a larger (i.e., larger than 24) group of binary data
bits.
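The field widths above can be illustrated with a small sketch (Python used for illustration only; the field boundaries are taken from the description, and the function name is ours):

```python
def split_address(addr: int):
    """Split a 24-bit main-memory address into the three fields
    described above (Fig. 2): a 15-bit page address, a 7-bit cache
    location address, and a 2-bit word position within the block."""
    assert 0 <= addr < 1 << 24
    page = addr >> 9               # most significant 15 bits
    location = (addr >> 2) & 0x7F  # next 7 bits: one of 128 cache locations
    word = addr & 0x3              # least significant 2 bits: word in block
    return page, location, word

# Block transfers ignore the word field: all four words of the
# addressed block are moved together.
```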

Referring next to Fig. 3, a schematic block diagram of the principal
components of a cache memory unit of a data processing system is shown. The
data signals in the cache memory unit are stored in cache memory storage unit
101. This memory is comprised of random access memory devices in which data
signals can be both read or stored into addressed memory cells and extracted
from addressed memory cells. The organization of the cache memory storage
unit 101 is such that there are 128 locations, LOCATION 0 through LOCATION
127. For each location, there are four groups of blocks of memory cells
labelled BLOCK 0 through BLOCK 3. Each of the four blocks can contain four
memory words labelled WORD 0 through WORD 3. Four data words from a selected
block of a selected location in the memory storage unit 101 can be applied to
the instruction buffer circuit 300 for subsequent transfer to the data
processing unit. Data signals are entered into the storage unit 101 by a
data register 140, which is under the control of the cache memory control
circuits 200. The cache memory control circuits 200 also control the address
register 130. Address register 130 is coupled to the cache memory storage
unit 101, the cache memory directory 102, and the cache memory directory
control circuits 150. The cache memory directory 102 is divided into four
blocks and each block contains 128 storage cells, structured in a manner
similar to the storage unit 101, without, however, the additional WORD structure.
The cache memory directory is also comprised of random access memory circuits.
The contents of the blocks of an addressed location in the memory directory
102 are applied respectively to four comparison networks 111 through 114.
The output signals of the comparison networks are applied to the data status
decision network 120. The output signals of the data status decision network
120 can be applied to the four blocks of storage cells in the cache memory
storage unit and to the four blocks of storage cells located in the cache
memory directory in order to activate the block receiving the appropriate
signals. The output signals of data status decision network 120 are also
applied to the cache memory directory control circuits 150. The address
register 130 is also coupled to the four blocks of memory cells of the cache
memory directory 102 and to the comparison networks 111 through 114. The
cache memory directory control circuits 150 are divided into a directory
control register and directory control circuits.
Referring to Fig. 4, the cache memory control circuits include two
buffer register units, a four register read buffer memory unit 220 and a
four register write buffer memory unit 230. The memory units can store data
in an addressed location and can deliver signals to two sets of output
terminals from memory locations at two independently addressed locations.
The stack sequence control logic 210 is coupled to both memory unit 220
and memory unit 230. Each buffer memory receives from the central processing
unit address/data and command signals in response to signals from the stack
sequence control logic and stores these signals in address locations
determined by the control logic. The output signals of either buffer
memory unit, in response to other signals from the stack sequence control
unit 210, can be applied to the cache circuits and/or to the
system interface unit circuits, depending on how the memory units are
addressed. The stack sequence control logic 210 receives signals from the
system interface unit and signals from the cache memory unit. The stack
sequence control logic issues status signals for utilization by the data
processing unit.
Referring next to Figure 5A, the stack sequence control logic 210
is shown. The control logic includes an 8-address, 3-position memory stack
211, in which one group of data can be entered into an addressed location
and two groups of memory stack signals can be simultaneously and
independently extracted from addressed locations. One group of memory signals from
stack 211 is coupled to first address enable apparatus for read buffer memory
220 and write buffer memory 230, while a second group of memory signals is
coupled to second address enable apparatus associated with read buffer memory 220
and write buffer memory 230. The output signals of counter 213 enable a data
write for stack 211 at the addressed location. Output signals of counter 214
enable a first group of memory signals from stack 211 and output signals of
counter 215 enable a second group of memory signals from stack 211. Counter
214 has signals from the cache unit applied thereto, while counter 215 has
signals from the system interface unit applied thereto.

Address decision network 212 receives signals from buffer storage
memories 220 and 230 and applies address signals to stack memory 211
and status signals to portions of the data processing system. Address
decision network 212 receives signals from counter 213, counter 214,
counter 215 and counter 216. Counter 216 has signals applied thereto
from address decision network 212, counter 214 and counter 215, and applies
signals to write buffer storage memory 230.
Fig. 5B illustrates the format in which data is stored in stack 211
and further illustrates the use of pointers for the stack.
Operation of the Preferred Embodiment
The basic use of a cache memory unit is to make available to the
central processing unit data stored in the main memory unit without the wait
normally associated with retrieval of the memory unit data. The cache
memory is therefore a high speed memory which contains data required with
some immediacy by the central processing unit for uninterrupted operation.
As shown in Fig. 1, the cache memory is electrically coupled to a central
processing unit and to the system interface unit. Similarly, the central
processing unit can be coupled directly to the system interface unit in
certain data processing systems. The actual utilization of the electrical
paths coupling the system components is dependent on the method of operation;
for example, in some data processing systems data can be delivered directly
to the central processing unit in certain circumstances. In other systems,
the data required by the central processing unit must always be delivered
to the cache memory unit before being transferred to the central processing
unit. As will be clear to those skilled in the art, there are a variety of
methods by which the data processing unit can utilize the cache memory for
more effective operation.
In the preferred embodiment, an address format of the form shown in
Fig. 2 is utilized for defining an address in the main memory unit. The most
significant 15 bits indicate a page address, the second most significant
7 bits indicate a location address, while the 2 least significant bits in
conjunction with the other 22 bits identify a specific word or group of data
signals stored in main memory. In the preferred embodiment, the least
significant bits are not used by the main memory unit in normal operation.
In the typical data transfer, four data groups or words are transferred
with the issuance of one instruction. Thus after the central processing
unit has developed the main memory address, only the 22 most significant
bits are utilized and all of the four words thereby identified are transferred.
After the central processing unit has developed the address of the
required data in main memory, that main memory address is delivered to the
cache memory control circuits 200 and entered in address register 130.
At this point the cache memory control circuits 200 begin a directory
search cycle. The directory search cycle searches the cache memory unit for
the address of the data requested by the central processing unit.
When the main memory address is entered in address register 130, the most
significant 15 bits, the page address portion of the address, are applied to
the four comparison networks 111 - 114.
Simultaneously the 7 bits of the location address portion of the main
memory address are applied to the related one of the 128 locations in the
cache memory storage unit, the cache memory directory 102 and the cache
memory directory control register of the directory control circuits. The
location address enables circuits containing four blocks of data in the cache
directory and the directory contents are applied to comparison circuits 111 -
114. The contents of the 4 blocks of the cache directory are 15 bit page
main memory addresses. Thus, when the page address portion of the main
memory address in the address register is found in one of the four blocks of
the cache directory, a "hit" signal is applied to the data status decision
network 120. The "hit" signal indicates that the desired data is stored in
the related block of the same location address in the memory storage unit.
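The directory search cycle can be sketched as follows (a minimal model, not the circuit itself; the data layout and names are ours):

```python
# Each of the 128 directory locations holds four block entries, each a
# 15-bit page address (None marks an empty block in this sketch).
directory = [[None] * 4 for _ in range(128)]

def directory_search(page: int, location: int):
    """Sketch of the directory search cycle described above: the page
    address portion of the main memory address is compared against all
    four blocks at the addressed location; a match is a "hit"."""
    for block, stored_page in enumerate(directory[location]):
        if stored_page == page:
            return block  # hit: the data is in this block of the location
    return None           # miss: the data must come from main memory
```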
The location address portion of address register 130, when applied to
the directory control circuits 150, enables the register cell storing status
signals and applies these status signals to the decision network 120. In
the preferred embodiment, the types of status signals utilized are as follows:
1) a full/empty indicator which is a positive signal when valid data is
stored in the corresponding cache memory storage unit; 2) a pending bit
indicator which is positive when data is in the process of being transferred
from main memory to the cache memory storage unit, so that the page address
has already been entered in the cache memory directory; and 3) a failing
block indicator which is positive when the related one of the four blocks
of memory storage cells has been identified as producing errors in data
stored therein.
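The three status indicators above, and the condition under which a "hit" yields usable data, might be modeled like this (field and function names are ours, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class BlockStatus:
    full: bool = False     # 1) full/empty: valid data is stored
    pending: bool = False  # 2) pending: transfer from main memory underway
    failing: bool = False  # 3) failing block: block has produced errors

def hit_is_usable(status: BlockStatus) -> bool:
    # A "hit" yields valid data only when the block holds valid data,
    # no transfer is still pending, and the block is not marked failing.
    return status.full and not status.pending and not status.failing
```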
Assuming that the status signals are appropriate when a "hit" is
determined by the data status decision network, then the valid data is in the
cache memory storage unit. The location address of address register 130 has
enabled four blocks of data (each containing 4 words), related to the
location address in the cache memory directory. The "hit" on the page address
in one of the four blocks of the cache memory directory indicates that the
four data words are located in the related block of the cache memory data
storage unit. The data status decision network applies a signal to the
appropriate block of the storage unit. The four required data words are
deposited in the instruction buffer and are retrieved by the central
processing unit.
The operation of the cache memory command buffer circuit can be
understood as follows. In response to signals from the central processing unit,
the stack sequence control logic 210 determines an address in the buffer
memory unit 220 or in the buffer memory unit 230. The stack sequence
control logic then enables the storing, at the determined address, of
address/data signals and command signals from the central processing unit.
When the central processing unit signals a read operation, then the signals
are stored in read buffer 220, and when a write operation is signaled by
the central processing unit, then the signals are stored in write buffer 230.
In the preferred embodiment, the read buffer has four possible locations
and the write buffer has four locations, but only three are utilized. It
can be necessary to execute certain classes of write commands in the
preferred embodiment which require three data group locations for complete
specification. Therefore, in the cache memory command buffer locations
there are a total of five possible operations which can be identified in
the locations at one time: four read operations and a write operation.
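The capacity rule implied above, at most four pending reads plus one multi-location write, can be sketched as follows (an assumption of ours about how acceptance would be decided, not the patent's circuitry):

```python
READ_LOCATIONS = 4        # read buffer: four one-location read commands
WRITE_LOCATIONS_USED = 3  # write buffer: one write command can occupy
                          # up to three of its four locations

def buffer_can_accept(reads_pending: int, write_pending: bool,
                      is_write: bool) -> bool:
    """Sketch of the acceptance rule implied above: at most four read
    operations and one (multi-location) write operation at a time,
    for five identifiable operations in total."""
    if is_write:
        return not write_pending
    return reads_pending < READ_LOCATIONS
```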

It will be clear to those skilled in the art that for each operation
identified in the cache command buffer memory locations, manipulations
involving four sets of apparatus are understood in each case. For example,
the data requested by the central processing unit can be in main memory
and the cache memory or in main memory alone. A command can involve the
search in cache memory for a given set of data and/or the extraction from
main memory via the system interface unit of that data if unavailable.
Because the system interface unit and/or the cache memory can be busy with
operations involving a higher priority, it is advantageous for the operation
in the system interface unit or in the cache to proceed independent of the
availability of the other component involved in the transfer. For example,
a write operation involves both the cache unit and the system interface
unit portions of the data processing system. It is necessary that the
commands be executed in sequence in order to avoid generation of erroneous
data, and in addition that the portions of the command involving the cache
unit or the system interface unit be individually performed in sequence.
Therefore, the stack sequence control logic provides pointer signals
controlling the sequential operation of a series of commands, pointer signals
controlling the sequential execution of the portion of the command involving the
cache unit, and pointer signals controlling the sequential execution of the
portion of the command involving the system interface unit. The pointer
signals, in each case, are applied to the memory stack by counters.
To store data in the command buffer memories, the address decision
network, in response to signals from the read and write buffers, determines
the address of the next available location in the buffer. This apparatus
signals the availability of a command buffer memory location to the central
processing unit. When the address decision network signals to the central
processing unit that a command buffer memory location is free, i.e., there
is no write operation present and/or there are fewer than four read operations
stored in the command buffer memory, the counter 213 will provide the in
pointer signals which enable signals to be entered in the stack memory
in the next sequential location addressed by the counter. Upon receipt of

the address/data and command signals from the central processing unit,
the address decision network will enter, into stack memory 211, the command
buffer memory address into which the signals are to be stored. If a write
operation is to be entered, a positive signal is entered in the first
(of three) positions of the stack memory location. If a read operation is to
be entered, the logical address of the next empty location in the command read
buffer is entered in the last two stack memory positions. The address
entered in the stack memory activates the corresponding buffer memory
locations so that address/data signals and command signals are entered in the
location identified by the stack memory. After the signals are entered in
the buffer memory, and if the stack memory is not filled, the counter 213
is incremented and the in pointer identifies and can enable the next location
in the stack memory.
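The three-position stack memory entry described above can be sketched as a write flag followed by a two-bit read buffer address. This encoding is an illustrative assumption consistent with the text (four read locations addressed as 00 through 11), not a claim about the exact circuit:

```python
# Hypothetical encoding of a three-position stack memory entry:
# the first position flags a write operation; for a read, the last two
# positions hold the read buffer location address (00..11).
def encode_entry(is_write, read_addr=None):
    if is_write:
        return (1, 0, 0)  # write flag set; no further address is necessary
    assert read_addr is not None and 0 <= read_addr < 4, \
        "the read buffer has four locations (00..11)"
    return (0, read_addr >> 1, read_addr & 1)

def decode_entry(entry):
    write_flag, hi, lo = entry
    if write_flag:
        return ("write", None)
    return ("read", (hi << 1) | lo)

print(decode_entry(encode_entry(False, 0b10)))  # -> ('read', 2)
print(decode_entry(encode_entry(True)))         # -> ('write', None)
```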
The cache pointer signals are generated by counter 214 and the
system interface unit pointers are generated by counter 215. When the
counter 214 receives a cache signal indicating that the cache unit is
ready to execute a command, the output signals from counter 214 are activated
and the location addressed in the stack memory is enabled. When the location
in the stack memory is enabled, the output signals of the stack memory
associated with the cache operation activate the associated address in the
command buffer memory units. The address/data and the command signals are
thereby activated and these signals are applied to the appropriate portions
of the cache unit and the operation is executed. At the completion of the
execution, the counter 214 increments to a value indicating the next
sequential location and waits until enabled by an appropriate signal from
the cache unit. However, the address decision network includes the logical
apparatus for preventing the cache pointer (counter 214) from advancing
beyond the position in the stack memory indicated by counter 213.
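The pointer discipline described above can be sketched as two counters over a circular stack: the in pointer (counter 213) marks the next free location, and the cache pointer (counter 214) may advance toward it but never past it. A minimal sketch, with an assumed stack depth and illustrative names:

```python
# Hedged model of the pointer constraint: counter 214 (cache pointer)
# cannot advance beyond the position indicated by counter 213 (in pointer).
STACK_DEPTH = 8  # assumed depth for illustration

class Pointers:
    def __init__(self):
        self.in_ptr = 0     # counter 213: next stack location to be filled
        self.cache_ptr = 0  # counter 214: next command for the cache unit

    def enter_command(self):
        self.in_ptr = (self.in_ptr + 1) % STACK_DEPTH

    def cache_done(self):
        # Execute and advance only if an entered command is still ahead.
        if self.cache_ptr != self.in_ptr:
            self.cache_ptr = (self.cache_ptr + 1) % STACK_DEPTH
            return True
        return False  # caught up with the in pointer; must wait

p = Pointers()
p.enter_command()
p.enter_command()
print(p.cache_done())  # -> True  (first command executed)
print(p.cache_done())  # -> True  (second command executed)
print(p.cache_done())  # -> False (cannot advance beyond counter 213)
```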
The system interface unit pointer from counter 215 operates in an
analogous manner to execute sequentially the commands delivered from the
command memory units which control operation of the system interface unit.

The write buffer memory 230 has a write buffer pointer provided by
counter 216 which controls the sequential operation of the contents of the
write buffer memory. When the write command stored in the write buffer
memory has more than one location associated therewith, the write buffer
pointer activates the locations in correct sequential order.
Fig. 5B illustrates schematically a potential configuration of the
stack memory. The first location is empty; the second location has a
read operation for read buffer memory location 00. The cache pointer
is shown addressing that memory location. The next stored memory location
contains a read operation located at address 01 in read buffer memory. The
cache pointer will increment to this address when the current
operation involving the cache unit is complete. The fourth stack memory
location indicates a read operation at address 10 in the read buffer memory,
and the system interface unit pointer is enabling this stack memory
location. The fifth stack memory address contains a write operation. Be-
cause only one write operation can be stored in the buffer memory in the
preferred embodiment, and one group of locations is always utilized for the
write operation, no further address is necessary. The system interface unit
pointer will enable this stack memory location next. The sixth stack
memory location identifies a read operation of read buffer memory address
11. The in pointer remains at this location in the stack memory until
the operation identified in the second stack location is complete. Then
the in pointer will increment to the seventh stack memory location, enabling
a writing of address/data and command signals in this address. This
illustration suggests that the utilization of the read buffer memory
locations is controlled by a sequential or round-robin algorithm in the
address decision network. It will be clear, however, that another algorithm
could be utilized.
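The round-robin allocation suggested above can be sketched as follows. This is one possible realization under stated assumptions (four locations handed out in cyclic order, skipping any still occupied by a pending read); the class name is illustrative:

```python
# Hedged sketch of round-robin allocation of the four read buffer
# locations (00..11) by an address decision network.
class ReadBufferAllocator:
    SLOTS = 4

    def __init__(self):
        self.next_slot = 0
        self.busy = [False] * self.SLOTS

    def allocate(self):
        # Try each location once, in cyclic order, starting from next_slot.
        for _ in range(self.SLOTS):
            slot = self.next_slot
            self.next_slot = (self.next_slot + 1) % self.SLOTS
            if not self.busy[slot]:
                self.busy[slot] = True
                return slot
        return None  # all four read locations are in use

    def release(self, slot):
        self.busy[slot] = False

alloc = ReadBufferAllocator()
print([alloc.allocate() for _ in range(4)])  # -> [0, 1, 2, 3]
print(alloc.allocate())                      # -> None (buffer full)
alloc.release(1)
print(alloc.allocate())                      # -> 1 (freed location reused in turn)
```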
Utilizing the apparatus of the preferred embodiment, it is possible to
provide sequential and overlapped execution of a plurality of operations
involving both the cache unit and the system interface unit. In addition,
the cache unit portions of the command execution can be operated in sequence
independent of the sequential execution of the command in the system inter-
face unit of the data processing system. In a normal read operation,
the apparatus of the preferred embodiment would not permit extraction by
the system interface unit of data from the main memory until a determination
had been made that the data was not available in the cache storage units.
Similarly, when the data is available in the cache storage units, the
operation involving the system interface unit is aborted. However, the
write command can be executed independently in the system interface unit
and the cache memory unit, and certain read commands, such as a read
command which invalidates data in the cache storage unit while obtaining
data from main memory via the system interface unit, can be executed
independently.
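The normal read sequencing described above can be sketched in software terms: the system interface unit (SIU) fetch from main memory proceeds only after the cache lookup reports a miss, and is abandoned on a hit. A minimal sketch with illustrative names, not the patented apparatus:

```python
# Hedged model of the normal read path: main memory is consulted via the
# SIU only after a cache miss has been determined; on a hit the SIU
# portion of the operation is aborted.
class Cache:
    def __init__(self):
        self.store = {}

    def lookup(self, addr):
        return (addr in self.store, self.store.get(addr))

    def fill(self, addr, data):
        self.store[addr] = data

class MainMemory:
    def __init__(self, contents):
        self.contents = contents
        self.fetches = 0  # counts SIU extractions from main memory

    def fetch(self, addr):
        self.fetches += 1
        return self.contents[addr]

def normal_read(address, cache, memory):
    hit, data = cache.lookup(address)
    if hit:
        return data                  # SIU operation aborted on a cache hit
    data = memory.fetch(address)     # permitted only after a miss
    cache.fill(address, data)
    return data

cache, memory = Cache(), MainMemory({0x10: "word"})
print(normal_read(0x10, cache, memory), memory.fetches)  # miss: one fetch
print(normal_read(0x10, cache, memory), memory.fetches)  # hit: no new fetch
```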
The above description is included to illustrate the operation of the
preferred embodiment and is not meant to limit the scope of the invention.
The scope of the invention is to be limited only by the following claims.
From the above discussion, many variations will be apparent to one skilled
in the art that would yet be encompassed by the spirit and scope of the
invention.
What is claimed is:
