Patent 2325135 Summary

(12) Patent Application: (11) CA 2325135
(54) English Title: ASYNCHRONOUS TRANSFER MODE LAYER DEVICE
(54) French Title: DISPOSITIF A COUCHE DE MODE DE TRANSFERT ASYNCHRONE
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 12/935 (2013.01)
  • H04L 12/823 (2013.01)
  • H04L 12/865 (2013.01)
(72) Inventors :
  • CORNFIELD, DAVID W. (Canada)
  • GILDERSON, JAMES A. (Canada)
  • LEVESQUE, MARC (Canada)
  • KHAILTASH, AMAL (Canada)
  • DALVI, ANEESH (Canada)
(73) Owners :
  • SPACEBRIDGE SEMICONDUCTOR CORPORATION (Canada)
(71) Applicants :
  • SPACEBRIDGE NETWORKS CORPORATION (Canada)
(74) Agent: BORDEN LADNER GERVAIS LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2000-11-06
(41) Open to Public Inspection: 2001-05-05
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
2,288,513 Canada 1999-11-05

Abstracts

English Abstract





This disclosure encompasses the presentation of an ATM layer device addressing ATM layer queuing and routing functionality pre- and post-switching. A layered apparatus is presented as a method of isolating the arrival and departure processes of queuing, while separating network layer communication from ATM layer operation. The structure examined is useful in the ingress and egress segments of input and/or output-queued ATM switching applications and offers benefits associated with parallel processing and process independence.


Claims

Note: Claims are shown in the official language in which they were submitted.






What is claimed is:


1. An asynchronous transfer mode layer device for interfacing between a plurality of physical layer devices and an asynchronous transfer mode switch fabric, the asynchronous transfer mode layer device comprising:
(a) an asynchronous transfer mode processing sub-layer for performing asynchronous transfer mode cell processing functions; and
(b) a device management sub-layer, in communication with the asynchronous transfer mode processing sub-layer, for performing network layer communication functions.
2. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a priority based queuing function.
3. The device according to claim 2, wherein the priority based queuing function includes a set of concurrent queuing processes, and wherein the asynchronous transfer mode processing sub-layer includes independent managers for segregating said concurrent queuing processes.
4. The device according to claim 1, wherein the network layer communication functions include a management information base statistics collection function for traffic management.
5. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a route management function.
6. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a congestion management function.
7. The device according to claim 6, wherein the congestion management includes a feedback-based selective discard in an arrival process.
8. The device according to claim 1, wherein the network layer communication functions include a manipulation function for manipulating performance tuning parameters.



9. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a route management function and a congestion management function in a form of feedback-based selective discard in an arrival process, and wherein the network layer communication functions include a manipulation function for manipulating performance tuning parameters.
10. The device according to claim 9, wherein the asynchronous transfer mode processing sub-layer comprises independent managers including:
(a) a route manager for performing route management;
(b) a buffer manager, in communication with the route manager, for resolving buffer first-in-first-out contentions;
(c) a queue manager, in communication with the route manager and the buffer manager, for resolving queue occupancy contentions;
(d) a discard manager, in communication with the route manager and the queue manager, for determining discarding decisions in the arrival process; and
(e) a scheduler, in communication with the queue manager and the buffer manager, for determining scheduling decisions in a departure process.
11. The device according to claim 10, wherein the discard manager returns a cell discard flag to the route manager in response to an eligibility request signal from the route manager and a queue occupancy signal from the queue manager.
12. The device according to claim 10, wherein the scheduler determines a queue for transmission in response to a request from the buffer manager in view of a queue status provided by the queue manager.
13. The device according to claim 10, wherein the route manager provides a queue identification for the cell to the queue manager.
14. The device according to claim 9, wherein network layer communication functions are provided by an address-mapped distributed-database structure in conjunction with a configuration-and-control interface.
15. A method for processing cells in an asynchronous transfer mode layer of an asynchronous transfer mode switch, comprising:
(a) performing asynchronous transfer mode cell processing functions in an asynchronous transfer mode processing sub-layer; and
(b) performing network layer communication functions in a device management sub-layer.
16. The method according to claim 15, wherein the asynchronous transfer mode cell processing functions include a priority based queuing function.
17. The method according to claim 16, wherein the priority based queuing function includes segregating a set of concurrent queuing processes for processing by independent managers.
18. The method according to claim 15, wherein performing the network layer communication functions includes performing a management information base statistics collection function for traffic management.
19. The method according to claim 15, wherein performing the asynchronous transfer mode cell processing functions includes route management.
20. The method according to claim 15, wherein performing the asynchronous transfer mode cell processing functions includes congestion management.
21. The method according to claim 20, wherein the congestion management includes a feedback-based selective discard in an arrival process.
22. The method according to claim 15, wherein performing the network layer communication functions includes manipulating performance tuning parameters.


Description

Note: Descriptions are shown in the official language in which they were submitted.



CA 02325135 2000-11-06
PAD-001-0001 - 1 - Attorney Docket No.: PAT 1994-0
ASYNCHRONOUS TRANSFER MODE LAYER DEVICE
FIELD OF THE INVENTION
This invention relates to telecommunications networks. In particular, this
invention
relates to route management and congestion management in fast packet switches,
such as
Asynchronous Transfer Mode (ATM) switches.
BACKGROUND OF THE INVENTION
A key component of any communications network is the switching sub-system that
provides interconnectivity between users. Fast packet switching requires a
switching sub-system that can provide, in an efficient and fair manner, the various
qualities of service (QoS)
required by different applications. In the field of fast packet switching, the
interconnection of
multiple inputs to a single output of a fabric must handle an aggregate input
rate greater than
its output rate. Solutions to this problem can be generally classified into the following categories: input queuing to reduce the aggregate input rate in the event of fabric congestion; fabric speed-up to increase throughput to match or exceed the aggregate input rate; and output queuing to increase congestion tolerance at the bottleneck. Typically a subset of the preceding solutions is implemented for a given fast packet switch.
Queuing describes two simultaneously occurring processes: an arrival process and a departure process. The function of the arrival process is to distribute cells into queues on the basis of priority and destination. Since queues have finite depth, feedback is required to determine whether a cell can enter a queue or must be discarded. The function of the departure process is to decide which of the queues is to be served next. The decision must consider service fairness in light of a requested QoS.
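The two queuing processes just described can be modelled as a brief illustrative sketch (not part of the patent disclosure; the class name, the fixed per-queue depth, and the simple round-robin departure rule are all assumptions for illustration):

```python
from collections import deque

class QueueingModel:
    """Toy model of the arrival and departure processes described above."""

    def __init__(self, num_queues, depth):
        self.queues = [deque() for _ in range(num_queues)]
        self.depth = depth          # finite queue depth
        self.next_queue = 0         # simple round-robin departure pointer

    def arrive(self, cell, queue_id):
        """Arrival process: enqueue by destination/priority, or discard
        when occupancy feedback shows the destination queue is full."""
        q = self.queues[queue_id]
        if len(q) >= self.depth:    # feedback: queue occupancy at limit
            return False            # cell discarded
        q.append(cell)
        return True

    def depart(self):
        """Departure process: decide which queue to serve next
        (plain round-robin here; a real scheduler weighs QoS)."""
        for i in range(len(self.queues)):
            qid = (self.next_queue + i) % len(self.queues)
            if self.queues[qid]:
                self.next_queue = (qid + 1) % len(self.queues)
                return self.queues[qid].popleft()
        return None                 # all queues empty
```

In this toy form the feedback is simply the queue's length; the patent's point is that the arrival and departure sides each need such occupancy information while remaining otherwise independent.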
In general, all processes should be independent of one another, because dependency between processes imposes unnecessary restrictions on their operation. Furthermore, problems in one process can have a compounding effect on other processes, ultimately resulting in larger problems with an unidentifiable source. For example, a delay in one process can impose delays in neighboring processes.
Difficulties arise because independent processes typically have to communicate
with
one another. The co-ordination of accurate and reliable communication is no
easy task and
effective management of this communication is essential in order to successfully isolate


processes. The most common means of achieving such communication is to use a
shared
storage resource. This is the fundamental concept of a queue. However, queue
access is
typically subject to contention between processes. Thus the successful
negotiation of access
becomes key to effective communications management.
In queuing, the arrival and departure processes contend with one another for
storage
resources. First, access to queue memory has to be time-shared between
processes. Secondly,
processes contend in a similar manner for access to occupancy information
stored in memory.
This information is required in the arrival process as feedback to make a decision as to whether or not a queue should be entered. It is also required by the departure process in the decision of which queue should be serviced next. The careful management of access to these two areas of shared memory is key in isolating the arrival and departure processes.
Prior solutions do not suitably address the isolation of the arrival and
departure
processes in respect of contention. For example, a switching device described
in U.S. Patent
No. 5,528,592 to Schibler et al., entitled "Method and Apparatus for Route
Processing
Asynchronous Transfer Mode Cells", fails to mention this aspect of contention.
One
weakness of the switching device of Schibler et al. is that there is no
scheduling performed
across priorities and destinations, nor is there any selective cell discard on
the basis of
feedback. Both the arrival and departure processes must contend for access to
a single queue:
ICELL memory. In the arrival process of this device, the cell loader stores
cells in the ICELL
memory based on pointers found in a free cell First In First Out (FIFO)
buffer. In the
departure process, cells are scheduled for transmission by an active chain
managed by an
ingress controller in a call table. An active chain is a linked list of cells
in ICELL memory.
The ingress controller establishes the links as a low-priority function by modifying a next pointer field of each continuation-of-message cell stored in ICELL memory. The additional shared memory access required for the establishment of links in the departure process impinges on the operation of the arrival process. The result is that in conditions of heavy traffic, primarily composed of large packets, data will likely be lost in the arrival process as the departure process monopolizes the ICELL memory with link establishment.
Another weakness of the switching device of Schibler et al. is that complex decisions are typically made using a central processing unit (CPU) to lower cost. The problem here is that processes must contend for CPU processing time, which becomes more problematic as


decisions become more complex. Each queuing process requires a decision. In
the arrival
process, a decision is required to drop or pass a cell. In the departure
process, a decision is
required to slate the next queue for servicing in a fair manner. These
decisions are non-trivial,
but very manageable when presented with the appropriate information to assess.
For example,
a scheduling decision generally requires only information about the present
occupancy of
queues and the current state of congestion. A typical discard decision
requires only
information on the occupancy of the destination queue and cell eligibility. In
the switching
device of Schibler et al. these queuing decisions are made by the CPU. This
approach
counteracts the isolation of each of the processes, since the unit becomes
another point of
resource contention in need of management. The overhead of this management
generally
causes performance degradation.
Another example of an ATM layer device is disclosed in U.S. Patent No.
5,889,778 to
Huscroft et al., entitled "ATM Layer Device", in which the scope of queuing is
limited and
selective cell discard is not performed. Again, a centralized processor is
used that controls
various stages of cell processing in conjunction with other portions of the
ATM layer device.
In particular, the cell processor is responsible for several functions, such
as an external
random access memory (RAM) address look-up, microprocessor RAM arbitration,
microprocessor interface, microprocessor cell buffering, and auxiliary cell
FIFO buffering.
Thus, the cell processor must essentially manage all devices of contention
management
(across queuing processes and across layers) in addition to performing route
re-mapping.
It is, therefore, desirable to provide an ATM layer device that offers greater
isolation
of the arrival and departure processes.
SUMMARY OF THE INVENTION
It is an object of this invention to obviate or mitigate at least one
disadvantage of
previous methods and systems. Accordingly, it is an object of this invention
to provide an
improved ATM layer device for use in an ingress or egress configuration that
has higher
reliability and better performance than many of its predecessor designs. The
increase in
quality is directly accomplished through greater segregation of queuing
processes.
In a first aspect, the present invention provides an asynchronous transfer
mode layer
device for interfacing between a plurality of physical layer devices and an
asynchronous
transfer mode switch fabric. The asynchronous transfer mode layer device
comprises an


asynchronous transfer mode processing sub-layer for performing asynchronous
transfer mode
cell processing functions, and a device management sub-layer for performing
network layer
communication functions. The asynchronous transfer mode processing functions
are
implemented by independent managers. The independent managers include a route
manager,
a queue manager, a discard manager, a buffer manager and a scheduler. The
network layer
communication functions are provided by an address-mapped distributed-database
structure,
corresponding to the managers, in conjunction with a configuration-and-control
interface.
In a further aspect, the present invention provides a method for processing
cells in an
asynchronous transfer mode layer of an asynchronous transfer mode switch. The
method
consists of performing asynchronous transfer mode cell processing functions in an asynchronous transfer mode processing sub-layer; and performing network layer
communication functions in a device management sub-layer. The sub-layers have
the
configuration described above, such that routing, discarding, queuing,
scheduling and
buffering functions are segregated and independent.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example
only, with reference to the attached Figures, wherein:
FIG. 1 is a basic block diagram of a typical ATM switch;
FIG. 2 is a block diagram illustrating the layering structure of the present
ATM layer
device in the context of the Open Systems Interconnection (OSI) reference
model and ATM
Forum recommendations;
FIG. 3 is a block diagram illustrating the interconnection of the present ATM
layer
device in the context of a fast packet switch, implementing both input and
output queuing;
FIG. 4 is a block diagram of the ATM processing sub-layer of the present
invention;
FIG. 5 is a mapping diagram illustrating the mapping of the ATM processing
layer
functions to the ATM device management sub-layer of the ATM layer device of
the present
invention.
DETAILED DESCRIPTION
Before the ATM layer device and method of the present invention are described, a brief explanation of an ATM switch is presented. This invention,
however, is by


no means restricted to use in an ATM switch. Furthermore, descriptions of
embodiments of
the invention are made with reference to ATM terminology and ATM switching
equipment
for switching ATM cells, but, as will be understood by those of skill in the
art, the scope of
the invention includes other fast packet switching systems for switching other
types of data
units. Therefore, the term "cell" in this description is used generally to
mean any type of
fixed length data unit with a header, an example of which is an ATM cell. The
switching
systems referred to herein are intended to refer to satellite, terrestrial and
other data switching
systems.
A block diagram of a typical ATM switching system 20 is shown in FIG. 1.
Packet
switching in modern high-speed telecommunication networks is generally
implemented in a
switching system having one or more input (ingress) interface cards 22, a
switch fabric 24,
and one or more output (egress) interface cards 26. The ingress card is
responsible for
processing incoming traffic of ATM cells arriving at the input ports 28 for internal routing. Prior to routing traffic through the forward path 30, the ingress card 22 appends additional information onto the header portion of the ATM cell, such as its internal identifier (ID), cell type, cell class, and the designated egress card(s) 26. This information is typically stored in a table that is indexed by the external ID of the cell. The ingress card 22 performs other functions as well, such as the buffering, scheduling, queuing, monitoring, and discarding of cells. The ATM switch fabric 24 is primarily responsible for cross-connecting all traffic arriving from the ingress card(s) 22 to the egress card(s) 26. The switch fabric 24 can also perform other functions such as the buffering, scheduling and queuing of cells, depending on
the system architecture. Finally, the egress card 26 is responsible for
processing the traffic
received from the switch fabric for onward transmission through the output
ports 32. This
process involves the removal of the information appended to the cell header
and the
reinsertion of new information for delivery to the next destination. In
addition, the egress card
26 performs management functions similar to the ingress card 22, and can also
send
information to the switch fabric 24 (and ultimately to the ingress card 22)
regarding traffic
congestion at the egress card 26 through a feedback path 34. Ingress and egress cards 22 and 26 can be separate cards, or the same card operating in ingress mode, egress mode, or both, as appropriate.
In any switching network several packets intended for a single destination may
arrive
at the switch 20 simultaneously over a plurality of input ports 28. For
example, in FIG. 1, a


plurality of cells arriving at the ingress card 22 via separate input ports 28
may be destined
for a single output port 32 whose transmission capacity may only handle one
cell at a time.
The other entering cells must therefore be stored temporarily in a queue(s),
or buffer(s), 36.
The ATM layer device 40 of the present invention resides in the ingress and
egress cards 22
and 26, and is primarily concerned with the cell buffering, scheduling,
routing, queuing,
monitoring, and discarding functions that are performed in the ingress and
egress cards 22
and 26.
FIG. 2 shows the location and basic configuration of the ATM layer device 40
of the
present invention in relation to the Open Systems Interconnection (OSI) model,
and the
model recommended by the ATM Forum. In the OSI model, data streams arrive and depart
from a physical layer 50. The conversion of data streams into packets is a
progression from
the physical layer 50 to a data link layer 52. Packets are associated with a
destination (or set
of destinations, in the case of multicast and broadcast connections) and are
switched in the
data link layer 52. For switch nodes, traffic management, namely, the
acceptance and routing,
or rejection of connections based on loading, occurs in a network layer 54.
Higher layers (not
shown) are primarily for reformatting information in edge devices to
standardize
communication between source and destination. As used herein, an edge device is a device that connects the source or destination of information to the network (i.e., a telephone set is an edge device in a telecommunications network). In ATM networks, fixed-length packets, called cells, are used. As such, the data link layer 52 is sub-divided into two sub-layers: an
ATM Adaptation Layer (AAL) 56 and an ATM layer 58. The AAL 56 is primarily
used in
edge devices for the segmentation and re-assembly (SAR) of packets into ATM
cells.
The ATM layer 58 processes cells. Primarily, this involves the implementation
of
route management, congestion management and queuing functionality. In ATM
switches, the
ATM layer 58 also presents loading information to the network layer 54 for the
purpose of
traffic management. The ATM layer device 40, in this context, whether in
ingress or egress
mode, is responsible for presenting a management information base (MIB) to the
network
layer 54 for traffic management.
Since cell processing is a repetitive local task, and traffic management
involves
complex global decisions, the ATM layer device 40 according to the present
invention is
configured as a layered device within the ATM layer 58. An ATM processing sub-layer 60 is
responsible for implementing route management, congestion management, and
queuing


functionality, while a device management sub-layer 62 is responsible for
managing the
communications between the network layer 54 and the ATM processing sub-layer
60.
Referring to FIG. 3, the interconnection of the present ATM layer device 40 in
the
context of a generalized fast packet switch 66, implementing both input and
output queuing,
is shown. Data arrives and departs from the switch 66 over transmission pipes
70.
Transmission pipes 70 are the physical links between network nodes. Incoming
data streams
arriving over a transmission pipe 70 are received and converted into ATM cells
by a physical
layer device 72.
Cells are passed to an ingress ATM layer device 40a (i.e. ATM layer device 40
operating in ingress mode) over a Universal Test and Operations PHY Interface for ATM (UTOPIA) cell bus 74. The ingress ATM layer device 40a is responsible for
queuing in the
event of congestion, and for inserting routing information and passing cells
to the ATM
switch fabric 24 over cell bus 74. The ATM switch fabric 24 is responsible for
routing cells
from the ingress ATM layer device 40a to an egress ATM layer device 40b (i.e.
an ATM
layer device 40 operating in egress mode) in accordance with the information
contained in the
cell header.
Cells are passed to the egress ATM layer device 40b via cell bus 74. The
egress ATM
layer device 40b is responsible for queuing to reduce the likelihood of
congestion and for
routing cells to the required physical layer device 72. Cells are passed from
the egress ATM
layer device 40b to the physical layer device 72, again over UTOPIA cell bus
74. Outgoing
cell streams are framed and transmitted over a transmission pipe 70 by the
physical layer
device 72. The ATM layer device 40a, 40b, in this context, whether in ingress
or egress
mode, is responsible for implementing route management, congestion management
and
queuing functionality.
Referring to FIG. 4, there is shown a block diagram of the ATM processing sub-layer 60 of the present invention. Generally, the ATM processing sub-layer 60 divides route management, congestion management and queuing functionality amongst a cluster of five managers with the intention of isolating queuing processes. Route management functionality occurs strictly in a route manager 80. Congestion
management is a selective discard, based on feedback. The decision to discard
is made by a
discard manager 82, while feedback is provided in the form of queue occupancy
from a queue
manager 84. Queuing functionality occurs across three managers. A scheduler 86
decides


which queue to service next in the departure process. The decision is based on
all queue
occupancies presented by the queue manager 84. A buffer manager 88 is
responsible for
managing the arrival and departure processes that contend for access to buffer
FIFOs 90.
Note also that the queue manager 84 is responsible for managing access
contention for arrival
and departure process queue occupancy.
As shown in the mapping diagram of FIG. 5, the communication between the ATM
processing sub-layer 60 and the ATM device management sub-layer 62 is bi-directional and
dual-facetted. For general device management (configuration), a distributed
database
structure (DDS) 100 is used to communicate error flags upward and tuning
parameters
downward. For traffic management (control), loading information is
communicated upward
via MIB statistics integrated into the DDS 100, while downward communication
is done via
routing table entries. The DDS 100 is a bank of registers 80r, 82r, 84r, 86r,
and 88r that are
distributed according to the different managers 80, 82, 84, 86, 88 in the ATM processing sub-layer 60, with each register corresponding to a manager occupying a portion of the address space and only communicating information pertinent to its local operations.
The content of
information exchanged will be presented later in conjunction with the ATM
processing sub-layer 60 functionality. A configuration and control (CC) interface 102 is used
to present the
address space to the network layer 54 via a serial bit-stream 104. The
message, in this
embodiment, is transmitted with the most significant bit (MSB) first, and
formatted to include
a start-bit, a read/write bit, a 24-bit address, a 3-bit message length field
and a data field
generally varying between 0 and 120 bits.
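The serial message framing described above can be sketched as follows, purely as an illustration and not as part of the disclosure; the value of the start bit and the mapping between the 3-bit length field and the width of the data field are assumptions not stated in the text:

```python
def encode_cc_message(read: bool, address: int, length_code: int,
                      data: int, data_bits: int) -> str:
    """Pack a CC-interface message as a bit string, MSB first:
    start bit, read/write bit, 24-bit address, 3-bit length field,
    then a data field of 0-120 bits."""
    assert 0 <= address < 2 ** 24
    assert 0 <= length_code < 2 ** 3
    if data_bits:
        assert 0 <= data < 2 ** data_bits
    else:
        assert data == 0
    bits = "1"                                  # start bit (value assumed)
    bits += "1" if read else "0"                # read/write bit
    bits += format(address, "024b")             # 24-bit address, MSB first
    bits += format(length_code, "03b")          # 3-bit message length field
    if data_bits:
        bits += format(data, f"0{data_bits}b")  # data field, 0-120 bits
    return bits
```

For example, a read of address 1 with length code 1 and a 4-bit data field produces a 33-bit message (1 + 1 + 24 + 3 + 4 bits).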
The operation of the ATM layer device 40 of the present invention will now be
described with reference to FIGS. 4 and 5. In the arrival process, with the ATM
layer device
40 in ingress mode, a cell is passed from an input interface 120 to the route
manager 80 via
UTOPIA cell bus 74. The route manager 80 performs a table look-up based on the
Virtual
Path Identifier/Virtual Connection Identifier (VPI/VCI) values and maps in a
new cell route
R' 124, consisting of a connection state vector S, a destination queue(s) for the cell, and the
the cell, and the
service class and, optionally, other quality of service (QoS) conditions for
the cell, from an
external connection memory 126. Simultaneously, a request is made by the route
manager 80
to the discard manager 82 to determine if the cell should be dropped on the
basis of the
volume of presently queued traffic for the given service class and output.
This request is
made via an eligibility request signal E 128 that, in a presently preferred
embodiment,


consists of the connection state vector S, Cell Loss Priority (CLP), Early Packet Discard
Packet Discard
(EPD), and Partial Packet Discard (PPD) eligibility vectors for the current
connection, and an
end of packet (EOP) marker. The discard manager 82 returns a send/discard flag
and a
modified state vector S' 130 to the route manager 80. The route manager 80
drops the cell if
the discard manager 82 asserts a discard flag in response to the eligibility
request signal 128.
Otherwise, the route manager 80 passes the cell, via cell bus 74, to the
buffer manager 88 for
queuing in the external buffer FIFOs 90. Since discarding is connection-based,
modified state
vector S' 130 is returned to the external connection memory 126 via the route
manager 80.
The route manager 80 also sends a mutual exclusion signal 158 to the CC
interface 102. The
route manager 80 flags erroneous routes to its respective address space 80r in
DDS 100 by
storing the erred address in least recently used (LRU) fashion.
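The arrival-process hand-off just described (table look-up, discard request, write-back of the modified state, then drop or enqueue) can be sketched as follows; this is an illustrative model only, and all names and signatures are assumptions rather than the disclosed hardware interfaces:

```python
def route_arrival(cell: dict, connection_memory: dict,
                  discard_manager, buffer_manager) -> bool:
    """Toy arrival flow: look up the route by VPI/VCI, ask the discard
    manager for a send/discard decision, then drop or enqueue the cell."""
    key = (cell["vpi"], cell["vci"])
    route = connection_memory[key]                # table look-up: state S, queue, class
    discard, new_state = discard_manager(route["state"], route["queue"])
    connection_memory[key]["state"] = new_state   # write back modified state S'
    if discard:
        return False                              # route manager drops the cell
    buffer_manager(cell, route["queue"])          # pass cell on for queuing
    return True
```

Note that the modified connection state is written back whether or not the cell is dropped, mirroring the connection-based discard described above.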
In order to make a connection-based discard decision, the discard manager 82
requires
eligibility information and queue occupancy feedback. The eligibility
information is
dispatched directly from the route manager 80, as described above for
eligibility request
signal E 128. The queue occupancy feedback is dispatched as a vector from the
queue
manager 84: Qocc signal 134. The feedback information selected is based on the
destination
queue(s), provided by route manager 80 as a queue signal Q 136 when the
request is initiated.
Queue signal Q 136 consists of an encoded queue identification and a request
from route
manager 80. If discard manager 82 determines that a cell for a given
connection is eligible
and the occupancy of the destination queue exceeds some threshold that would
otherwise
compromise QoS, then the route manager 80 is advised to drop the cell. The
discard manager
82 does so by asserting a discard. In the event that all queues are full, as
indicated by an
overflow flag asserted by the queue manager 84 in conjunction with the Qocc
signal 134, the
discard manager 82 automatically asserts a discard. The CLP, EPD and PPD "on" and "off" thresholds for each queue are stored in the DDS 100 for tuning purposes. Furthermore, the discard manager 82 stores counts of transmitted cells (total and per port) and discarded cells (total and per queue on the basis of CLP, EPD and PPD eligibility) in its respective address space 82r in DDS 100 for traffic management purposes.
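As a hedged illustration of the connection-based discard decision described above (the function shape and the hysteresis reading of the "on"/"off" thresholds are assumptions, not the patent's implementation):

```python
def discard_decision(eligible: bool, occupancy: int, on_threshold: int,
                     off_threshold: int, discarding: bool,
                     overflow: bool) -> tuple:
    """Return (discard, new_discarding_state) for one cell.

    Discard when the cell is eligible and the destination queue's
    occupancy would compromise QoS, or unconditionally when all queues
    are full (overflow). The "on"/"off" thresholds are interpreted here
    as hysteresis bounds on the discarding state."""
    if overflow:                        # all queues full: automatic discard
        return True, discarding
    if discarding:
        # stay in the discard state until occupancy drops below "off"
        discarding = occupancy >= off_threshold
    else:
        # enter the discard state once occupancy reaches "on"
        discarding = occupancy >= on_threshold
    return (eligible and discarding), discarding
```

An ineligible cell is never selectively discarded in this sketch, even while the queue sits above threshold, which matches the eligibility-gated behaviour described for the discard manager 82.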
In the case where a cell is not dropped, it is passed from the route manager
80 to the
buffer manager 88 for queuing via the cell bus 74. The buffer manager 88
extracts the
destination queue and accordingly stores the cell in the buffer FIFOs 90 by
writing over the
FIFO bus 138. Cell addresses are manipulated in a FIFO manner, and the cell is
written to a


common external memory space. Optionally, an error correction field can be
appended to the
cell to protect against memory errors. The change in queue occupancy is
dispatched to the
queue manager 84 via the ΔQ signal 140, where ΔQ includes the encoded queue
identification and an increment/decrement signal. FIFO boundaries are stored
by the buffer
manager 88 in its respective address space 88r in the DDS 100 to allow for
queue depth
tailoring at start-up while buffer FIFO 90 memory errors are flagged to the
address space 88r
in DDS 100 for tracking purposes as a count in conjunction with the erred
address in LRU
fashion.
Concurrently in the departure process the buffer manager 88 requests a queue
from
the scheduler 86 via a request signal 142. The queue slated for transmission
is then supplied
by the scheduler 86 via a dispatch Q signal 144, where Q represents the
queue slated
for transmission. The buffer manager 88 subsequently fetches a cell from the
buffer FIFOs 90
via the FIFO bus 138 and then passes the cell to an output interface 150 via
cell bus 74. The
change in queue occupancy is then dispatched to the queue manager 84 via the
ΔQ signal 140
by asserting a decrement in conjunction with Q. Optionally, the decrement
assertion can be
routed to the scheduler 86 as an acknowledgement to prompt recalculation.
Furthermore, an
error check is performed in the case where an error correction field is
appended to the cell in
the arrival process. As previously mentioned, FIFO boundaries are stored in
the DDS 100 to
allow for queue depth tailoring at start-up, while buffer FIFO 90 memory
errors are flagged
to the DDS 100 for tracking purposes as a count in conjunction with the erred
address in LRU
fashion.
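The arrival and departure paths above can be modeled with a toy queue structure. The class and method names are hypothetical stand-ins for the buffer manager's FIFO writes and its increment/decrement notifications to the queue manager:

```python
from collections import deque

class BufferModel:
    """Toy model of the arrival/departure queuing processes (names hypothetical)."""

    def __init__(self, n_queues):
        self.fifos = [deque() for _ in range(n_queues)]  # stands in for buffer FIFOs 90
        self.occupancy = [0] * n_queues                  # the queue manager's view

    def arrive(self, q, cell):
        """Arrival: write the cell to its destination FIFO and
        signal an occupancy increment for queue q."""
        self.fifos[q].append(cell)
        self.occupancy[q] += 1

    def depart(self, q):
        """Departure: fetch the head cell of the queue slated for
        transmission and signal an occupancy decrement."""
        cell = self.fifos[q].popleft()
        self.occupancy[q] -= 1
        return cell
```

Because arrivals and departures touch only the FIFO and the occupancy counter, the two processes stay decoupled, mirroring the asynchronous front-end/back-end separation the device relies on.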
The scheduler 86 then decides which queue to service next. To assess fairness
in light
of requested QoS, the scheduling decision requires prioritization information
in conjunction
with the current queue occupancy information. The queue manager 84 permanently
presents
this per-queue occupancy information to the scheduler 86 via a Qstatus signal
154. Tunable
service class and output prioritization parameters are presented as weights
stored in the DDS
100. Furthermore the decision considers downstream congestion feedback, as
indicated by a
one-hot encoded 8-bit congestion signal 156. The scheduler 86 supplies the
next queue slated
for transmission on the dispatch Q signal 144 upon assertion of the request
signal 142 by the
buffer manager 88.
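The text leaves the scheduling algorithm itself open (that independence is a stated goal of the layering), so the following is only one possible policy: serve the highest-weight backlogged queue whose output is not congested. All names are hypothetical:

```python
def next_queue(occupancy, weights, congested):
    """One possible scheduling policy (the device leaves the algorithm open).

    occupancy: per-queue cell counts, the Qstatus view from the queue manager
    weights:   tunable service-class/output priority weights, as stored in the DDS
    congested: per-queue congestion flags derived from downstream feedback
    """
    # Only non-empty queues whose downstream path is clear are candidates.
    candidates = [q for q in range(len(occupancy))
                  if occupancy[q] > 0 and not congested[q]]
    if not candidates:
        return None  # nothing eligible to transmit this cycle
    # Static-priority pick: highest weight wins.
    return max(candidates, key=lambda q: weights[q])
```

A weighted-fair or round-robin policy could be substituted here without touching the buffer manager, which is exactly the algorithmic independence the layered structure is meant to provide.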
As will be apparent to those of skill in the art, the FIFO bus 138 can be a
point of
resource contention between processes. Also, the downward communication of
routing table


entries contends with the route manager 80 for access to connection memory
126. Therefore,
the route manager 80 is given greater priority by default as a result of the
layering, and is the
master of connection memory access. The management of this inter-layer
contention is
achieved in one of two manners. First, contention management can be
implemented as a part
of the route manager 80, with the route table entries integrated into the DDS
100. Secondly,
as illustrated in FIG. 4, contention management can be implemented as a memory
access
device on the back-end of one eighth the address space of CC interface 102,
with the route
manager 80 retaining a mutual exclusion signal indicating continued connection
memory
access. Optionally, a buffer FIFO memory access device can be similarly
implemented for
added test support.
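The default priority rule, under which the route manager is master of connection memory access, amounts to a fixed-priority arbiter. The sketch below (all names hypothetical) illustrates the idea:

```python
def arbitrate(route_manager_req, table_update_req):
    """Fixed-priority arbiter for connection memory access.

    The route manager wins by default; downward routing-table
    updates proceed only when the route manager is idle.
    """
    if route_manager_req:
        return "route_manager"
    if table_update_req:
        return "table_update"
    return None  # memory idle this cycle
```

In hardware this corresponds to the mutual-exclusion signal the route manager retains while it holds the connection memory.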
The queue manager 84 can also be integrated into the scheduler 86 in the case
where
congestion management functionality does not require feedback. The use of the
discard
manager 82 is optional in the case where congestion management functionality
is not
required at all. Furthermore, in such a case, the route manager 80 can be
located after the
buffer manager 88 in the departure process, making it an input queued device.
Optionally, queuing memory usage can be increased with sharing. Each queue is
only
guaranteed a small allocation of memory space, but may exploit an entire
shared portion. The
shared portion is the remaining memory after queue allocations, or any set of
divisions
thereof. In the case where sharing is implemented, the queue manager 84 is
responsible for
ensuring that the shared space does not overflow. This sharing function is not
apparent in
occupancy and is transparent to the operation of the other managers. To
achieve this, a
minimum queue size in conjunction with a maximum share size is held on a per
queue basis
in the queue manager address space 84r of the DDS 100.
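The admission check implied by this sharing scheme can be sketched as follows. The function, its parameters, and the exact order of the checks are hypothetical; the text specifies only that each queue holds a guaranteed minimum plus a capped claim on the shared pool:

```python
def admit_cell(q_occ, q_min, q_max_share, shared_used, shared_cap):
    """Shared-buffer admission check (illustrative sketch only).

    q_occ:        current occupancy of the destination queue
    q_min:        the queue's guaranteed minimum allocation
    q_max_share:  the queue's maximum claim on the shared portion
    shared_used:  cells currently drawn from the shared pool (all queues)
    shared_cap:   total size of the shared portion

    Returns (admitted, new_shared_used).
    """
    if q_occ < q_min:
        return True, shared_used  # within the guaranteed allocation
    share_in_use = q_occ - q_min
    if share_in_use < q_max_share and shared_used < shared_cap:
        return True, shared_used + 1  # draw one cell slot from the pool
    return False, shared_used  # per-queue share or pool exhausted
```

Since only the queue manager tracks `shared_used`, the sharing stays invisible to the other managers, as the text requires.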
Referring to FIG. 3 and FIG. 4, the input interface 120 and output interface
150
couple the ATM layer device 40 to the ATM switch fabric 24 and the plurality
of physical
layer devices 70. In ingress mode, the input interface 120 is couplable to a
plurality of
physical layer devices 70 via a UTOPIA cell bus, while the output interface
150 is high speed
cell bus 74 compatible with the ATM switch fabric 24. In egress mode, the
input interface
120 is high speed cell bus 74 couplable to the switch fabric 24, while the
output interface 150
is couplable to a plurality of physical layer devices 70. In a typical ATM
switch, the
interfaces reformat cells to 60 bytes for internal use compatible with the
ATM switch fabric
24, in which case the cell bus 74 is 30 data bits long and can be transmitted
in conjunction


with a 50 MHz clock pulse and two parity bits, providing a total maximal
throughput of 1.325
Gbps. In addition, an eighth of the address space of the CC interface 102 is
dedicated to
interfaces for flagging interfacing errors in the DDS 100 and configuring
device mode.
The main advantages of the device and method of the present invention include the
the
manner in which the isolation of processes is accomplished. The ATM layer
device 40 of the
present invention addresses the issue of storage resource contention by
assigning a manager
with the sole purpose of managing contention. The issue of contention in
processing
resources is addressed in a similar manner - a manager is assigned to each
point of decision-
making with the sole purpose of making such decisions. Coupled with a layered
structure,
this managerial assignment provides greater algorithmic independence in that
the manner in
which any one manager implements its functionality is completely independent
from the way
neighboring managers implement their respective functionality. The ensuing
advantages of
algorithmic independence are: independent and parallel development; ease of
fault detection,
isolation and handling pre- and post- deployment; algorithm substitution; and
independent
tuning pre- and post- deployment.
Furthermore, by assigning distinct managers to each decision point, the
present
invention eliminates processing resource contention. This feature offers the
advantage of
parallel processing since decisions are made concurrently, thereby increasing
throughput.
Moreover, with the assignment of distinct managers to each point, the
concurrent decisions
can be made in an asynchronous manner. This asynchronous feature eases system
development and integration efforts in that front-end and back-end designs
remain uncoupled
with respect to queuing processes.
The above-described embodiments are intended to be examples of the present
invention. Alterations, modifications and variations may be effected to the
particular
embodiments by those of skill in the art without departing from the scope of
the invention,
which is defined solely by the claims.


Administrative Status

Title Date
Forecasted Issue Date Unavailable
(22) Filed 2000-11-06
(41) Open to Public Inspection 2001-05-05
Dead Application 2006-11-06

Abandonment History

Abandonment Date Reason Reinstatement Date
2005-11-07 FAILURE TO PAY APPLICATION MAINTENANCE FEE
2005-11-07 FAILURE TO REQUEST EXAMINATION

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document - section 124 $100.00 2000-11-06
Application Fee $300.00 2000-11-06
Maintenance Fee - Application - New Act 2 2002-11-06 $100.00 2002-11-06
Registration of a document - section 124 $50.00 2003-06-20
Maintenance Fee - Application - New Act 3 2003-11-06 $100.00 2003-11-06
Maintenance Fee - Application - New Act 4 2004-11-08 $100.00 2004-11-03
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
SPACEBRIDGE SEMICONDUCTOR CORPORATION
Past Owners on Record
CORNFIELD, DAVID W.
DALVI, ANEESH
GILDERSON, JAMES A.
KHAILTASH, AMAL
LEVESQUE, MARC
SPACEBRIDGE NETWORKS CORPORATION
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Abstract 2000-11-06 1 15
Description 2000-11-06 12 779
Drawings 2000-11-06 5 98
Cover Page 2001-04-20 1 36
Claims 2000-11-06 3 132
Representative Drawing 2001-04-20 1 9
Assignment 2000-11-06 6 215
Assignment 2003-06-20 6 256
Fees 2002-11-06 1 21