Patent 2301433 Summary

(12) Patent Application: (11) CA 2301433
(54) English Title: METHOD AND SYSTEM FOR FLOW CONTROL IN A TELECOMMUNICATIONS NETWORK
(54) French Title: METHODE ET SYSTEME DE CONTROLE DU FLUX DANS UN RESEAU DE TELECOMMUNICATIONS
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 47/10 (2022.01)
  • H04L 47/24 (2022.01)
  • H04L 47/263 (2022.01)
  • H04L 47/30 (2022.01)
  • H04L 47/33 (2022.01)
  • H04L 49/253 (2022.01)
  • H04L 12/835 (2013.01)
  • H04L 12/861 (2013.01)
(72) Inventors :
  • WIBOWO, EKO ADI (Canada)
  • HUANG, JUN (Canada)
(73) Owners :
  • SPACEBRIDGE SEMICONDUCTOR CORPORATION (Canada)
(71) Applicants :
  • SPACEBRIDGE NETWORKS CORPORATION (Canada)
(74) Agent: BORDEN LADNER GERVAIS LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2000-03-20
(41) Open to Public Inspection: 2001-02-09
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
2,279,803 Canada 1999-08-09

Abstracts

English Abstract




A system and method for reducing the overall cell loss ratio in switching networks, while maintaining service fairness during periods of traffic congestion. The method includes controlling data flow in a data switching system having an ingress and an egress. The method commences by detecting an egress congestion condition. Data flow in the switching system is then decreased in response to the detected egress congestion condition. If an ingress congestion condition is then detected, data flow in the switching system is increased. The switching system has an ingress and an egress, and includes a switch fabric for switching data units from the ingress to the egress. An egress card has a first feedback link to the switching fabric. The egress card has an egress congestion monitor for detecting an egress congestion condition and for notifying the switch fabric, through the first feedback link, to decrease data flow to the egress upon detecting the egress congestion condition. An ingress card has an alternate forward link to the switch fabric, and an ingress condition monitor for detecting an ingress congestion condition and for notifying the switch fabric, through the alternate forward link, to increase data flow from the ingress upon detecting the ingress congestion condition.


Claims

Note: Claims are shown in the official language in which they were submitted.



We claim:

1. A method for controlling data flow in a data switching system having an ingress and an egress, the method comprising the steps of:
(i) detecting an egress congestion condition;
(ii) decreasing data flow in the switching system in response to the detected egress congestion condition;
(iii) detecting an ingress congestion condition; and
(iv) increasing data flow in the switching system in response to the detected ingress congestion condition.

2. A method according to claim 1, wherein the step of detecting the egress congestion condition includes monitoring occupancy of an egress buffer and comparing the occupancy to an egress congestion threshold.

3. A method according to claim 1, wherein the step of detecting the ingress congestion condition includes monitoring occupancy of an ingress buffer, and comparing the occupancy to an ingress congestion threshold.

4. A method according to claim 2, wherein the step of monitoring occupancy of the egress buffer is performed for each class of service in use.

5. A method according to claim 3, wherein the step of monitoring occupancy of the ingress buffer is performed for each class of service in use.

6. A method according to claim 2, wherein the egress congestion threshold has a predetermined value.

7. A method according to claim 3, wherein the ingress congestion threshold is periodically adjusted based on a frequency of detecting ingress congestion loss.

8. A method according to claim 7, wherein the ingress congestion threshold is decreased when ingress congestion loss is detected, and is increased in absence of ingress congestion loss detection.

9. A method according to claim 3, wherein the ingress congestion threshold is adjusted at periodic intervals, such that at an end of each interval the ingress congestion threshold is decreased by a predetermined amount if ingress congestion loss is detected, and increased by another predetermined amount otherwise.
10. A method according to claim 1, wherein the step of decreasing data flow includes activating congestion weights, and the step of increasing data flow includes activating normal weights.

11. A method according to claim 1, wherein the step of decreasing data flow includes reducing a traffic transmission rate by a rate-reduction factor.

12. A method according to claim 11, wherein the rate-reduction factor is substantially one half.

13. A method according to claim 11, wherein the rate-reduction factor has substantially a value of (1/2)^n, n being a positive non-zero integer.

14. A method according to claim 11, wherein the step of decreasing data flow for a class of service includes incrementing a counter each time data traffic from the class of service is chosen for transmission, resetting the counter to zero when the counter exceeds an interval value, and stopping transmission of data traffic from the class of service when the counter value is zero.

15. A method according to claim 1, further including a step of stopping the transmission of data traffic during egress congestion detection.

16. A method according to claim 1, wherein the step of decreasing data flow includes limiting the number of transmitted data units during egress congestion detection.
17. A switching system for a telecommunications network, the switching system having an ingress and an egress, the system comprising:
a switch fabric for switching data units from the ingress to the egress;
an egress card having a first feedback link to the switching fabric, the egress card having an egress congestion monitor for detecting an egress congestion condition and for notifying the switch fabric, through the first feedback link, to decrease data flow to the egress upon detecting the egress congestion condition; and
an ingress card having an alternate forward link to the switch fabric, the ingress card having an ingress condition monitor for detecting an ingress congestion condition and for notifying the switch fabric, through the alternate forward link, to increase data flow from the ingress upon detecting the ingress congestion condition.

18. A switching system according to claim 17, further including a traffic scheduler in the switch fabric for regulating transmission of data units.

19. A switching system according to claim 17, wherein the egress congestion monitor and the ingress congestion monitor detect egress and ingress congestion conditions, respectively, for each class of service in use.

20. A switching system according to claim 17, wherein the ingress congestion monitor includes adjustment means to periodically adjust an ingress congestion threshold.

21. A switching system according to claim 20, wherein the adjustment means decrease the ingress congestion threshold when detecting ingress congestion loss, and increase the ingress congestion threshold when detecting no ingress congestion loss.

22. A switching system according to claim 18, further including weight activation means for activating congestion weights in the traffic scheduler when ingress congestion is detected, and for activating normal weights in the traffic scheduler when no ingress congestion is detected.

23. A switching system according to claim 18, wherein the traffic scheduler decreases transmission of data units by reducing a traffic transmission rate by a rate-reduction factor.

24. A switching system according to claim 23, wherein the rate-reduction factor is substantially one half.

25. A switching system according to claim 23, wherein the rate-reduction factor has substantially a value of (1/2)^n, n being a positive non-zero integer.

26. A flow control system as in claim 23, wherein the traffic scheduler includes a counter for determining the rate-reduction factor for a class of service by incrementing the counter each time traffic from the class of service is chosen for transmission, by resetting the counter to zero when the counter exceeds an interval value, and by stopping transmitting data units from the class of service when the counter value is zero.

Description

Note: Descriptions are shown in the official language in which they were submitted.



METHOD AND SYSTEM FOR FLOW CONTROL IN A
TELECOMMUNICATIONS NETWORK
FIELD OF THE INVENTION
The present invention generally relates to a method and system for flow
control in a
telecommunications network. More particularly, the present invention relates
to a method
and system for flow control in a data switching unit in a telecommunications
network.
BACKGROUND OF THE INVENTION
A typical prior art switching system 20 is shown in Fig. 1. Packet switching in modern high-speed telecommunication networks is generally implemented in a switching system having an input (ingress) interface card 22, a switch fabric 24, and an output (egress) interface card 26. In a typical Asynchronous Transfer Mode (ATM) switch, the ingress card is responsible for processing incoming traffic of ATM cells arriving at the input ports 28 for internal routing. Prior to routing traffic through the forward path 30, the ingress card 22 appends additional information onto the header portion of the ATM cell, such as its internal identifier (ID), cell type, cell class, and the designated egress card(s) 26. This information is typically stored in a table that is indexed by the external ID of the cell. The ingress card 22 performs other functions as well, such as the buffering, scheduling, queuing, monitoring, and discarding of cells. The ATM switch fabric 24 is primarily responsible for switching all traffic arriving from the ingress card(s) 22. It also performs other functions such as the buffering, scheduling and queuing of cells. Finally, the egress card 26 is responsible for processing the traffic received from the switch fabric for onward transmission through the output ports 32. This process involves the removal of the information appended to the cell header and the reinsertion of new information for delivery to the next destination. In addition, the egress card 26 performs management functions similar to the ingress card 22. The egress card 26 may also send a signal to the switch fabric 24 (and ultimately to the ingress card 22) regarding traffic congestion at the egress card 26 through the feedback path 34.
In any switching network several packets intended for a single destination may
arrive
at the switch simultaneously over a plurality of input ports. For example, in
Fig. 1, a
plurality of cells arriving at the ingress card 22 via separate input ports 28
may be destined
for a single output port 32 whose transmission capacity may only handle one
cell at a time.
The other entering cells must therefore be stored temporarily in a buffer 36
(or buffers). In
peak traffic periods some of these cells could be lost if the switch does not
have sufficient
space to buffer them.
A flow control strategy is therefore established between the ingress card(s) 22, the switch fabric 24, and egress card(s) 26 to allow for optimum transfer of traffic from the input ports 28 to the output ports 32. The purpose of any flow control strategy is to reduce the amount of loss due to congestion at a queuing point in the switch. Flow control can be categorized as either feed-forward or feedback. Feed-forward flow control attempts to alleviate traffic congestion by notifying up-stream queuing points of an imminent increase in the amount of traffic transmitted. Conversely, feedback flow control attempts to ease traffic congestion by signalling downstream queuing points to reduce the amount of traffic transmitted.
Methods for controlling traffic congestion can generally be grouped into three
layers,
based on the length of the traffic congestion period: the call layer, the
burst layer, and the
cell (or packet) layer. A call layer is specifically designed for congestion
that lasts longer
than the duration of a call session and is programmed to cancel the call if
the required
network bandwidth is unavailable. Call layer methods include Connection
Admission
Control and dynamic routing schemes. A burst layer, on the other hand, is used
for
congestion that lasts for a period comparable to the end-to-end round trip
delay of the
network. Burst layer methods include credit-based and rate-based flow control.
Finally, a
cell or packet layer method is targeted for congestion periods that are
shorter than the end-
to-end round trip delay. Cell layer methods include internal switch congestion
control and
flow control schemes such as backpressure flow control and selective cell
discard
methods.
In stop-and-go, or backpressure, flow control, the egress card 26 sends a
signal to the
ingress card 22 to stop transmission of cells until congestion is alleviated.
This flow
control method is the simplest to implement and is quite effective when the
period
between the time of notification to the time of transmission cessation is
short. An example
of such backpressure flow control is disclosed in U.S. Patent No. 5,493,566 to
Ljungberg
et al., entitled "Flow Control System for Packet Switches".
In credit-based flow control, the egress card 26 periodically notifies the
ingress card
22 to limit the number of cells it is transmitting. While this process is more
complex than
the backpressure method, it is able to guarantee that there will be no
congestion loss,
regardless of the propagation delay caused by the passage of the cells through
the switch
20. Examples of credit-based schemes are disclosed in M.G.H. Katevenis, D.
Serpanos
and E. Spyridakis, "Switching Fabrics with Internal Backpressure using the
ATLAS 1
Single-Chip ATM Switch", Proceedings of IEEE Globecom, November, 1997; A.
Pomportis and I. Vlahavas, "Flow Control and Switching Strategy for Preventing
Congestion in Multistage Networks", Architecture and Protocols for High Speed
Networks, Kluwer Academic Publishers, 1994; and U.S. Patent No. 5,852,602 to
Sugawara, entitled "Credit Control Method and System for ATM Communication
Apparatus".
In rate-based flow control, the egress card 26 notifies the ingress card 22 of
egress
traffic congestion and signals it to adjust its transmission rate. This flow
control signal
may specify the sending rate of transmission, or it may simply request the
ingress card 22
to reduce its rate of transmission. Rate-based flow control is also more
complex than the
stop-and-go scheme, but unlike the credit-based method, it cannot prevent
congestion loss.
Examples of rate-based implementation are described in F. Bonomi and K.
Fendick, "The
Rate-based Flow Control Framework for the Available Bit Rate ATM Service",
IEEE
Network, March/April 1995; and R. Jain, "Congestion Control and Traffic
Management in
ATM Networks: Recent Advances and a Survey", Computer Networks and ISDN
Systems, 28 October 1996.
In most terrestrial switching applications, buffers 36 in either ingress 22 or
egress 26
cards are designed sufficiently large to absorb most peak traffic, and can,
therefore,
employ one of the previously outlined flow control methods. However, this is
not the case
for other applications, such as on-board satellite switches or very high-speed
terrestrial
switches, where the weight and/or size of buffers 36 cannot exceed certain
limits. In order
to accommodate the same heavy traffic load for such switches, a more efficient
flow
control method is required to optimize sharing and use of the buffers 36 in
ingress card 22,
egress card 26 and switch fabric 24.
Another traffic management function often performed in switching networks is
traffic scheduling. Advanced traffic schedulers can provide a broad range of
Quality-of-Service (QoS) guarantees, which are essential for the support of multiple
services and
applications. A popular group of advanced traffic schedulers is the rate
scheduling family.
Traffic scheduling methods that belong to this family of schedulers are
intended to
guarantee a minimum transmission bandwidth to certain services or
applications.
Examples of such traffic scheduling methods are described in D. Stiliadis and
A. Varma,
"A General Methodology for Designing Efficient Traffic Scheduling and Shaping
Algorithms", Proceedings of IEEE Infocom, April 1997; and H. Zhang and D.
Ferrari,
"Rate-Controlled Service Disciplines", Journal of High Speed Networks, 1994.
While such scheduling schemes can play a vital role in traffic flow control,
they often
achieve their QoS objectives by favouring transmission bandwidth isolation
over
transmission bandwidth sharing, which can result in higher congestion loss.
One prior art
technique used to alleviate this problem is the application of congestion
weights to the
scheduling method based on the size of traffic queues such that the larger the
size of the
queue, the greater the weight assigned to that queue. This larger weight
effectively
allocates more bandwidth (or cell rate) to the more congested queues at the
expense of the
other less congested queues, and increases the availability of space in the
congested
queues more quickly. This scheme has the effect of better managing the overall
traffic
flow by allowing proportional sharing of bandwidth between longer and shorter
traffic
queues.
Unfortunately, the congestion weight scheme does little to support service
fairness
between traffic queues. The longer a weight is assigned to a congested queue,
the longer
the unfair service exists. Moreover, in situations where the buffers are
smaller and easier
to congest, weights are assigned more frequently and in much closer intervals,
thus
causing more frequent occurrences of unfair service. The issue of service
fairness arises
when a small number of traffic queues dominate the use of common buffer space.
This
situation can occur during heavy traffic when some of the egress cards 26 are
continually
congested. Feedback flow control schemes address the problem of traffic
congestion by
holding up traffic in the ingress card 22 long enough that the common buffer
space in the
ingress card 22 is used exclusively for traffic designated to the congested
egress card 26.
As a result, traffic queues designated to lightly-congested or non-congested
egress cards
have limited or no access to the common buffer space and are therefore
subjected to
increasing congestion loss.
Flow control in systems implementing Class of Service (COS) can also be
problematic. Class of Service is the ability of switches and routers to
prioritize traffic in
different classes. It does not guarantee delivery, but uses best efforts, and
the highest class
has the best chance of succeeding. Generally, traffic behaviour for a Class of
Service that
has little or no service guarantee is very bursty. There are two explanations
for such
traffic behaviour. First, such Class of Service is typically used for delay-
insensitive data
applications, such as most Internet applications, which are known to possess
very bursty
traffic behaviour. Second, such Class of Service typically has relaxed traffic
contracts
and, as a result, the traffic policer in the switching network cannot reduce
the burstiness
behaviour. Traffic from such Class of Service is therefore more easily
congested.
Furthermore, research results have shown that non-idling traffic schedulers
used in the
ingress cards 22 and the switch fabric 24 can build up traffic burstiness
entering the egress
cards 26, thereby increasing egress buffer requirements. Applying feedback
flow control
in the egress card 26 does not, by itself, solve the congestion problem. In
fact, such flow
control schemes shift the congestion rapidly onto the ingress cards 22.
It is, therefore, desirable to provide a system and method for reducing the
overall cell
loss ratio in switching networks, while maintaining service fairness during a
traffic
congestion situation, particularly in switching networks where buffer size is
at a premium,
such as on-board satellite switches and very high speed terrestrial switches.
SUMMARY OF THE INVENTION
It is an object of the present invention to obviate or mitigate at least one
disadvantage
of the prior art. In particular, it is an object of the present invention to
provide a system
and method for reducing the overall cell loss ratio in switching networks,
while
maintaining service fairness during periods of traffic congestion. Such a
system and
method has particular applicability to switching networks where buffer size is
at a
premium, such as on-board satellite switches and very high speed terrestrial
switches. It is
a further object of the present invention to provide a system and method that
combines
flow control with complementary traffic scheduling for use in a switch fabric
of a
switching unit, in order to reduce the overall cell loss ratio and to maintain
service fairness
during periods of traffic congestion.
In a first aspect, the present invention provides a method for controlling
data flow in
a data switching system having an ingress and an egress. The method commences
by
detecting an egress congestion condition. Data flow in the switching system is
then
decreased in response to the detected egress congestion condition. If an
ingress
congestion condition is then detected, data flow in the switching system is
increased.
Presently preferred embodiments of the method of the present invention provide
that
the egress congestion condition is detected by monitoring occupancy of an
egress buffer
and comparing the occupancy to an egress congestion threshold. Similarly, the
ingress
congestion condition is detected by monitoring occupancy of an ingress buffer,
and
comparing the occupancy to an ingress congestion threshold. The monitoring of
occupancy of the egress and ingress buffers can be performed for each class of
service in
use. Generally, the egress congestion threshold has a predetermined value,
while the
ingress congestion threshold is periodically adjusted based on a frequency of
detecting
ingress congestion loss. Typically, the ingress congestion threshold is
decreased when
ingress congestion loss is detected, and is increased in absence of ingress
congestion loss
detection, or the ingress congestion threshold is adjusted at periodic
intervals, such that at
an end of each interval the ingress congestion threshold is decreased by a
predetermined
amount if ingress congestion loss is detected, and increased by another
predetermined
amount otherwise.
Typically, a rate reduction factor is used to decrease data flow according to
the
method of the present invention, which can also provide for activating
congestion weights,
and activating normal weights. The rate-reduction factor is generally substantially one half the data transmission rate, or a value of (1/2)^n, n being a positive non-zero integer.
Decreasing data flow for a class of service includes incrementing a counter
each time data
traffic from the class of service is chosen for transmission, resetting the
counter to zero
when the counter exceeds an interval value, and stopping transmission of data
traffic from
the class of service when the counter value is zero. The method can also be
implemented
with a stop-and-go flow control procedure or a credit-based flow control
procedure.
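
Purely as an illustration (this sketch is not part of the patent disclosure; the class and variable names are invented), the counter-based mechanism described above can be rendered in Python as follows. With an interval value of 1, every second transmission opportunity is skipped, giving a rate-reduction factor of substantially one half:

class RateReducer:
    """Illustrative counter-based rate reduction for one class of service."""

    def __init__(self, interval):
        self.interval = interval  # the "interval value" of the method
        self.counter = 0

    def may_transmit(self):
        # Increment the counter each time data traffic from the class of
        # service is chosen for transmission.
        self.counter += 1
        # Reset the counter to zero when it exceeds the interval value.
        if self.counter > self.interval:
            self.counter = 0
        # Transmission is stopped when the counter value is zero.
        return self.counter != 0

# With interval = 1 the pattern is transmit, skip, transmit, skip, ...
reducer = RateReducer(interval=1)
print([reducer.may_transmit() for _ in range(6)])  # [True, False, True, False, True, False]

Under this reading, the achieved rate factor is interval/(interval + 1), which is one half for an interval value of 1; other factor values would follow from other interval choices.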
In a further aspect of the present invention, there is provided a switching
system for a
telecommunications network. The switching system has an ingress and an egress,
and
includes a switch fabric for switching data units from the ingress to the
egress. An egress
card has a first feedback link to the switching fabric. The egress card has an
egress
congestion monitor for detecting an egress congestion condition and for
notifying the
switch fabric, through the first feedback link, to decrease data flow to the
egress upon
detecting the egress congestion condition. An ingress card has an alternate
forward link to
the switch fabric, and an ingress condition monitor for detecting an ingress
congestion
condition and for notifying the switch fabric, through the alternate forward
link, to
increase data flow from the ingress upon detecting the ingress congestion
condition.
Presently preferred embodiments of the switching system of the present
invention
include a traffic scheduler in the switch fabric for regulating transmission
of data units.
Typically, the egress congestion monitor and the ingress congestion monitor
detect egress
and ingress congestion conditions, respectively, for each class of service in
use. The
ingress congestion monitor includes adjustment means to periodically adjust an
ingress
congestion threshold by decreasing the ingress congestion threshold when
detecting
ingress congestion loss, and increasing the ingress congestion threshold when
detecting no
ingress congestion loss. The system of the present invention can further
include weight
activation means for activating congestion weights in the traffic scheduler
when ingress
congestion is detected, and for activating normal weights in the traffic
scheduler when no
ingress congestion is detected.
Generally, the traffic scheduler decreases transmission of data units by
reducing a
traffic transmission rate by a rate-reduction factor of substantially one
half, or a value of
substantially (1/2)^n, n being a positive non-zero integer. The traffic scheduler can include a counter for determining the rate-reduction factor for a class of service. The scheduler increments the counter each time traffic from the class of service is chosen for transmission, resets the counter to zero when the counter exceeds an interval value, and stops transmitting data units from the class of service when the counter value is zero.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the present invention will now be described, with
reference to the attached drawings, in which:
Figure 1 is a block diagram of a typical prior art ATM switch;
Figure 2 is a block diagram of feedback and feed-forward flow control paths in
a 2-
by-2 data switching system in accordance with an embodiment of the present
invention;
Figure 3 is a diagram of a data format for the egress and ingress congestion
notification signals used in accordance with an embodiment of the present
invention;
Figure 4 is a flow chart of a general egress and ingress congestion decision
process in
accordance with an embodiment of the present invention;
Figures 5a, 5b and 5c are flow charts of a feedback egress congestion notification process in accordance with an embodiment of the present invention;
Figures 6a, 6b and 6c are flow charts of a feed-forward ingress congestion
notification process in accordance with an embodiment of the present
invention;
Figures 7a, 7b and 7c are flow charts of a process for adjusting ingress
congestion
notification threshold levels in accordance with an embodiment of the present
invention;
Figures 8a, 8b, 8c, 8d, 8e and 8f are flow charts of a traffic scheduling
process in
accordance with an embodiment of the present invention;
Figures 9a, 9b, 9c, 9d, 9e and 9f are flow charts of a traffic scheduling
process in
accordance with an embodiment of the present invention using egress stop-and-
go flow
control;
Figures 10a, 10b, 10c, 10d and 10e are flow charts of a traffic scheduling
process in
accordance with an embodiment of the present invention using egress credit-
based flow
control; and
Figure 11 is a graph of simulation results for UBR traffic through (a) a prior art switching system employing no flow control and (b) a switching system employing the
employing the
flow control in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
Generally, the present invention provides a system and method for reducing the
overall cell loss ratio in switching networks by providing both feed-forward
and feedback
to a switch fabric, together with the use of congestion notification
thresholds, while
maintaining service fairness during periods of traffic congestion. The system
and method
have particular applicability to switching networks where buffer size is at a
premium, such
as on-board satellite switches and very high speed terrestrial switches.
While embodiments of the present invention are described herein in terms of
asynchronous transfer mode (ATM), the system and method of the present
invention are
explicitly not limited to ATM applications, and are equally applicable to
other
telecommunications protocols, and other similar environments employing data
switching
technology, as will occur to those of skill in the art.
An embodiment of the system of the present invention is shown in the block
diagram
of Fig. 2, and generally designated at reference numeral 50. Fig. 2
illustrates feedback and
feed-forward flow control paths in a 2x2 data switching system, and includes
the
interaction among the various flow control processes as described for a
presently preferred
embodiment. Note that a 2x2 data switching system is used here merely to
illustrate the
invention, which is otherwise fully applicable to other m-by-n data switching
systems, m and
n being any nonzero positive integers, as will be apparent to those of skill
in the art. For
example, the performance results discussed further below were obtained for an
8x8 data
switching system.
System 50 consists of first and second ingress cards 52 and 54 linked to a
switch
fabric 56 having first and second fabric elements 58 and 60. The two fabric
elements 58
and 60 are respectively linked to first and second egress cards 62 and 64.
Each of the first
and second ingress cards 52 and 54 has an ingress buffer 55 and an ingress
congestion
monitor 57 to monitor its ingress buffer occupancy for different classes of
service. Class
of service provides the ability of switches and routers to prioritize traffic
in a network into
different queues or classes of service. Class of service does not guarantee
delivery or
throughput, which is still best effort, but the highest class has the best
chance of being
successfully transmitted. When ingress congestion for a given Class of Service
occurs,
each affected ingress card notifies first and second switch-fabric elements 58
and 60.
Each of these two switch-fabric elements 58,60 routes the ingress traffic
designated to one
of first and second egress cards 62 and 64. Each of the first and second
egress cards 62
and 64 has an egress buffer 66 and an egress congestion monitor 67 to monitor
its egress
buffer occupancy for each Class of Service in use, and to notify the
respective fabric
element 58 or 60, when egress congestion for a given Class of Service occurs.
A first
forward link 68 represents the forward paths from each of the two ingress
cards 52 and 54
to both switch fabric elements 58 and 60 for data traffic and ingress
congestion
notification signals embedded in the header of data units, as is well known to
those of skill
in the art. An alternate forward link 70 represents an alternate path from
each of the two
ingress cards 52 and 54 to both switch fabric elements 58 and 60 for separate
ingress
congestion notification signals having a format as illustrated in Fig. 3. In
this format, the
ingress congestion notification signal contains eight bits Bl-B8, each
representing an
ingress congestion notification value corresponding to one of eight Classes of
Service 1-8
respectively. A value of 1 denotes an "active" ingress congestion
notification, whereas a
value of 0 denotes an "inactive" ingress congestion notification. A similar
format applies
to the egress congestion notification signal that is transmitted from any one
of the two
egress cards 62 and 64 of Fig. 2, to the switch fabric 56 through the first feedback link 72.
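
Purely by way of illustration (the helper functions below are invented for this description, and the choice of bit B1 as the least significant bit is an assumption), the eight-bit notification format of Fig. 3 can be modelled as:

def encode_notification(congested_classes):
    # Build the 8-bit congestion notification signal of Fig. 3.
    # Bit Bn carries the notification value for Class of Service n:
    # 1 denotes "active", 0 denotes "inactive".
    signal = 0
    for cos in congested_classes:  # cos in 1..8
        signal |= 1 << (cos - 1)
    return signal

def decode_notification(signal):
    # Recover the set of Classes of Service with an "active" notification.
    return {cos for cos in range(1, 9) if signal & (1 << (cos - 1))}

print(bin(encode_notification({1, 3})))  # 0b101
print(decode_notification(0b101))        # {1, 3}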
A first feedback link 72 represents the feedback path for the egress
congestion
notification signals from each of the two egress cards 62 and 64 to the
respective switch
fabric elements 58 or 60. The egress congestion notification signals also have
the same
format as illustrated in Figure 3. A second forward link 74 represents the
forward path
from each of the switch fabric elements 58 and 60 to its respective egress
card 62 or 64 for
the data traffic. A second feedback link 76 represents the feedback path from
each of the
two switch fabric elements 58 and 60 to both ingress cards 52 and 54 for a
fabric
backpressure flow control signal. A traffic scheduler 78 can also be included in switch fabric 56, to receive congestion notification from the ingress and egress congestion monitors 57 and 67, and to regulate traffic flow through system 50.
In a presently preferred embodiment of the present invention, the following
flow
control schemes work together: feedback congestion information from an egress
card to
the switch fabric, feed-forward congestion information from the ingress card
to the switch
fabric, and adaptive congestion threshold for the ingress card. This
embodiment produces
the desired congestion control behaviour in both ingress and egress cards as
follows.
When congestion is rare and far apart, the flow control scheme works towards
balancing
congestion loss in both ingress and egress cards so as to reduce overall loss
of data units,
also referred to as cells in the ATM environment, and not affect service fairness. However, when congestion is common and arrives in closer intervals, the method directs a
greater
portion of the congestion loss towards the egress cards. This scheme has the
effect of
reducing overall congestion loss while, at the same time, maintaining service
fairness.
Under this embodiment, egress rate control is preferred over other feedback
schemes
because the rate-reduction factor can be adjusted to reduce the speed of
ingress
congestion. This speed reduction allows more time for the ingress cards to
react to arising
congestion by either ignoring, or requesting the switch fabric to ignore, the
feedback
congestion information from the egress card.
The method of the present invention is generally illustrated in Fig. 4. The
method
commences at decision step 90, where it is determined if an egress congestion
condition
exists. Typically, an egress congestion condition is detected by monitoring
the occupancy
of egress buffers 66 to determine if the occupancy exceeds a preset, or
predetermined,
egress congestion threshold value. If an egress congestion condition is
detected at step 90,
the method proceeds to step 92, where the data flow, or transmission rate,
through switch
fabric 56 is reduced to alleviate the congestion condition at the egress. The
method then
proceeds to decision step 94, where it is determined if an ingress congestion
condition
exists. Again, an ingress congestion condition is typically detected by
monitoring the
occupancy of ingress buffers 55 to determine if the occupancy exceeds an ingress
congestion threshold value. If an ingress congestion condition is detected at
step 94, the
data flow through switch fabric 56 is increased to clear the ingress
congestion.
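
The decision process of Fig. 4 can be summarized in a short Python sketch (illustrative only; the buffer and fabric stubs below are invented stand-ins for buffers 55 and 66 and switch fabric 56):

class _Buffer:
    """Minimal stand-in for an ingress or egress buffer."""
    def __init__(self, occupancy):
        self.occupancy = occupancy

class _Fabric:
    """Minimal stand-in for the switch fabric."""
    def decrease_flow(self):
        print("decreasing data flow through the fabric")
    def increase_flow(self):
        print("increasing data flow through the fabric")

def flow_control_step(ingress, egress, fabric, icn_thresh, ecn_thresh):
    # Step 90: detect an egress congestion condition by comparing egress
    # buffer occupancy against the egress congestion threshold.
    if egress.occupancy >= ecn_thresh:
        fabric.decrease_flow()   # step 92: reduce flow to alleviate egress congestion
    # Step 94: detect an ingress congestion condition.
    if ingress.occupancy >= icn_thresh:
        fabric.increase_flow()   # clear the ingress congestion

flow_control_step(_Buffer(9), _Buffer(12), _Fabric(), icn_thresh=8, ecn_thresh=10)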
The following is a functional description of a presently preferred embodiment
of the
present invention, where congestion is monitored separately for each Class of
Service.
Each of the egress cards 62, 64 continually monitors its egress buffer
occupancy for each
Class of Service. For the Class of Service in use, an egress card will send
feedback
congestion information in the form of an egress congestion notification signal
to the
switch fabric when the egress queue size for traffic of the Class of Service
exceeds a pre-
selected egress congestion notification threshold. In turn, for the Class of
Service, the
switch fabric will reduce the rate of cell transmission to the congested
egress card by a
pre-defined fraction. Similarly, each of the ingress cards continually
monitors its ingress
buffer occupancy and, for the same Class of Service as above, an ingress card
will send
feed-forward congestion information in the form of an ingress congestion
notification
signal to the switch fabric when the ingress queue size for the Class of
Service exceeds an
adjustable ingress congestion notification threshold. When the latter happens,
the switch fabric will ignore the egress congestion notification signal for the Class
of Service and
resume transmission of traffic at the normal rate. During this time the
ingress congestion
notification threshold is adjusted, based on the frequency of occurrence of
ingress
congestion loss - that is, it will be periodically reduced when ingress
congestion loss is
frequent, or it will be periodically increased when ingress congestion loss is
rare and far
apart. Thus, when ingress congestion loss is rare, the ingress congestion
notification
threshold will be high, and much of the ingress buffer space will be used to
reduce the
congestion loss in the egress cards. However, when ingress congestion loss is
more
frequent, the ingress congestion notification threshold will be small and more
ingress
buffer space will be used to store traffic designated to non-congested egress
cards.
The presently preferred embodiment of this invention can also include a method
by
which the non-idling traffic scheduler 78 in switch fabric 56 makes use of both
feedback
congestion information from the egress card and feed-forward congestion
information
from the ingress card to schedule traffic transmission through the switch
fabric.
With reference to Figs. 5 - 10, each of the interrelated processes of the
presently
preferred embodiment of the present invention will now be described. These
interrelated
processes include: feedback egress congestion notification, feed-forward
ingress
congestion notification, adaptive thresholds for ingress congestion
notification, and traffic
scheduling at the switch fabric.
In describing each of these processes, it is assumed that traffic in the
ingress card is
queued according to its Class of Service and its designated egress card;
traffic in the
switch fabric is queued according to its source ingress card, its Class of
Service, and its
designated egress card; and traffic in the egress card is queued according to
its Class of
Service and its destination output port.
Figs. 5a, 5b and 5c illustrate a feedback egress congestion notification process within each of the first and second egress cards 62 and 64 shown in Fig. 2. This process consists of three phases - first, second and third egress congestion notification phases for the operations performed (1) upon initialization, (2) upon cell arrival, and (3) upon cell departure, shown in Figs. 5a, 5b and 5c, respectively. The descriptors used in the flow charts are defined in the course of the following description.
In the first egress congestion notification phase shown in Fig. 5a, the process is entered at initialization step 100 where ECN_on and total_cell_count are initialized to "FALSE" and "0" respectively, following which the first egress congestion notification phase is ended. "ECN_on = FALSE" designates a state wherein the egress buffer is not congested for the Class of Service in use, upon which the egress card notifies the switch fabric 56 that its buffer is not congested. In this state, the egress congestion notification signal for the Class of Service in use is set to "inactive" by assigning the bit corresponding to the Class of Service in use a value of "0", as shown in Figure 3. "total_cell_count" means the total number of cells of a specified Class of Service that is buffered at a particular point in time.
In the second egress congestion notification phase shown in Fig. 5b, the process determines whether or not to send an "active" egress congestion notification signal to the switch fabric 56. Upon cell arrival the process is entered at retrieval step 102 where the Class of Service of the cell is retrieved, following which at step 104 total_cell_count is incremented by one. The process then moves to decision step 106 where it is determined if (a) the ECN_on state is "FALSE" and (b) total_cell_count is equal to or greater than ECN_on_thresh, where "ECN_on_thresh" is an egress congestion threshold value that indicates the egress queue size threshold above which the egress card informs the switch fabric to perform rate control on the traffic that corresponds with the Class of Service in use and is designated to the egress card. This threshold setting applies to the total available egress buffer space used by the Class of Service.

If the response to either (a) or (b) is "no", this phase of the process is ended. However, if both responses are "yes" the process moves to re-set step 108 where the ECN_on state is set to TRUE, signifying a state wherein the egress buffer is congested for the Class of Service in use, upon which the egress card notifies the switch fabric of the egress congestion. In this state, the egress congestion notification signal for the Class of Service in use is set to "active" by assigning the bit corresponding to the Class of Service in use a value of "1", as shown in Figure 3. Next, the process moves to bit-assignment step 110 where an "active" egress congestion notification signal is prepared by assigning the bit corresponding to the Class of Service in use a value of "1", and sent to the switch fabric 56. Upon receiving the "active" egress congestion notification signal, and if not interfered with by an ingress congestion notification signal, the switch fabric 56 responds by reducing the rate of data transmission to the egress card 62 or 64 for the Class of Service in use.
In the third egress congestion notification phase, shown in Fig. 5c, the process determines whether or not to send an "inactive" egress congestion notification signal to the switch fabric 56. Upon cell departure, the process is entered at retrieval step 112 where the Class of Service of the cell is retrieved, following which at step 114 total_cell_count is decremented by one. The process then moves to decision step 116 where it is determined if the ECN_on state is TRUE and total_cell_count is less than ECN_off_thresh, where "ECN_off_thresh" is the egress congestion threshold value that indicates the egress queue size threshold below which the egress card informs the switch fabric to deactivate rate control on the traffic that corresponds to the Class of Service in use and is designated to the egress card. Similarly, this threshold setting applies to the total egress buffer space used by the Class of Service. If either response is "no" the process is ended. If both responses are "yes" the process moves to re-set step 118 where ECN_on is set to FALSE. The process finally moves to bit-assignment step 120 where an "inactive" egress congestion notification signal is prepared by assigning the bit corresponding to the Class of Service in use a value of "0", and sent to the switch fabric 56. Upon receiving the "inactive" egress congestion notification signal, and if not interfered with by an ingress congestion notification signal, the switch fabric resumes normal data transmission.
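
Taken together, the three phases implement a hysteresis loop around the egress queue occupancy. The following Python sketch (illustrative only, for a single Class of Service, with invented names) condenses steps 100-120:

class EgressCongestionMonitor:
    """Sketch of steps 100-120 for one Class of Service."""

    def __init__(self, ecn_on_thresh, ecn_off_thresh, notify_fabric):
        # Step 100: initialize ECN_on and total_cell_count.
        self.ECN_on = False
        self.total_cell_count = 0
        self.ecn_on_thresh = ecn_on_thresh
        self.ecn_off_thresh = ecn_off_thresh
        self.notify_fabric = notify_fabric  # callback toward the switch fabric

    def on_cell_arrival(self):
        self.total_cell_count += 1                                            # step 104
        if not self.ECN_on and self.total_cell_count >= self.ecn_on_thresh:   # step 106
            self.ECN_on = True                                                # step 108
            self.notify_fabric(active=True)                                   # step 110

    def on_cell_departure(self):
        self.total_cell_count -= 1                                            # step 114
        if self.ECN_on and self.total_cell_count < self.ecn_off_thresh:       # step 116
            self.ECN_on = False                                               # step 118
            self.notify_fabric(active=False)                                  # step 120

monitor = EgressCongestionMonitor(ecn_on_thresh=8, ecn_off_thresh=4,
                                  notify_fabric=lambda active: print("ECN active:", active))
for _ in range(8):
    monitor.on_cell_arrival()   # prints "ECN active: True" at the 8th cell

Choosing ECN_off_thresh below ECN_on_thresh gives the hysteresis that prevents the notification signal from oscillating when the occupancy hovers near a single threshold.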
A further feature of the present invention is a process whereby egress feedback rate control is counteracted by ingress congestion notification with adaptive thresholds in order to minimize congestion loss. This feature permits a congested ingress card to notify the switch fabric to ignore the egress rate control signal. In accordance with the present invention, two approaches are possible for sending ingress congestion notification information to the switch fabric 56. A first approach uses the alternate forward link 70 of Fig. 2 to send an ingress congestion notification signal formatted according to Fig. 3. A second approach uses the first forward link 68 for data traffic, such that the ingress congestion notification information is embedded as a 1-bit field in the header of a transmitted cell. The first approach allows for a simpler decoding process in the switch fabric 56 than the second approach, by providing a quicker response to any ingress congestion notification signal. However, in order to implement the first approach, additional pins are required for the dedicated path between the ingress cards 52, 54 and the switch fabric 56. Furthermore, if the switch fabric 56 is multi-staged, additional pins are required to propagate the signal between the switch fabric elements. The second approach removes the hardware disadvantage of the first approach, but increases the decoding
complexity in the switch fabric 56. The switch fabric 56 needs to associate the content of the 1-bit field with the Class of Service of the cell, which is also embedded in the cell header for other purposes, such as resource management.
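As an illustration of the second approach only (the bit positions below are assumptions made for this example; the patent does not specify the header layout), the 1-bit field can be carried and decoded as follows:

ICN_BIT = 0x80    # assumed position of the 1-bit ICN field
COS_MASK = 0x07   # assumed position of the Class of Service field (1..8 stored as 0..7)

def build_header(cos, icn_active):
    # Embed the ingress congestion notification as a 1-bit field in the
    # header of a transmitted cell, alongside the Class of Service.
    header = (cos - 1) & COS_MASK
    if icn_active:
        header |= ICN_BIT
    return header

def decode_header(header):
    # The switch fabric associates the 1-bit field with the cell's
    # Class of Service, also carried in the header.
    cos = (header & COS_MASK) + 1
    icn_active = bool(header & ICN_BIT)
    return cos, icn_active

print(decode_header(build_header(cos=3, icn_active=True)))  # (3, True)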
Figs. 6a, 6b and 6c illustrate a process of feed-forward ingress congestion notification in the presently preferred embodiment. This process consists of three phases - first, second and third ingress congestion notification phases respectively for the operations performed upon initialization, upon cell arrival, and upon cell departure.
As used in the following description and as shown in Figs. 6a, 6b and 6c, "ICN_on_thresh" is an ingress congestion threshold value that indicates the queue size threshold above which the ingress card informs the switch fabric to ignore the egress congestion notification signal that corresponds to the Class of Service in use. This threshold setting applies to the total ingress buffer space used by the corresponding Class of Service. "ICN_off_thresh" is an ingress congestion threshold value that indicates the ingress queue size threshold below which the ingress card informs the switch fabric to respond to the egress congestion notification signal that corresponds to the Class of Service in use. Similarly, this setting applies to the total ingress buffer space used by the corresponding Class of Service. "ICN_off_thresh_diff" is the difference between the "ON" ingress congestion threshold and the "OFF" ingress congestion threshold. "ICN_on = TRUE" is a state wherein the ingress buffer is congested for the Class of Service in use, upon which the ingress card notifies the switch fabric of the ingress buffer congestion. In this state, the ingress congestion notification signal for the Class of Service in use is set to "active" by assigning the bit corresponding to the Class of Service in use a value of "1". "ICN_on = FALSE" is a state wherein the ingress buffer is not congested for the Class of Service in use, upon which the ingress card notifies the switch fabric that its buffer is not congested. In this state, the ingress congestion notification signal for the Class of Service in use is set to "inactive" by assigning the bit corresponding to the Class of Service in use a value of "0".
In the first ingress congestion notification phase shown in Fig. 6a, the process is entered at initialization step 200 where ICN_on and total_cell_count are initialized to "FALSE" and "0" respectively, following which the first ingress congestion notification phase is ended.
In the second ingress congestion notification phase shown in Fig. 6b, the process determines whether or not to send an ingress congestion notification signal to the switch fabric 56. Upon cell arrival the process is entered at retrieval step 202 where the Class of Service of the cell is retrieved, following which the process moves to incremental step 204 where total_cell_count is incremented by one and then to step 206 where the previous ICN_on state is set to the current ICN_on state. The process then moves to decision step 208 where it is determined if (a) the ICN_on state is "FALSE" and (b) total_cell_count is equal to or greater than ICN_on_thresh. If both responses to (a) and (b) are "yes", the process moves to step 210 where ICN_on is set to "TRUE". However, if the response to either (a) or (b) at step 208 is "no", the process moves to decision step 212 where it is determined if (a) the ICN_on state is "TRUE" and (b) total_cell_count is equal to or greater than ICN_off_thresh less ICN_off_thresh_diff. If the response to both (a) and (b) is "yes", the process moves to step 210. However, if the response to either (a) or (b) is "no", the process moves to step 216 where ICN_on is set to "FALSE". From step 210 or step 216, the process moves to decision step 218 where it is determined if the previous ICN_on state is equal to the current ICN_on state. If "yes", the second ingress congestion notification phase ends. If "no", the process moves to decision step 220 where it is determined if ICN_on is "FALSE". If the response is "no", the process moves to bit-assignment step 222 where an "active" ingress congestion notification signal for traffic belonging to the Class of Service in use is prepared by assigning the bit corresponding to the Class of Service in use a value of "1", and sent to the switch fabric. However, if the response at step 220 is "yes", the process moves to bit-assignment step 224 where an "inactive" ingress congestion notification signal is prepared by assigning the bit corresponding to the Class of Service in use a value of "0", and sent to the switch fabric 56. Upon receiving the "active" ingress congestion notification signal from step 222, the switch fabric 56 ignores any "active" egress congestion notification signals from the egress card. Conversely, upon receiving the "inactive" ingress congestion notification signal from step 224, the switch fabric 56 responds to the egress congestion notification signal for the Class of Service in use.
In the third ingress congestion notification phase, shown in Fig. 6c, the process determines whether or not to send an ingress congestion notification signal to the switch fabric. Upon cell departure the process is entered at retrieval step 226 where the Class of Service of the cell is retrieved. Following this operation, the process moves to decremental step 228 where total_cell_count is decremented by one, then to re-set step 230 where the previous ICN_on state is set to the current ICN_on state. Next, the process moves to decision step 232 where it is determined if (a) the ICN_on state is "FALSE" and (b) total_cell_count is equal to or greater than ICN_on_thresh. If the response to both (a) and (b) is "yes", the process moves to step 234 where ICN_on is set to "TRUE". However, if the response to either (a) or (b) at step 232 is "no", the process moves to decision step 236 where it is determined if (a) the ICN_on state is "TRUE" and (b) total_cell_count is equal to or greater than ICN_off_thresh minus ICN_off_thresh_diff. If the response to both (a) and (b) is "yes", the process moves to step 234 where ICN_on is set to "TRUE". However, if the response to either (a) or (b) at step 236 is "no", the process moves to step 238 where ICN_on is set to "FALSE". From step 238 or step 234, the process moves to decision step 240 where it is determined if the previous ICN_on state equals the current ICN_on state. If the response is "yes", the process ends. If "no", the process moves to decision step 242 where it is determined if the ICN_on state is "FALSE". If the response is "no", the process moves to bit-assignment step 244 where an "active" ingress congestion notification signal for traffic belonging to the Class of Service in use is prepared and sent to the switch fabric. However, if the determination at step 242 is "yes", the process moves to bit-assignment step 246 where an "inactive" ingress congestion notification signal for traffic belonging to the Class of Service in use is prepared and sent to the switch fabric. The switch fabric reacts to the ingress congestion notification signals from steps 246 and 244 in the same way it responded to steps 224 and 222 in the above second ingress congestion notification phase.
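
In code form, the two event-driven phases reduce to a single re-evaluation with a hysteresis band and a notify-on-change rule. A minimal Python sketch (illustrative only, one Class of Service, invented names) follows:

class IngressCongestionMonitor:
    """Sketch of steps 200-246 for one Class of Service."""

    def __init__(self, icn_on_thresh, icn_off_thresh, icn_off_thresh_diff, notify_fabric):
        # Step 200: initialize ICN_on and total_cell_count.
        self.ICN_on = False
        self.total_cell_count = 0
        self.icn_on_thresh = icn_on_thresh
        self.icn_off_thresh = icn_off_thresh
        self.icn_off_thresh_diff = icn_off_thresh_diff
        self.notify_fabric = notify_fabric

    def _reevaluate(self):
        previous = self.ICN_on                                   # steps 206 / 230
        if not self.ICN_on and self.total_cell_count >= self.icn_on_thresh:
            self.ICN_on = True                                   # steps 208-210 / 232-234
        elif self.ICN_on and self.total_cell_count >= (
                self.icn_off_thresh - self.icn_off_thresh_diff):
            self.ICN_on = True                                   # steps 212 / 236
        else:
            self.ICN_on = False                                  # steps 216 / 238
        if previous != self.ICN_on:                              # steps 218 / 240
            # Steps 222/224 and 244/246: notify only on a state change.
            self.notify_fabric(active=self.ICN_on)

    def on_cell_arrival(self):
        self.total_cell_count += 1                               # step 204
        self._reevaluate()

    def on_cell_departure(self):
        self.total_cell_count -= 1                               # step 228
        self._reevaluate()

mon = IngressCongestionMonitor(icn_on_thresh=10, icn_off_thresh=6, icn_off_thresh_diff=2,
                               notify_fabric=lambda active: print("ICN active:", active))
for _ in range(10):
    mon.on_cell_arrival()     # "ICN active: True" printed at the 10th cell
for _ in range(7):
    mon.on_cell_departure()   # "ICN active: False" once occupancy falls below 6 - 2 = 4

An "active" notification tells the switch fabric to ignore egress rate control for the Class of Service; an "inactive" notification tells it to respond to the egress signal again.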
As described above, the ingress congestion notification threshold is
continually
adjusted at regular intervals as feed-forward ingress congestion information
is sent to the
switch fabric 56. Thus, at the end of an interval, if congestion loss is
detected, this
threshold is decreased by a predetermined amount, and if congestion loss is
not detected,
the threshold is increased by another predetermined amount.
Figs. 7a, 7b and 7c illustrate a process for adjusting the ingress congestion notification threshold levels in the preferred embodiment. This process consists of three phases: first, second and third threshold adjustment phases respectively for the operations performed upon initialization, upon cell discard, and for each cell transmit time. In the flow charts, the following terms are used: "max_ICN_on_thresh" is the maximum ingress congestion notification threshold level. "min_ICN_on_thresh" is the minimum ingress congestion notification threshold level. "thresh_adapt_intvl" is the time remaining until the next threshold adaptation event takes place; this variable is decremented by one for every cell transmit time. "preset_adapt_intvl" is the period between successive threshold adaptation events. "preset_amount_of_increase" is the increment applied to the ingress congestion notification threshold if congestion loss was not detected during the previous interval. "preset_amount_of_decrease" is the decrement applied to the ingress congestion notification threshold if congestion loss was detected during the previous interval. "cell_transmit_time" is the duration between successive cell transmissions, which is the inverse of the cell transmission speed. "cell_discard_detected = TRUE" indicates that a cell discard has been detected.
In a first threshold adjustment phase, the process is entered at initialization step 300 of Fig. 7a where all ICN_on_thresh values are set to max_ICN_on_thresh, all thresh_adapt_intvl values are set to preset_adapt_intvl, and all cell_discard_detected states are set to "FALSE", whereupon the first threshold adjustment phase is ended.
In the second threshold adjustment phase shown in Fig. 7b, upon occurrence of a cell discard, the process is entered at retrieval step 302 where the Class of Service of the cell is retrieved. Next, the process moves to step 304 where the cell_discard_detected state of the same Class of Service is set to "TRUE", whereupon the second threshold adjustment phase is ended.
In the third threshold adjustment phase shown in Fig. 7c, the process determines the
adjustment required for the ingress congestion notification threshold. At every cell
transmit time and for each Class of Service in use, the process is entered at decremental
step 306 where thresh_adapt_intvl is decremented by one. The process then moves to
decision step 308 where it is determined if thresh_adapt_intvl is greater than "0". If the
response is "yes", the process ends. If "no", the process moves to decision step
310 where it is determined if cell_discard_detected is "TRUE". If the response is "yes",
the process moves to step 312 where ICN_on_thresh is set to ICN_on_thresh minus
preset_amount_of_decrease. From step 312 the process moves to decision step 314 where
it is determined if ICN_on_thresh is less than min_ICN_on_thresh. If the response is
"no", the process moves to step 316. If "yes", the process moves to step 318 where
ICN_on_thresh is set to min_ICN_on_thresh, following which the process moves to step
316. Returning to step 310, if the determination is "no", the process moves to step 320
where ICN_on_thresh is set to ICN_on_thresh plus preset_amount_of_increase, following
which the process moves to decision step 322. Here it is determined if ICN_on_thresh is
greater than max_ICN_on_thresh. If the response is "no", the process moves to step 316.
If "yes", the process moves to step 324 where ICN_on_thresh is set to
max_ICN_on_thresh, following which the process moves to step 316. At step 316 the
process sets thresh_adapt_intvl to preset_adapt_intvl and sets the
cell_discard_detected state for the Class of Service in use to "FALSE", following which the
process ends.
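The adaptation logic of Figs. 7a-7c reduces to a few lines of state handling. The following Python fragment is an illustrative sketch only, not the patent's implementation: the names follow the flow-chart terms above, the per-Class-of-Service state container is an assumption, and the adaptation interval is counted here in cell transmit times (Table 4 later in this description gives it in seconds).

    # Illustrative sketch of the threshold adaptation of Figs. 7a-7c.
    MAX_ICN_ON_THRESH = 6304    # cells (Table 4)
    MIN_ICN_ON_THRESH = 2943    # cells (Table 4)
    PRESET_ADAPT_INTVL = 1000   # cell transmit times (assumed count)
    PRESET_INCREASE = 4         # cells (Table 4)
    PRESET_DECREASE = 128       # cells (Table 4)

    class CosState:
        """Per-Class-of-Service adaptation state (Fig. 7a, step 300)."""
        def __init__(self):
            self.icn_on_thresh = MAX_ICN_ON_THRESH
            self.thresh_adapt_intvl = PRESET_ADAPT_INTVL
            self.cell_discard_detected = False

    def on_cell_discard(cos):
        """Fig. 7b, steps 302-304: flag a congestion loss for this class."""
        cos.cell_discard_detected = True

    def on_cell_transmit_time(cos):
        """Fig. 7c, steps 306-324: run once per cell transmit time."""
        cos.thresh_adapt_intvl -= 1
        if cos.thresh_adapt_intvl > 0:        # step 308: interval not over
            return
        if cos.cell_discard_detected:         # step 310
            # Steps 312, 314, 318: decrease, clamped to the minimum.
            cos.icn_on_thresh = max(
                cos.icn_on_thresh - PRESET_DECREASE, MIN_ICN_ON_THRESH)
        else:
            # Steps 320, 322, 324: increase, clamped to the maximum.
            cos.icn_on_thresh = min(
                cos.icn_on_thresh + PRESET_INCREASE, MAX_ICN_ON_THRESH)
        cos.thresh_adapt_intvl = PRESET_ADAPT_INTVL   # step 316
        cos.cell_discard_detected = False             # step 316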
The activation of congestion weights by a non-idling switch fabric scheduler is tied
is tied
to a congestion situation in the ingress cards 52, 54 so as to preserve
service fairness.
Thus, the traffic scheduler synchronously activates congestion weights the
moment the
switch fabric receives an "active" ingress congestion notification signal from
the ingress
card.
Figs. 8a-8f illustrate the traffic scheduling process at the switch fabric 56
according
to a presently preferred embodiment to reduce the rate of traffic transmission
upon
receiving a notification of congestion from the egress card. This process
allows for greater
simplicity in switch fabric designs. It also allows for maintenance of the
transmission rate
for traffic in non-flow-controlled queues as the transmission rate for traffic
in flow-
controlled queues is reduced. The scheduling process consists of six independent and
simultaneously-operating phases: first to sixth traffic scheduling phases, respectively, for
the operations performed upon initialization, upon receiving an "active" ingress
congestion notification signal, upon receiving an "inactive" ingress congestion
notification signal, upon receiving an "active" egress congestion notification signal, upon
receiving an "inactive" egress congestion notification signal, and upon scheduling the
transmission of a cell to the egress card. In the flow charts of Figs. 8a-8f, 9a-9f and
10a-10f, the following terms are used: "ICN_on = TRUE" indicates that the ingress card is
congested for the Class of Service in use. "ICN_on = FALSE" indicates that the ingress
card is not congested for the Class of Service in use. "rate_ctrl_on = TRUE" indicates
that the egress card is congested for the Class of Service in use. "rate_ctrl_on = FALSE"
indicates that the egress card is not congested for the Class of Service in use.
"Transmit = TRUE" indicates that the traffic for the Class of Service in use can be
transmitted. "Transmit = FALSE" indicates that the traffic for the Class of Service in use
cannot be transmitted.
In the first traffic scheduling phase shown in Fig. 8a, the process is entered
at
initialization step 400 where each of ICN_on, rate_ctrl_on, and Transmit is set to
"FALSE", following which this first phase ends. In the second traffic scheduling phase
shown in Fig. 8b, when the switch fabric receives an "active" ingress congestion
notification signal for the Class of Service in use, the process moves to step 402 where
ICN_on for the same Class of Service is set to "TRUE", following which this second
phase ends. In the third traffic scheduling phase shown in Fig. 8c, when the switch fabric
receives an "inactive" ingress congestion notification signal for the Class of Service in
use, the process moves to step 404 where ICN_on for the same Class of Service
is set to
"FALSE", following which this third phase ends. In the fourth traffic
scheduling phase shown in Fig. 8d, when the switch fabric receives an "active" egress
congestion notification signal for the Class of Service in use, the process moves to step
406 where rate_ctrl_on for the same Class of Service is set to "TRUE", following which
this fourth phase ends. In the fifth traffic scheduling phase shown in Fig. 8e, when the
switch fabric receives an "inactive" egress congestion notification signal for the Class of
Service in use, the process moves to step 408 where rate_ctrl_on for the same Class of
Service is set to "FALSE", following which this fifth phase ends. In the sixth traffic
scheduling phase shown in Fig. 8f, the process determines whether or not to transmit a
cell that is ready for transmission to at least one of the egress cards. When a cell is
scheduled for transmission, the process moves to selection step 410 where the Class of
Service of the cell to be transmitted is selected. The process then moves to decision step
412 where it is determined if ICN_on for the selected Class of Service is "TRUE". If
"yes", the process moves to transmit step 414. If "no", the process moves to decision step
416 where it is determined if rate_ctrl_on for the selected Class of Service is "FALSE".
If "yes", the process moves to transmit step 414. If "no", the process moves to decision
step 418 where it is determined if the Transmit state for the selected Class of Service is
"TRUE". If "yes", the process moves to transmit step 414. If "no", the process moves to
non-transmit step 420 and is told not to transmit a cell from the Class of Service in use.
From step 420 the process moves to toggle step 422 where the state of Transmit is
toggled, upon which the process ends. Returning to transmit step 414, the switch fabric is
told to transmit a cell from the Class of Service in use. It then moves to toggle step 422
where the state of Transmit is toggled, upon which the process ends. For simplicity of
operation, a fixed rate-control reduction factor of 1/2 is used in the traffic scheduling
scheme under the presently preferred embodiment in order to optimize flow control
effectiveness. In other embodiments, the rate-control reduction factor can have a value of
(1/2)^n, where n is a positive integer. Here, a counter is used for each Class of Service
in order to determine whether or not to transmit traffic from that Class of Service.
Another variable called "interval" is used for each Class of Service in order to identify
the rate-reduction factor. For example, the "interval" can be set to "3" in order to achieve
a rate-reduction factor of 1/4. The value of the counter is incremented each time traffic
from the Class of Service in use is chosen for transmission. The value of the counter is
always reset to 0 when it exceeds the value of the "interval". When the value of the
counter is 0, the switch fabric scheduler will not transmit traffic from a flow-controlled
Class of Service that is
due for transmission.
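The sixth scheduling phase of Fig. 8f amounts to a short decision rule. The sketch below is illustrative only; the per-class flags mirror the flow-chart terms, and the field names are assumptions rather than the patent's code.

    # Illustrative sketch of the Fig. 8f decision (fixed 1/2 reduction).
    def schedule_cell(cos):
        """Steps 410-422: decide whether to transmit the cell at the head
        of the selected Class of Service."""
        if cos.icn_on:                # step 412: ingress congested
            transmit = True
        elif not cos.rate_ctrl_on:    # step 416: egress not congested
            transmit = True
        else:
            transmit = cos.transmit   # step 418: rate-controlled class
        cos.transmit = not cos.transmit   # step 422: toggle the state
        return transmit

Because the Transmit flag alternates, a rate-controlled class is served on every other transmission opportunity, giving the fixed 1/2 factor; the counter and "interval" variables described above generalize this to the (1/2)^n factors.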
Different traffic scheduling schemes from the one described above in
conjunction
with Figs. 8a-8f for the presently preferred embodiment are also feasible. For
example,
two alternative traffic scheduling schemes for first and second alternative
embodiments of
the invention are illustrated in the flow charts of Figs. 9a-9f and Figs. 10a-10f,
respectively. These alternative schemes are similar to that illustrated in Figs. 8a-8f,
except that the egress rate control in the presently preferred embodiment is replaced by
stop-and-go flow control in the first alternative embodiment and by credit-based flow
control in the second alternative embodiment.
In the first alternative embodiment using stop-and-go flow control, the egress
card
sends a signal to the traffic scheduler to stop transmission of cells until
congestion is
alleviated. As shown in Figs. 9a-9f, the traffic scheduling process consists
of six
independent and simultaneously-operating phases. In the first traffic
scheduling phase
shown in Fig. 9a, the process is entered at initialization step 500 where each of ICN_on
and Transmit is set to "FALSE", following which this first phase ends. In the second
traffic scheduling phase shown in Fig. 9b, when the switch fabric receives an "active"
ingress congestion notification signal for the Class of Service in use, the process moves
to step 502 where ICN_on for the same Class of Service is set to "TRUE", following
which this second phase ends. In the third traffic scheduling phase shown in Fig. 9c,
when the switch fabric receives an "inactive" ingress congestion notification signal for
the Class of Service in use, the process moves to step 504 where ICN_on for the same
Class of Service is set to "FALSE", following which this third phase ends. In the fourth
traffic scheduling phase shown in Fig. 9d, when the switch fabric receives an "active"
egress congestion notification signal for the Class of Service in use, the process moves to
step 506 where Transmit for the same Class of Service is set to "TRUE", following
which this fourth phase ends. In the fifth traffic scheduling phase shown in Fig. 9e, when
the switch fabric receives an "inactive" egress congestion notification signal for the Class
of Service in use, the process moves to step 508 where Transmit for the same Class of
Service is set to "FALSE", following which this fifth phase ends. In the sixth traffic
scheduling phase shown in Fig. 9f, the process determines whether or not to transmit a
cell that is ready for transmission to at least one of the egress cards 62, 64. When a cell is
scheduled for transmission, the process moves to selection step 510 where the Class of
Service of the cell to be transmitted is selected. The process then moves to decision step
512 where it is determined if ICN_on for the selected Class of Service is
"TRUE". If "yes", the process moves to transmit step 514. If "no", the process
moves to
decision step 516 where it is determined if the Transmit state for the
selected Class of
Service is "TRUE". If "yes", the process moves to transmit step 514. If "no",
the process
moves to non-transmit step 518 and is told not to transmit a cell from the Class
of Service in
use, upon which the process ends. Returning to step 514, the switch fabric is
told to
transmit a cell from the Class of Service in use, upon which the process ends.
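Under stop-and-go control the per-cell decision of Fig. 9f collapses to a single test. A minimal sketch, following the flow chart as described and with assumed field names:

    # Illustrative sketch of the Fig. 9f stop-and-go decision.
    def schedule_cell_stop_and_go(cos):
        # Step 512: an ingress-congested class is always served; otherwise
        # step 516: serve only while the per-class Transmit flag is set.
        return cos.icn_on or cos.transmit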
In the second alternative embodiment using credit-based flow control, the
egress card
periodically notifies the traffic scheduler to limit the number of cells it is
transmitting. As
shown in Figs. 10a-10f, the traffic scheduling process at the switch fabric consists of five
independent and simultaneously-operating phases: first to fifth traffic scheduling phases,
respectively, for the operations performed upon initialization, upon receiving an "active"
ingress congestion notification signal, upon receiving an "inactive" ingress congestion
notification signal, upon receiving an egress credit control signal for a Class of Service,
and upon scheduling the transmission of a cell to the egress card. In the first traffic
scheduling phase shown in Fig. 10a, the process is entered at initialization step 520 where
ICN_on is set to "FALSE" and all credits are set to "0", following which this first phase
ends. In the second traffic scheduling phase shown in Fig. 10b, when the switch fabric 56
receives an "active" ingress congestion notification signal for the Class of Service in use,
the process moves to step 522 where ICN_on for the same Class of Service is set to
"TRUE", following which this second phase ends. In the third traffic scheduling phase
shown in Fig. 10c, when the switch fabric receives an "inactive" ingress congestion
notification signal for the Class of Service in use, the process moves to step 524 where
ICN_on for the same Class of Service is set to "FALSE", following which this third
phase ends. In the fourth traffic scheduling phase shown in Fig. 10d, when the switch
fabric receives an egress credit control signal for the Class of Service in use, the process
moves to update step 526 where the credit is updated based on the content of the credit
control signal, following which this fourth phase ends. In the fifth traffic scheduling
phase shown in Fig. 10e, the process determines whether or not to transmit a cell that is
ready for transmission to at least one of the egress cards. When a cell is scheduled for
transmission, the process moves to selection step 528 where the Class of Service of the
cell to be transmitted is selected. The process then moves to decision step 530 where it is
determined if ICN_on for the selected Class of Service is "TRUE". If "yes", the process
moves to transmit step 532. If "no", the process moves to decision step 534 where it is
determined if the credit is greater than 0. If "yes", the process moves to transmit step 532.
If "no", the process moves to non-transmit step 536 and is told not to transmit a
cell from the
Class of Service in use, upon which the process ends. Returning to transmit
step 532, the
switch fabric 56 is told to transmit a cell from the Class of Service in use
and to decrement
the credit, upon which the process ends.
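The credit-based decision of Figs. 10d and 10e can be sketched the same way. The text above does not say whether the credit control signal replaces or adds to the stored credit, nor what happens when an ingress-congestion transmission finds no credit remaining, so both choices below are labeled assumptions.

    # Illustrative sketch of the credit-based variant (Figs. 10d-10e).
    def on_credit_control(cos, granted):
        """Fig. 10d, step 526: refresh the credit from the egress card's
        credit control signal (shown as a replacement; an assumption)."""
        cos.credit = granted

    def schedule_cell_credit(cos):
        """Fig. 10e, steps 528-536: transmit if the ingress is congested
        or a credit remains; transmitting consumes one credit."""
        if cos.icn_on or cos.credit > 0:
            # Step 532: transmit and decrement the credit (clamped at 0
            # here, an assumption the flow chart does not address).
            cos.credit = max(cos.credit - 1, 0)
            return True
        return False    # step 536: hold the cell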
Five sets of simulation analyses were conducted for an 8x8 data switching system with a
switching speed of 1.24 Gigabits/sec per ingress or egress card. Cell Loss Ratio (CLR)
values were collected at each of the ingress cards, switch fabric, and egress cards. Traffic
destinations were uniformly distributed over all 64 output ports. The time taken for each
simulation run was 5.5 seconds. Statistics on the CLR were collected after 0.5 seconds of
elapsed time on each simulation run. For each simulation set, the
configurations
used were a simple baseline configuration and a baseline configuration
augmented by the
present invention.
The simple baseline configuration included the functions of traffic
prioritization,
traffic queuing, buffer memory partitioning, traffic scheduling, and
congestion control.
Traffic (i.e., data units such as cells or packets) was categorized into four Classes
of Service:
Constant Bit Rate (CBR), real-time Variable Bit Rate (rt-VBR), non-real-time
Variable Bit
Rate (nrt-VBR), and Unspecified Bit Rate (UBR). Traffic was also prioritized
into three
levels, namely, High Priority, Medium Priority, and Low Priority. CBR traffic
was given
the High Priority level and received higher service precedence over any of the
other traffic
classes. Next, rt-VBR traffic was given the Medium Priority level and received
higher
service precedence over non-real-time traffic. Finally, nrt-VBR and UBR
traffic classes
were given the Low Priority level.
Baseline traffic was queued according to its Class of Service. In addition,
traffic in
each of the ingress card, switch fabric, and egress card was queued
differently, such that
traffic in the ingress card was queued according to its designated egress
card; traffic in the
switch fabric was queued according to its source ingress card and destination
egress card;
and traffic in the egress card was queued according to its designated output
port.
In order to make efficient use of limited buffer capacity and achieve predictable cell loss
and delay performance, a strategy for buffer memory occupancy was put in place.
First, each Class of Service was allocated a certain amount of dedicated
buffer space in
each of the ingress and egress cards to store its traffic. Secondly, each
traffic queue of a
given Class of Service in a given ingress card or a given egress card was
allotted a certain
amount of dedicated memory. Thirdly, all traffic queues of a given Class of
Service in a
given ingress card (or a given egress card) were allotted an amount of shared
memory
space equal to the amount of dedicated buffer memory of the same Class of
Service in the
same ingress card (or the same egress card) minus the total amount of
dedicated memory
assigned to these traffic queues. Fourthly, in the switch fabric, buffer partitioning
between queues of a given Class of Service was used such that each traffic queue was
allotted a certain portion of the fabric's buffer memory.
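The third rule can be restated as a one-line computation. A minimal sketch, assuming sizes are expressed in cells (the function and its arguments are illustrative, not from the patent):

    # Shared space for all queues of one Class of Service in one card:
    # the class's dedicated buffer minus the queues' dedicated allotments.
    def shared_space(class_dedicated, queue_dedicated_sizes):
        return class_dedicated - sum(queue_dedicated_sizes)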
The baseline configuration for traffic scheduling included a traffic scheduler
in each
of the ingress card, switch fabric, and egress card. Each scheduler
implemented the
weighted fair queuing scheme, similar to that used in the Self-Clocked Fair Queueing
design described in S. J. Golestani, "A Self-Clocked Fair Queueing Scheme for
Broadband Applications," Proceedings of IEEE Infocom, June 1994, by assigning congestion
weights
to all congested traffic queues. Congestion, or non-congestion, was
categorized as
follows:
a) A given Class of Service was considered congested when the total buffer
memory
occupied by traffic from the Class of Service exceeded a certain threshold
(i.e.,
congestion-on threshold if the Class of Service was previously considered not
congested,
or congestion-off threshold if the Class of Service was previously considered
congested);
b) A given Class of Service was not considered congested when the total buffer
memory
occupied by traffic from the Class of Service was below a certain threshold
(i.e.,
congestion-off threshold if the Class of Service was previously considered
congested, or
congestion-on threshold if the Class of Service was previously considered not
congested);
c) A traffic queue from a given Class of Service in an ingress or egress card
was
considered congested when the Class of Service reached a congestion state, and
the size of
the traffic queue exceeded its dedicated buffer space;
d) A traffic queue was not considered congested if the given Class of Service
did not
reach a congestion state, or the size of the traffic queue was below its
dedicated buffer
space.
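Rules (a) and (b) form a hysteresis pair: which threshold applies depends on the current state. A minimal sketch, assuming occupancy is measured in cells and the comparisons are strict as worded above:

    # Illustrative sketch of the congestion hysteresis in rules (a)-(b).
    def update_congested(occupied, congested, on_thresh, off_thresh):
        """Return the new congestion state for a Class of Service."""
        if congested:
            # Rule (b): leave the congested state only when occupancy
            # falls below the congestion-off threshold.
            return occupied >= off_thresh
        # Rule (a): enter the congested state when occupancy exceeds
        # the congestion-on threshold.
        return occupied > on_thresh

The backpressure-on/off and discard-on/off threshold pairs described below follow the same pattern.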
Finally, the baseline congestion control configuration consisted of fabric backpressure
backpressure
flow control, selective cell discard for real-time traffic, and selective
packet discard for
non-real-time traffic. Fabric backpressure flow control was used to eliminate
congestion
loss in the buffer-limited switch fabric. Under this scheme, the switch fabric
was able to:
firstly stop the transmission of non-real-time traffic from a given ingress
card when the
queue of the traffic exceeded a certain threshold (i.e., backpressure-on
threshold if the
queue was previously not backpressured, or backpressure-off threshold if the
queue was
previously backpressured); and secondly allow the transmission of the non-real-
time
traffic when the queue of the traffic was below a certain threshold (i.e.,
backpressure-off
threshold if the queue was previously backpressured, or backpressure-on
threshold if the
queue was previously not backpressured).
A selective cell discard scheme was used to reduce the congestion loss of high
priority, real-time traffic by discarding low priority, real-time traffic upon
detection of
congestion. On the other hand, a selective packet discard scheme was used to
increase
buffer utilization and throughput for non-real-time traffic by storing only
complete
packets. As well, an Early Packet Discard (EPD) scheme, as described in S.
Floyd, V.
Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM
IEEE/ACM
Transactions on Networking, August 1993, was used to reduce the number of
stored
incomplete packets. Selective discard strategies were defined as follows:
a) A given Class of Service in either of the ingress or egress cards was
subjected to
selective discard when the total buffer memory occupied by the traffic of the
Class of
Service exceeded a certain threshold (i.e., class-discard-on if the Class of
Service was
previously not subjected to selective discard, or class-discard-off if the
Class of Service
was previously subjected to selective discard);
b) A given Class of Service was not subjected to selective discard if the
total buffer
memory occupied by the traffic of the Class of Service was below a certain
threshold (i.e.,
class-discard-off if the Class of Service was previously subjected to
selective discard, or
class-discard-on if the Class of Service was previously not subjected to
selective discard);
c) A traffic queue of a given Class of Service was subjected to selective
discard when
the Class of Service was subjected to selective discard, and the size of the
traffic queue
exceeded a certain threshold (i.e., queue-discard-on if the traffic queue was
previously not
subjected to selective discard, or queue-discard-off if the traffic queue was
previously
subjected to selective discard);
d) A traffic queue was not subjected to selective discard if the Class of
Service was not
subjected to selective discard or the size of the traffic queue was below a
certain threshold
(i.e., queue-discard-off if the traffic queue was previously subjected to
selective discard, or
queue-discard-on if the traffic queue was previously not subjected to
selective discard).
The second configuration consisted of the baseline configuration described
above
together with the four interrelated processes of the present invention,
namely, feedback
egress congestion notification, feed-forward ingress congestion notification,
adaptive
thresholds for ingress congestion notification, and traffic scheduling at the
switch fabric,
as illustrated in Figure 2.
Performance measurement of the cell-level congestion and flow control schemes
was
conducted under high to very high traffic loads for the following reasons. A
congestion
situation can be more readily captured at high traffic loads and the
performance of flow
control schemes can be assessed in the region where performance is critical.
The
performance improvement offered by such flow control schemes begins to
decrease as the
traffic load decreases.
Each of the five simulations described earlier included constant bit rate,
variable bit
rate, real-time, and non-real-time traffic sources. In these simulations,
constant bit rate
and real-time traffic were together classified as CBR traffic; variable bit
rate and real-time
traffic were together classified as rt-VBR traffic; and variable bit rate and
non-real-time
traffic were together classified as UBR traffic. The flow control schemes
under the
present invention were not applied to real-time traffic because such traffic
is delay-
sensitive, and the application of flow control measures would increase the
transfer delay of
cells as they flow through the switch.
The first simulation was performed under a 90.5% traffic load, of which 20% and
and
24% belonged to CBR and rt-VBR traffic respectively. CBR traffic was
represented by a
constant-bit-rate source per input port, and rt-VBR traffic was represented by
36 VBR II
sources per input port. UBR traffic comprised the remainder of the traffic
load and was
represented by VBR III sources. Since each VBR III source generated a load of
7.749%
of the line rate, six VBR III sources were required per input port in order to
produce the
total traffic load of 90.5%. (The maximum packet size that can be generated by
a VBR III
source is 210 cells).
The second simulation was performed under a 92.5% traffic load, of which 22%
and 24% belonged to CBR and rt-VBR traffic respectively. As in the first set,
six VBR III
sources per input port were needed to represent UBR traffic and bring the
total load to
approximately 92.5%.
The third simulation was performed under a 94.5% traffic load, of which 24%
corresponded to each of the CBR and rt-VBR traffic. Again, six VBR III sources
per
input port were needed to represent UBR traffic and bring the total load to
approximately
94.5%.
The fourth simulation was performed under a 96.5% traffic load, of which 26%
and 24% belonged to CBR and rt-VBR traffic respectively. Again, six VBR III
sources
per input port were needed to represent UBR traffic.
The fifth simulation was performed under a 98.25% traffic load, of which 20%
and 24% belonged to CBR and rt-VBR traffic respectively. However, in this set,
seven
VBR III sources per input port were required to represent UBR traffic and bring
the total
load to approximately 98.25%.
Simulation parameters for UBR traffic, under which the CLR performance was
evaluated, are summarized in Tables 1 through 4.
Table 1 summarizes simulation parameters for buffer sizes, congestion
thresholds
and scheduler weights:
Switch    Total Buffer  Dedicated    Cong. On   Cong. Off  VSFQ    VSFQ Cong.
Element   Size          Buffer Size  Threshold  Threshold  Weight  Weight
Ingress   11776         327          5792       5664       1       2
Fabric    64            32           n/a        n/a        1       2
Egress    7936          220          4535       4407       1       2
Table 1
Table 2 summarizes simulation parameters for the selective packet discard
strategy:
Switch    Class-Discard-On  Class-Discard-Off  EPD Queue-Discard-  EPD Queue-Discard-
Element   Threshold         Threshold          On Threshold        Off Threshold
Ingress   7328              6816               687                 618
Egress    5559              5047               496                 446
Table 2
Table 3 summarizes simulation parameters for the flow control strategy:
Switch    Backpressure-On  Backpressure-Off  Rate-Control-On  Rate-Control-Off  ICN-On     ICN-Off Threshold
Element   Threshold        Threshold         Threshold        Threshold         Threshold  Difference
Ingress   n/a              n/a               n/a              n/a               6304       128
Fabric    24               16                n/a              n/a               n/a        n/a
Egress    n/a              n/a               5047             4919              n/a        n/a
Table 3
Finally, Table 4 summarizes simulation parameters for the adaptation of ICN
thresholds:
Switch    Maximum ICN-On  Minimum ICN-On  Pre-set Adaptation  Pre-set Amount  Pre-set Amount
Element   Threshold       Threshold       Interval            of Increase     of Decrease
Ingress   6304            2943            0.001 sec           4               128
Table 4
Table 3 shows that the egress "rate-control-on" threshold was set to be 512
cells less
than the "class-discard-on" threshold for egress (shown in Table 1). The "rate-
control-on"
threshold is set just slightly lower than the egress "class-discard-on"
threshold so that rate
control is activated before selective packet discard is performed at egress.
The "rate-
control-on" threshold is not generally set far below the "class-discard-on"
threshold,
otherwise the rate control would be activated too frequently. Furthermore, the
difference
between the "rate-control-on" and "rate-control-off" thresholds is relatively
small (Table 3
shows a difference of 128 cells between these on and off thresholds) so that
rate control is
not activated frequently for a long duration, which might otherwise cause
ingress buffers
to grow quickly. Next, the "ICN-on" threshold is set lower than the "ingress-
class-
discard-on" threshold so that the switch fabric can be notified of ingress
congestion and
that egress rate control for the congested ingress can be deactivated before
selective packet
discard is performed at ingress. Similarly, the difference between "ICN-on"
and "ICN-
off" thresholds is generally small so that ingress congestion notification is
not activated
frequently for a long duration, which might otherwise cause egress rate
control to be
ineffective.
Table 4 shows first that the minimum "ICN-on" threshold was set at 2943 cells.
This
number was obtained by multiplying the total number of UBR traffic queues in
an ingress
card (8 unicast queues, one for each egress card, and 1 multicast queue) by
the amount of
buffer space dedicated to each traffic queue. Secondly, the "pre-set
adaptation interval"
parameter was set at 0.001 second, which is deemed sufficiently small for the
scheme to
respond quickly to ingress congestion. The adaptation interval is set smaller
if the CLR
objective is lower (e.g., 1E-05 or lower), and larger if the ingress line rate
is lower.
Finally, the "pre-set increment" and "pre-set decrement" parameters are set at
4 and 128
cells respectively. The "pre-set decrement" parameter is set large so that in
the event of
frequent ingress congestion, the "ICN-on" threshold can quickly reach the
minimum
value. In this manner, much of the ingress buffer space can be used to handle
ingress
congestion. On the other hand, the "pre-set increment" is set to be
substantially smaller
than the "pre-set decrement" so that the "ingress congestion notification-on"
threshold can
slowly reach the maximum value after a congestion situation has ended. In this
way, the
ingress card can more quickly respond to future congestion situations.
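The stated parameter relationships can be cross-checked numerically against Tables 1 through 4; the following quick sanity check is illustrative and not part of the patent.

    # Sanity checks of the parameter relationships stated above.
    assert 5559 - 512 == 5047      # egress class-discard-on (Table 2) minus
                                   # 512 cells = rate-control-on (Table 3)
    assert (8 + 1) * 327 == 2943   # 9 UBR queues x per-queue dedicated
                                   # space (Table 1) = minimum ICN-on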
Table 5 tabulates CLR values for UBR traffic when the flow control scheme is
not
employed.
Simulation  Average CLR    Average CLR    Overall CLR
Set         at Ingress     at Egress
1           1.20000E-05    6.71125E-04    6.83117E-04
2           1.05000E-05    3.00788E-03    3.01834E-03
3           1.21250E-05    1.04158E-02    1.04278E-02
4           1.95000E-04    2.56705E-02    2.58605E-02
5           3.09875E-03    3.47991E-02    3.77900E-02
Table 5


Table 6 illustrates CLR values for UBR traffic with the flow control scheme
employed. Fig. 11 compares these simulation results and confirms the superior
performance of the present invention up to very high traffic loads.
Simulation  Average CLR    Average CLR    Overall CLR
Set         at Ingress     at Egress
1           0.00000E+00    0.00000E+00    0.00000E+00
2           1.05000E-05    0.00000E+00    1.05000E-05
3           2.36000E-04    4.54713E-03    4.78205E-03
4           1.31025E-03    2.31250E-02    2.44050E-02
5           4.60188E-03    3.32569E-02    3.77057E-02
Table 6


The present invention is suitable for use with a switch fabric that
incorporates a
work-conserving rate scheduling scheme, and other scheduling schemes as will
occur to
those of skill in the art. A work-conserving scheduler attempts to transmit a
cell whenever
one is available. Conversely, a non-work-conserving or idling scheduler may
not transmit
a cell even if one is available.
The flow control scheme in accordance with the present invention aims at
reducing
the overall congestion loss by taking advantage of both input and output
buffers. The
invention differs from those using solely a stop-and-go or a credit-based
feedback flow
control procedure in that it does not attempt to isolate congestion loss in
the ingress card.
As a result, it is able to maintain service fairness and to reduce the overall
congestion loss
even at very high traffic loads. Flow control schemes that are solely feedback-
based often
yield higher overall congestion loss than with no flow control under very high
traffic
loads. In addition, when buffer sharing is utilized in the ingress card, the
stopped traffic
can quickly consume the shared buffer space. Hence, congestion loss for
traffic destined
to non-congested egress cards increases.
An embodiment of the flow control scheme in accordance with this invention is
found to
reduce overall congestion loss at medium to high loads and to maintain service
fairness by
shifting congestion loss more to the output side as traffic loads increase
even further. As a
result, the scheme is capable of increasing the utilization of buffer memory.
This is
especially important for an environment where such resource is a limiting
factor for the
system's performance, such as in satellite on-board packet switches.
Since congestion loss in the switch fabric is virtually eliminated through the
use of
stop-and-go or credit-based flow control schemes at the switch fabric, the
traffic
schedulers in the ingress cards and switch fabric play a vital role in
reducing congestion
loss in the ingress cards by applying at least one of a plurality of
congestion weights.
Service fairness is accomplished by linking the activation of congestion
weights in the
traffic scheduler in the switch fabric to a congestion situation in the
ingress card. Thus,
upon receiving an "active" ingress congestion notification signal from an
ingress card, the
traffic scheduler will activate the congestion weight for the Class of
Service/ingress card
pair. This strategy takes advantage of (a) the larger ingress buffer size; (b)
a more
efficient process for accommodating service fairness in the ingress card; and
(c) a one-way
relation between congestion in the ingress card and congestion in the switch
fabric.
Generally, the advantages of the method and system of the present invention
can be
summarized as follows. Egress congestion loss is significantly reduced at
medium to
reasonably high loads. The overall congestion loss is maintained to be smaller
than for the
case where no egress flow control is employed at high to very high loads.
Furthermore, at
very high loads, congestion loss at the egress cards is found to be
significantly greater than
that at the ingress cards, indicating that service fairness is little
affected. The resulting
scheme achieves desirable congestion loss behaviour and requires reasonable
implementation complexity. The activation of congestion weights in the traffic scheduler
in response to a congestion situation in the ingress cards permits more elaborate
addressing of the service fairness issue. This strategy also permits synchronous handling of
ingress
congestion. The method of controlling traffic burstiness and maintaining flow
control
effectiveness requires relatively few hardware resources to implement, such that it is
feasible for implementation in a resource-constrained switch fabric and can be used with
stop-and-go, credit-based or rate-based flow control schemes.
Modifications, variations and adaptations may be made to the particular
embodiments of the invention described above, without departing from the
spirit and
scope of the invention, which is defined in the claims.

Administrative Status


Title Date
Forecasted Issue Date Unavailable
(22) Filed 2000-03-20
(41) Open to Public Inspection 2001-02-09
Dead Application 2006-03-20

Abandonment History

Abandonment Date Reason Reinstatement Date
2005-03-21 FAILURE TO PAY APPLICATION MAINTENANCE FEE
2005-03-21 FAILURE TO REQUEST EXAMINATION

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document - section 124 $100.00 2000-03-20
Application Fee $300.00 2000-03-20
Maintenance Fee - Application - New Act 2 2002-03-20 $100.00 2002-03-20
Maintenance Fee - Application - New Act 3 2003-03-20 $100.00 2003-03-20
Registration of a document - section 124 $100.00 2003-06-20
Maintenance Fee - Application - New Act 4 2004-03-22 $100.00 2004-03-19
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
SPACEBRIDGE SEMICONDUCTOR CORPORATION
Past Owners on Record
HUANG, JUN
SPACEBRIDGE NETWORKS CORPORATION
WIBOWO, EKO ADI
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description    Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Representative Drawing  2001-02-12         1                13
Description             2000-03-20         30               1,862
Abstract                2000-03-20         1                33
Claims                  2000-03-20         4                146
Drawings                2000-03-20         19               333
Cover Page              2001-02-12         1                52
Correspondence          2000-04-06         1                24
Assignment              2000-03-20         3                86
Assignment              2001-03-20         3                85
Correspondence          2001-05-23         1                17
Assignment              2001-07-24         3                86
Assignment              2003-06-20         6                256
Fees                    2002-03-20         1                20