Patent 2369178 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2369178
(54) English Title: DISTRIBUTED CROSSBAR SWITCHING FABRIC ARCHITECTURE
(54) French Title: ARCHITECTURE REPARTIE DE MATERIEL DE COMMUTATION CROSSBAR
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04Q 3/52 (2006.01)
  • H04L 49/101 (2022.01)
  • H04L 12/04 (2006.01)
  • H04L 12/933 (2013.01)
  • H04L 12/935 (2013.01)
(72) Inventors :
  • HOSNY, MOHAMED SAMY (Canada)
(73) Owners :
  • HOSNY, MOHAMED SAMY (Canada)
(71) Applicants :
  • HOSNY, MOHAMED SAMY (Canada)
(74) Agent: GOWLING WLG (CANADA) LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2002-01-24
(41) Open to Public Inspection: 2003-07-24
Examination requested: 2005-06-02
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract



A switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit having a receiving end and a sending end, and an output port for sending the data out of the switching fabric system. The crossbar ingress processing unit receives data from the input port and sends data to the interconnecting crossbar. The crossbar egress processing unit receives data from the interconnecting crossbar, stores data, and sends data to the output port. The switching fabric system may also be provided without either the ingress or the egress processing unit, in which case the interconnecting crossbar is connected directly to the corresponding input or output port.
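A minimal sketch of this data path, with hypothetical class and method names (the patent defines the elements functionally, not as software):

```python
# Minimal sketch of the abstract's data path; all names are hypothetical.

class CrossbarIngressUnit:
    def forward(self, packet):
        return packet  # receiving end -> sending end, toward the crossbar

class InterconnectingCrossbar:
    def transit(self, packet):
        return packet  # first end -> second end

class CrossbarEgressUnit:
    def __init__(self):
        self.buffer = []  # the egress unit stores data before sending it

    def store(self, packet):
        self.buffer.append(packet)

    def send(self):
        return self.buffer.pop(0) if self.buffer else None

def switch(packet, ingress, crossbar, egress):
    """Input port -> ingress unit -> crossbar -> egress unit -> output port."""
    egress.store(crossbar.transit(ingress.forward(packet)))
    return egress.send()

print(switch({"dst": 1}, CrossbarIngressUnit(), InterconnectingCrossbar(), CrossbarEgressUnit()))
```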


Claims

Note: Claims are shown in the official language in which they were submitted.




WHAT IS CLAIMED IS:

1. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
a crossbar ingress processing unit having a receiving end and a sending end, the receiving end of the crossbar ingress processing unit attached to the input port, the crossbar ingress processing unit for receiving the data from the input port and sending data out of the sending end;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the sending end of the crossbar ingress processing unit for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end;
a crossbar egress processing unit having a receiving end and a sending end, the receiving end of the crossbar egress processing unit attached to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out the sending end of the crossbar egress processing unit; and
an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system.

2. The switching fabric system as claimed in claim 1, wherein the crossbar ingress processing unit further comprises a crossbar path on which the data travels.

3. The switching fabric system as claimed in claim 1, wherein the crossbar ingress processing unit further comprises a channel selector for selecting a path for the data to travel.

4. The switching fabric system as claimed in claim 1, wherein the interconnecting crossbar further comprises a crossbar path on which the data travels.


5. The switching fabric system as claimed in claim 1, wherein the crossbar egress processing unit further comprises a crossbar path on which the data travels.

6. The switching fabric system as claimed in claim 1, wherein the crossbar egress processing unit further comprises a memory buffer for storing the data.

7. The switching fabric system as claimed in claim 6, wherein the crossbar egress processing unit further comprises a scheduler for sending the data from the memory buffer to the output port.

8. The switching fabric system as claimed in claim 7, wherein the scheduler sends the data according to the quality of service (QoS) of data packets of the data.

9. The switching fabric system as claimed in claim 1, wherein multiple data packets are stored in an output memory buffer unit.

10. The switching fabric system as claimed in claim 9, wherein the multiple data packets stored in an output memory buffer unit are sent to the egress output port based on the quality of service of the data packet.

11. The switching fabric system as claimed in claim 1, comprising multiple crossbar ingress processing units.

12. The switching fabric system as claimed in claim 11, wherein the interconnecting crossbar comprises multiple crossbar paths connecting the multiple crossbar ingress processing units to the crossbar egress processing unit.

13. The switching fabric system as claimed in claim 1, further comprising multiple crossbar egress processing units.


14. The switching fabric system as claimed in claim 13, wherein the interconnecting crossbar further comprises multiple crossbar paths connecting the crossbar ingress processing units to the multiple crossbar egress processing units.

15. The switching fabric system as claimed in claim 1, further comprising multiple crossbar ingress processing units and multiple crossbar egress processing units.

16. The switching fabric system as claimed in claim 15, wherein the interconnecting crossbar further comprises multiple crossbar paths connecting the multiple crossbar ingress processing units to the multiple crossbar egress processing units.

17. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
a crossbar ingress processing unit having a receiving end and a sending end, the receiving end of the crossbar ingress processing unit attached to the input port, the crossbar ingress processing unit for receiving the data from the input port and sending data out of the sending end;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the sending end of the crossbar ingress processing unit for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end; and
an output port connected to the second end for sending the data out of the switching fabric system.

18. A switching fabric system for routing data, the switching fabric system comprising:
an input port for receiving data into the switching fabric system;
an interconnecting crossbar having a first end and a second end, the interconnecting crossbar attached at the first end to the input port for receiving the data, the interconnecting crossbar for allowing the data to travel from the first end to the second end;
a crossbar egress processing unit having a receiving end and a sending end, the receiving end of the crossbar egress processing unit attached to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out the sending end of the crossbar egress processing unit; and
an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system.

19. A crossbar ingress processing unit for receiving and sending data, the crossbar ingress processing unit comprising:
a receiving end for receiving data into the crossbar ingress processing unit;
a sending end for sending the data out from the crossbar ingress processing unit; and
a crossbar path having a first end attached to the receiving end and a second end attached to the sending end, the crossbar path for allowing the data to travel from the receiving end to the sending end.

20. The crossbar ingress processing unit as claimed in claim 19, further comprising a channel selector for selecting a path for the data to travel.

21. An egress processing unit for storing and sending data, the crossbar egress processing unit comprising:
a receiving end for receiving data into the crossbar egress processing unit;
a sending end for sending the data out from the crossbar egress processing unit; and
a crossbar path having a first end attached to the receiving end and a second end attached to the sending end, the crossbar path for allowing the data to travel from the receiving end to the sending end.

22. The egress processing unit as claimed in claim 21, further comprising a memory buffer attached to the crossbar path between the receiving end and the sending end, the memory buffer for receiving the data from the receiving end and for storing the data.


23. The egress processing unit as claimed in claim 22, further comprising a data packet scheduler for sending data packets from the memory buffer to the sending end of the output memory buffer unit.

24. A method for providing a switching fabric system with a mechanism for routing data, the method comprising the steps of:
providing an input port for receiving the data into the switching fabric system;
providing a crossbar ingress processing unit attached at one end to the ingress input port;
providing an interconnecting crossbar attached at a first end to the crossbar ingress processing unit, the interconnecting crossbar for allowing the data to travel from the ingress input port to a second end of the interconnecting crossbar;
providing a crossbar egress processing unit attached at a receiving end to the second end of the interconnecting crossbar, the crossbar egress processing unit for storing the data and for sending the data out a sending end of the crossbar egress processing unit; and
providing an output port for sending the data out of the switching fabric system.

Description

Note: Descriptions are shown in the official language in which they were submitted.


Distributed Crossbar Switching Fabric Architecture

FIELD OF THE INVENTION

The present invention relates generally to switching fabrics and, in particular, to an architecture for a crossbar switching fabric device. This invention applies to switching fabrics in routers, switches, SONET cross-connects, SONET add-drop multiplexers, or any other apparatus that uses crossbar switching fabrics.
BACKGROUND OF THE INVENTION

A router is used for switching data packets between a source and a destination in a network, and includes a plurality of ports and a switching fabric device. The switching fabric device receives data packets from the input of one port and routes them to the appropriate output port.

In packet-switched communication systems, a router is a switching device which receives packets containing data or control information on one port and, based on the destination information contained within the packet, routes the packet out another port to the destination. This process is typically done at three levels. The first is the physical layer device (PHY), which extracts the packet from the physical media. The second is the Network Processor (NP), which extracts the address from the packet and translates this address to an actual port number. Finally, the Switching Fabric (SF) device routes the packet to its destination.
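A minimal sketch of this three-level split, assuming a simple address-to-port lookup in the NP; all names are hypothetical:

```python
# Sketch of the three-level routing split: the PHY delivers the packet, the
# NP translates its destination address to a port number, and the SF routes
# it to that port. All names are hypothetical.

class NetworkProcessor:
    def __init__(self, forwarding_table):
        self.forwarding_table = forwarding_table  # address -> port number

    def resolve_port(self, packet):
        return self.forwarding_table[packet["dest_address"]]

class SwitchFabric:
    def __init__(self, num_ports):
        self.output_queues = [[] for _ in range(num_ports)]

    def route(self, packet, out_port):
        self.output_queues[out_port].append(packet)

# The PHY would extract `packet` from the physical media; here it is a dict.
np_unit = NetworkProcessor({"10.0.0.7": 3})
fabric = SwitchFabric(num_ports=4)
packet = {"dest_address": "10.0.0.7", "payload": b"..."}
fabric.route(packet, np_unit.resolve_port(packet))
```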
A data packet enters the router on the ingress side, first through the physical layer device (PHY) and then through the Network Processor (NP). The data packet then enters the switch fabric, exits it on the egress side of the router, and finally exits the router. It is the responsibility of the switch fabric (SF) to route the data packet to its appropriate output port once the data packet departs the Network Processor (NP) at the input ports. A conflict may occur if two data packets from different input ports are destined to the same output port at the same time, which may lead to dropping or losing one of them. To overcome this problem, designers added memory buffering with special scheduling mechanisms to store data packets temporarily and then service these packets based on the Quality of Service (QoS) of each packet. Current-generation switch fabrics are classified into two main architectures, input buffering and output buffering, depending on the location of these memory buffers.
Figure 1 shows an unfolded overview of a conventional router structure 100 with output buffering, i.e., an overview of an output buffered router. The conventional router structure 100 includes ingress components 101, a switch fabric 102, and egress components 103. The ingress components 101 include a physical device (PHY) 104 and a network processor 105. The switch fabric 102 includes ports 106, a shared output memory buffer 107, a scheduler 108, and a memory manager 109. The egress components include a network processor 105 and a physical device 104.

For better memory utilization, an output buffering architecture 100 typically employs a shared memory structure 107 where a global memory contains the packets moving into and out of the switch fabric 102. The bandwidth required inside the fabric is proportional to both the number of ports 106 and the line rate. This internal speed-up factor is inherent to shared memory structures 107 and is the main reason output buffered switches are becoming increasingly difficult to implement. In addition to memory bandwidth limitations, scheduling the data packets becomes more complex as the number of ports grows, the memory buffer becomes bigger and harder to manage, and the whole switch becomes more expensive.
To avoid these limitations in output buffered switches, some systems architects turn to the input buffering model. Figure 2 shows an unfolded overview of a conventional router structure with input buffering 150. As shown in Figure 2, the network processors 105 on the ingress components 101 contain memory buffers 151. The switch fabric 152 provides a transport 159, typically in a crossbar structure, between the ingress 101 network processors and egress 103 network processors, hence eliminating the need for each egress Network Processor (NP) to gain access to any shared resources between outputs. The router scalability is therefore improved compared to output buffering. The crossbar fabric 152 includes a scheduler 158 that monitors the state of the input queues and ensures each packet is serviced appropriately.

Although input buffered switch fabrics 150 are more scalable than output buffered switch fabrics 100, they suffer from performance issues relating to head-of-line blocking. Head-of-line blocking is a phenomenon that causes one packet to block other packets from reaching their output destinations at the appropriate time, either because of the size of this packet or because of a malfunction in the network that causes one router to flood one address location in the network. The effect of this phenomenon can be reduced using techniques such as Virtual Output Queuing (VOQ), which uses N separate queues at each input port (N being an integer greater than 0), each queue sorting packets destined to one of the output ports. However, VOQ suffers from a scalability problem, since the number of output ports is constrained by the number of VOQs on the ingress side. Also, the complexity of the switch fabric scheduler grows as the number of ports increases.
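A minimal sketch of VOQ as described, with one queue per output port at each input port; all names are hypothetical:

```python
# Sketch of Virtual Output Queuing (VOQ): each input port keeps one queue per
# output port, so a packet at the head of one queue does not block packets
# bound for other outputs. Names are hypothetical.
from collections import deque

class VOQInputPort:
    def __init__(self, num_output_ports):
        # The N separate queues described above, one per output port.
        self.queues = [deque() for _ in range(num_output_ports)]

    def enqueue(self, packet, out_port):
        self.queues[out_port].append(packet)

    def dequeue_for(self, out_port):
        """Called by the fabric scheduler when out_port is free."""
        return self.queues[out_port].popleft() if self.queues[out_port] else None

port = VOQInputPort(num_output_ports=4)
port.enqueue({"payload": b"a"}, out_port=2)
print(port.dequeue_for(2))  # served without waiting behind traffic for port 0
```

Note that the queue count at each input grows with the number of output ports, which is the scalability constraint described above.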
SUMMARY OF THE INVENTION

This invention overcomes the scalability problem associated with switch fabrics in general and simplifies the processing tasks associated with them. A key element in this invention is slicing an NxN crossbar switch in a way such that it is divided into N preferably identical slices (N being an integer greater than 0), based around the individual input and output ports. This architecture creates independent processing engines for each port, hence making the tasks of processing ingress and egress data easier. In addition, the invention adds a capability of scaling the switch fabric as the number of ports increases.
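A minimal sketch of this slicing idea, modeling the NxN fabric as its set of (input, output) paths; the representation is an illustrative assumption:

```python
# Sketch of slicing an NxN crossbar into N per-port slices. The fabric is
# modeled as every (input, output) pair; each slice keeps only the paths
# fanning into one output port or out of one input port.
N = 4  # number of ports (N > 0)

full_crossbar = {(i, o) for i in range(N) for o in range(N)}

# One egress slice per output port: a cone of paths into that port.
egress_slices = {o: [(i, o) for i in range(N)] for o in range(N)}

# One ingress slice per input port: a cone of paths out of that port.
ingress_slices = {i: [(i, o) for o in range(N)] for i in range(N)}

# The slices together reproduce the full fabric, so the collection of
# per-port processing engines is functionally equivalent to the NxN switch.
assert {p for s in egress_slices.values() for p in s} == full_crossbar
assert {p for s in ingress_slices.values() for p in s} == full_crossbar
```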
In one embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit comprising a receiving end and a sending end, and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system. The receiving end of the crossbar ingress processing unit is attached to the input port. The crossbar ingress processing unit receives data from the input port and sends data out of its sending end. The interconnecting crossbar is attached at its first end to the sending end of the crossbar ingress processing unit for receiving the data. The interconnecting crossbar allows the data to travel from its first end to its second end. The receiving end of the crossbar egress processing unit is attached to the second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
In another embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, a crossbar ingress processing unit having a receiving end and a sending end, an interconnecting crossbar having a first end and a second end, and an output port connected to the second end of the interconnecting crossbar for sending the data out of the switching fabric system. The receiving end of the crossbar ingress processing unit is attached to the input port. The crossbar ingress processing unit receives data from the input port and sends data out of its sending end. The interconnecting crossbar is attached at its first end to the sending end of the crossbar ingress processing unit for receiving the data. The interconnecting crossbar allows the data to travel from its first end to its second end.
In another embodiment of the present invention, a switching fabric system for routing data is provided. The switching fabric system comprises an input port for receiving data into the switching fabric system, an interconnecting crossbar having a first end and a second end, a crossbar egress processing unit having a receiving end and a sending end, and an output port connected to the sending end of the crossbar egress processing unit for sending the data out of the switching fabric system. The interconnecting crossbar is attached at its first end to the input port for receiving the data. The interconnecting crossbar allows the data to travel from its first end to its second end. The receiving end of the crossbar egress processing unit is attached to the second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
In another embodiment of the present invention, a crossbar ingress processing unit for receiving and sending data is provided. The crossbar ingress processing unit comprises a receiving end for receiving data into the crossbar ingress processing unit, a sending end for sending the data out from the crossbar ingress processing unit, and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end. The crossbar path allows data to travel from the receiving end to the sending end. The crossbar ingress processing unit may further comprise a channel selector for selecting a path for the data to travel.
In another embodiment of the present invention, an egress processing unit for storing and sending data is provided. The crossbar egress processing unit comprises a receiving end for receiving data into the crossbar egress processing unit, a sending end for sending the data out from the crossbar egress processing unit, and a crossbar path having a first end attached to the receiving end and a second end attached to the sending end. The crossbar path allows the data to travel from the receiving end to the sending end. The egress processing unit may further comprise a memory buffer attached to the crossbar path between the receiving end and the sending end, the memory buffer for receiving the data from the receiving end and for storing the data. The egress processing unit may further comprise a data packet scheduler for sending data packets from the memory buffer to the sending end of the output memory buffer unit.
In another embodiment of the present invention, a method for providing a switching fabric system with a mechanism for routing data is provided. The method comprises the steps of providing an input port for receiving the data into the switching fabric system, providing a crossbar ingress processing unit attached at one end to the ingress input port, providing an interconnecting crossbar attached at a first end to the crossbar ingress processing unit, providing a crossbar egress processing unit attached at a receiving end to the second end of the interconnecting crossbar, and providing an output port for sending the data out of the switching fabric system. The interconnecting crossbar allows data to travel from the ingress input port to a second end of the interconnecting crossbar. The crossbar egress processing unit stores data and sends data out its sending end.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be further understood from the following description with reference to the drawings, in which:

Figure 1 shows an unfolded overview of a conventional router structure with output buffering;
Figure 2 shows an unfolded overview of a conventional router structure with input buffering;
Figure 3 shows a 1x1 output buffered switch fabric system according to an example of an embodiment of the present invention;
Figure 4 shows an NxN distributed output buffered switch fabric system according to an example of an embodiment of the present invention;
Figure 5 shows a crossbar structure superimposed on a shared memory switch;
Figure 6 shows N 1xK crossbar ingress processing units and N Kx1 crossbar egress processing units functionally equivalent to an NxN crossbar fabric;
Figure 7 shows an unfolded router architecture using 2-stage crossbar ingress processing units;
Figure 8 shows an unfolded router architecture using 2-stage crossbar egress processing units;
Figure 9 shows an unfolded router architecture using 2-stage crossbar ingress and 2-stage crossbar egress processing units;
Figure 10 shows an exploded view of 2-stage crossbar ingress processing units;
Figure 11 shows an exploded view of 2-stage crossbar egress processing units;
Figure 12 shows an unfolded router/switch architecture using only crossbar egress processing units; and
Figure 13 shows an overview of an unfolded router/switch architecture using both crossbar egress and ingress processing units.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Figure 3 shows an example of a one-dimensional switching fabric architecture 200 in accordance with an embodiment of the present invention. The switching fabric architecture 200 comprises an input port 201, a crossbar ingress processing unit 202, an interconnecting crossbar 203, a crossbar egress processing unit 204, and an output port 205. The ingress processing unit 202 may contain a crossbar path 207 and a channel selector 206 to select the path(s) to the egress port for which the data is destined. The interconnecting crossbar 203 may contain a crossbar path 207. The egress processing unit 204 may contain a crossbar path 207 and, in the case of a router or switch application, a memory buffer 208 and a scheduling unit 209. In alternative embodiments, the switch fabric architecture 200 may be produced without either the crossbar ingress processing unit 202 or the crossbar egress processing unit 204.

In this one-dimensional example, data enters the input port 201 from a sending network processor (not shown) and passes to the crossbar ingress processing unit 202, where the channel(s) that the data will travel through within the interconnecting crossbar are selected by the channel selector 206. The channel selector 206 is a switching system that maps the port address in the packet header into a physical path within the fabric interconnecting crossbar. The data is then directed by the interconnecting crossbar 203 to the crossbar egress processing unit 204. The data packet is stored in the memory buffer 208, in the case of a router application, and sent to the output port 205 via the crossbar path 207 in a timely manner, controlled by a scheduler 209. The data packet then enters a receiving network processor (not shown).
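A minimal sketch of the channel selector's mapping role, assuming a simple lookup table from port address to physical path; all names are hypothetical:

```python
# Sketch of the channel selector 206: it maps the output-port address in a
# packet header to a physical path within the interconnecting crossbar.
# The table contents and names are hypothetical.

class ChannelSelector:
    def __init__(self, path_table):
        self.path_table = path_table  # port address -> physical path id

    def select(self, packet_header):
        return self.path_table[packet_header["dest_port"]]

selector = ChannelSelector(path_table={0: "path-A", 1: "path-B", 2: "path-C"})
print(selector.select({"dest_port": 1}))  # -> "path-B"
```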
Figure 4 shows an example of a multiple-dimensional switching fabric architecture 300 in accordance with an embodiment of the present invention. The switching fabric architecture 300 comprises a series of ingress input ports 301, a series of distributed crossbar ingress processing units 302, an interconnecting crossbar 303, a series of distributed crossbar egress processing units 304, and a series of egress output ports 305. The ingress processing units 302 may contain crossbar paths 307 and a channel selector 306. The interconnecting crossbar 303 may contain crossbar paths 307. The crossbar egress processing units 304 may contain crossbar paths 307 and, in the case of a router or switch application, a memory buffer 308 and a packet scheduling unit 309.
In this multi-dimensional example, data enters an ingress input port 301 from a sending physical layer device and network processor (not shown). There may be N ingress input ports 301, where N is an integer greater than 0. The data is sent to an ingress processing unit 302, where the channel selector 306 selects the path for the data to proceed through the interconnecting crossbar 303 to the designated egress processing unit 304. The channel selector is a simple switching system that maps the port address in the packet header into a physical path within the fabric interconnecting crossbar. Again, there may be N egress processing units 304, where N is an integer greater than 0. The data is stored in the memory buffer 308. There may be many data packets in the memory buffer 308. The data is sent to the egress output port 305 via the crossbar path 307 in a timely manner based on the QoS of the data packet. The scheduling of the packets based on QoS is performed by the scheduler 309. The data packet then enters a receiving network processor and physical layer device (not shown). In alternative embodiments, the switch fabric architecture 300 may be produced without either the crossbar ingress processing unit 302 or the crossbar egress processing unit 304.
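A minimal sketch of the egress-side QoS scheduling just described, assuming a strict-priority discipline (the text does not fix a particular discipline); names are hypothetical:

```python
# Sketch of the egress scheduler 309: packets buffered in the egress memory
# 308 are released to the output port in QoS order. Strict priority is an
# assumed discipline for illustration.
import heapq

class EgressScheduler:
    def __init__(self):
        self._heap = []  # (qos_priority, arrival sequence, packet)
        self._seq = 0

    def buffer(self, packet, qos_priority):
        heapq.heappush(self._heap, (qos_priority, self._seq, packet))
        self._seq += 1

    def send_next(self):
        """Returns the highest-priority (lowest-numbered) buffered packet."""
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = EgressScheduler()
sched.buffer({"payload": b"bulk"}, qos_priority=3)
sched.buffer({"payload": b"voice"}, qos_priority=0)
print(sched.send_next())  # the voice packet leaves first
```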
The switch fabric architecture 300 will be further described using Figures 5 to 10 for a router or switch application. The main function of a switch fabric is to transfer data packets from the ingress network processor (NP) to the egress NP with the smallest latency. By applying this concept to the shared memory fabric 500, i.e., input ports 501 fanning out to output ports 505, an imaginary crossbar structure 502 is superimposed on the shared output buffered switch fabric 500, which includes a shared memory buffer 107, a global scheduler 108, and a global channel selector 506, as shown in Figure 5.

In this example, the preferred situation of all input ports 501 fanning out into all output ports 505 is shown. Now consider slicing the NxN crossbar 502, with both the shared memory 107 and the global packet scheduler 108, into a preferably identical output set of cones 601, based around the output ports, and an input set of cones 602, based around the input ports, as shown in Figure 6. Each cone in the 601 set includes individual output memory buffers 308, output packet schedulers (or schedulers) 309, and K input ports (or crossbar paths) 307 that can be ported on separate devices that will be referred to as crossbar egress processing units (or output memory buffers and schedulers) 304. Similarly, each cone in the 602 set includes individual processing units 302 for ingress ports that may contain a channel selector 306 that selects which path(s) the data will travel through in the crossbar switch. The channel selector 306 is a switching system that maps the port address in the packet header into a physical path within the fabric interconnecting crossbar 502.
A collective set of crossbar egress processing units 304 and ingress processing units 302 is functionally equivalent to an NxN switch fabric. This leads to a favorable situation, where each cone in the 601 set may be separated into a Kx1 individual crossbar egress processing device 304 that may be ported on each port 305. Similarly, each cone in the 602 set can be separated into a 1xK individual crossbar ingress processing unit 302 that may be ported on each port 301, hence achieving a highly scalable switching fabric architecture with distributed crossbar processing units. A Kx1 crossbar egress processing device 304 indicates that K ingress input ports 301 (or K crossbar input processing units 302), where K is an integer greater than 0, coming into the crossbar egress processing device 304 are fanning out to, preferably, one egress port 305 out of the same device. Similarly, a 1xK ingress port 301 (or crossbar ingress device 302) indicates that K egress ports 304 are fanning into, preferably, one ingress port 301 into the same device.
K may have three different situations. If K is more than (situation 1) or equal to (situation 2) the number of ports N in the switch fabric 300, then there will be one device 302 and/or 304 required per port to build an NxN switch fabric 300. However, if K is less than (situation 3) the number of ports N in the switch fabric 300, a number of devices 302 and/or 304 may be connected together, as shown in Figures 7 and 8, in a 2-stage configuration on each port.
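A minimal sketch of the three situations for K, assuming the 2-stage arrangement of Figures 7 and 8 uses three devices per port when K < N <= 2K; the device counts are an illustrative reading, not stated explicitly in the text:

```python
# Sketch of the K-versus-N cases: with K paths per device, how many devices
# does one port need? The count of 3 for the 2-stage case (units "a", "b",
# plus the third merging/feeding unit) is an assumption drawn from Figures 7-10.

def devices_per_port(n_ports, k_paths):
    if k_paths >= n_ports:       # situations 1 and 2: one device per port
        return 1
    if n_ports <= 2 * k_paths:   # situation 3: 2-stage configuration
        return 3
    raise NotImplementedError("more than 2K ports requires deeper nesting")

print(devices_per_port(n_ports=8, k_paths=8))   # 1
print(devices_per_port(n_ports=12, k_paths=8))  # 3
```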
Figure 7 shows another example of an embodiment of a multiple-dimensional switching fabric architecture 300 with 2-stage crossbar ingress processing units 302. In this example, there are N output ports 305 and K crossbar paths 307 in each crossbar ingress unit 302, where K is an integer greater than 1 and N is an integer greater than K and less than 2K. The crossbar ingress processing units are organized in a manner to handle this situation, where there are more ports 305 than crossbar paths 307.

In this example, the first K output ports 305 use the "a" crossbar ingress processing units 302. The remaining output ports use the "b" crossbar ingress processing units 302. Each corresponding "a" and "b" crossbar ingress processing unit 302 receives its inputs from a third crossbar ingress processing unit 302.
Figure 8 shows another example of an embodiment of a multiple-dimensional switching fabric architecture 300 with 2-stage crossbar egress processing units 304. In this example, there are N input ports 301 and K crossbar paths 307 in each crossbar egress unit 304, where K is an integer greater than 1 and N is an integer greater than K and less than 2K. The crossbar egress processing units 304 are organized in a manner to handle this situation, where there are more ports 301 than crossbar paths 307.

In this example, the first K input ports 301 use the "a" crossbar egress processing units 304. The remaining input ports use the "b" crossbar egress processing units 304. Each corresponding "a" and "b" crossbar egress processing unit 304 has its outputs merged using a third crossbar egress processing unit 304.
Figure 9 shows another example of an embodiment of a multiple-dimensional switching fabric architecture 300 with both 2-stage crossbar ingress 302 and 2-stage crossbar egress 304 processing units.

Figure 10 shows an exploded view of the switching fabric architecture 300 shown in Figure 7. Figure 10 outlines the first K output ports 305 connecting to crossbar ingress processing unit 1a and the remaining output ports 305 connecting to crossbar ingress processing unit 1b. Each corresponding "a" and "b" crossbar ingress processing unit 302 receives its inputs from crossbar ingress processing unit 1. It does not matter which two paths are used. Finally, the data originally came from input port 1.

Similarly, three or more crossbar ingress processing units may be coupled and/or nested in a multistage fashion for situations where there are more than 2K ports. In addition to the scalability feature, data packet scheduling and memory management tasks are simpler, since each input port has its own dedicated resources.
Figure 11 shows an exploded view of the switching fabric architecture 300 shown in Figure 8. Figure 11 outlines the first K input ports 301 connecting to crossbar egress processing unit 1a and the remaining input ports 301 connecting to crossbar egress processing unit 1b. The outputs of the merged crossbar paths are then sent to a crossbar path in crossbar egress processing unit 1. It does not matter which two paths are used. Finally, the data are sent to output port 1.

Similarly, three or more crossbar egress processing units may be coupled and/or nested in a multistage fashion for situations where there are more than 2K ports. In addition to the scalability feature, data packet scheduling and memory management tasks are simpler, since each output port has its own dedicated resources.
Figure 12 shows an unfolded router architecture in another embodiment of the present invention with only the crossbar egress processing units 304. In this case, a memory manager 109 is not needed, since there is a dedicated memory per port, and the data packet memories 107 and schedulers 108 are pushed into the crossbar egress processing unit 304 located on the corresponding output port 305, a key advantage of the architecture, while the actual crossbar is a passive interconnecting device.

Figure 13 shows an unfolded router architecture using the invention in its switching fabric with both the crossbar ingress processing units 302 and crossbar egress processing units 304. In this case the egress processing units 304 are used the same way as in Figure 12, while the crossbar ingress processing units 302 are used as crossbar channel selectors to select the path(s) the data is going to travel through in the switch fabric to reach its destination(s).
An aspect of this embodiment is slicing the output buffered switch in such a manner that it is divided into preferably identical slices based around the output and input ports of a routing device. Slicing any ingress processing functions into identical slices around their input ports is also provided in this embodiment. This architecture (or system) creates independent ingress and egress processing engines. In the case of router applications, memory buffers 308 and schedulers 309 are included (versus shared memory buffers 107 and a single scheduler 108), hence making the tasks of managing the memories and scheduling the data packets much easier. In addition, it adds a capability of scaling the switch fabric as the number of ports increases, since the crossbar is sliced into identical slices that can be ported to the individual ports.

Embodiments of this invention may be applied to switching fabrics in routers, switches, SONET cross-connects, SONET add-drop multiplexers, or any other apparatus that uses crossbar switching fabrics. While specific embodiments of the present invention have been described, various modifications and substitutions may be made to such embodiments. Such modifications and substitutions are within the scope of the present invention, and are intended to be covered by the following claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.


Title | Date
Forecasted Issue Date | Unavailable
(22) Filed | 2002-01-24
(41) Open to Public Inspection | 2003-07-24
Examination Requested | 2005-06-02
Dead Application | 2010-01-25

Abandonment History

Abandonment Date | Reason | Reinstatement Date
2005-01-24 | FAILURE TO PAY APPLICATION MAINTENANCE FEE | 2005-06-02
2009-01-26 | FAILURE TO PAY APPLICATION MAINTENANCE FEE |
2009-02-04 | R30(2) - Failure to Respond |
2009-02-04 | R29 - Failure to Respond |

Payment History

Fee Type | Anniversary Year | Due Date | Amount Paid | Paid Date
Application Fee | | | $150.00 | 2002-01-24
Maintenance Fee - Application - New Act | 2 | 2004-01-26 | $50.00 | 2004-01-15
Request for Examination | | | $400.00 | 2005-06-02
Reinstatement: Failure to Pay Application Maintenance Fees | | | $200.00 | 2005-06-02
Maintenance Fee - Application - New Act | 3 | 2005-01-24 | $50.00 | 2005-06-02
Maintenance Fee - Application - New Act | 4 | 2006-01-24 | $50.00 | 2006-01-13
Maintenance Fee - Application - New Act | 5 | 2007-01-24 | $100.00 | 2007-01-24
Maintenance Fee - Application - New Act | 6 | 2008-01-24 | $100.00 | 2008-01-24
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
HOSNY, MOHAMED SAMY
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.



Document Description | Date (yyyy-mm-dd) | Number of pages | Size of Image (KB)
Abstract | 2002-01-24 | 1 | 24
Description | 2002-01-24 | 11 | 634
Claims | 2002-01-24 | 5 | 199
Representative Drawing | 2002-06-06 | 1 | 11
Cover Page | 2003-07-04 | 2 | 46
Drawings | 2002-01-24 | 11 | 400
Fees | 2004-01-15 | 1 | 33
Fees | 2005-06-02 | 1 | 35
Assignment | 2002-01-24 | 3 | 78
Fees | 2006-01-13 | 1 | 34
Prosecution-Amendment | 2005-06-02 | 1 | 41
Fees | 2007-01-24 | 1 | 39
Fees | 2008-01-24 | 2 | 77
Correspondence | 2008-01-24 | 2 | 77
Prosecution-Amendment | 2008-08-04 | 4 | 132