Patent 2548540 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2548540
(54) English Title: REARRANGEABLY NONBLOCKING MULTICAST MULTI-STAGE NETWORKS
(54) French Title: RESEAUX MULTIETAGE DE MULTIDIFFUSION APTE A UN REAGENCEMENT DE NON BLOCAGE
Status: Deemed Abandoned and Beyond the Period of Reinstatement - Pending Response to Notice of Disregarded Communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04Q 01/00 (2006.01)
(72) Inventors :
  • KONDA, VENKAT (United States of America)
(73) Owners :
  • TEAK TECHNOLOGIES, INC.
(71) Applicants :
  • TEAK TECHNOLOGIES, INC. (United States of America)
(74) Agent: BORDEN LADNER GERVAIS LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2003-09-06
(87) Open to Public Inspection: 2006-03-30
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2003/027971
(87) International Publication Number: WO 2006/033650
(85) National Entry: 2006-03-06

(30) Application Priority Data: None

Abstracts

English Abstract


A rearrangeably nonblocking multicast network in accordance with the invention
includes an input stage (110) having r1 switches (IS) and n1 inlet links (IL)
for each of r1 switches, an output stage (120) having r2 switches (OS) and n2
outlet links (OL) for each of r2 switches. The network also has a middle stage
(130) of m switches (MS), and each middle switch has at least one link
connected to each input switch for a total of at least r1 first internal links
and at least one link connected to each output switch for a total of at least
r2 second internal links, where m ≥ n1 + n2. The network has all
multicast connections set up such that each multicast connection passes
through at most two middle switches to be connected to the destination outlet
links.
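As a rough illustration of the sizing rule in this abstract, the C sketch below models the three stages and computes the stated middle-stage width m = n1 + n2 (equivalently 2*n when n1 = n2 = n). The struct and function names are illustrative only and are not taken from the patent's Appendix A code.

    #include <stdio.h>

    /* Illustrative model of a three-stage network: r1 input switches
     * with n1 inlet links each, m middle switches, and r2 output
     * switches with n2 outlet links each. */
    typedef struct {
        int n1, r1;  /* inlet links per input switch; input switches */
        int n2, r2;  /* outlet links per output switch; output switches */
        int m;       /* middle switches */
    } three_stage_net;

    /* Middle-stage width given in the abstract for rearrangeably
     * nonblocking multicast operation: m >= n1 + n2. */
    static int min_middle_switches(int n1, int n2)
    {
        return n1 + n2;
    }

    int main(void)
    {
        three_stage_net net = { .n1 = 3, .r1 = 9, .n2 = 3, .r2 = 9 };
        net.m = min_middle_switches(net.n1, net.n2);
        /* Prints 6, the V(6,3,9) geometry used in the figures below. */
        printf("m >= %d middle switches\n", net.m);
        return 0;
    }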


French Abstract

La présente invention a trait à un réseau de multidiffusion apte à un réagencement de non blocage comportant un étage d'entrée (110) comprenant r1 commutateurs (IS) et n1 liens d'entrée (IL) pour chacun des r1 commutateurs, un étage de sortie (120) comprenant r2 commutateurs (OS) et n2 liens de sortie (OL) pour chacun des r2 commutateurs. Le réseau comporte également un étage intermédiaire (130) de m commutateurs (MS), et chaque commutateur intermédiaire comprend au moins un lien connecté à chaque commutateur d'entrée pour un total d'au moins r1 premiers liens internes et au moins un lien connecté à chaque commutateur de sortie pour un total d'au moins r2 deuxièmes liens internes, où m ≥ n1 + n2. Toutes les connexions multidiffusion du réseau sont établies de sorte que chaque connexion multidiffusion traverse au plus deux commutateurs intermédiaires pour être connectée aux liens de sortie de destination.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS:
What is claimed is:
1. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches, and n1 inlet links for each of
said r1
input switches;
an output stage comprising r2 output switches, and n2 outlet links for each of
said r2 output switches; and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link (hereinafter "first internal link") connected to
each input
switch for a total of at least r1 first internal links, each middle switch
further comprising
at least one link (hereinafter "second internal link") connected to each
output switch for a
total of at least r2 second internal links;
wherein each multicast connection from an inlet link passes through at most
two
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most two middle switches.
2. The network of claim 1, wherein m ≥ n1 + n2.
3. The network of claim 2,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".
4. The network of claim 1 further comprising a controller coupled to each of
said
input, output and middle stages to set up said multicast connection.
5. The network of claim 2 wherein said r1 input switches and r2 output
switches are
the same number of switches.

6. The network of claim 2 wherein said n1 inlet links and n2 outlet links are
the
same number of links and n1 = n2 = n, then m ≥ 2 * n.
7. The rearrangeably nonblocking network of claim 3,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking networks.
8. The network of claim 1,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
9. A method for setting up one or more multicast connections in a network
having an
input stage having n1 * r1 inlet links and r1 input switches, an output stage
having n2 * r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, said method comprising:
receiving a multicast connection at said input stage;
fanning out said multicast connection in said input stage into at most two
middle
switches to set up said multicast connection to a plurality of output switches
among said
r2 output switches, wherein said plurality of output switches are specified as
destinations
of said multicast connection, wherein first internal links from said input
switch to said at
most two middle switches and second internal links to said destinations from
said at most
two middle switches are available.
10. The method of claim 9,
wherein a connection exists through said network and passes through a middle
switch and said method further comprises:
if necessary, changing said connection to pass through another middle switch,
the act
hereinafter "rearranging connection".
11. The method of claim 9,
wherein any of said acts of fanning out and rearranging are performed
recursively.
12. A method for setting up one or more multicast connections in a network
having an
input stage having n1*r1 inlet links and r1 input switches, an output stage
having n2*r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, said method comprising:
checking if at least a first subset of destination output switches of said
multicast
connection have available second internal links to a first middle switch; and
checking if a second middle switch has available second internal links to a
second
subset of destination output switches of said multicast connection,
wherein each destination output switch of said multicast connection is one of
said
first subset of destination output switches and said second subset of
destination output
switches.
13. The method of claim 12 further comprising:
checking if the input switch of said multicast connection has an available
first
internal link to said first middle switch and to said second middle switch.
14. The method of claim 12 further comprising:
prior to said checkings, checking if all the destination output switches of
said
multicast connection are available at said first middle switch.
15. The method of claim 12 further comprising:
repeating said checkings of available second internal links to another second
subset of destination output switches for each middle switch other than said
first and said
second middle switches,
wherein each destination output switch of said multicast connection is one of
said
first subset of destination output switches and said second subset of
destination output
switches.
16. The method of claim 12 further comprising:
repeating said checkings of available second internal links to another first
subset
of destination output switches with each middle stage switch other than said
first middle
stage switch.
17. The method of claim 12 further comprising:
repeating said checkings of available first internal link to each middle stage
switch
other than said first middle switch and said second middle switch.
18. The method of claim 12 further comprising:
setting up each of said connection from its said input switch to its said
output
switches through not more than two middle switches, selected by said
checkings, by
fanning out said multicast connection in its said input switch into not more
than said two
middle stage switches.
19. The method of claim 12 wherein any of said acts of checking and setting up
are
performed recursively.
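The availability checks recited in claims 12 and 13 can be sketched informally in C as follows; this is an illustrative reading of the claims, not the scheduling code of Appendix A, and every identifier and data structure here is hypothetical.

    #include <stdbool.h>

    #define MAX_R2 64  /* hypothetical bound on output switches */

    /* link_free[j][k]: the second internal link from middle switch j to
     * output switch k is available; in_free[j]: the first internal link
     * from the request's input switch to middle switch j is available. */
    static bool covers_destinations(const bool link_free[][MAX_R2],
                                    const bool in_free[], int j1, int j2,
                                    const int *dests, int ndests)
    {
        /* Claim 13: first internal links to both middle switches. */
        if (!in_free[j1] || !in_free[j2])
            return false;
        /* Claim 12: every destination output switch must fall in the
         * subset reachable through j1 or the one reachable through j2. */
        for (int i = 0; i < ndests; i++) {
            if (!link_free[j1][dests[i]] && !link_free[j2][dests[i]])
                return false;
        }
        return true;
    }

A scheduler in the spirit of claims 15 through 17 would evaluate such a test over successive candidate pairs of middle switches until one pair covers all destinations.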
20. A method for setting up one or more new multicast connections in a network
having an input stage having n1*r1 inlet links and r1 input switches, an
output stage
having n2*r2 outlet links and r2 output switches, and a middle stage having m
middle
switches, where each middle switch is connected to each of said r1 input
switches
through r1 first internal links and each middle switch further comprising at
least one link
connected to at most d said output switches for a total of at least d second
internal links,
wherein 1 .ltoreq. d .ltoreq. r2, said method comprising:
disconnecting a previously set up multicast connection through the same input
switch of said new multicast connection; and
setting up said new multicast connection, through at most two middle switches,
first and then setting up said previously set up connection, through at most
two middle
switches.
21. The method of claim 20 further comprising:
when any one of said two setting up acts fail, disconnecting said new
multicast
connection if it succeeded to get set up, and setting up said previously set
up connection
if it failed to get set up.
22. The method of claim 20 further comprising:
repeating said acts of disconnecting a previously set up connection and
setting up
after said new multicast connection for all the remaining previously set up
connections in
the same input switch.
23. The method of claim 20 further comprising:
by setting up said new multicast connection through two middle switches having
available first internal links so that only one of said two middle switches
use second
internal links which are already in use by one or more existing multicast
connections
from other input switches (hereinafter called "incompatible existing
connections"); and
disconnecting said incompatible existing connections.
24. The method of claim 20 further comprising:
for all said incompatible existing connections recursively repeating said acts
of
disconnecting and setting up connections in their respective first switches,
until all said
incompatible existing connections are set up.
25. The method of claim 20 wherein any of said acts of checking, setting up
and
disconnecting are performed recursively.
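The disconnect-and-re-route step of claims 20 and 21 can likewise be sketched in C; the connection type and the two routing primitives below are stubs standing in for a real controller, so this fragment only illustrates the ordering and roll-back that the claims recite.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { int id; } connection;  /* hypothetical handle */

    /* Stub: would run the checks of claims 12-18 and return true when a
     * route through at most two middle switches is found. */
    static bool setup_two_middle(connection *c) { return c != NULL; }
    static void disconnect(connection *c) { (void)c; }

    /* Claims 20-21: disconnect an existing connection sharing the input
     * switch, set up the new connection first, then re-route the old one;
     * on failure, undo whichever act succeeded and restore the old route. */
    static bool rearrange_and_setup(connection *new_conn, connection *old_conn)
    {
        disconnect(old_conn);
        bool new_ok = setup_two_middle(new_conn);
        bool old_ok = setup_two_middle(old_conn);
        if (new_ok && old_ok)
            return true;
        if (new_ok)
            disconnect(new_conn);        /* claim 21: drop the new set-up */
        if (!old_ok)
            setup_two_middle(old_conn);  /* claim 21: restore the old one */
        return false;
    }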
26. A method of setting up a multicast connection through a three-stage
network, said
method comprising:
fanning out only one or two times in an initial stage.
27. The method of claim 26 further comprising:
fanning out any number of times in each of the remaining stages,
wherein said three-stage network includes said remaining stages and said
initial
stage.
28. The method of claim 26 further comprising:
repeating said acts of fanning out with a plurality of portions of each of
said
stages.
29. The method of claim 26 further comprising:
recursively performing said act of fanning out.
30. The method of claim 26 wherein:
a remaining stage immediately following said initial stage comprises internal
links
that are at least two times the total number of inlet links of said initial
stage.
31. The method of claim 26 wherein:
said initial stage comprises a plurality of first switches, and a plurality of
inlet
links connected to each said first switch; and
a remaining stage immediately following said initial stage comprises a
plurality of
second switches, that are at least double the number of inlet links of each
first switch and
each second switch comprises a plurality of internal links at least equal in
number to the
number of first switches in said initial stage.
32. A network comprising:
an input stage comprising N1 or n1*r1 inlet links and r1 input switches and n1
inlet links for each of said r1 input switches, and N1 = n1*r1, said n1 inlet
links for
receiving multicast connections;
an output stage comprising N2 or n2*r2 outlet links and r2 output switches and
n2 outlet links for each of said r2 output switches, and N2 = n2*r2, said n2
outlet links
for transmitting said received connections; and
a middle stage having m middle switches, and each middle switch comprising at
least one link connected to each input switch for a total of at least r1 first
internal links
and each middle switch further comprising at least one link connected to at
most d said
output switches for a total of at least d second internal links, wherein
1 ≤ d ≤ r2,
said initial stage having multicast connections with a fan-out of one or two.
33. The network of claim 32 further comprising:
said multicast connections having a fan-out of one or more in said middle
stage.
34. The network of claim 32 further comprising:
said multicast connections having a fan-out of one or more in said output
stage.
35. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches and n1 inlet links for each of
said r1
input switches, and N1 = n1*r1;
an output stage comprising r2 output switches and n2 outlet links for each of
said
r2 output switches, and N2 = n2*r2; and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to each
output switch for a total of at least r2 second internal links,
wherein each multicast connection from an inlet link passes through at most
three
middle switches, and said multicast connection further passes to a plurality
of
outlet links from said at most three middle switches.
36. The network of claim 35, wherein m ≥ 2*n1 + n2.
37. The network of claim 36,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and the
network is
hereinafter "rearrangeably nonblocking network".
38. The network of claim 35 comprising a controller in communication with said
input, output and middle stages to set up said multicast connection.
39. The network of claim 36 wherein said r1 input switches and r2 output
switches
are the same number of switches.
40. The network of claim 36 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ 3*n.
41. The rearrangeably nonblocking network of claim 37,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
42. The network of claim 35,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
43. A method for setting up one or more multicast connections in a network
having an
input stage having n1*r1 inlet links and r1 input switches, an output stage
having n2*r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, said method comprising:
receiving a multicast connection at said input stage;
fanning out said multicast connection in said input stage into at most three
middle
switches to set up said multicast connection to a plurality of output switches
among said
r2 output switches, wherein said plurality of output switches are specified as
destinations
of said multicast connection, wherein first internal links from said input
switch to said at
most three middle switches and second internal links to said destinations from
said at
most three middle switches are available.
44. The method of claim 43,
wherein a connection exists through said network and passes through a middle
switch and said method further comprises:
if necessary, changing said connection to pass through another middle switch,
the act
hereinafter "rearranging connection".
45. The method of claim 43,
wherein any of said acts of fanning out and rearranging are performed
recursively.
46. A method for setting up one or more multicast connections in a network
having an
input stage having n1*r1 inlet links and r1 input switches, an output stage
having n2*r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, said method comprising:
checking if all the destination output switches of said multicast connection
have
available second internal links from at most three middle switches.
47. The method of claim 46 further comprising:
checking if the input switch of said multicast connection has available first
internal links to said at most three middle switches.
48. The method of claim 46 further comprising:
repeating said checkings of available second internal links to all said
destination
output switches for all the other combinations of at most three middle
switches.
49. The method of claim 46 further comprising:
repeating said checkings of available first internal links for all the other
combinations of at most three middle switches.
50. The method of claim 46 further comprising:
setting up each of said connection from its said input switch to its said
output
switches through at most three middle switches, selected by said checkings, by
fanning
out said multicast connection in its said input switch into at most said three
middle stage
switches.
51. The method of claim 46 wherein any of said acts of checking and setting up
are
performed recursively.
52. A method for setting up one or more new multicast connections in a network
having an input stage having n1*r1 inlet links and r1 input switches, an
output stage
having n2*r2 outlet links and r2 output switches, and a middle stage having m
middle
switches, where each middle switch is connected to each of said r1 input
switches
through r1 first internal links and each middle switch further comprising at
least one link
connected to at most d said output switches for a total of at least d second
internal links,
wherein 1 ≤ d ≤ r2, said method comprising:
disconnecting a previously set up multicast connection through the same input
switch of said new multicast connection; and
setting up said new multicast connection, through at most three middle
switches,
first and then setting up said previously set up connection, through at most
three middle
switches.
53. The method of claim 52 further comprising:
when any one of said two setting up acts fail, disconnecting said new
multicast
connection if it succeeded to get set up, and setting up said previously set
up connection
if it failed to get set up.
54. The method of claim 52 further comprising:
repeating said acts of disconnecting a previously set up connection and
setting up
after said new multicast connection for all the remaining previously set up
connections in
the same input switch.
55. The method of claim 52 further comprising:
by setting up said new connection through three middle switches having
available
first internal links so that only one of said three middle switches use second
internal links
which are already in use by one or more existing multicast connections from
other input
switches (hereinafter called "incompatible existing connections"); and
disconnecting said incompatible existing connections.
56. The method of claim 52 further comprising:
for all said incompatible existing connections recursively repeating said acts
of
disconnecting and setting up connections in their respective first switches,
until all said
incompatible existing connections are set up.
57. The method of claim 52 wherein any of said acts of checking and setting
up are
performed recursively.
58. A method of setting up a multicast connection through a three-stage
network, said
method comprising:
fanning out at most three times in an initial stage.
59. The method of claim 58 further comprising:
fanning out any number of times in each of the remaining stages,
wherein said three-stage network includes said remaining stages and said
initial
stage.
60. The method of claim 58 further comprising:
repeating said acts of fanning out with a plurality of portions of each of said
stages.
61. The method of claim 58 further comprising:
recursively performing said act of fanning out.

62. The method of claim 58 wherein:
a remaining stage immediately following said initial stage comprises internal
links
that are at least three times the total number of inlet links of said initial
stage.
63. The method of claim 58 wherein:
said initial stage comprises a plurality of first switches, and a plurality of
inlet links
connected to each said first switch; and
a remaining stage immediately following said initial stage comprises a
plurality of
second switches, that are at least three times the number of inlet links of
each first switch
and each second switch comprises a plurality of first internal links at least
equal in
number to the number of first switches in said initial stage.
64. A network comprising:
an input stage comprising N1 or n1 * r1 inlet links and r1 input switches and
n1
inlet links for each of said r1 input switches, and N1 = n1 * r1, said n1
inlet links for
receiving multicast connections;
an output stage comprising N2 or n2 * r2 outlet links and r2 output switches
and
n2 outlet links for each of said r2 output switches, and N2 = n2 * r2 , said
n2 outlet links
for transmitting said received connections; and
a middle stage having m middle switches, and each middle switch has at least
one
link connected to each input switch for a total of at least r1 first internal
links and at least
one link connected to each output switch for a total of at least r2 second
internal links,
said initial stage having multicast connections with a fan-out of at most
three.
65. The network of claim 64 further comprising:
said multicast connections having a fan-out of one or more in said middle
stage.
66. The network of claim 64 further comprising:
said multicast connections having a fan-out of one or more in said output
stage.
67. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches and n1 inlet links for each of
said r1
input switches, and N1 = n1 * r1;
an output stage comprising r2 output switches and n2 outlet links for each of
said
r2 output switches, and N2 = n2 * r2; and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to each
output switch for a total of at least r2 second internal links,
wherein each multicast connection from an inlet link passes through at most x
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most x middle switches.
68. The network of claim 67, wherein m ≥ (x - 1)*n1 + n2, x ≥ 2.
69. The network of claim 68,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".
70. The network of claim 67 comprising a controller in communication with said
input, output and middle stages to set up said multicast connection.
71. The network of claim 68 wherein said r1 input switches and r2 output
switches
are the same number of switches.
72. The network of claim 68 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ x*n.
73. The rearrangeably nonblocking network of claim 69,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
74. The network of claim 67,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
75. A method for setting up one or more multicast connections in a network
having an
input stage having n1 * r1 inlet links and r1 input switches, an output stage
having n2 * r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, for x ≥ 2, said method comprising:
receiving a multicast connection at said input stage;
fanning out said multicast connection in said input stage into at most x
middle
switches to set up said multicast connection to a plurality of output switches
among said
r2 output switches, wherein said plurality of output switches are specified as
destinations
of said multicast connection, wherein first internal links from said input
switch to said at
most x middle switches and second internal links to said destinations from
said at most
x middle switches are available.
76. The method of claim 75,
wherein a connection exists through said network and passes through a middle
switch and said method further comprises:
changing said connection to pass through another middle switch, the act
hereinafter "rearranging connection".
77. The method of claim 75 wherein any of said acts of fanning out and
rearranging is
performed recursively.

78. A method for setting up one or more multicast connections in a network
having an
input stage having n1 * r1 inlet links and r1 input switches, an output stage
having n2 * r2
outlet links and r2 output switches, and a middle stage having m middle
switches,
where each middle switch is connected to each of said r1 input switches
through r1 first
internal links and each of said r2 output switches through r2 second internal
links, for
x ≥ 2, said method comprising:
checking if all the destination output switches of said multicast connection
have
available second internal links from at most x middle switches.
79. The method of claim 78 further comprising:
checking if the input switch of said multicast connection has available
first
internal links to said at most x middle switches.
80. The method of claim 78 further comprising:
repeating said checkings of available second internal links to all said
destination
output switches for all the other combinations of at most x middle switches.
81. The method of claim 78 further comprising:
repeating said checkings of available first internal links for all the other
combinations of at most x middle switches.
82. The method of claim 78 further comprising:
setting up each of said connection from its said input switch to its said
output
switches through at most x middle switches, selected by said checkings, by
fanning out
said multicast connection in its said input switch into at most said x middle
stage
switches.
83. The method of claim 78 wherein any of said acts of checking and setting up
are
performed recursively.
84. A method for setting up one or more new multicast connections in a network
having an input stage having n1 * r1 inlet links and r1 input switches, an
output stage
having n2 * r2 outlet links and r2 output switches, and a middle stage having
m middle
switches, where each middle switch is connected to each of said r1 input
switches
through r1 first internal links and each middle switch further comprising at
least one link
connected to at most d said output switches for a total of at least d second
internal links,
wherein 1 ≤ d ≤ r2, for x ≥ 2, said method comprising:
disconnecting a previously set up multicast connection through the same input
switch of said new multicast connection; and
setting up said new multicast connection, through at most x middle switches,
first
and then setting up said previously set up connection, through at most x
middle switches.
85. The method of claim 84 further comprising:
when any one of said two setting up acts fail, disconnecting said new
multicast
connection if it succeeded to get set up, and setting up said previously set
up connection
if it failed to get set up.
86. The method of claim 84 further comprising:
repeating said acts of disconnecting a previously set up connection and
setting up
after said new multicast connection for all the remaining previously set up
connections in
the same input switch.
87. The method of claim 84 further comprising:
by setting up said new connection through x middle switches having available
first internal links so that only one of said x middle switches use second
internal links
which are already in use by one or more existing multicast connections from
other input
switches (hereinafter called "incompatible existing connections"); and
disconnecting said incompatible existing connections.
88. The method of claim 84 further comprising:
for all said incompatible existing connections recursively repeating said acts
of
disconnecting and setting up connections in their respective first switches,
until all said
incompatible existing connections are set up.

89. The method of claim 84 wherein any of said acts of checking and setting
up are
performed recursively.
90. A method of setting up a multicast connection through a three-stage
network, for
x ≥ 2, said method comprising:
fanning out at most x times in an initial stage.
91. The method of claim 90 further comprising:
fanning out any number of times in each of the remaining stages,
wherein said three-stage network includes said remaining stages and said
initial
stage.
92. The method of claim 90 further comprising:
repeating said acts of fanning out with a plurality of portions of each of
said
stages.
93. The method of claim 90 further comprising:
recursively performing said act of fanning out.
94. The method of claim 90 wherein:
a remaining stage immediately following said initial stage comprises internal
links
that are at least x times the total number of inlet links of said initial
stage.
95. The method of claim 90 wherein:
said initial stage comprises a plurality of first switches, and a plurality of
inlet links
connected to each said first switch; and
a remaining stage immediately following said initial stage comprises a
plurality of
second switches that are at least x times the number of inlet links of each
first switch and
each second switch comprises a plurality of first internal links at least
equal in number to
the number of first switches in said initial stage.
96. A network comprising:
an input stage comprising N1 or n1 * r1 inlet links and r1 input switches and
n1
inlet links for each of said r1 input switches, and N1 = n1 * r1, said n1
inlet links for
receiving multicast connections;
an output stage comprising N2 or n2 * r2 outlet links and r2 output switches
and
n2 outlet links for each of said r2 output switches, and N2 = n2 * r2, said n2
outlet links
for transmitting said received connections; and
a middle stage having m middle switches, and each middle switch has at least
one
link connected to each input switch for a total of at least r1 first internal
links and each
middle switch further comprising at least one link connected to at most d said
output
switches for a total of at least d second internal links, wherein 1 ≤ d ≤ r2,
said initial stage having multicast connections with a fan-out of at most x,
for
x ≥ 2.
97. The network of claim 96 further comprising:
said multicast connections having a fan-out of one or more in said middle
stage.
98. The network of claim 96 further comprising:
said multicast connections having a fan-out of one or more in said output
stage.
99. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches and n1w inlet links in input
switch w,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1, r2]
and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2,
wherein m ≥ (x1 - 1)*α1 + (x2 - 1)*α2 + ... + (xp - 1)*αp + n2, where
n1 = α1 + α2 + ... + αp and x1, x2, ..., xp ≥ 1;
wherein, for 1 ≤ i ≤ p, multicast connections from αi inlet links of each input
switch pass through at most xi middle switches.
100. The network of claim 99, where x1, x2, ..., xp ≥ 2,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".
101. The network of claim 99 comprising a controller in communication with
said
input, output and middle stages to set up said multicast connection.
102. The network of claim 99 wherein said r1 input switches and r2 output
switches
are the same number of switches.
103. The network of claim 99 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ x * n.
104. The rearrangeably nonblocking network of claim 100,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
105. The network of claim 99,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
106. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches and n1w inlet links in input
switch w ,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1, r2]
and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2,
wherein each multicast connection from an inlet link passes through at most
two
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most two middle switches.
107. The network of claim 106, wherein m ≥ n1 + n2.
108. The network of claim 107,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".
109. The network of claim 106 comprising a controller in communication with
said
input, output and middle stages to set up said multicast connection.
110. The network of claim 107 wherein said r1 input switches and r2 output
switches
are the same number of switches.
111. The network of claim 107 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ 2 * n.
112. The rearrangeably nonblocking network of claim 108,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
113. The network of claim 106,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
114. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches and n1w inlet links in input
switch w,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1, r2]
and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2,
wherein each multicast connection from an inlet link passes through at most
three
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most three middle switches.
115. The network of claim 114, wherein m ≥ 2 * n1 + n2.
116. The network of claim 115,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".

117. The network of claim 114 comprising a controller in communication with
said
input, output and middle stages to set up said multicast connection.
118. The network of claim 115 wherein said r1 input switches and r2 output
switches
are the same number of switches.
119. The network of claim 115 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ 3*n.
120. The rearrangeably nonblocking network of claim 116,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
121. The network of claim 114,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
122. A network having a plurality of multicast connections, said network comprising:
an input stage comprising r1 input switches and n1w inlet links in input
switch w ,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1, r2]
and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, for 2 ≤ x ≤ r2,
wherein each multicast connection from an inlet link passes through at most x
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most x middle switches.
123. The network of claim 122, wherein m ≥ (x - 1)*n1 + n2.
124. The network of claim 123,
further is always capable of setting up said multicast connection by changing
the
path, defined by passage of an existing multicast connection, thereby to
change one or
two middle switches used by said existing multicast connection, and said
network is
hereinafter "rearrangeably nonblocking network".
125. The network of claim 122 comprising a controller in communication with
said
input, output and middle stages to set up said multicast connection.
126. The network of claim 123 wherein said r1 input switches and r2 output
switches
are the same number of switches.
127. The network of claim 123 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m ≥ x * n.
128. The rearrangeably nonblocking network of claim 124,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more rearrangeably
nonblocking
networks.
129. The network of claim 122,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.
130. A network having a plurality of multicast connections, said network
comprising:
an input stage comprising r1 input switches, and n1w inlet links in input
switch w,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches, and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1,
r2] and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link (hereinafter "first internal link") connected to
each input
switch for a total of at least r1 first internal links, each middle switch
further comprising
at least one link (hereinafter "second internal link") connected to at most d
said output
switches for a total of at least d second internal links, wherein 1 ≤ d ≤ r2,
wherein each multicast connection from an inlet link passes through at most
two
middle switches, and said multicast connection further passes to a plurality
of outlet links
from said at most two middle switches.
131. The network of claim 130, wherein m ≥ n1 + n2 - 2*k
and k is an integer,
wherein at most k multicast connections cannot be set up (hereinafter
"blocked")
or at most k existing connections are disconnected to set up new multicast
connections.
132. The network of claim 130 further comprising a controller coupled to each
of said
input, output and middle stages to set up said multicast connection.
133. The network of claim 131 wherein said r1 input switches and r2 output
switches
are the same number of switches.
134. The network of claim 131 wherein said n1 inlet links and n2 outlet links
are the
same number of links and n1 = n2 = n, then m = 2 * n - 2 * k, for 1 ≤ k < n.
135. The network of claim 130,
wherein each of said input switches, or each of said output switches, or each
of
said middle switches further recursively comprise one or more networks.

136. A method for setting up one or more new multicast connections in a
network
having an input stage comprising r1 input switches and n1w inlet links in
input switch w,
for each of said r1 input switches such that w ∈ [1, r1] and n1 =
MAX(n1w);
an output stage comprising r2 output switches and n2v outlet links in output
switch v, for each of said r2 output switches such that v ∈ [1, r2]
and n2 = MAX(n2v);
and
a middle stage comprising m middle switches, and each middle switch
comprising at least one link connected to each input switch for a total of at
least r1 first
internal links; each middle switch further comprising at least one link
connected to at
most d said output switches for a total of at least d second internal links,
wherein
1 ≤ d ≤ r2, for x ≥ 2, said method comprising:
disconnecting one or more previously set up multicast connections through the
same input switch of said new multicast connection; and
setting up said new multicast connection, through at most x middle switches,
first
and then setting up said one or more previously set up connections, through at
most x
middle switches, in all possible combinations of sequential order.
137. The method of claim 136 further comprising:
when any one of said setting up acts fail, disconnecting said new multicast
connection if it succeeded to get set up, and setting up one or more said
previously set up
connections if they failed to get set up.
138. The method of claim 136 further comprising:
repeating said acts of disconnecting a previously set up connection and
setting up
after said new multicast connection for all the other one or more groups of
previously set
up connections in the same input switch.
139. The method of claim 136 further comprising:
by setting up said new connection through x middle switches having available
first internal links so that only one of said x middle switches use second
internal links
which are already in use by one or more existing multicast connections from
other input
switches (hereinafter called "incompatible existing connections"); and
-70-

disconnecting said incompatible existing connections.
140. The method of claim 136 further comprising:
for all said incompatible existing connections recursively repeating said acts
of
disconnecting and setting up connections in their respective first switches,
until all said
incompatible existing connections are set up.
141. The method of claim 136 wherein any of said acts of checking and setting
up are
performed recursively.

Description

Note: Descriptions are shown in the official language in which they were submitted.


REARRANGEABLY NONBLOCKING MULTICAST
MULTI-STAGE NETWORKS
Venkat Konda
CROSS REFERENCE TO CD-ROM APPENDIX
Appendix A includes software written in the C programming language for a
prototype of a scheduling method and a rearrangement method to set up
connections
through a three-stage network. The C code is compilable by Visual C++
compiler,
version 6.0 available from Microsoft Corporation, to form an executable file
for use in an
IBM compatible personal computer. Appendix A also includes documentation in a
readme file for the C code and also instructions on how to compile and execute
the C
code.
Cddir
Volume in drive D is 010925_1558
Volume Serial Number is FFC7-6B58

Directory of D:\
09/25/01  03:58p  <DIR>          .
09/25/01  03:58p  <DIR>          ..
09/25/01  03:58p  <DIR>          M-12151
        3 File(s)              0 bytes

Directory of D:\M-12151
09/25/01  03:58p  <DIR>          .
09/25/01  03:58p  <DIR>          ..
09/21/01  12:22a          30,436 OUT1.RTF
09/21/01  11:36a           1,726 README.TXT
09/21/01  11:34a          30,285 RNB.C
        5 File(s)         62,447 bytes

Total Files Listed:
        8 File(s)         62,447 bytes
                               0 bytes free
A portion of the disclosure of this patent document contains material which is
subject to copyright protection. The copyright owner has no objection to the
facsimile
reproduction by anyone of the patent document or patent disclosure, as it
appears in the
U.S. Patent and Trademark Office patent files or records, but otherwise
reserves all
copyright rights whatsoever.
CROSS REFERENCE TO RELATED APPLICATIONS
This application is a Continuation In Part PCT Application of and incorporates
by
reference in its entirety the related U.S. Patent Application Serial No.
09/967,815 entitled
"REARRANGEABLY NON-BLOCKING MULTICAST MULTI-STAGE
NETWORKS" by Venkat Konda assigned to the same assignee as the current
application,
and filed on 27 September 2001. This application is related to and
incorporates by
reference in its entirety the related U.S. Patent Application Serial No.
09/967,106 entitled
"STRICTLY NON-BLOCKING MULTICAST MULTI-STAGE NETWORKS" by
Venkat Konda assigned to the same assignee as the current application, filed
on 27
September 2001, and its Continuation In Part PCT Application Docket No. M-0002
filed
concurrently.
This application is related to and incorporates by reference in its
entirety the
related U.S. Provisional Patent Application Docket No. M-0003 entitled
"STRICTLY
NON-BLOCKING MULTICAST LINEAR-TIME MULTI-STAGE NETWORKS" and
U.S. Provisional Patent Application Docket No. M-000 entitled "STRICTLY NON-
BLOCKING MULTICAST MULTI-SPLIT LINEAR-TIME MULTI-STAGE
NETWORKS" by Venkat Konda assigned to the same assignee as the current
application
and filed concurrently.
BACKGROUND OF INVENTION
As is well known in the art, a Clos switching network is a network of switches
configured as a multi-stage network so that fewer switching points are
necessary to
implement connections between its inlet links (also called "inputs") and
outlet links (also
called "outputs") than would be required by a single stage (e.g. crossbar)
switch having
the same number of inputs and outputs. Clos networks are very popularly used
in digital
crossconnects, switch fabrics and parallel computer systems. However Clos
networks
may block some of the connection requests.

There are generally three types of nonblocking networks: strictly nonblocking;
wide sense nonblocking; and rearrangeably nonblocking (See V.E. Benes,
"Mathematical
Theory of Connecting Networks and Telephone Traffic" Academic Press, 1965 that
is
incorporated by reference, as background). In a rearrangeably nonblocking
network, a
connection path is guaranteed as a result of the network's ability to
rearrange prior
connections as new incoming calls are received. In a strictly nonblocking
network, for any
connection request from an inlet link to some set of outlet links, it is
always possible to
provide a connection path through the network to satisfy the request without
disturbing
other existing connections, and if more than one such path is available, any
path can be
selected without being concerned about realization of future potential
connection
requests. In wide-sense nonblocking networks, it is also always possible to
provide a
connection path through the network to satisfy the request without disturbing
other
existing connections, but in this case the path used to satisfy the connection
request must
be carefully selected so as to maintain the nonblocking connecting capability
for future
potential connection requests.
U.S. Patent 5,451,936 entitled "Non-blocking Broadcast Network" granted to
Yang et al. is incorporated by reference herein as background of the
invention. This
patent describes a number of well known nonblocking multi-stage switching
network
designs in the background section at column 1, line 22 to column 3, line 59.
An article by Y. Yang and G.M. Masson entitled, "Non-blocking Broadcast
Switching Networks" IEEE Transactions on Computers, Vol. 40, No. 9, September
1991
that is incorporated by reference as background indicates that if the number
of switches in
the middle stage, m, of a three-stage network satisfies the relation
m ≥ min((n - 1)(x + r^(1/x))) where 1 ≤ x ≤ min(n - 1, r), the resulting
network is
nonblocking for multicast assignments. In the relation, r is the number of
switches in the
input stage, and n is the number of inlet links in each input switch. Kim and
Du (See D. S.
Kim, and D. Du, "Performance of Split Routing Algorithm for three-stage
multicast
networks", IEEE/ACM Transactions on Networking, Vol. 8, No. 4, August 2000
incorporated herein by reference) studied the blocking probability for
multicast
connections for different scheduling algorithms.
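To give a sense of scale, the Yang and Masson bound quoted above can be evaluated numerically. The following C routine is a sketch of that published formula only (it is not code from the cited article or from Appendix A) and compares the result with the rearrangeable bound m ≥ 2n described below.

    #include <math.h>
    #include <stdio.h>

    /* Yang-Masson sufficient middle-stage width for strictly nonblocking
     * multicast: minimize (n - 1) * (x + r^(1/x)) over integer x in
     * [1, min(n - 1, r)], then round up. */
    static int yang_masson_m(int n, int r)
    {
        int xmax = (n - 1 < r) ? n - 1 : r;
        double best = INFINITY;
        for (int x = 1; x <= xmax; x++) {
            double v = (n - 1) * (x + pow(r, 1.0 / x));
            if (v < best)
                best = v;
        }
        return (int)ceil(best);
    }

    int main(void)
    {
        /* n = 3 inlet links per input switch, r = 9 input switches:
         * the V(6,3,9) geometry of FIGs. 3B-3D. */
        printf("strictly nonblocking (Yang-Masson): m >= %d\n",
               yang_masson_m(3, 9));   /* prints 10 */
        printf("rearrangeably nonblocking (this invention): m >= %d\n",
               2 * 3);                 /* prints 6 */
        return 0;
    }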

SUMMARY OF INVENTION
A three-stage network is operated in rearrangeably nonblocking manner, in
accordance with the invention, when the number of switches in the middle stage
is greater
than or equal to the sum of the number of inlet links of each switch in the
input stage and
the number of outlet links of each switch in the output stage. In one
embodiment, each
connection (unicast, multicast, or broadcast) is set up through such a three-
stage network
by use of at most two switches in the middle stage. When the number of inlet
links in
each input switch is equal to the number of outlet links in each output
switch, a three-
stage network is operated in rearrangeably nonblocking manner in accordance
with the
invention, if the number of middle switches is greater than or equal to twice
the number
of inlet links in each input switch. Also in accordance with the invention, a
three-stage
network having more middle switches than the sum of the number of inlet links
of each
input switch and the number of outlet links of each output switch is operated
in
rearrangeably nonblocking manner even if some connections are set up using
more than
two middle switches as long as each connection has available links into at
least two
middle switches.
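As a concrete instance of these conditions (using the V(6,3,9) geometry that appears in the figures, not a new result): with n1 = n2 = 3 inlet and outlet links per switch and r1 = r2 = 9 input and output switches, m = 3 + 3 = 6 middle switches suffice for rearrangeably nonblocking multicast operation, each connection fanning out into at most two of them.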
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1A is a diagram of an exemplary three-stage symmetrical network with
exemplary multicast connections in accordance with the invention; and FIG. 1B
is a high-
level flowchart of a rearrangeable scheduling method according to the
invention, used to
set up the multicast connections in the network 100 of FIG. 1A.
FIG. 2A is a diagram of a general symmetrical three-stage rearrangeably
nonblocking network with n inlet links in each of r input stage switches, n
outlet links
in each of r output stage switches, and m = 2 * n middle stage switches that
are used
with the method of FIG. 1B in one embodiment; and FIG. 2B is a diagram of a
general
non-symmetrical three-stage rearrangeably nonblocking network with n1 inlet
links in
each of r1 input stage switches, n2 outlet links in each of r2 output stage
switches, and
m = n1 + n2 middle stage switches that are used with the method of FIG. 1B in
one
embodiment;

FIG. 3A is an intermediate level flowchart of one implementation of the method
140
of FIG. 1B; FIG. 3B shows an exemplary V(6,3,9) network with certain existing
multicast connections; FIG. 3C shows the network of FIG. 3B after a new
connection is
set up by selecting one middle switch in the network, using the method of FIG.
3A in one
implementation; and FIG. 3D shows the network of FIG. 3C after another new
connection is set up by selecting two middle switches in the network, using
the method of
FIG. 3A in one implementation.
FIG. 4A is another intermediate level flowchart of one implementation of the
act
142 of FIG. 3A. FIG. 4B is a low-level flowchart of one variant of act 142 of
the method of
FIG. 4A; and FIG. 4C illustrates, in a flowchart, pseudo code for one example
of
scheduling method of FIG. 4B. FIG. 4D illustrates, in one embodiment, the data
structures used to store and retrieve data from memory of a controller that
implements the
method of FIG. 4C.
FIG. 5A is an intermediate level flowchart of one implementation of the method 140 of FIG. 1B; FIG. 5B is a first intermediate level flowchart of one embodiment of the rearrangement act 150 of the method of FIG. 5A; FIG. 5C shows an exemplary V(6,3,9) network with certain existing multicast connections; and FIG. 5D shows the network of FIG. 5C after a new multicast connection is set up by rearranging an existing connection in the network, using the method 140 of FIG. 5A. FIG. 5E shows an example of existing multicast connections in a network wherein a new multicast connection is to be set up. FIG. 5F is the network of FIG. 5E after the new connection has been set up and some existing connections have been disconnected which will be set up later.
FIG. 6A is a second intermediate level flowchart of one implementation of the method 150 of FIG. 5B; FIG. 6B is a low-level flowchart of one variant of act 168 of the method of FIG. 6A; and FIG. 6C is a low-level flowchart of one variant of act 170 of the method of FIG. 6A.
FIG. 7A illustrates, in a flowchart, pseudo code for one example of act 160 of the rearrangement method 150 of FIG. 6A; and FIG. 7B illustrates, in a flowchart, pseudo code for one example of act 170 of the rearrangement method 150 of FIG. 6A.
FIG. 8A is a diagram of an exemplary three-stage network where the middle stage switches are each three-stage networks; FIG. 8B is a high-level flowchart, in one embodiment, of a recursively rearrangeable scheduling method in a recursively large multi-stage network such as the network in FIG. 8A.
FIG. 9A is a diagram of a general symmetrical three-stage network with n inlet links in each of r input stage switches and m = 3 * n middle stage switches; and FIG. 9B is a high-level flowchart, in one embodiment, of a rearrangeable scheduling method used to set up multicast connections in the network of FIG. 9A, according to the invention.
FIG. 10A is a diagram of a general symmetrical three-stage network with n inlet links in each of r input stage switches and m = x * n middle stage switches for x >= 2; and FIG. 10B is a high-level flowchart, in one embodiment, of a rearrangeable scheduling method used to set up multicast connections in the network of FIG. 10A, according to the invention.
FIG. 11A is a diagram of an exemplary V(6,3,4) three-stage network, with m = 2 * n middle stage switches implemented in space-space-space configuration, with certain existing multicast connections set up using the method 140 of FIG. 5A; FIG. 11B is the first time step of the TST implementation of the network in FIG. 11A; FIG. 11C is the second time step of the TST implementation of the network in FIG. 11A; and FIG. 11D is the third time step of the TST implementation of the network in FIG. 11A.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is concerned with the design and operation of multi-stage switching networks for broadcast, unicast and multicast connections. When a transmitting device simultaneously sends information to more than one receiving device, the one-to-many connection required between the transmitting device and the receiving devices is called a multicast connection. A set of multicast connections is referred to as a multicast assignment. When a transmitting device sends information to one receiving device, the one-to-one connection required between the transmitting device and the receiving device is called a unicast connection. When a transmitting device simultaneously sends
information to all the available receiving devices, the one-to-all connection required between the transmitting device and the receiving devices is called a broadcast connection.
In general, a multicast connection is meant to be a one-to-many connection, which includes unicast and broadcast connections. A multicast assignment in a switching network is nonblocking if any of the available inlet links can always be connected to any of the available outlet links. In certain multi-stage networks of the type described herein, any connection request of arbitrary fan-out, i.e. from an inlet link to an outlet link or to a set of outlet links of the network, can be satisfied without blocking if necessary by rearranging some of the previous connection requests. Depending on the number of switches in a middle stage of such a network, such connection requests may be satisfied even without rearranging as described in detail in U.S. Patent Application Serial No. 09/967,106 that is incorporated by reference above.
Referring to FIG. 1A, an exemplary symmetrical three-stage Clos network of ten switches for satisfying communication requests, such as setting up a telephone call or a data connection, between an input stage 110 and output stage 120 via a middle stage 130 is shown where input stage 110 consists of three, two by four switches IS1-IS3 and output stage 120 consists of three, four by two switches OS1-OS3, and middle stage 130 consists of four, three by three switches MS1-MS4. Such a network can be operated in rearrangeably non-blocking manner, because the number of switches in the middle stage 130 (i.e. four switches) is equal to the sum of the number of links (i.e. two inlet links) of each of the switches in the input stage 110 and output stage 120. The specific method used in implementing the rearrangeable non-blocking connectivity can be any of a number of different methods that will be apparent to a skilled person in view of the disclosure. One such method is described below in reference to FIG. 1B.
In one embodiment of this network each of the input switches IS1-IS3 and output switches OS1-OS3 are crossbar switches. When the number of stages of the network is one, the switching network is called a single-stage switching network, crossbar switching network or more simply crossbar switch. An (N * M) crossbar switching network with N inlet links and M outlet links is composed of N * M cross points. As the values of N and M get larger, the cost of making such a crossbar switching network becomes prohibitively expensive.

In another embodiment of the network in FIG. 1A each of the input switches IS1-IS3 and output switches OS1-OS3 are shared memory switches.
The number of switches of input stage 110 and of output stage 120 can be denoted in general with the variable r for each stage. The number of middle switches is denoted by m. The size of each input switch IS1-IS3 can be denoted in general with the notation n * m and of each output switch OS1-OS3 can be denoted in general with the notation m * n. Likewise, the size of each middle switch MS1-MS4 can be denoted as r * r. A switch as used herein can be either a crossbar switch, or a network of switches each of which in turn may be a crossbar switch or a network of switches. A three-stage network can be represented with the notation V(m, n, r), where n represents the number of inlet links to each input switch (for example the links IL1, IL2 for the input switch IS1) and m represents the number of middle switches MS1-MS4. Although it is not necessary that there be the same number of inlet links IL1-IL6 as there are outlet links OL1-OL6, in a symmetrical network they are the same. Each of the m middle switches MS1-MS4 is connected to each of the r input switches through r links (hereinafter "first internal" links, for example the links FL1-FL3 connected to the middle switch MS1 from each of the input switches IS1-IS3), and connected to each of the output switches through r links (hereinafter "second internal" links, for example the links SL1-SL3 connected from the middle switch MS1 to each of the output switches OS1-OS3).
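As a minimal illustrative sketch (an assumption for exposition; the class and field names are not from the patent), the V(m, n, r) notation can be captured as a small record:

    from dataclasses import dataclass

    @dataclass
    class VNetwork:
        # V(m, n, r): m middle switches of size r * r, r input switches
        # of size n * m, and r output switches of size m * n.
        m: int
        n: int
        r: int

    # The network of FIG. 1A is V(4, 2, 3).
    fig_1a = VNetwork(m=4, n=2, r=3)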
Each of the first internal links FL1-FL12 and second internal links SL1-SL12 are either available for use by a new connection or not available if currently used by an existing connection. The input switches IS1-IS3 are also referred to as the network input ports. The input stage 110 is often referred to as the first stage. The output switches OS1-OS3 are also referred to as the network output ports. The output stage 120 is often referred to as the last stage. In a three-stage network, the second stage 130 is referred to as the middle stage. The middle stage switches MS1-MS4 are referred to as middle switches or middle ports.
In one embodiment, the network also includes a controller coupled with each of the input stage 110, output stage 120 and middle stage 130 to form connections between an inlet link IL1-IL6 and an arbitrary number of outlet links OL1-OL6. In this embodiment the controller maintains in memory a pair of lists of available destinations
for the connection through a pair of middle switches (e.g. MS1 and MS2 in FIG. 1A) to implement a fan-out of two. In a similar manner a set of n lists are maintained in an embodiment of the controller that uses a fan-out of n.
FIG. 1B shows a high-level flowchart of a scheduling method 140, in one embodiment executed by the controller of FIG. 1A. According to this embodiment, a connection request is received in act 141. Then a connection to satisfy the request is set up in act 148 by fanning out into at most two switches in middle stage 130 from its input switch.
In the example illustrated in FIG. 1A, a fan-out of four is possible to satisfy a multicast connection request if the input switch is IS2, but only two middle stage switches will be used in accordance with this method. Similarly, although a fan-out of three is possible for a multicast connection request if the input switch is IS3, again only a fan-out of two is used. The specific middle switches that are chosen when selecting a fan-out of two is irrelevant to the method of FIG. 1B so long as at most two middle switches are selected to ensure that the connection request is satisfied, i.e. the destination switches identified by the connection request can be reached from the middle switches that are part of the selected fan-out. If a fan-out of two is not available, existing connections may be rearranged to set up the connection through at most two middle switches. In essence, limiting the fan-out from the input switch to no more than two middle switches permits the network 100 to be operated in rearrangeably nonblocking manner in accordance with the invention.
After act 148, control is returned to act 141 so that acts 141 and 148 are executed in a loop for each connection request. According to one embodiment as shown further below it is not necessary to have more than 2 * n middle stage switches in the network 100 of FIG. 1A, where the number of inlet links IL1-IL2 equals the number of outlet links OL1-OL2, both represented by the variable n, for the network to be a rearrangeably nonblocking symmetrical switching network, when the scheduling method of FIG. 1B is used.
The connection request of the type described above in reference to method 140 of FIG. 1B can be a unicast connection request, a multicast connection request or a broadcast connection request, depending on the example. In case of a unicast connection request, a
fan-out of one is used, i.e. a single middle stage switch is used to satisfy the request. Moreover, although in the above-described embodiment a limit of two has been placed on the fan-out into the middle stage switches, the limit can be greater depending on the number of middle stage switches in a network, as discussed below in reference to FIG. 2A (while maintaining the rearrangeably nonblocking nature of operation of the network). Moreover, in method 140 described above in reference to FIG. 1B any arbitrary fan-out may be used between each middle stage switch and the output stage switches, and also any arbitrary fan-out may be used within each output stage switch, to satisfy the connection request. Moreover, although method 140 of FIG. 1B has been illustrated with examples in a ten switch network 100 of FIG. 1A, the method 140 can be used with any general network, of the type illustrated in FIGS. 2A and 2B.
Network of FIG. 1A is an example of the general symmetrical three-stage network shown in FIG. 2A. The general symmetrical three-stage network can be operated in rearrangeably nonblocking manner if m >= 2 * n (and in the example of FIG. 2A, m = 2 * n), wherein the network has n inlet links for each of r input switches IS1-ISr (for example the links IL11-IL1n to the input switch IS1) and n outlet links for each of r output switches OS1-OSr (for example OL11-OL1n to the output switch OS1). Each of the m switches MS1-MSm is connected to each of the input switches through r first internal links (for example the links FL11-FLr1 connected to the middle switch MS1 from each of the input switches IS1-ISr), and connected to each of the output switches through r second internal links (for example the links SL11-SLr1 connected from the middle switch MS1 to each of the output switches OS1-OSr). In such a general symmetrical network no more than 2 * n middle stage switches MS1-MS2n are necessary for the network to be rearrangeably nonblocking, when using a scheduling method of the type illustrated in FIG. 1B. Although FIG. 2A shows an equal number of first internal links and second internal links, as is the case for a symmetrical three-stage network, the present invention, however, applies even to non-symmetrical networks of the type illustrated in FIG. 2B (described next).
In general, an (N1 * N2) asymmetric network of three stages can be operated in rearrangeably nonblocking manner if m >= n1 + n2 (and in the example of FIG. 2B m = n1 + n2), wherein the network (FIG. 2B) has r1 (n1 * m) switches IS1-ISr1 in the first
stage, m (r1 * r2) switches MS1-MSm in the middle stage, and r2 (m * n2) switches OS1-OSr2 in the last stage, where N1 = n1 * r1 is the total number of inlet links and N2 = n2 * r2 is the total number of outlet links of the network. Each of the m switches MS1-MS(n1+n2) is connected to each of the input switches through r1 first internal links (for example the links FL11-FLr11 connected to the middle switch MS1 from each of the input switches IS1-ISr1), and connected to each of the output switches through r2 second internal links (for example the links SL11-SLr21 connected from the middle switch MS1 to each of the output switches OS1-OSr2). Such a multi-stage switching network is denoted as a V(m, n1, r1, n2, r2) network. For the special symmetrical case where n1 = n2 = n and r1 = r2 = r, the three-stage network is denoted as a V(m, n, r) network. In general, the set of inlet links is denoted as {1, 2, ..., n1 * r1} and the set of output switches is denoted as O = {1, 2, ..., r2}. In an asymmetrical three-stage network, as shown in FIG. 2B with n1 inlet links for each of r1 input switches and n2 outlet links for each of r2 output switches, no more than n1 + n2 middle stage switches are necessary for the network to be rearrangeably nonblocking, again when using the scheduling method of FIG. 1B. The network has all connections set up such that each connection passes through at most two middle switches to be connected to all destination outlet links.
Every switch in the multi-stage networks discussed herein has multicast capability. In a V(m, n1, r1, n2, r2) network, if a network inlet link is to be connected to more than one outlet link on the same output switch, then it is only necessary for the corresponding input switch to have one path to that output switch. This follows because that path can be multicast within the output switch to as many outlet links as necessary. Multicast assignments can therefore be described in terms of connections between input switches and output switches. An existing connection or a new connection from an input switch to r' output switches is said to have fan-out r'. If all multicast assignments of a first type, wherein any inlet link of an input switch is to be connected in an output switch to at most one outlet link, are realizable, then multicast assignments of a second type, wherein any inlet link of each input switch is to be connected to more than one outlet link in the same output switch, can also be realized. For this reason, the following discussion is limited to general multicast connections of the first type (with fan-out r', 1 <= r' <= r2) although the same discussion is applicable to the second type.
To characterize a multicast assignment, for each inlet link i ∈ {1, 2, ..., n1 * r1}, let Ii = O, where O ⊆ {1, 2, ..., r2}, denote the subset of output switches to which inlet link i is to be connected in the multicast assignment. For example, the network of FIG. 1A shows an exemplary three-stage network, namely V(4,2,3), with the following multicast assignment: I1 = {1,2}, I2 = {1,3}, I6 = {2,3} and all other Ij = ∅ for j = 1, ..., 6. It should be noted that the connection I1 fans out in the first stage switch IS1 into the middle stage switches MS1 and MS2, in middle switch MS1 fans out into output switch OS1, and in middle switch MS2 fans out into output switch OS2. And in both output switches OS1 and OS2 it fans out once into the outlet links OL1 and OL4 respectively. Connection I2 fans out in input switch IS1 once into middle switch MS4, where it fans out in middle switch MS4 twice into output switches OS1 and OS3, and in output switches OS1 and OS3 it fans out once into the outlet links OL2 and OL5 respectively. Finally the connection I6 fans out in the input switch IS3 twice into middle switches MS1 and MS3, and from middle switch MS1 it fans out once into output switch OS2, from middle switch MS3 once into the output switch OS3, and in output switches OS2 and OS3 fans out once into the outlet links OL3 and OL6 respectively. In accordance with the invention, each connection can fan out in the first stage switch into at most two middle stage switches, and in the middle switches and last stage switches it can fan out any arbitrary number of times as required by the connection request.
Two multicast connection requests Ii = Oi and Ij = Oj for i != j are said to be compatible, if and only if Oi ∩ Oj = ∅. It means when the requests Ii and Ij are compatible, and if the inlet links i and j do not belong to the same input switch, they can be set up through the same middle switch.
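As a minimal illustrative sketch (an assumption, not the patent's own code), the compatibility test reduces to a set-intersection check:

    def compatible(Oi: set, Oj: set) -> bool:
        # Requests Ii = Oi and Ij = Oj are compatible when their destination
        # output switches are disjoint; if they also originate in different
        # input switches, they can share one middle switch.
        return not (Oi & Oj)

    assert compatible({1, 2}, {3, 4})        # disjoint destinations
    assert not compatible({1, 2}, {2, 3})    # both need output switch 2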
FIG. 3A is an intermediate level flowchart of one implementation of the method of FIG. 1B. In the following, "destination switch" or "destination" refers to any switch in the output stage 120 that is identified in a connection request. According to this implementation, a connection request is received in act 141. Then the method 140 checks in act 142 if the connection can be set up through only one middle switch, and if act 142A finds a middle switch which has second internal links to all the destinations available then the connection is set up in act 142C and the control returns to act 141. If act 142A results in "no", the control goes to act 142B where the method 140 checks if the connection can be set up through only two middle switches. If act 142B results in "yes", act 142C sets up the connection through the two middle switches. If act 142B results in "no", the control goes to act 150, which is the rearrangement method, illustrated later. Also it must be noted that act 148 consists of the scheduling act 142 and the rearrangement act 150. Therefore no more than two middle switches are used when attempting to satisfy the connection request. When the connection is set up in act 142C, control returns to act 141 so that acts 141 and 142 are executed in a loop, for each connection request.
After the performance of steps 141, 142 of method 140, if act 142B results in "no" it means there are no available middle switches, at most two in number, that can be used to set up the multicast connection without the rearrangement of the existing connections. Thereafter, in one implementation of the invention, existing connections are rearranged to set up the multicast connection. Any method well known in the art can be used to rearrange the existing connections to set up the multicast connection. One specific method of rearrangement is described below in reference to FIG. 5A.
TABLE 1
A Multicast Assignment in a V(6,3,9) Network

Requests for r = 1    Requests for r = 2    Requests for r = 3
I1 = {1,2,3}          I4 = {1,4,7}          I7 = {1,5,9}
I2 = {4,5,6}          I5 = {2,5,8}          I8 = {2,6,7}
I3 = {7,8,9}          I6 = {3,6,9}          I9 = {3,4,8}
Table 1 above shows a multicast assignment in a V(6,3,9) network. This network has a total of twenty-seven inlet links and twenty-seven outlet links. The multicast assignment in Table 1 shows nine multicast connections, three each from the first three input switches. Each of the nine connections has a fan-out of three. For example, the connection request I1 has as destinations the output switches OS1, OS2, and OS3 (referred to as 1, 2, 3 in Table 1). Request I1 only shows the output switches and does not show which outlet links are the destinations. However it can be observed that each output switch is used only three times in the multicast assignment of Table 1, using all the three outlet links in each output switch. For example, output switch 1 is used in requests I1, I4, I7, so that all three outlet links of output switch 1 are in use, and a specific identification of each outlet link is irrelevant. And so when all the nine connections are set up all the twenty-seven outlet links will be in use.
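The observation that each output switch appears exactly three times in Table 1 can be verified with a short sketch (illustrative only; the dictionary encoding of the assignment is an assumption):

    from collections import Counter

    # The multicast assignment of Table 1: request number -> destination
    # output switches (I1-I3 from IS1, I4-I6 from IS2, I7-I9 from IS3).
    table_1 = {
        1: {1, 2, 3}, 2: {4, 5, 6}, 3: {7, 8, 9},
        4: {1, 4, 7}, 5: {2, 5, 8}, 6: {3, 6, 9},
        7: {1, 5, 9}, 8: {2, 6, 7}, 9: {3, 4, 8},
    }

    # Each output switch is requested exactly three times, so all 27 outlet
    # links (three per output switch) are in use once all nine are set up.
    uses = Counter(os for dests in table_1.values() for os in dests)
    assert all(uses[os] == 3 for os in range(1, 10))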
FIG. 3B shows an initial state of the V(6,3,9) network in which the connections I1 - I5 of Table 1 are previously set up. The existing connections I1, I2, I3, I4, and I5 pass through the middle switches MS1, MS2, MS3, MS4, and MS5 respectively. Each of these connections is fanning out only once in the first switch and fanning out three times in each middle switch. Connection I1 from input switch IS1 fans out into middle switch MS1, and from middle switch MS1 into output switches OS1, OS2, and OS3. Connection I2 from input switch IS1 fans out into middle switch MS2, and from middle switch MS2 into output switches OS4, OS5, and OS6. Connection I3 from input switch IS1 fans out into middle switch MS3, and from middle switch MS3 into output switches OS7, OS8, and OS9. Connection I4 from input switch IS2 fans out into middle switch MS4, and from middle switch MS4 into output switches OS1, OS4, and OS7. Connection I5 from input switch IS2 fans out into middle switch MS5, and from middle switch MS5 into output switches OS2, OS5, and OS8.
Method 140 of FIG. 3A next sets up a connection I6 from input switch IS2 to output switches OS3, OS6 and OS9 as follows. FIG. 3C shows the state of the network of FIG. 3B after the connection I6 of Table 1 is set up. In act 142A the scheduling method of FIG. 3A finds that only the middle switch MS6 is available to set up the connection I6 (because all other middle switches MS1-MS5 have unavailable second internal links to at least one destination switch), and sets up the connection in act 142C through switch MS6. Therefore, connection I6 from input switch IS2 fans out only once into middle switch MS6, and from middle switch MS6 three times into output switches OS3, OS6, and OS9 to be connected to all the destinations.
Method 140 next sets up a connection I7 from input switch IS3 to output switches OS1, OS5 and OS9 as follows. FIG. 3D shows the state of the network of FIG. 3C after the connection I7 of Table 1 is set up. The scheduling method of FIG. 3A could not find a single middle switch that has links to all required destinations available to set up the connection. However in act 142B, it finds two middle switches MS1 and MS2 that together have links to all required destinations available for the connection, and accordingly the connection I7 is set up in act 142C. And so connection I7 fans out twice in the first switch IS3 into the middle switches MS1 and MS2. Also in the middle switch MS1 it fans out twice into output switches OS5 and OS9, and in the middle switch MS2 it fans out once into output switch OS1 to be connected to all the required destinations.
Act 142 of FIG. 3A is implemented in one embodiment by acts 242A-242D illustrated in FIG. 4A. Specifically, in this embodiment, act 142A is implemented by acts 242A, 242C, and 242D wherein a loop is formed to check if a middle switch has an available link to the input switch, and also has available links to all the required destination switches. In this implementation, the same loop is also used with an additional act 242B to implement act 142B of FIG. 3A. Use of the same loop as illustrated in FIG. 4A provides efficiency by eliminating repetition of the same acts, namely acts 242A, 242C, and 242D that would otherwise have been repeated if act 142B is performed independent of act 142A (FIG. 3A). In act 242B, the method of FIG. 4A checks if another middle switch has available links to destinations that could not be reached by use of the middle switch in act 242A (described above). As illustrated in FIG. 4B, act 242B is reached when the decision in act 242A is "no". In one specific example, acts 242A-242B
of FIG. 4C are implemented by use of the information developed in act 242A,
for an
efficient implementation as discussed next.
FIG. 4B is a low-level flowchart of one variant of act 142 of FIG. 4A. The control to act 142 comes from act 141 after a connection request is received. In act 142A1, an index variable i is set to a first middle switch 1 among the group of middle switches that form stage 130 (FIG. 2B) to initialize an outer loop (formed of acts 142A2, 142A3, 242B, 242C and 242D) of a doubly nested loop. Act 142A2 checks if the input switch of the connection has an available link to the middle switch i. If not, control goes to act 242C. Else if there is an available link to middle switch i, the control goes to act 142A3. Act 142A3 checks if middle switch i has available links to all the destination switches of the multicast connection request. If so the control goes to act 142C1 and the connection is set up through middle switch i. And all the used links from middle switch i to destination output switches are marked as unavailable for future requests. Also the method returns "SUCCESS". Act 242C checks if middle switch i is the last middle switch, and if so the control goes to act 150 where the rearrangement of previous connections to set up the connection will be performed. If not, the control goes to act 242D from act 242C where i is set to the next middle switch. And the outer loop's next iteration starts.
If act 142A3 results in "no" the control goes to act 142B. In act 142B1, another index variable j is set to middle switch 1 to initialize an inner loop (formed of acts 142B2, 142B3, 142B4 and 142B5) of the doubly nested loop. Then the control goes to act 142B2, where the method 140 checks if middle switch j is equal to middle switch i. If middle switch j is equal to middle switch i, the control goes to act 142B4. Else if middle switch j is not equal to middle switch i, the control goes to act 142B3 where the method 140 checks if all the destinations that have unavailable links from middle switch i have available links from middle switch j. If act 142B3 results in "yes", the connection is set up through middle switch i and middle switch j, in act 142C2. Also all the links used in act 142C2 from middle switch i and middle switch j to destination output switches for setting up the connection are marked as unavailable for future requests and the method returns "SUCCESS". If act 142B3 results in "no", the control goes to act 142B4. In act 142B4, the method 140 checks if middle switch j is the last middle switch, and if so the control goes to act 142A4, if not the control goes to act 142B5 where middle switch j is set to the next middle switch. From act 142B5 the control transfers to act 142B2. And
thus acts 142B2, 142B3, 142B4 and 142B5 form the inner loop stepping through all the middle switches until two middle switches are found to set up the connection. If at most two middle switches are not found through which the connection can be set up, as illustrated already the control goes from act 242C to act 150 where the rearrangement of previous connections will be tried to set up the connection.
FIG. 4C illustrates, in a flowchart, a computer implementation of one example of the scheduling method of FIG. 4B. The flowchart of FIG. 4C is similar to the flowchart of FIG. 4B except for three differences. In FIG. 4C the check for setting up the connection through one middle switch also efficiently implements half of the check for setting up the connection through two middle switches. The second difference is the loop control code. In the flowchart of FIG. 4B the loop exit test is performed at the end of the inner and outer loops whereas in the flowchart of FIG. 4C the loop exit test is performed at the beginning of the inner and outer loops. The third difference is that in the flowchart of FIG. 4B, when the connection cannot be set up in the method 142 without rearranging previous connections, the control directly goes to the rearrangement method 150, whereas in the flowchart of FIG. 4C the control returns "FAIL" and the rearrangement method needs to be called separately to set up the connection.
And the following method illustrates the pseudo code for one implementation of the scheduling method of FIG. 4C to set up a new multicast connection request through the network of FIG. 2B, when there are at least n1 + n2 middle switches in the network as discussed above.
Pseudo code of the scheduling method:
Step 1: c = current connection request;
Step 2: for i = mid switch 1 to mid switch m do {
Step 3:   if (c has no available link to i) continue;
Step 4:   Oi = Set of all destination switches of c having available links from i;
Step 5:   Ok = Set of all destination switches of c having no available links from i;
Step 6:   if (Oi = All the required destination switches of c) {
            Set up c through i;
            Mark all the used paths to and from i as unavailable;
            return ("SUCCESS");
          }
Step 7:   for j = mid switch 1 to mid switch m do {
Step 8:     if (i = j) {
              continue;
Step 9:     } else {
              Oj = Set of all destination switches of c having available links from j;
Step 10:      if (Ok ⊆ Oj) {
                Set up c through i and j;
                Mark all the used paths to and from i and j as unavailable;
                return ("SUCCESS");
              }
            }
          }
        }
Step 11: return ("FAIL");
Step 1 above labels the current connection request as "c". Step 2 starts an outer loop of a doubly nested loop and steps through all the middle switches. If the input switch of c has no available link to the middle switch i, the next middle switch is selected to be i in Step 3. Steps 4 and 5 determine the set of destination switches of c having and not having available links from middle switch i, respectively. In Step 6 if middle switch i has available links to all the destination switches of connection request c, connection request c is set up through middle switch i. And all the used links of middle switch i to output switches are marked as unavailable for future requests. Also the method returns "SUCCESS". Step 7 starts the inner loop to step through all the middle switches to search for the second middle switch, and if middle switch i is the same as the middle switch j, Step 8 continues to select the next middle switch to be j. Step 9 determines the set of all destination switches having available links from middle switch j. And in Step 10, if all the links that are unavailable from middle switch i are available from middle switch j, connection request c is set up through middle switch i and middle switch j. All the used links from middle switch i and middle switch j to output switches are marked as unavailable and the method returns "SUCCESS". These steps are repeated for all the pairs of middle switches. In Step 11, if one or two middle switches are not found through which c can be set up, the method returns "FAIL". It is easy to observe that the number of steps performed by the scheduling method is proportional to m^2, where m is
the number of middle switches in the network, and hence the scheduling method is of time complexity O(m^2).
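As a minimal illustrative sketch of the O(m^2) method above (an assumption, not the patent's code; link state is modeled as, for each middle switch, a flag for the first internal link from the connection's input switch and a set of output switches reachable over available second internal links):

    def schedule(dests, in_link, avail):
        # dests: set of destination output switches of connection c.
        # in_link[s]: True if c's input switch has an available link to s.
        # avail[s]: output switches reachable from middle switch s.
        m = len(avail)
        for i in range(m):                      # Step 2: outer loop
            if not in_link[i]:                  # Step 3
                continue
            missing = dests - avail[i]          # Steps 4-5: Ok for switch i
            if not missing:                     # Step 6: i alone suffices
                return (i,)
            for j in range(m):                  # Step 7: inner loop
                # Step 8 skips i itself; the in_link check on j is an added
                # assumption implied by the fan-out of two from the input switch.
                if j == i or not in_link[j]:
                    continue
                if missing <= avail[j]:         # Step 10: Ok subset of Oj
                    return (i, j)
        return None                             # Step 11: FAIL

In this sketch a returned pair plays the role of act 142C2 of FIG. 4B, and None corresponds to falling through to the rearrangement method 150.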
Table 2 shows how the steps 1-11 of the above pseudo code implement the
flowchart of the method illustrated in FIG. 4C, in one particular
implementation.
TABLE 2
Steps of the pseudo code        Acts of Flowchart
of the scheduling method        of FIG. 4C
1                               301
2                               301, 302, 315
3                               304
4, 5                            305
6                               306, 314
7                               307, 308, 313
8                               309
9                               310
10                              311, 312
11                              303
FIG. 4D illustrates, in one embodiment, the data structures used to store and retrieve data from memory 500 of a controller 580 that implements the method of FIG. 4C. In this embodiment, the fan-out of at most two in the input switch of each connection is implemented by use of two data structures (such as arrays or linked lists) to indicate the destinations that can be reached from each of two middle switches. Specifically as illustrated in FIG. 4D, two arrays 530 and 550 are determined for each of the two middle switches MSi and MSj that are checked for possible use in setting up the connection, for example in act 148 of the rearrangeable scheduling method 140 of FIG. 1B. Arrays 530 and 550 are determined as follows. Each connection request 510 is specified by an array 520 of destination switch identifiers (and also an inlet link of an input switch identifier). Another array 560 of middle switches contains m elements, one each for all the middle switches of the network. Each element of array 560 has a pointer
to one of m arrays, 570-1 to 570-m, containing a bit that indicates availability status (hereinafter availability status bit) for each output switch OS1-OSr as shown in FIG. 4D. If the second internal link to an output switch is available from a middle switch, the corresponding bit in the availability status array is set to 'A' (to denote available, i.e. unused link) as shown in FIG. 4D. Otherwise the corresponding bit is set to 'U' (to denote unavailable, i.e. used link).
For each connection 510 each pair of middle switches MSi and MSj is checked to see if all the destinations of connection 510 are reachable from the pair. Specifically this condition is checked by using the availability status arrays 570-i, 570-j of two middle switches MSi and MSj, to determine the available destinations of the connection 510 from MSi and MSj in the respective arrays 530 and 550. In one implementation, each destination is checked if it is available from any one of the middle switches MSi and MSj, and if both the middle switches MSi and MSj do not have availability for a particular destination, this particular pair of middle switches MSi and MSj cannot be used to set up the connection. However if middle switches MSi and MSj are determined to have unavailability of a particular destination, a different pair of middle switches is checked, for example the middle switches MSi and MSk. In this implementation, middle switches MSi and MSk are checked for the availability of all the destinations of the connection 510 in the same manner as middle switches MSi and MSj. Therefore in this implementation, there is no need to use an additional array 540 of unavailable destinations from middle switch MSi (as discussed next).
An alternative implementation saves (see act 305 of FIG. 4C) an array 540 (see FIG. 4D) of unavailable destinations from middle switch MSi, at the time middle switch MSi is first paired with a middle switch (e.g. MSj) other than itself when attempting to satisfy the connection request 510. Such saving of array 540 eliminates the need for each destination of the connection request 510 to be checked for middle switch MSi, when middle switch MSi is paired with another middle switch (e.g. MSk). If the array 540 of unavailable destinations from MSi is saved once, only these destinations (in array 540) need to be checked for availability in middle switch MSk, which improves the speed of the computation. The embodiment of FIG. 4D can be implemented to set up connections in a controller and memory (described above in reference to FIG. 1A, FIG. 2A, and FIG. 2B etc.).
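A minimal sketch of these data structures follows (illustrative only; the variable names echo the reference numerals of FIG. 4D, but the code itself is an assumption):

    # array_560 models the m availability status arrays 570-1 to 570-m:
    # one 'A' (available) or 'U' (used) flag per output switch, here
    # indexed 0..r-1 for simplicity.
    r, m = 9, 6
    array_560 = [['A'] * r for _ in range(m)]

    def unavailable_destinations(dests, mi):
        # Array 540: destinations of the request not reachable from middle
        # switch mi; saved once and reused for every pairing with mi.
        return [d for d in dests if array_560[mi][d] == 'U']

    def pair_covers(array_540, mj):
        # A pair (MSi, MSj) works when MSj covers everything MSi misses.
        return all(array_560[mj][d] == 'A' for d in array_540)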
FIG. 5A is an intermediate level flowchart of one implementation of the method 140 of FIG. 1B. According to this implementation, a multicast connection request is received in act 141. In act 142AB, the method 140 checks if the connection can be set up by fanning out into at most two middle stage switches from its input switch. If act 142AB results in "yes", the control goes to act 142C and the connection is set up. Then control returns to act 141. If act 142AB results in "no", the control goes to act 150 (also called "method 150") and the connection is set up by an act of changing the paths (also called a "rearranging act") of one or more existing connections. Then the control is returned to act 141. And thus acts 141, 142 and 150 are executed in a loop for each multicast connection request. Acts 142 and 150 together form act 148.
Although, in FIG. 5A certain acts 142AB and 142C are described, as would be apparent to the skilled person other acts can be performed in act 142 (to check the possibility of setting up a connection) in combination with act 150. According to one embodiment no more than 2 * n middle stage switches are used by the method 140 (FIG. 5A) in the network of FIG. 1A, where the number of inlet links IL1-IL2 equals the number of outlet links OL1-OL2, both represented by the variable n, to be a rearrangeably nonblocking symmetrical switching network. If the network is asymmetric no more than n1 + n2 middle switches are used by the method 140, wherein n1 is the number of inlet links in each input switch and n2 is the number of outlet links in each output switch.
FIG. 5B is an intermediate level (also called "first intermediate level") flowchart of one embodiment of the rearrangement act 150 of FIG. 5A. The control comes to act 150 when the multicast connection through at most two middle switches cannot be set up without rearranging one or more existing connections. In act 168, each existing connection in the input switch (of the new connection) is disconnected temporarily, to be reconnected later. Thereafter the new connection and the disconnected connection, in that order, are tried to be set up. This is repeated for each existing connection in the input switch of the new connection until both get set up.
Although in this description the term "disconnection" is used, it is to be understood that an existing connection need not be actually disconnected during the performance of act 150 of the method 140 of FIG. 5A. And instead such an existing
connection can be marked for disconnection in acts 168A and/or 172A while
performing
the method in an alternative embodiment, and the actual disconnection is
performed at the
end if it is determined that the method 140 is successful in setting up the
new connection
by use of links freed by such disconnection.
If both get set up, act 168 results in "yes". If act 168 results in "no" for all the existing connections, the control goes to act 171. In act 171, the new connection is set up through two middle switches having available first internal links so that only one of these two middle switches uses second internal links which are already in use by one or more existing multicast connections from other input switches (hereinafter called "incompatible existing connections"). And the control goes to act 172. In act 172, the incompatible existing connections are disconnected and marked as visited. Then the control is transferred to act 169. In act 169, the method 150 checks if there are more new connections to be set up. If there are no new connections to be set up control goes to act 141, to receive the next multicast connection request. However in act 169, if there are more new connections to be set up, the control goes to act 168 and acts 168, 169, 171, and 172 are recursively performed until the method exits to act 141. When the control goes to act 141, it means the new connection and all the incompatible connections are set up through at most two middle switches.
The rearrangement method 150 of FIG. 5B can also be observed as a loop consisting of acts 168, 169 and 170. In act 168, each new connection is tried to be set up by rearranging existing connections in its own input switch. If act 168 results in "no", in act 170 the new connection is set up by forcibly disconnecting one or more existing connections in other input switches. And then the disconnected connections are marked as new connections and the process is repeated in a loop until the new connection and all the disconnected connections are set up.
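A minimal sketch of this loop (an assumption layered on the schedule() sketch above; the helper functions are hypothetical and the handling of previously visited connections in acts 172B-172D is omitted for brevity):

    def rearrange(new_connection, try_swap_in_own_switch, force_setup):
        # try_swap_in_own_switch(c): act 168, returns True on success.
        # force_setup(c): acts 171-172, sets up c through two middle switches
        # and returns the incompatible existing connections it displaced.
        pending = [new_connection]
        visited = {new_connection}
        while pending:                    # loop over connections still unset
            c = pending.pop()
            if try_swap_in_own_switch(c): # act 168: rearrange own input switch
                continue
            for d in force_setup(c):      # act 170: displace other connections
                if d not in visited:      # displaced ones become new requests
                    visited.add(d)
                    pending.append(d)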
FIG. 5C shows the state of the network of FIG. 3D after the connection I8 of Table 1 is set up using the scheduling method 140 of FIG. 5A. Act 142AB of the scheduling method 140 of FIG. 5A could not find a single middle switch to set up the connection. However act 142AB finds two middle switches MS4 and MS6 to be available for the connection and accordingly the connection I8 is set up in act 142C. And so I8 fans out twice in the first switch IS3 into the middle switches MS4 and MS6. Also in the middle
switch MS4 it fans out twice into output switches OS2 and OS6, and in the
middle switch
MS6 it fans out once into output switch OS7 to be connected to all the
destinations.
FIG. 5D shows the state of the network of FIG. 5C after the connection I9 of Table 1 is set up using the rearrangeable scheduling method 140 of FIG. 5A. In act 142AB, the scheduling method of FIG. 5A could not find a single middle switch to set up the connection. Act 142AB also could not find two middle switches to set up the connection because from input switch IS3 there are only two middle switches MS3 and MS5 with available links. And the connection I9 has destinations of OS3, OS4, and OS8. And from both the middle switches MS3 and MS5 the link to output switch OS8 is not available when performing act 142AB. And hence act 142 results in "no" and the control goes to rearrangement act 150.
In act 150 the control goes to act 168 of FIG. 5B. In act 168, each connection from the same input switch IS3 of the connection I9 is disconnected. First it disconnects the connection I7, and so the first internal links from input switch IS3 to middle switches MS1 and MS2 are now available, the second internal links from middle switch MS1 to output switches OS5 and OS9 are now available, and the second internal link from middle switch MS2 to output switch OS1 is also available. Then act 168 tries to set up the connection I9 by using no more than two middle switches and it will be set up through middle switches MS1 and MS2. And so, in this example, I9 fans out twice in the input switch IS3 into the middle switches MS1 and MS2. Also in the middle switch MS1 it fans out twice into output switches OS4 and OS8, and in the middle switch MS2 it fans out once into output switch OS3 to be connected to all the destinations.
Later act 168 tries to set up the disconnected connection I7 by using no more than two middle switches and it will be set up through the middle switches MS3 and MS5. And, in this example, I7 fans out twice in the input switch IS3 into the middle switches MS3 and MS5. Also in the middle switch MS3 it fans out twice into output switches OS1 and OS5, and in the middle switch MS5 it fans out once into output switch OS9 to be connected to all the destinations. And thus act 168 rearranges the existing connection I7, changing its path from the middle switches MS1 and MS2 to the middle switches MS3 and MS5, to set up the new connection I9 through the middle switches MS1 and MS2. From act 168 the control goes to act 169. In act 169 the method 150 checks if all the new
connections are set up, and since they are all set up the control goes to act
141 where it is
ready to receive the next multicast connection request.
FIG. 5E shows a partial diagram of an exemplary three-stage network. There are many existing connections (not shown in the diagram). But three existing connections Ii, Ij, and Ik are fanning out twice in their input switches. Each of these three connections fans out into middle switch MS3. In middle switch MS3, the connections Ii, Ij, and Ik each fan out once into output switches OS3, OS4, and OS2 respectively. And in the output switches OS3, OS4, and OS2 the existing connections Ii, Ij, and Ik fan out once. Suppose a new connection Il with destinations to the output switches OS1, OS2, OS3, OS4, and OS5 is to be set up in this network using the method 140 of FIG. 5A. Also in this example, the connection Il cannot be set up in act 142AB. And so the control goes to act 150. In the implementation of act 150 in FIG. 5B, act 168 cannot set up the connection by disconnecting only one of the existing connections in its input switch IS1. And the control goes to act 171. In act 171, it selects the two middle switches MS2 and MS3 to set up the connection. That is, the new connection Il is tried to be fanned out into middle switches MS2 and MS3. In middle switch MS2 there are available links to only two destination switches OS1 and OS5 out of the required five destinations. In middle switch MS3 all the three remaining destinations OS2, OS3, and OS4 are unavailable because the existing connections Ii, Ij, and Ik from other input switches are currently using the second internal links to all the three output switches OS2, OS3, and OS4. And so the three connections Ii, Ij, and Ik are the incompatible existing connections. In act 172 these three connections are disconnected and the new connection Il is set up through the middle switches MS2 and MS3.
FIG. 5F shows the network of FIG. 5E after the incompatible existing connections Ii, Ij, and Ik are disconnected and the new connection Il is set up through the middle switches MS2 and MS3. In the middle switch MS2, connection Il fans out twice into the output switches OS1 and OS5. In the middle switch MS3 it fans out thrice into the output switches OS2, OS3, and OS4. In act 172, the incompatible existing connections Ii, Ij, and Ik are marked as new connections to be formed, after being disconnected. And the control goes to act 169, where it results in "no" because there are new connections to be set up. So from act 169, control transfers to act 168. In act 168 the connections Ii, Ij, and Ik will be tried to be set up. In one implementation, new connections Ii, Ij, and Ik are set up by
disconnecting only one connection in their respective input switches, and if
necessary
other existing connections may be disconnected.
In rearrangeably nonblocking networks, the switch hardware cost is reduced at the expense of increasing the time required to set up a connection. The set up time is increased in a rearrangeably nonblocking network because existing connections that are disrupted to implement rearrangement need to be themselves set up, in addition to the new connection. For this reason, it is desirable to minimize or even eliminate the need for rearrangements to existing connections when setting up a new connection. When the need for rearrangement is eliminated, that network is either wide-sense nonblocking or strictly nonblocking, depending on the number of middle switches and the scheduling method.
In strictly nonblocking multicast networks, for any request to form a multicast connection from an inlet link to some set of outlet links, it is always possible to find a path through the network to satisfy the request without disturbing any existing multicast connections, and if more than one such path is available, any of them can be selected without being concerned about realization of future potential multicast connection requests. In wide-sense nonblocking multicast networks, it is again always possible to provide a connection path through the network to satisfy the request without disturbing other existing multicast connections, but in this case the path used to satisfy the connection request must be selected to maintain nonblocking connecting capability for future multicast connection requests. In strictly nonblocking networks and in wide-sense nonblocking networks, the switch hardware cost is increased but the time required to set up connections is reduced compared to rearrangeably nonblocking networks. Embodiments of strictly nonblocking networks using 3 * n - 1 or more middle switches are described in the related U.S. Patent Application Serial No. 09/967,106 that is incorporated by reference above. The foregoing discussion relates to embodiments of rearrangeably nonblocking networks where the switch hardware cost is smaller.
FIG. 6A is a detailed intermediate level (also called "second intermediate level") flowchart of one implementation of the method of FIG. 5B. The control comes to act 160, from act 142, when the connection through at most two middle switches cannot be set up without rearranging one or more existing connections. In act 167A, the method
150 assigns the new connection to a group of connections called the current connection group. Another group of connections called the next connection group is initialized to contain no connections. Both the current connection group and the next connection group contain a list of connections to be set up. Initially, i.e. when the control comes from act 142 to act 167A, the current connection group contains the new connection and the next connection group contains no connections. Also the new connection is marked as visited. From act 167A, the control goes to act 161 where the method 150 checks if the next connection request in the current connection group, which is initially the new connection, can be set up by disconnecting only one of the existing connections in the same input switch of the new connection. And then act 161 checks if the disconnected connection can also be set up. If both the new connection and the disconnected connection can be set up, the control goes to act 168A. In act 168A, the existing connection chosen in act 161 is disconnected, and the new multicast request and the disconnected connection, in that order, are set up. The new connection is removed from the current connection group. The control is then transferred to act 169A. In act 161, for all the existing connections in the same input switch of the new connection, after disconnecting each of them, if it is checked that either the new connection cannot be set up or the disconnected connection cannot be set up, act 161 results in "no". In such case, act 161 makes sure the disconnected connection is set up again and the new connection is still not set up. Then the control goes to act 171.
In act 171, the new connection is set up through two middle switches having available first internal links so that only one of these two middle switches uses second internal links which are already in use by one or more existing multicast connections from other input switches (hereinafter called "incompatible existing connections"). Also the new connection is removed from the current connection group. And the control goes to act 172. In act 172, the incompatible existing connections are disconnected, marked as visited, and added to the next connection group as new connection requests that need to be set up. Then the control is transferred to act 169A. In act 169A, the method 150 checks if there are more connections in the current connection group. If not, control goes to act 169B. In act 169B, the method 150 checks if there are more connections in the next connection group. If not, the control goes to act 141 and the rearrangeable scheduling method 150 will be completed, which means the new connection is set up by rearranging the existing connections and using a fan-out of no more than two in the input switch. If act 169B results in "yes", control goes to act 169C where the next connection group is
copied into the current connection group and the next connection group is cleared. And the control transfers to act 161. Also when act 169A results in "yes", the control is transferred to act 161. And act 161 repeats the process for the next new connection request in the current connection group until the method 150 reaches act 141, which means the new connection is set up by rearranging some existing connections.
FIG. 6B is a low-level flowchart of one variant of act 168 of the method of FIG. 6A. The control comes to act 168, from act 142, when the multicast connection cannot be set up through at most two middle switches without rearranging one or more existing connections. In act 161, if there is an existing connection that has not been tried to be disconnected in the input switch of the multicast request, the control goes to act 162. In act 162, the particular existing connection is disconnected. The control transfers to act 163. In act 163, the method 160 checks if all the destinations of the new multicast request can be reached through at most two middle switches now. If so the control transfers to act 164.
In act 164, the method 160 checks if all the destinations of the disconnected connection could be reached if the links needed to satisfy the new multicast connection request are already in use. If so, the control goes to act 165 where both the new multicast request and the disconnected connection are set up in that order. From act 165 control transfers to act 169. If any of acts 163 or 164 results in "no", control transfers to act 166 where the disconnected connection is set up. From act 166, the control transfers to act 161 where the process is repeated for the next untried existing connection. In act 161, if there are no more untried existing connections the control transfers to act 170. If the control goes to act 170, it means the new connection request cannot be satisfied by disconnecting only one existing connection in the same input switch.
FIG. 6C is a low-level flowchart of one variant of act 170 of the method of FIG. 6A. The control comes to act 170, from act 161, because the next connection request cannot be set up by disconnecting only one of the existing connections in the same input switch. In act 171, the new connection is set up through two middle switches having available first internal links so that only one of these two middle switches uses second internal links which are already in use by one or more existing multicast connections from other input switches (hereinafter called "incompatible existing connections"). And

the control goes to act 172A. In act 172A, the incompatible existing connections are disconnected. At this point the new connection has now been set up, and the only remaining task is to set up any previously existing connections that are being disconnected (to accommodate the new connection). The control then goes to act 172B. In act 172B, the method 170 checks if any of the existing connections was marked as visited (generated by existing connections being disconnected in an attempt to rearrange). If so, for all these connections the control transfers to act 172C. In act 172C, a different existing connection from the same input switch, which itself was not marked as visited, is chosen to be disconnected so that the existing connection which is marked visited can be set up through two middle switches. The control then transfers to act 172D. In act 172B, for all the existing connections which were not marked visited, the control transfers to act 172D. In act 172D, all the disconnected connections are marked as visited. From act 172D, the control goes to act 169.
When the scheduling method of FIG. 4B returns fail, the following rearrangement method is called to set up the new connection request by rearranging one or more previously set up requests. The pseudo code below illustrates the rearrangement method 150 to set up a new multicast connection request through the network of FIG. 2B.
Pseudo code for the Rearrangement method:
Step 1:  L next = new connection to be rearranged ;
         Mark the new connection as visited ;
Step 2:  L current = L next ;
Step 3:  while (L current != φ) {
Step 4:    for i = each connection in the list L current {
Step 5:      for j = each connection in the same input switch as i {
Step 6:        Disconnect(j) ;
Step 7:        if (Schedule(i) == FAIL) {
                 Schedule(j) ;
                 Continue ;
Step 8:        } else if (Schedule(j) == FAIL) {
                 Disconnect(i) ;
                 Schedule(j) ;
                 Continue ;
Step 9:        } else {
                 Remove connection i from the list L next ;
                 Break ;
               }
             }
           }
Step 10:   Clear the list L current ;
Step 11:   for k = each connection in the list L next {
Step 12:     Set up k on any two available middle switches such that
             incompatibilities arise in only one of the two middle switches ;
Step 13:     Disconnect all the incompatible connections in the other switches ;
Step 14:     for each incompatible connection {
Step 15:       if (the connection was visited before) {
Step 16:         Select and disconnect another existing connection from the same
                 input switch that was not visited such that the incompatible
                 connection is set up ;
               }
             }
Step 17:     Mark all the disconnected connections as visited ;
             Add them in the list L current ;
           }
Step 18:   Clear the list L next ;
Step 19:   L next = L current ;
         }
Step 1 and Step 2 start with assigning the current multicast request to be rearranged to the lists L next and L current respectively. When the method is called first, L next and L current will contain only the new request. Step 1 also marks the current multicast request as visited. Step 3 starts the while loop. If L current contains no requests the method stops executing, which means the new request is set up in the multi-stage network. Initially Step 3 results in "TRUE" and the control goes to Step 4.
Step 4 starts a for loop, with loop index i, to step through all the requests in L current. The first time this loop is executed, L current will contain only the new request. Step 5 is a loop, with loop index j, for stepping through all the previously set up connections in the same input switch as the request i. Step 6 disconnects the existing connection j. Step 7 tries to set up connection i, and if connection i still cannot be set up, existing connection j is set up again and the loop continues with the next existing connection as j. Step 8 tries to set up the existing connection j, in case the connection i is set up in Step 7. If existing connection j cannot be set up (after the connection i is set up in Step 7), the connection i is disconnected and the existing connection j is set up, and the loop continues with the next existing connection as j. Step 9 removes the connection i from L next, when both the connection i and the existing connection j are set up in Step 7 and Step 8 respectively.
Step 10 clears the L current list. If at least one of the connections cannot be set up by rearranging only one existing connection in the same input switch, L next will not be empty when Step 11 is executed. Step 11 starts the loop for setting up the requests in the L next list. Each connection k is set up through two middle switches having available first internal links so that only one of these two middle switches uses second internal links which are already in use by one or more existing multicast connections from other input switches (hereinafter called "incompatible existing connections"). In Step 13, the incompatible existing connections are disconnected. In Step 14, each incompatible connection is checked, in a loop, for whether it was visited before. In Step 15, it is checked if an incompatible connection was visited before. If so, in Step 16, another existing connection, which itself was not visited before, is disconnected such that the incompatible connection marked as visited is set up. Then in Step 17, all the disconnected connections are marked as visited and they are all added to the list L current. Step 18 clears the L next list and Step 19 copies L current into L next. The while loop of Step 3 continues until L current is empty. The foregoing rearrangement method has a time complexity of O(r * n). The method converges and finds a nonblocking schedule for any multicast assignment, based on the proof of this invention discussed later.
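
For readers who want to execute the control flow, here is a minimal Python rendering of Steps 1-19. It is a sketch, not the patented implementation: Schedule(), Disconnect(), connections_in_same_switch(), setup_with_one_incompatibility() and substitute_and_reschedule() are hypothetical primitives over the controller's state, with Schedule() returning False (FAIL) when a connection cannot be set up through at most two middle switches.

# Minimal, illustrative Python rendering of Steps 1-19. All helper callables
# are hypothetical controller primitives; only the control flow is faithful.
def rearrange(new_connection, Schedule, Disconnect, connections_in_same_switch,
              setup_with_one_incompatibility, substitute_and_reschedule):
    visited = {new_connection}                            # Step 1
    L_next = [new_connection]
    L_current = list(L_next)                              # Step 2
    while L_current:                                      # Step 3
        for i in list(L_current):                         # Step 4
            for j in connections_in_same_switch(i):       # Step 5
                Disconnect(j)                             # Step 6
                if not Schedule(i):                       # Step 7: i still fails
                    Schedule(j)
                    continue
                if not Schedule(j):                       # Step 8: j now fails
                    Disconnect(i)
                    Schedule(j)
                    continue
                L_next.remove(i)                          # Step 9: both set up
                break
        L_current = []                                    # Step 10
        for k in list(L_next):                            # Step 11
            # Steps 12-13: set up k through two middle switches so that
            # incompatibilities arise in only one of them, disconnecting and
            # returning the incompatible existing connections.
            for c in setup_with_one_incompatibility(k):   # Step 14
                if c in visited:                          # Step 15
                    c = substitute_and_reschedule(c)      # Step 16
                visited.add(c)                            # Step 17
                L_current.append(c)
        L_next = list(L_current)                          # Steps 18-19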
FIG. 7A illustrates, in a flowchart, pseudo code for one example of act 160 of the rearrangement method of FIG. 6A. FIG. 7B illustrates, in a flowchart, pseudo code for one example of act 170 of the rearrangement method of FIG. 6A. The flowcharts of FIG. 7A and FIG. 7B are similar to the pseudo code of the rearrangement method above except for one difference. The flowcharts expand the loop control code for all the for loops into loop initialization code, loop exit code and loop step-through code. Table 3 shows how the steps 1-19 of the above pseudo code map to the flowcharts of the methods illustrated in FIG. 7A and FIG. 7B, in one particular implementation.
TABLE 3

Steps of the pseudo code of     Acts of the flowcharts of
the rearrangement method        FIG. 7A and FIG. 7B
1, 2                            401
3                               402
4                               403, 404, 419
5                               405, 406, 418
6                               407
7                               408, 409
8                               410, 411
9                               412
10                              413
11                              414, 415
12                              421
13                              422
14, 15                          423
16                              424
17                              425
18, 19                          417
In another embodiment, in either or both of act 168 (FIG. 6B) and act 171 (FIG. 6C), instead of only one existing connection, two or more existing connections can also be tried for disconnection, so that the new connection and the disconnected existing connections can all be set up, in any one of the possible orders of set up. For example, in act 168 or act 171, two existing connections A and B in the same input switch can be disconnected and tried to be set up in the following two ways: 1) the new connection, existing connection A, and existing connection B are tried to be set up in that order; or 2) the new connection, existing connection B, and existing connection A are tried to be set up in that order. In any of these setting up acts, if the new connection and both the connections A and B are all set up, the rearrangement method 150 is successful and goes to act 141 to receive the next connection request.
Similarly, three existing connections in the same input switch can be tried by disconnecting them and setting them up in all six orders after the new connection is set up. In the same way more than three existing connections can be tried. Also, these varieties of disconnecting one, two or more connections can be tried individually in each rearrangement act of method 150, or in any arbitrary combination, in setting up any new connection.
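
A sketch of this multi-connection variant follows; Schedule() and Disconnect() are again hypothetical primitives (Disconnect() is assumed to be a no-op on a connection that is not currently set up).

# Illustrative sketch: disconnect a group of existing connections in the same
# input switch and try every order of setting them back up after the new
# connection. Schedule() and Disconnect() are hypothetical primitives.
from itertools import combinations, permutations

def try_group_rearrangement(new_request, existing_in_switch, group_size,
                            Schedule, Disconnect):
    for group in combinations(existing_in_switch, group_size):
        for conn in group:
            Disconnect(conn)
        if Schedule(new_request):
            # two orders for two connections, six orders for three, and so on
            for order in permutations(group):
                if all(Schedule(c) for c in order):
                    return True               # new connection and group all set up
                for c in order:
                    Disconnect(c)             # undo the partial setup and retry
            Disconnect(new_request)
        for conn in group:
            Schedule(conn)                    # restore the group and keep searching
    return False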
First the proof for the rearrangeably nonblocking behavior of symmetric networks V(m, n, r) of the invention is presented. Later it will be extended to asymmetric networks V(m, n1, r1, n2, r2). According to the current invention, when m >= 2 * n, the V(m, n, r) Clos network is operated in rearrangeably nonblocking manner for multicast connections if the following scheduling criterion is met: every connection request is fanned out at most twice in the input switch; alternatively, every connection request is set up through at most two middle switches.
Since when m >= 2 * n - 1 the V(m, n, r) network is strictly nonblocking for unicast assignments, it means that for unicast assignments, applicant notes, there always exists an available link through at least one middle switch from any arbitrary input switch to any arbitrary output switch. Alternatively, if there exist available links from an arbitrary input switch to a number of middle switches, at least one of these middle switches has an available link to any arbitrary output switch. It also means that when m >= 2 * n - 1, from any arbitrary input switch, if there exist available links to a number of middle switches, all output switches have available links from at least one of those middle switches.
To prove that the network is rearrangeably nonblocking for multicast assignments, applicant notes that it is necessary and sufficient to prove the following two conditions: 1) There are enough middle switches to fan out each connection at most twice in the input switch; 2) From an arbitrary input switch, there always exist at least two middle switches with available links between these two middle switches and the input switch, such that there are available links from these two middle switches to all the destination output switches of any connection request (e.g., all output switches in the case of a broadcast connection request).
To prove condition 1, applicant observes that there are enough middle switches if each connection is fanned out at most twice, since m >= 2 * n. Moreover, applicant provides proof for condition 2 by contradiction as follows. In the worst-case scenario, suppose all the r output switches have (n - 1) outlet links already connected. Now suppose from the given input switch all the output switches have to be reached for the n-th outlet link.
Suppose there are not at least two middle switches available through which there are available links from the given input switch to all the output switches. If that happens, then each of the middle switches will have (r/2 + 1) second internal links already in use, i.e., the total number of second internal links in use in all the middle switches is given by

(r/2 + 1) * (2 * n) = n * r + 2 * n,

which is not possible because the maximum possible number of second internal links in use is n * r.
So there always exist at least two middle switches through which there are paths from any given input switch to all the output switches. Since the number of middle switches m = 2 * n is sufficient to set up the multicast connections, the V(m, n, r) Clos network can be operated in rearrangeably nonblocking manner. Hence, if m >= 2 * n, the V(m, n, r) Clos network can be operated in rearrangeably nonblocking manner for multicast connections of any arbitrary fan-out.
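
The counting argument can be checked numerically; the sketch below uses the reconstructed per-switch count of r/2 + 1 busy second internal links, which is the reading consistent with the stated totals.

# Numeric check of the counting contradiction for the V(10,5,25) example,
# assuming each of the 2*n middle switches had r/2 + 1 busy second internal
# links (the reconstructed reading above).
n, r = 5, 25
claimed_busy = (r / 2 + 1) * (2 * n)   # = n*r + 2*n
maximum_busy = n * r                   # at most n*r second internal links are busy
assert claimed_busy == n * r + 2 * n
assert claimed_busy > maximum_busy     # contradiction, so two such middles exist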
Table 4
A multicast assignment in a V(10,5,25) network

Requests for r = 1:  I1 = {1,2,3,4,5},     I2 = {6,7,8,9,10},     I3 = {11,12,13,14,15},  I4 = {16,17,18,19,20},  I5 = {21,22,23,24,25}
Requests for r = 2:  I6 = {1,6,11,16,21},  I7 = {2,7,12,17,22},   I8 = {3,8,13,18,23},    I9 = {4,9,14,19,24},    I10 = {5,10,15,20,25}
Requests for r = 3:  I11 = {1,7,13,19,25}, I12 = {2,8,14,20,21},  I13 = {3,9,15,16,22},   I14 = {4,10,11,17,23},  I15 = {5,6,12,18,24}
Requests for r = 4:  I16 = {1,8,15,17,24}, I17 = {2,9,11,18,25},  I18 = {3,10,12,19,21},  I19 = {4,6,13,20,22},   I20 = {5,7,14,16,23}
Requests for r = 5:  I21 = {1,9,12,20,23}, I22 = {2,10,13,16,24}, I23 = {3,6,14,17,25},   I24 = {4,7,15,18,21},   I25 = {5,8,11,19,22}
Table 4 shows an exemplary multicast assignment in a V(10,5,25) network. Each request has a fan-out of five. All the outlet links are connected in this multicast assignment, since each output switch is used exactly five times in the requests, corresponding to the five outlet links of each output switch. In one implementation, Table 5 shows that by using only m = 2 * n = 10 middle switches, the multicast assignment can be set up to operate the network in rearrangeably nonblocking manner.
TABLE 5
A rearrangeably nonblocking schedule of the multicast assignment of Table 4

      M=1         M=2          M=3         M=4       M=5         M=6        M=7       M=8        M=9        M=10
R=1   1,2         3,4,5        6,7         8,9,10    11,12       13,14,15   16,17     18,19,20   21,22      23,24,25
R=2   6,11,16,21  1            2,12,17,22  7         3,8         4,9,19,24  13,18,23  14         5,10,25    15,20
R=3   7,13,19,25  2,8          1           14,20,21  9,15,16,22  3          4,10,11   17,23      6,12,18,24 5
R=4   8,15,17,24  9,11,18,25   5           1         2           12         6,20,22   4,13       7,14,16,23 3,10,19,21
R=5   9,12,20,23  10,13,16,24  3,14,25     2         1           6,17       7,15,21   5,8,11,22  19         4,18
Each row in Table 5 represents an input switch and each column represents a middle switch. Each element in the table represents the list of output switches set up through the corresponding middle switch for a connection originating from the corresponding input switch. The correspondence between the different connections in the same row of Table 5, and hence from the same input switch, can be obtained from the multicast assignment of Table 4.
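
Since Tables 4 and 5 are easy to mis-transcribe, the following self-contained Python check (with the data transcribed from the two tables) verifies the defining properties of the schedule: no middle switch reaches the same output switch twice, and every request covers all of its destinations through at most two middle switches of its row.

# Self-contained consistency check of Tables 4 and 5 (data transcribed above).
requests = {  # Table 4: request number -> destination output switches
    1: {1,2,3,4,5}, 2: {6,7,8,9,10}, 3: {11,12,13,14,15}, 4: {16,17,18,19,20},
    5: {21,22,23,24,25}, 6: {1,6,11,16,21}, 7: {2,7,12,17,22}, 8: {3,8,13,18,23},
    9: {4,9,14,19,24}, 10: {5,10,15,20,25}, 11: {1,7,13,19,25}, 12: {2,8,14,20,21},
    13: {3,9,15,16,22}, 14: {4,10,11,17,23}, 15: {5,6,12,18,24}, 16: {1,8,15,17,24},
    17: {2,9,11,18,25}, 18: {3,10,12,19,21}, 19: {4,6,13,20,22}, 20: {5,7,14,16,23},
    21: {1,9,12,20,23}, 22: {2,10,13,16,24}, 23: {3,6,14,17,25}, 24: {4,7,15,18,21},
    25: {5,8,11,19,22}}
schedule = [  # Table 5: rows = input switches R=1..5, columns = middle switches M=1..10
    [{1,2},{3,4,5},{6,7},{8,9,10},{11,12},{13,14,15},{16,17},{18,19,20},{21,22},{23,24,25}],
    [{6,11,16,21},{1},{2,12,17,22},{7},{3,8},{4,9,19,24},{13,18,23},{14},{5,10,25},{15,20}],
    [{7,13,19,25},{2,8},{1},{14,20,21},{9,15,16,22},{3},{4,10,11},{17,23},{6,12,18,24},{5}],
    [{8,15,17,24},{9,11,18,25},{5},{1},{2},{12},{6,20,22},{4,13},{7,14,16,23},{3,10,19,21}],
    [{9,12,20,23},{10,13,16,24},{3,14,25},{2},{1},{6,17},{7,15,21},{5,8,11,22},{19},{4,18}]]

for m in range(10):   # each middle switch reaches each output switch at most once
    outputs = [o for row in schedule for o in row[m]]
    assert len(outputs) == len(set(outputs))
for req, dests in requests.items():
    row = schedule[(req - 1) // 5]           # requests 1-5 come from input switch 1, etc.
    used = [cell for cell in row if cell & dests]
    assert set().union(*(cell & dests for cell in used)) == dests  # all destinations reached
    assert len(used) <= 2                    # fan-out of at most two middle switches
print("Table 5 is a valid rearrangeably nonblocking schedule for Table 4")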
To extend the proof (described above), applicant now shows that the V(m, n1, r1, n2, r2) network can be operated in rearrangeably nonblocking manner for multicast connections when m >= n1 + n2, by considering the two cases n1 < n2 and n1 > n2.
1) n1 < n2: In this case, the number of middle switches necessary is 2 * n1, which is < (n1 + n2). To prove the sufficiency, even though there are a total of n2 * r2 outlet links in the network, in the worst-case scenario only n1 * r2 second internal links will be needed. This is because, even if all n2 * r2 outlet links are destinations of the connections, using the fan-out capability in the output switches the rearrangeably nonblocking behavior can be realized. And so 2 * n1, which is < (n1 + n2), middle switches are sufficient.
2) n1 > n2: In this case, since there are a total of n2 * r2 outlet links in the network, only a maximum of n2 * r2 second internal links will be used, even if all the n2 * r2 outlet links are destinations of the network connections. When the number of middle switches is n1 + n2, the total number of second internal links in the network is given by r2 * (n1 + n2), which is more than the required number according to the rearrangeability proof for V(m, n, r) shown earlier, namely r2 * (2 * n2). Also, from any input switch only a maximum of n2 out of the n1 available inlet links can each have a fan-out of r2. And so only a maximum of n2 connections from any input switch need to be fanned out into two. And so n1 + n2 middle switches are sufficient.
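
The two cases can be summarized in a one-line helper; this is a transcription of the bounds just derived, not a scheduler.

# Transcription of the middle-switch bounds derived above.
def sufficient_middle_switches(n1: int, n2: int) -> int:
    if n1 < n2:
        return 2 * n1      # case 1: 2*n1, which is < n1 + n2, suffices
    return n1 + n2         # case 2 and the symmetric case n1 == n2 (= 2*n)

assert sufficient_middle_switches(5, 5) == 10   # the V(10,5,25) example above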
Referring to FIG. 8A, a five-stage rearrangeably nonblocking network is shown according to an embodiment of the present invention that uses recursion as follows. The five-stage network comprises input stage 110 and output stage 120, with inlet links IL1-IL12 and outlet links OL1-OL12 respectively, where input stage 110 consists of six, two by four switches IS1-IS6, and output stage 120 consists of six, four by two switches OS1-OS6. However, unlike the single switches of middle stage 130 of the three-stage network of FIG. 1A, the middle stage 130 of FIG. 8A consists of four, six by six three-stage subnetworks MS1-MS4 (wherein the term "subnetwork" has the same meaning as the term "network"). Each of the four middle switches MS1-MS4 is connected to each of the input switches through six first internal links (for example the links FL1-FL6 connected to the middle switch MS1 from each of the input switches IS1-IS6), and connected to each of the output switches through six second internal links (for example the links SL1-SL6 connected from the middle switch MS1 to each of the output switches OS1-OS6). In one embodiment, the network also includes a controller coupled with the input stage 110, output stage 120 and middle stage subnetworks 130 to form connections between an inlet link IL1-IL12 and an arbitrary number of outlet links OL1-OL12.
Each of the middle switches MS1-MS4 is a V(4,2,3) three-stage subnetwork. For example, the three-stage subnetwork MS1 comprises an input stage of three, two by four switches MIS1-MIS3 with inlet links FL1-FL6, and an output stage of three, four by two switches MOS1-MOS3 with outlet links SL1-SL6. The middle stage of MS1 consists of four, three by three switches MMS1-MMS4. Each of the middle switches MMS1-MMS4 is connected to each of the input switches MIS1-MIS3 through three first internal links (for example the links MFL1-MFL3 connected to the middle switch MMS1 from each of the input switches MIS1-MIS3), and connected to each of the output switches MOS1-MOS3 through three second internal links (for example the links MSL1-MSL3 connected from the middle switch MMS1 to each of the output switches MOS1-MOS3). In similar fashion the number of stages can increase to 7, 9, etc.
As with the three-stage network, the network of FIG. 8A has the property of being operable in rearrangeably nonblocking manner as described herein with no more than 2 * n middle stage three-stage subnetworks. Thus in FIG. 8A, where n equals 2, middle stage 130 has four middle stage three-stage subnetworks MS1-MS4. Furthermore, according to the present invention, each of the middle stage subnetworks MS1-MS4 requires no more than k1 + k2 middle switches MMS1-MMS4, where k1 is the
number of inlet links for each middle input switch MIS1-MIS3 and k2 is the number of outlet links for each middle output switch MOS1-MOS3.
In general, according to certain embodiments, one or more of the switches in any of the first, middle and last stages can be recursively replaced by a three-stage subnetwork with no more than n1 + n2 middle stage switches, where n1 is the number of inlet links to each first stage switch in the subnetwork and n2 is the number of outlet links to each last stage switch of the subnetwork, for rearrangeably nonblocking operation, for multicast connections of arbitrary fan-out. Note that because the term "subnetwork" has the same meaning as "network", the just-described replacement can be repeated recursively, as often as desired, depending on the embodiment. Also each subnetwork may have a separate controller and memory to schedule the multicast connections of the corresponding network.
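
As a concrete illustration of the recursive rule, the sketch below prints the stage structure of FIG. 8A at both levels, assuming the n1 + n2 bound applies unchanged at every level (the numbers match the figure as described above).

# Sketch: stage structure of the recursive network of FIG. 8A, assuming each
# level needs n1 + n2 middle subnetworks (2*n in the symmetric case).
def describe_level(level, n1, n2, r1, r2):
    m = n1 + n2    # middle subnetworks for rearrangeably nonblocking operation
    return (f"level {level}: {r1} input switches, {m} middle subnetworks, "
            f"{r2} output switches")

print(describe_level(1, 2, 2, 6, 6))   # top level: IS1-IS6, MS1-MS4, OS1-OS6
print(describe_level(2, 2, 2, 3, 3))   # inside each V(4,2,3): MIS1-3, MMS1-4, MOS1-3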
It should be understood that the methods discussed so far are applicable to k-stage networks for k > 3, by recursively applying the design criteria developed here to any of the switches in the network. The presentation of the methods in terms of three-stage networks is only for notational convenience. That is, these methods can be generalized by recursively replacing each of a subset of switches (at least 1) in the network with a smaller three-stage network, which has the same number of total inlet links and total outlet links as the switch being replaced. For instance, in a three-stage network, one or more switches in either the input, middle or output stages can be replaced with a three-stage network to expand the network. If, for example, a five-stage network is desired, then all middle switches (or all input switches or all output switches) are replaced with a three-stage network.
In accordance with the invention, in any of the recursive three-stage networks each connection can fan out in the first stage switch into at most two middle stage subnetworks, and in the middle switches and last stage switches it can fan out any arbitrary number of times, as required by the connection request. For example, as shown in the network of FIG. 8A, connection I2 fans out in the first stage switch IS1 twice, into middle stage subnetworks MS1 and MS3. In middle stage subnetwork MS1 it fans out four times, into output switches OS1, OS2, OS3 and OS5. However, in the three-stage subnetwork of MS1 it can fan out at most twice in the first stage; for example, connection
I2 fans out twice in the input switch MIS1 into middle switches MMS1 and MMS3 of the three-stage subnetwork MS1. Similarly a connection can fan out an arbitrary number of times in the middle and last stages of any three-stage network. For example, connection I1 fans out twice in middle switch MMS2, into output switches MOS1 and MOS3 of the three-stage subnetwork MS1. Also the connection I2 fans out in MMS3 once into MOS2, and from there once into OS3. The connection I4 fans out once into the three-stage subnetwork MS2, where it is fanned out three times into output switches OS2, OS4, and OS6. The connection I4 fans out once in MIS4 into MMS6, where it fans out three times into output switches MOS4, MOS5, and MOS6 of the three-stage subnetwork MS2.
FIG. 8B shows a high-level flowchart of a scheduling method, in one embodiment executed by the controller of FIG. 8A. The method of FIG. 8B is used only for networks that have three stages, each of which may be in turn composed of three-stage subnetworks, in a recursive manner as described above in reference to FIG. 8A. According to this embodiment, a multicast connection request is received in act 250 (FIG. 8B). Then a connection to satisfy the request is set up in act 260, by fanning out into at most two middle stage subnetworks from its input switch. Then the control goes to act 270. Act 270 recursively goes through each subnetwork contained in the network. For each subnetwork found in act 270, the control goes to act 280, and each subnetwork is treated as a network and the scheduling is performed similarly. Once all the recursive subnetworks are scheduled, the control transfers from act 270 to act 250, so that each multicast connection will be scheduled in the same manner, in a loop. It must be noted that even though FIG. 8B does not explicitly show the rearrangement method, when the scheduling method of act 260 fails to set up the connection, similar to the method of FIG. 5A the above-described rearrangement method is performed for each network, before recursively scheduling each subnetwork found in act 270.
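
The recursion of acts 250-280 can be sketched as follows; the network object and its methods are hypothetical placeholders for the controller state, and a plain switch is assumed to report no middle subnetworks, which terminates the recursion.

# Illustrative sketch of the recursive scheduling of FIG. 8B. The network
# object and its methods are hypothetical placeholders for controller state.
def schedule_recursively(network, request):
    # act 260: fan out into at most two middle stage subnetworks; on failure,
    # the rearrangement method described earlier is invoked for this network
    if not network.setup_with_fanout_of_at_most_two(request):
        network.rearrange(request)
    for sub in network.middle_subnetworks():          # act 270 (empty for a switch)
        # act 280: treat the subnetwork as a network in its own right
        schedule_recursively(sub, sub.induced_request(request))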
A direct extension of the foregoing discussion is that when the number of middle switches is increased, the above-described methods can be changed to improve speed or to reduce the frequency of rearrangement. For example, when m = 3 * n, each multicast connection can be fanned out into at most three middle switches and the V(m, n, r) network can be operated in rearrangeably nonblocking manner. Similarly, when m = 2 * n1 + n2, the V(m, n1, r1, n2, r2) network is operated in rearrangeably nonblocking manner if each multicast connection is fanned out into at most three middle switches.
FIG. 9A shows a general symmetrical multi-stage network with m = 3 * n middle switches. Except for the middle switches being m = 3 * n, the description of FIG. 9A is similar to FIG. 2A. FIG. 9B shows the scheduling method of fanning out into at most three middle switches. Except for the additional act 142D of testing for three middle switches and setting up a connection through three middle switches in act 142C, the description of the method of FIG. 9B is similar to the method of FIG. 5A.
The just-described method of FIG. 9B can be used in conjunction with the method of FIG. 5A, e.g. to use a fan-out of at most two in setting up some connections and a fan-out of at most three in setting up other connections. Such a combined method may be used, for example, if there are m = 2 * n + k middle stage switches, where 0 < k < n, and initially fan-out of two is attempted without rearrangement, followed by fan-out of three, followed by the rearrangement method 150.
In general, when m = x * n and x >= 2, each multicast connection can be fanned out into at most x middle switches and the V(m, n, r) network is operated in rearrangeably nonblocking manner. Similarly, when m = (x - 1) * n1 + n2, the V(m, n1, r1, n2, r2) network is operated in rearrangeably nonblocking manner if each multicast connection is fanned out into at most x middle switches. FIG. 10A shows a general symmetrical multi-stage network with m = x * n middle switches. Except for the middle switches being m = x * n, the description of FIG. 10A is similar to FIG. 2A. FIG. 10B shows the scheduling method of fanning out into at most x middle switches. Except for the additional act 142X of testing for x middle switches and setting up a connection through x middle switches in act 142C, the description of the method of FIG. 10B is similar to the method of FIG. 5A.
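
These counts can be transcribed directly; the helper below is a restatement of the formulas, with the symmetric case recovered when n1 = n2.

# Transcription of the generalized counts: m = x * n middle switches allow
# fan-out of at most x in a symmetric network, and m = (x - 1) * n1 + n2 in
# the asymmetric case.
def middle_switches_for_fanout(x: int, n1: int, n2: int) -> int:
    return (x - 1) * n1 + n2

assert middle_switches_for_fanout(2, 5, 5) == 10   # m = 2*n, fan-out two
assert middle_switches_for_fanout(3, 5, 5) == 15   # m = 3*n, fan-out three
assert middle_switches_for_fanout(3, 4, 2) == 10   # m = 2*n1 + n2, fan-out three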
In an alternative embodiment, when m >= x1 * a1 + x2 * a2 + ... + xp * ap, where a1 + a2 + ... + ap = n1 + n2, the V(m, n1, r1, n2, r2) network is operated in rearrangeably nonblocking manner as described herein, when multicast connections are set up such that connections from ai inlet links of each input switch pass through at most xi middle switches, for 1 <= i <= p.
Another embodiment has m middle stage switches, where m = 2 * n - 2 * k, with n being the number of inlet links for each input switch, and at most k connections from each input switch are blocked when the rearrangeable scheduling method 140 of FIG. 1B is used in the network of FIG. 2A. For example, if k = n/2, only half of the middle switches required for a rearrangeably nonblocking network are present, and in such a network at most n/2 connections from each input switch are set up using the method of FIG. 1B and any additional connections are dropped. Specifically, when n/2 connections are already formed, at most n middle switches have been used from each input switch and so no more middle switches are available. Under certain circumstances fewer than k connections are blocked when using method 140, for example if a fan-out of less than two is used to set up some of the existing connections. In yet another embodiment, when m = n1 + n2 - 2 * k, at most k connections from each input switch will be blocked when the rearrangeable scheduling method 140 of FIG. 1B is used in the network of FIG. 2B.
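
The blocking arithmetic of this undersized middle stage is simple enough to state in code (a sketch of the count only):

# Sketch of the undersized-middle-stage arithmetic: with m = 2*n - 2*k middle
# switches and fan-out of at most two, each input switch can set up at most
# m // 2 of its n connections, so at most k are blocked.
def max_blocked_connections(n: int, k: int) -> int:
    m = 2 * n - 2 * k
    return n - m // 2          # equals k

assert max_blocked_connections(8, 4) == 4   # k = n/2: half the middles, half blocked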
A V(m, n1, r1, n2, r2) network can be further generalized, in an embodiment, by having an input stage comprising r1 input switches and n1,u inlet links in input switch u, for each of said r1 input switches such that u ∈ [1, r1] and n1 = MAX(n1,u); an output stage comprising r2 output switches and n2,v outlet links in output switch v, for each of said r2 output switches such that v ∈ [1, r2] and n2 = MAX(n2,v); and a middle stage comprising m middle switches, each middle switch comprising at least one link connected to each input switch for a total of at least r1 first internal links, and each middle switch further comprising at least one link connected to at most d of said output switches for a total of at least d second internal links, wherein 1 <= d <= r2. Applicant notes that such an embodiment can also be operated in rearrangeably nonblocking manner, according to the current invention, for setting up multicast connections by fanning out not more than twice in the input switch, when m >= n1 + n2.
The V(m, n1, r1, n2, r2) network embodiments described so far, in the current invention, are implemented in a space-space-space, also known as SSS, configuration. In this configuration all the input switches, output switches and middle switches are implemented as separate switches, for example in one embodiment as crossbar switches. The three-stage networks V(m, n1, r1, n2, r2) can also be implemented in a time-space-time, also known as TST, configuration. In TST configuration, in the first stage and the last stage all the input switches and all the output switches are implemented as separate switches. However the middle stage, in accordance with the current invention, uses m / MIN(n1, n2) switches, where m >= n1 + n2, with each middle switch having r1 first internal links connected to all input switches and also having r2 second internal links connected to all output switches. The TST configuration implements the switching mechanism, in accordance with the current invention, in MIN(n1, n2) steps in a circular fashion. So in TST configuration the middle stage physically implements only m / MIN(n1, n2) middle switches, and they are shared in time, in MIN(n1, n2) steps, to switch packets or timeslots from input ports to the output ports.
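
The TST sizing rule reconstructed above reduces to a short computation; the helper below is a sketch of the counts only, checked against the V(6,3,4) example of FIG. 11A described next.

# Sketch of the TST sizing rule: m / MIN(n1, n2) physical middle switches
# shared over MIN(n1, n2) time steps, each with r1 first and r2 second
# internal links.
def tst_parameters(m, n1, n2, r1, r2):
    steps = min(n1, n2)
    return {"physical_middle_switches": m // steps, "time_steps": steps,
            "first_internal_links": r1, "second_internal_links": r2}

# V(6,3,4) of FIG. 11A: two physical middle switches shared over three steps
assert tst_parameters(6, 3, 3, 4, 4) == {"physical_middle_switches": 2,
    "time_steps": 3, "first_internal_links": 4, "second_internal_links": 4}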
The three-stage networks V(m, n1, r1, n2, r2) implemented in TST configuration play a key role in communication switching systems. In one embodiment, a crossconnect in a TDM based switching system such as SONET/SDH, each communication link is time-division multiplexed; as an example, an OC-12 SONET link consists of 336 VT1.5 channels time-division multiplexed. In another embodiment, a switch fabric in a packet based switching system such as IP, each communication link is statistically time-division multiplexed. When a V(m, n1, r1, n2, r2) network is switching TDM or packet based links, each of the r1 input switches receives time-division multiplexed signals; for example, if each input switch is receiving an OC-12 SONET stream and the switching granularity is VT1.5, then each input switch has n1 (= 336) inlet links, with each inlet link receiving a different VT1.5 channel. A crossconnect using a V(m, n1, r1, n2, r2) network would implement a TST configuration, so that switching is also performed in time-division multiplexed fashion, just the same way communication on the links is performed in time-division multiplexed fashion.
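
The 336-channel figure follows from standard SONET multiplexing arithmetic (12 STS-1s per OC-12, 28 VT1.5s per STS-1):

# SONET arithmetic behind the n1 = 336 example: an OC-12 carries 12 STS-1
# payloads, and each STS-1 carries 28 VT1.5 channels.
assert 12 * 28 == 336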
For example, the network of FIG. 11A shows an exemplary three-stage network, namely V(6,3,4), in space-space-space configuration, with the following multicast assignment: I1 = {1}, I2 = {1,2,3,4}, I6 = {3}, I11 = {4} and I12 = {3,4}. According to the current invention, the multicast assignment is set up by fanning out each connection not more than twice in the first stage. The connection I1 fans out in the first stage switch IS1 into the middle stage switch MS1, and fans out in middle switch MS1 into output switch OS1. The connection I1 also fans out in the last stage switch OS1 into the outlet links OL2 and OL3. The connection I2 fans out in the first stage switch IS1 into the middle stage switches MS3 and MS4. The connection I2 fans out in middle switch MS3 into output switches OS1, OS3, and OS4. The connection I2 also fans out in the last stage switches OS1, OS3, and OS4 into the outlet links OL1, OL7 and OL12 respectively. The connection I2 fans out in the middle switch MS4 once, into output switch OS2. The connection I2 fans out in the output switch OS2 into outlet links OL4, OL5, and OL6.
The connection I6 fans out once in the input switch IS2 into middle switch MS2, and fans out in the middle stage switch MS2 into the last stage switch OS3. The connection I6 fans out once in the output switch OS3 into outlet link OL9. The connection I11 fans out once in the input switch IS4 into middle switch MS6, and fans out in the middle switch MS6 once into output switch OS4. The connection I11 fans out in the output switch OS4 into outlet link OL10. The connection I12 fans out once in the input switch IS4 into middle switch MS5, and fans out in the middle switch MS5 twice into output switches OS3 and OS4. The connection I12 fans out in the output switches OS3 and OS4 into outlet links OL8 and OL11 respectively.
FIG. 11B, FIG. 11C and FIG. 11D illustrate the implementation of the TST configuration of the V(6,3,4) network of FIG. 11A. According to the current invention, in TST configuration also the multicast assignment is set up by fanning out each connection not more than twice in the first stage, in exactly the same way as the scheduling method is performed in SSS configuration. Since in the network of FIG. 11A n = 3, the TST configuration of the network of FIG. 11A has 3 different time steps; and since m/n = 2, the middle stage in the TST configuration implements only 2 middle switches, each with 4 first internal links and 4 second internal links, as shown in FIG. 11B, FIG. 11C, and FIG. 11D. In the first time step, as shown in FIG. 11B, the two middle switches
act as MS1 and MS2 of the network of FIG. 11A. Similarly, in the second time step, as shown in FIG. 11C, the two middle switches act as MS3 and MS4 of the network of FIG. 11A, and in the third time step, as shown in FIG. 11D, the two middle switches act as MS5 and MS6 of the network of FIG. 11A.
In the first time step, FIG. 11B implements the switching functionality of middle switches MS1 and MS2; and since in the network of FIG. 11A connections I1 and I6 are fanned out through middle switches MS1 and MS2 to the output switches OS1 and OS3 respectively, connections I1 and I6 are fanned out to destination outlet links {OL2, OL3} and OL9 respectively, just exactly the same way they are routed in the network of FIG. 11A in all the three stages. Similarly in the second time step, FIG. 11C implements the switching functionality of middle switches MS3 and MS4; and since in the network of FIG. 11A connection I2 is fanned out through middle switches MS3 and MS4 to the output switches {OS1, OS3, OS4} and OS2 respectively, connection I2 is fanned out to destination outlet links {OL1, OL7, OL12} and {OL4, OL5, OL6} respectively, just exactly the same way it is routed in the network of FIG. 11A in all the three stages.
Similarly in the third time step, FIG. 11D implements the switching functionality of middle switches MS5 and MS6; and since in the network of FIG. 11A connections I11 and I12 are fanned out through middle switches MS5 and MS6 to the output switches OS4 and {OS3, OS4} respectively, connections I11 and I12 are fanned out to destination outlet links OL10 and {OL8, OL11} respectively, just exactly the same way they are routed in the network of FIG. 11A in all the three stages. In digital cross connects, optical cross connects, and packet or cell switch fabrics, since the inlet links and outlet links are used in time-division multiplexed fashion, a switching network such as the V(m, n1, r1, n2, r2) network implemented in TST configuration will save cost, power and space compared to a space-space-space configuration.
In accordance with the invention, the V(m, n1, r1, n2, r2) network implemented in TST configuration, using the same scheduling method as in SSS configuration, i.e., with each connection fanning out in the first stage switch into at most two middle stage switches, and in the middle switches and last stage switches fanning out any arbitrary number of times as required by the connection request, is operable in rearrangeably nonblocking manner with the number of middle switches equal to m / MIN(n1, n2), where m >= n1 + n2.
Numerous modifications and adaptations of the embodiments, implementations,
and examples described herein will be apparent to the skilled artisan in view
of the
disclosure.
For example, in one embodiment of a V(m, n, r) network, one or more new connections that are in the process of being set up are dropped (disconnected permanently) if it takes longer than a predetermined amount of time to compute the rearrangement of the existing connections, as described in reference to act 150 of the rearrangeable scheduling method 140 of FIG. 5A. In another example of this embodiment, one of the existing connections is dropped to set up the new connection, so that the computation time to set up the new connection by rearrangement of the existing connections is reduced.
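
One way to realize the timeout policy is sketched below; rearrangement_step() is a hypothetical helper that performs one bounded step of the rearrangement computation and returns True once the new connection is set up.

import time

# Sketch of the drop-on-timeout policy. rearrangement_step() is a
# hypothetical helper performing one bounded step of the rearrangement.
def set_up_with_deadline(request, rearrangement_step, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        if rearrangement_step(request):
            return True        # set up by rearrangement within the budget
    return False               # dropped (disconnected permanently)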
For example, in one embodiment a method of the type described above is modified as follows when the number of output switches r2 is less than or equal to four. Specifically, a three-stage network is operated in strictly nonblocking manner when the multicast connection is fanned out only once in the input stage, with m middle stage switches, where

m >= ⌊√r2⌋ * MIN(n1, n2) when ⌊√r2⌋ is > 1 and odd, or when ⌊√r2⌋ = 2,

m >= (⌊√r2⌋ - 1) * MIN(n1, n2) when ⌊√r2⌋ is > 2 and even, and

m >= n1 + n2 - 1 when ⌊√r2⌋ = 1. So when r2 is less than or equal to eight, a three-stage network is operated in strictly nonblocking manner for m <= 2 * n.
For example, in another embodiment, a method of the type described above is modified to set up a multirate multi-stage network as follows. Specifically, a multirate connection can be specified as a type of multicast connection. In a multicast connection an inlet link transmits to multiple outlet links, whereas in a multirate connection multiple inlet links transmit to a single outlet link when the rate of data transfer of all the paths in use meets the requirements of the multirate connection request. In such a case a multirate connection can be set up (in a method that works backwards from the output stage to the input stage), with fan-in (instead of fan-out) of not more than two in the output stage and arbitrary fan-ins in the input stages and middle stages. And a three-stage multirate network is operated in rearrangeably nonblocking manner with the exact same requirements on the number of middle stage switches as described above for certain embodiments.
Numerous such modifications and adaptations are encompassed by the attached
claims.