Patent 2321015 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2321015
(54) English Title: METHOD AND APPARATUS FOR DETERMINING A BIT RATE NEED PARAMETER IN A STATISTICAL MULTIPLEXER
(54) French Title: METHODE ET APPAREIL DE DETERMINATION D'UN PARAMETRE DE DEBIT BINAIRE NECESSAIRE DANS UN MULTIPLEXEUR STATISTIQUE
Status: Deemed Abandoned and Beyond the Period of Reinstatement - Pending Response to Notice of Disregarded Communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04N 7/08 (2006.01)
  • H04J 3/16 (2006.01)
  • H04N 7/015 (2006.01)
  • H04N 19/61 (2014.01)
(72) Inventors :
  • LIU, VINCENT (United States of America)
  • WU, SIU-WAI (United States of America)
  • ON, HANSON (United States of America)
  • NEMIROFF, ROBERT S. (United States of America)
  • CASTELOES, MICHAEL (United States of America)
  • CHEN, JING YANG (United States of America)
  • LAM, REBECCA (United States of America)
(73) Owners :
  • GENERAL INSTRUMENT CORPORATION
(71) Applicants :
  • GENERAL INSTRUMENT CORPORATION (United States of America)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2000-09-27
(41) Open to Public Inspection: 2002-03-27
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract


A statistical multiplexer for coding and
multiplexing multiple channels of digital television
data, or multiple panels of HDTV digital television
data. A bit rate need parameter is determined for each
encoder in a stat mux group by scaling the complexities
of previous pictures of the same and different picture
types. Scaling factors based on an activity level,
motion estimation score, and number of pictures of a
certain type in a GOP, may be used. Moreover, the
scaling factors may be bounded based on a linear or
non-linear operator to prevent large variations in the
factors. An encoding bit rate is allocated to each
channel based on its need parameter.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
1. A method for allocating bits in a statistical
multiplexer for coding a plurality of channels of video
data sources comprising respective video pictures,
comprising the steps of:
for each channel, obtaining a bit rate need
parameter for a current picture, which has an
associated picture type, by scaling at least one
complexity measure that is based on at least one
previous picture of the same type, and by scaling at
least one complexity measure that is based on at least
one previous picture of a different type; and
allocating an encoding bit rate for coding the
current picture of each channel according to the bit
rate need parameter thereof.
2. The method of claim 1, wherein:
for at least one of the channels, when the current
picture is an I-picture, the bit rate need parameter
thereof is obtained by scaling a complexity measure of
a previous I-picture, an average complexity measure for
a plurality of previous P-pictures, and an average
complexity measure for a plurality of previous B-
pictures.
3. The method of claim 2, wherein:

for at least one of the channels, the complexity
measure of the previous I-picture is scaled according
to a ratio of: (a) an activity level of the current
picture to (b) an average activity level of a plurality
of previous pictures.
4. The method of claim 2, wherein:
the average complexity measure of the plurality of
previous P-pictures is scaled according to a number of
P-pictures in a current group of pictures (GOP).
5. The method of claim 2, wherein:
the average complexity measure of the plurality of
previous B-pictures is scaled according to a number of
B-pictures in a current group of pictures (GOP).
6. The method of claim 1, wherein:
for at least one of the channels, when the current
picture is a P-picture, the bit rate need parameter
thereof is obtained by scaling a complexity measure of
an I-picture in a current group of pictures (GOP), an
average complexity measure for a plurality of previous
P-pictures, and an average complexity measure for a
plurality of previous B-pictures.
7. The method of claim 6, wherein:
the complexity measure of the I-picture is scaled
according to a ratio of: (a) an activity level of the

current picture to (b) an activity level of the I-
picture.
8. The method of claim 6, wherein:
the average complexity measure for the plurality
of previous P-pictures is scaled according to a ratio
of: (a) a motion estimation score of the current
picture to (b) an average motion estimation score for
the plurality of previous P-pictures.
9. The method of claim 6, wherein:
the average complexity measure for the plurality
of previous P-pictures is scaled according to a number
of P-pictures in the current GOP.
10. The method of claim 6, wherein:
the average complexity measure for the plurality
of previous B-pictures is scaled according to a number
of B-pictures in the current GOP.
11. The method of claim 1, wherein:
for at least one of the channels, when the current
picture is a B-picture, the bit rate need parameter
thereof is obtained by scaling a complexity measure of
an I-picture in a current group of pictures (GOP), an
average complexity measure for a plurality of previous
P-pictures, and an average complexity measure for a
plurality of previous B-pictures.

12. The method of claim 11, wherein:
the complexity measure of the I-picture is scaled
according to a ratio of: (a) an activity level of the
current picture to (b) an activity level of the I-
picture.
13. The method of claim 11, wherein:
the average complexity measure for the plurality
of previous P-pictures is scaled according to a number
of P-pictures in the current GOP.
14. The method of claim 11, wherein:
the average complexity measure for the plurality
of previous B-pictures is scaled according to a ratio
of: (a) a motion estimation score of the current
picture to (b) an average motion estimation score for
the plurality of previous B-pictures.
15. The method of claim 11, wherein:
the average complexity measure for the plurality
of previous B-pictures is scaled according to a number
of B-pictures in the current GOP.
16. The method of claim 1, wherein:
for at least one of the channels, when the current
picture is a P-picture in a scene change group of
pictures (GOP), the bit rate need parameter thereof is

obtained by scaling an average complexity measure for a
plurality of previous P-pictures, and an average
complexity measure for a plurality of previous B-
pictures, by a ratio of (a) a motion estimation score
of the current picture to (b) an average motion
estimation score for the plurality of previous P-
pictures.
17. The method of claim 1, wherein:
for at least one of the channels, when the current
picture is a B-picture in a scene change group of
pictures (GOP), the bit rate need parameter thereof is
obtained by scaling an average complexity measure for a
plurality of previous P-pictures, and an average
complexity measure for a plurality of previous B-
pictures, by a ratio of (a) a motion estimation score
of the current picture to (b) an average motion
estimation score for the plurality of previous B-
pictures.
18. The method of claim 1, comprising the further
step of:
for at least one of the channels, bounding at least
one of the scaled complexity measures according to at
least one of a non-linear and linear function.
19. The method of claim 1, comprising the further
step of:

for at least one of the channels, temporally
filtering the need parameters for a channel to reduce
picture-to-picture fluctuations thereof.
20. The method of claim 1, comprising the further
step of:
for at least one of the channels, providing an
adjustment factor for the need parameter for at least
one of fading, low brightness level, low amount of
movement, and fullness level of a buffer that receives
the pictures after coding thereof.
21. The method of claim 20, wherein:
the adjustment factor increases the need parameter
for at least one of fading, low brightness level, and
low amount of movement.
22. The method of claim 1, wherein:
at least one of the channels comprises high-
definition television (HDTV) data;
the current picture of the HDTV channel is sub-
divided into panels that are encoded in parallel at
respective encoders;
a bit rate need parameter is obtained for each
panel; and
a bit rate need parameter of the HDTV channel is
obtained by summing the bit rate need parameters of
each panel thereof.

23. The method of claim 1, wherein:
for at least one of the channels, the bit rate need
parameter is obtained, at least in part, by scaling a
weighted average of complexity measures for a plurality
of previous pictures.
24. An apparatus for allocating bits in a
statistical multiplexer for coding a plurality of
channels of video data sources comprising respective
video pictures, comprising:
means for obtaining, for each channel, a bit rate
need parameter for a current picture, which has an
associated picture type, by scaling at least one
complexity measure that is based on at least one
previous picture of the same type, and by scaling at
least one complexity measure that is based on at least
one previous picture of a different type; and
means for allocating an encoding bit rate for
coding the current picture of each channel according to
the bit rate need parameter thereof.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHOD AND APPARATUS FOR DETERMINING A BIT RATE NEED
PARAMETER IN A STATISTICAL MULTIPLEXER
BACKGROUND OF THE INVENTION
The present invention relates to a statistical
multiplexer for coding and multiplexing multiple
channels of digital television data.
Digital television has become increasingly popular
due to the high quality video image it provides, along
with informational and entertainment features, such as
pay-per-view, electronic program guides, video-on-
demand, stock, weather and sports information, Internet
hyperlinks, and so forth. Such television data can be
communicated to a user, for example, via a broadband
communication network, such as a satellite or cable
television network, or via a computer network.
However, due to the bandwidth limitations of the
communication channel, it is necessary to adjust a bit
rate of the digital video programs that are encoded and
multiplexed for transmission in a compressed bit
stream. A goal of such bit rate adjustment is to meet
the constraint on the total bit rate of the multiplexed
stream, while also maintaining a satisfactory video
quality for each program.
Accordingly, various types of statistical
multiplexers have been developed that evaluate

statistical information of the source video that is
being encoded, and allocate bits for coding the
different video channels accordingly. For example,
video channels that have hard-to-compress video, such
as a fast motion scene, can be allocated more bits,
while channels with relatively easy to compress scenes,
such as scenes with little motion, can be allocated
fewer bits.
In MPEG-2 digital video systems, the complexity of
a video frame is measured by the product of the
quantization level (QL) used to encode that frame and
the number of bits used for coding the frame (R). This
means the complexity of a frame is not known until it
has been encoded. As a result, the complexity
information always lags behind the actual encoding
process, which requires the buffering of a number of
frames prior to encoding, thereby adding expense and
complexity. This kind of look-behind information may
be avoided by using some pre-encoding statistics about
the video, such as intra-frame activity, or motion
estimation (ME) scores as a substitute for the
traditional complexity measure. However, the
relationship between the pre-encoding statistics of a
video frame and the complexity of that frame may not be
direct, and sometimes the relationship may change due
to the changing subject matter of the source video.
Accordingly, there is a need for an improved
statistical multiplexing system. Such a system should

employ a number of individual encoders that encode
data from a number of incoming channels of source video
data. This data may be obtained from a storage media,
live feed, or the like.
The system should dynamically allocate bits to the
individual encoders to encode frames of video data from
the channels.
The system should use pre-encoding statistics of
the source video frames that are closely related to the
complexity of the frames, and which account for
changing content in the source video.
The system should be usable with essentially any
type of video data, including high-definition (HD) and
standard-definition (SD) television (TV).
The present invention provides a system having the
above and other advantages.

SUMMARY OF THE INVENTION
The present invention relates to a statistical
multiplexer for coding and multiplexing multiple
channels of digital television data.
Bandwidth is dynamically allocated among a number
of variable bit rate (VBR) video services that are
multiplexed to form a fixed bit rate transport bit
stream.
Since a video service's need for bandwidth varies
with the amount of information in the video content, it
is more efficient for bandwidth usage to allocate the
total available bandwidth dynamically among the
services according to the need of the individual
service instead of using a fixed allocation.
The present invention achieves this goal by
providing a number of advantageous features, including:
1. Using the coding complexity of previous
frames (in encoding order) to estimate a need parameter
for a current frame.
2. Using a relative change in the intra-frame
activity of the picture, calculated at least one frame
time ahead of encoding, to adjust the need parameter of
I-frames.
3. Using a relative change in a motion
estimation score of the picture, calculated one frame
time ahead of encoding, to adjust the need parameter of
P- and B-frames.

4. Using scene change information to adjust the
need parameter.
5. Boosting the need parameter for scenes where
artifacts can be more visible, such as low spatial
complexity or slow motion scenes.
6. Boosting the need parameter when the number
of bits generated in the previous frames exceeds the
available bit budget.
The stat mux system includes three distinct parts:
1) Visual characteristics and complexity
information are collected for individual video channels,
and a need parameter is generated for each video
channel to indicate how difficult it is to compress
that channel. This process is repeated once per frame
and it is done by the individual single-channel
encoders (which could be SD and/or HD).
2) The most up-to-date need parameters from all
the video channels are collected by a quantization
level processor (QLP), or rate control processor. The
rate control processor assigns an encoding bandwidth to
each video channel in the form of an encoding bit rate.
Each channel receives a different encoding bit rate
based on its need parameter in relation to the needs of
all the other channels. The encoding bit rate is used
to control the video encoding of individual channels.
The rate control processor also assigns transmission
bit rates to the channels, which determine how many
bits are sent by each video channel to a decoder.

3) The single-channel encoder uses the encoding
bit rate it is given to perform video compression. The
primary task here is a rate control function, which
involves using the encoding bit rate and the relative
complexities of different frame-types (i.e., I, B and P
types) to assign a target bit budget for each frame it
is about to encode.
A particular method for allocating bits in a
statistical multiplexer for coding a plurality of
channels of video data sources comprising respective
video pictures, includes the step of: for each
channel, obtaining a bit rate need parameter for a
current picture, which has an associated picture type,
by scaling at least one complexity measure that is
based on at least one previous picture of the same
type, and by scaling at least one complexity measure
that is based on at least one previous picture of a
different type. An encoding bit rate is allocated for
coding the current picture of each channel according to
the bit rate need parameter thereof.
Note that the pictures can be, e.g., frames or
fields.
A corresponding apparatus is also presented.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a statistically multiplexed
multi-channel encoding system in accordance with the
present invention.
FIG. 2 illustrates an encoder for standard
definition television data in accordance with the
present invention.
FIG. 3 illustrates an encoder for high-definition
television data in accordance with the present
invention.
FIG. 4 illustrates a method for obtaining a need
parameter for an I-picture in accordance with the
present invention.
FIG. 5 illustrates a method for obtaining a need
parameter for a P-picture in accordance with the
present invention.
FIG. 6 illustrates a method for obtaining a need
parameter for a B-picture in accordance with the
present invention.

DETAILED DESCRIPTION OF THE INVENTION
The present invention relates to a statistical
multiplexer for coding and multiplexing multiple
channels of digital television data.
FIG. 1 illustrates a statistically multiplexed
multi-channel encoding system in accordance with the
present invention.
The encoding system 100 includes L buffer/need
parameter calculation functions 102, 104, ..., 106 that
receive corresponding uncompressed source video inputs.
The functions 102, 104, ..., 106 provide the need
parameter data to a rate control processor 125, which
in turn provides a corresponding encoding bit rate
allocation to each of the encoders 112, 114, ..., 116.
The encoders may provide feedback information to the
rate control processor regarding the actual encoding
bit rate. The encoded data is provided to a mux 120 to
provide a multiplexed bitstream, then to a transport
packet buffer 130, and to a transmitter 135 for
transmission across a channel.
The rate control processor 125 may receive a
fullness signal from the transport packet buffer 130.
At a decoding side 180, a receiver 182, decoder
buffer 184, demux 186, and decoder 188 are provided to
output a decoded video signal, e.g., for display on a
television.
FIG. 2 illustrates an encoder for standard

definition television.
The encoder 112, which is an example one of the
encoders 112, 114, ..., 116 of FIG. 1, encodes a single
channel of input data, and includes a compressor 210
that performs conventional data compression, including
motion compensation (for P- and B-frames), discrete
cosine transform (DCT) and quantization. A video
first-in, first-out (FIFO) buffer 230 temporarily
stores the compressed data, and a packet processor 250
forms packets of the compressed data with appropriate
header information, e.g., according to the MPEG-2 or
other video standard.
FIG. 3 illustrates an encoder for high-definition
television.
The encoder 300 encodes a single channel of input
data. However, a panel splitter 305 divides up a video
frame such that different sub-regions, or panels, of
the frame are routed to respective different
compressors 310-324. Eight compressors are shown as an
example only. Typically, the same sub-region of
successive frames is assigned to the same compressor.
A master compression controller (MCC) 370 controls
the compression of the data at each compressor via a
peripheral component interconnect (PCI) bus 325, and
the compressed data is output to a video FIFO 330 for
temporary storage. The compressed data is formed into
packets for transport at a packet processor 350.
An encoding bit rate need parameter for the HDTV

channel is determined by the MCC 370 by summing a need
parameter for each of the panel compressors. Other
statistical information, such as motion estimation
scores and the like, are also summed from each
compressor.
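As a rough illustration only, the per-panel summation just described might be sketched in C as follows; the structure layout and the function and field names are assumptions for illustration and are not taken from the encoder 300 itself:

#define NUM_PANELS 8

/* Hypothetical per-panel statistics reported to the MCC. */
typedef struct {
    double need_param;   /* bit rate need parameter for this panel */
    double me_score;     /* motion estimation score for this panel */
} panel_stats;

/* Sum the per-panel values to form the channel-level statistics,
   as described for the MCC 370 above. */
void sum_panel_stats(const panel_stats panels[NUM_PANELS],
                     double *channel_need, double *channel_me)
{
    int i;
    *channel_need = 0.0;
    *channel_me = 0.0;
    for (i = 0; i < NUM_PANELS; i++) {
        *channel_need += panels[i].need_param;
        *channel_me   += panels[i].me_score;
    }
}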
Note that it is possible to combine both SDTV and
HDTV encoders in a single stat mux group. In this
case, the encoder 300 is an example one of the encoders
112, 114,..., 116 of FIG. 1. For example, one HDTV
encoder may be combined with two or three SDTV encoders
in a stat mux group.
A key part of a statistically multiplexed multi-
channel encoding system of the invention is the
calculation of the need parameter.
The visual characteristics and complexity
information regarding the source video are collected
and condensed into a single parameter, which is
referred to as the "need parameter". A need parameter
is calculated for each video channel, and is updated
once per frame whenever a new video frame is processed
by the corresponding single-channel encoder 112, 114,
..., 116. Optionally, the need parameter can be updated
more often, such as multiple times per frame.
Moreover, for field-picture mode, the need parameter
can be updated once per field.
In accordance with the invention, a need parameter
is determined for each frame by scaling complexity

information from previous frames of the same and
different picture types. For example, the complexity
(QL x R) for a past encoded I-frame is scaled
(multiplied) by a scale factor:
(intra-frame activity of current frame/intra-frame
activity of past I-frame).
This provides an estimate of the current frame's
complexity if it is to be intra-frame coded.
Advantageously, this avoids the need to know the exact
relationship between the pre-encoding statistics (e.g.,
the intra-frame activity) and the true complexity of
the current frame, which can only be determined
following encoding (post-encoding). This scaled I-
frame complexity, (Act(n)/Act(m)) x XI(m), is found in
equations (2), (3), (8) and (9), discussed below.
Frame (n) refers to the current frame, and (m) refers
to the last encoded I-frame; Act( )'s are the intra-
frame activities, and XI(m) is the complexity of the
last encoded I-frame.
Similarly, the P-frame coded complexity for the
current picture may be estimated from:
(the average complexity of past P-frame encoded
pictures) x (ME score of current frame/ average ME score
of past P-frames).
The above scale factor works if the motion
estimation performed on the current frame is of the P-
frame type, as in the 2nd term of equation (2).
To estimate the B-frame coded complexity for the

current picture, apply the scale factor:
(ME score of current frame/average ME score of
past B-frames)
to the average complexity of past B-frames, assuming
that a B-type motion estimation is performed on the
current picture. However if the current frame uses P-
type motion estimation, then an extra scaling factor of
kappa, K, is added to the above formula since the ME
scores of past B-frames are of the B-type:
[K x (MEP score of current frame/average MEB score
of past B-frames)]
This is exactly the case for the third additive
term in equation (2). However, from experiments for
many steady scenes, the above scale factor is close to
unity in most cases. Thus equation (2) is simplified
to equation (13). The non-linear operator ◊ and the
linear operation "< >", which denotes a bounded
value, can optionally be used to prevent large swings
in the scale factors. In the case of a scene change,
the factor K x (ME score of current frame/average ME
score of past B-frames) does not remain close to unity.
It is substituted with (ME score of current
frame/average ME score of past P-frames) instead, so
eqn. (2) is simplified into eqn. (8) in the case of a
scene change. With the non-linear and linear operators
added, eqn. (8) becomes eqn. (16).
The above discussion assumes that the current
frame is to be coded as P-frame, with P-type motion

estimation. Thus, eqn. (2) and its derivatives, eqns.
(8), (13) and (16) apply.
If the current frame is to be coded as B-frame,
then the ME score for the current frame is of the B
type. To estimate the P-frame complexity for the
current picture, use the scale factor:
[(1/K) x (MEB(n)/average MEP score of past P-
frames)]
multiplied with the average complexity of past P-
frames, as seen in the second additive term of eqn.
(3). Again, for the case of no scene change, the above
scale factor may be replaced with unity, and eqn. (3)
simplifies to eqn. (14). In the case of a scene
change, the above factor is replaced with (ME score of
current frame/average ME score of past B-frames), which
leads to eqn. (9) and its operator-bounded equivalent
in eqn. (15).
To summarize, to estimate the complexities for the
current frame before encoding it, use scaled versions
of the complexities of past encoded frames. The
various scaled complexities are combined through the
use of a weighted summation to form a basis for the
need parameter. The weights NP and NB are,
respectively, the number of P-frames and the number of
B-frames in a GOP.
After the weighted sum is calculated, several
subjective visual quality adjustments can optionally be
used, including FADEadj to account for the presence of

fading in the source material, APLadj to account for low
brightness level (average pixel level) in the picture,
and Badj to account for the encoder's buffer fullness.
One more visual quality adjustment, Low motionadj, may
be added which adjusts the need parameter for a low
amount of movement in the video scene. Thus,
adjustments can be made to the need parameter when
these special cases are detected. The optimal amount
of adjustment can depend on the implementation. The
detection of such scenes may be based on the scene
scores computed for scene change detection, which is
the sum of absolute differences between two consecutive
fields of the same parity (odd field to odd field, or
even field to even field). A similar technique can be
used based on the differences between two consecutive
frames in a progressive video application.
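For illustration only, the field-parity scene score just described might be sketched in C as follows; the frame layout (interleaved fields in a single luminance array) and the function name are assumptions:

#include <stdlib.h>

/* Sum of absolute differences between two consecutive fields of the
   same parity.  The luminance frames are assumed stored row by row,
   with even rows forming one field and odd rows the other. */
long scene_score_same_parity(const unsigned char *prev_frame,
                             const unsigned char *cur_frame,
                             int width, int height, int parity)
{
    long score = 0;
    int row, col;
    for (row = parity; row < height; row += 2)     /* same-parity rows only */
        for (col = 0; col < width; col++)
            score += labs((long)cur_frame[row * width + col] -
                          (long)prev_frame[row * width + col]);
    return score;
}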
The reasoning behind these subjective visual
quality adjustments is that for each of these scenes:
fading, low luminance level or low amount of movement,
the eyes are more sensitive to any compression
artifacts. As a result, it is desired to adjust the
need parameter upward in an attempt to gain more
encoding bandwidth allocation from the rate control
processor 125.
Discussion
In the following description of a stat mux, each
video service is assumed to provide a picture
complexity measure, such as an ME score or activity

level, to the rate control processor 125, which handles
the tasks of allocating bandwidth for each television
service provider (TSP), e.g., channel, and modulating
the transmission rates for each channel. In an encoder
with look ahead capability, the ME score can be
replaced by other measurements such as the actual
number of bits coded under a constant quantization
level (QL). See the section below entitled "Discussion
of ME scores".
For the high-definition encoder that processes
multiple panels of a frame in parallel, the encoders
112, 114, ..., 116 collect the ME scores from all the
panels and compute the sum along with other parameters
such as average pixel level (APL), picture resolution,
frame rate, frame type (I, B or P) and total intra-
frame activity. Each encoder also keeps a record of the sizes
and average QL for past frames. Based on the
information available, plus the look ahead parameters
from scene change, fade and film detection, the MCC 370
can derive a need parameter for that video channel.
As the rate control processor 125 receives an
updated need parameter from a buffer/need parameter
calculation function 102, 104, ..., 106, it reallocates
the bandwidths for all the video services based on the
latest information. The bandwidth allocation is sent
back to each encoder 112, 114, ..., 116 in the form of an
encoding bit rate. Moreover, the rate control
processor 125 uses the bandwidth allocation to compute

bit budgets for encoding. It keeps an approximate
video buffering verifier (VBV) model, such as is known
from the MPEG standard, to ensure that each frame is
encoded within acceptable size limits.
Note that the VBV model is only approximate
because the actual transmission rate changes that occur
at the decode time of a frame cannot be precisely
modeled in advance, at the time of encoding. The rate
control processor 125 keeps a bit accurate model of the
decoder buffer 184, and if it is given the sizes of
each encoded frame along with the decoding time stamp
(DTS), the min. and max. limits on the transmission
rate can be calculated and used before a transmission
rate change is issued. As known from the MPEG
standard, a DTS is a field that is present in a PES
packet header that indicates the time that an access
unit (e.g., picture) is decoded in a system target
decoder.
Since all the video services need not be frame-
synchronized, the encoding bit rates and transmission
rates are updated as frequently as the rate control
processor can handle.
Calculation of Need Parameter
In accordance with the invention, the current
frame activity, average frame activity, current frame
ME scores, and average frame ME scores, are preferably
directly applied in the calculation of the need
parameter. Optionally, a table look-up may be used.

The need parameter calculation functions 102, 104,
..., 106 calculate the need parameter according to the
current picture type in the beginning of a new frame,
and pass the need parameter to the rate control
processor 125 no later than, e.g., two quantization
level/bit rate (QL/BR) cycles before the start of the
slice encoding at the encoder 112, 114, ..., 116. This
lead time ensures the rate control processor 125 has
enough processing time for bandwidth allocation.
Let NPI,P,B(n) (see eqns. (1), (2), (3)) be the need
parameter for the current frame to be coded. The
subscript denotes the possible current frame type,
e.g., I, P or B. NPI(n) is calculated by the need
parameter calculator when the current frame to be
encoded is an I-frame, and is provided to the rate
control processor 125 to determine a bit rate
allocation for the corresponding encoder. For example,
bits can be allocated for coding each current frame
using the ratio of each frame's need parameter to the
sum of all need parameters of all current frames in the
different channels, where the ratio is multiplied by
the total available channel bandwidth.
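As a minimal sketch of this proportional allocation (the function and variable names are assumptions; the full allocation procedure, including the min. and max. limits, is given in the C-language syntax of section 1.1 below):

/* Give each channel a share of the total bandwidth equal to its need
   parameter divided by the sum of all need parameters. */
void allocate_by_need(const double need_param[], double bit_rate[],
                      int num_channels, double total_bandwidth)
{
    double tot = 0.0;
    int i;
    for (i = 0; i < num_channels; i++)
        tot += need_param[i];
    for (i = 0; i < num_channels; i++)
        bit_rate[i] = (tot != 0.0) ? total_bandwidth * need_param[i] / tot
                                   : total_bandwidth / num_channels;
}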
Similarly, NPP(n) and NPB(n) are calculated by an
associated one of the need parameter calculators when
the current frame to be encoded is a P- or B-frame,
respectively.
Fadj is a frame rate adjustment multiplier, such

that Fadj = 1/frame-rate, where the frame-rate may be, e.g.,
24 or 30 frames/sec. The terms, {·}, in eqns. (1), (2)
and (3) are for stat mux performance tuning, where
APLadj is a function of average pixel level (e.g., the
average luminance for the current frame) and Badj is a
function of encoder buffer level. The APL adjustment
factor can be set to: APLadj = (3 + x)/(1 + x), where x
= (average pixel level of current frame)/(pixel level
of a dark scene). The common dark scenes are found to
have a pixel level around twenty-six.
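For illustration, the APL adjustment just described can be sketched in C as follows; the function name is an assumption, while the (3 + x)/(1 + x) form and the dark-scene pixel level of about twenty-six are the values given above:

/* APL adjustment factor: APLadj = (3 + x)/(1 + x), where x is the
   ratio of the current frame's average pixel level to the pixel
   level of a dark scene (about 26).  A dark frame (x near 1) is
   boosted by roughly a factor of two; a bright frame (large x)
   approaches a factor of one. */
double apl_adjustment(double avg_pixel_level)
{
    const double dark_scene_level = 26.0;
    double x = avg_pixel_level / dark_scene_level;
    return (3.0 + x) / (1.0 + x);
}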
For scenes with a low amount of movement, a need
parameter adjustment factor of 1.5 may be used. For
scenes with fades, an adjustment factor of _.0 may be
used.
In addition, the terms, [·], in eqns. (2) and (3)
are taken into account only when the frame to be coded
is one of the first three frames immediately after a
scene change I-frame. They should be treated as unity
otherwise.
Furthermore, let XI(m) be the complexity for the
current I-frame (i.e., the I-frame of the current group
of pictures - GOP - to be encoded). That is, when the
current frame is not an I-frame, XI(m) is the complexity
of the I-frame in the GOP to which the current frame
belongs. XI(m-1) is the complexity of the I-frame from
the previous group of pictures (GOP). When the current
frame to be encoded is an I-frame, its complexity of

course is not available, so XI(m-1) is used. However,
when the current frame to be encoded is a P- or B-
frame, XI(m) is available since an I-frame is the first
frame of a GOP, so an I-frame will have already been
encoded by the time a P- or B-frame is to be encoded.
Generally, I-frames are relatively far apart
compared to P- or B-frames, so the complexity for a
current I-frame is obtained by scaling only the most
recent I-frame. However, conceivably an average of two
or more previous I-frames can be used to provide the
complexity for a current I-frame, e.g., for short GOP
lengths or relatively slowly-changing video scenes.
Also, when an average activity level or ME score is
used in calculating the need parameter, e.g., for P-
and B-frames, this can be a weighted average, where the
more recent frames are weighted higher.
NP and NB are the nominal numbers of P- and B-
frames to be coded in the current GOP (i.e., the total
number of frames of that type in a GOP). XP (eqn.
(4)) and XB (eqn. (5)) are the average complexities for a
number of previous P- and B-frames, respectively, which
can be in the same and/or different GOP as the current
frame. The overbar "-" denotes an average. Eqn. (6)
defines the complexity for the i-th coded frame with
respect to the frame type: I, P, or B. QI,P,B(i) is
the average quantization scale for a frame (e.g., for

SDTV). When multiple panels of a frame are encoded in
parallel (e.g., for HDTV), the average quantization
scale for a frame is given by dividing the sum of the
panel quantization scales of the i-th frame over the
number of macroblocks coded in the i-th frame.
BCI,P,B(i) is the bit count over the i-th frame. Each
encoder performs a bit count on a frame while the frame
is being encoded.
In eqn. (1), Act(n) is the current frame activity,
and Act (eqn. (7)) is the estimated mean activity over
a number of previous frames. The first term in eqn.
(1), Act(n)/Act, is to account for the instant change
in current frame activity, and the second term, in
parentheses, represents the estimated long-term
complexity. During a steady scene or still frames, the
first term should have a value very close to unity, in
which case the value of the need parameter NPI(n) is
dominated by the second term.
Furthermore, during a scene change, the first term
should reflect how much the new scene (after the scene
change) deviates from the old scene (before the scene
change). Therefore, the need parameter is properly
scaled by the first term, Act(n) / Act .
For brevity, the composite term
ADJ = Fadj · {APLadj} · {Badj} · FADEadj is used.

$$ NP_I(n) = \frac{Act(n)}{\overline{Act}} \left( X_I(m-1) + N_P \overline{X}_P + N_B \overline{X}_B \right) \cdot ADJ \qquad (1) $$

$$ NP_P(n) = \left\{ \frac{Act(n)}{Act(m)} X_I(m) + \frac{ME_P(n)}{\overline{ME}_P} N_P \overline{X}_P + \left[ \kappa \frac{ME_P(n)}{\overline{ME}_B} \right] N_B \overline{X}_B \right\} \cdot ADJ \qquad (2) $$

$$ NP_B(n) = \left\{ \frac{Act(n)}{Act(m)} X_I(m) + \left[ \frac{1}{\kappa} \frac{ME_B(n)}{\overline{ME}_P} \right] N_P \overline{X}_P + \frac{ME_B(n)}{\overline{ME}_B} N_B \overline{X}_B \right\} \cdot ADJ \qquad (3) $$

$$ \overline{X}_P = \frac{1}{N} \sum_{i=n-N}^{n-1} X_P(i); \quad N = 4, \ n \text{ denotes current frame} \qquad (4) $$

$$ \overline{X}_B = \frac{1}{N} \sum_{i=n-N}^{n-1} X_B(i); \quad N = 4, \ n \text{ denotes current frame} \qquad (5) $$

$$ X_{I,P,B}(i) = \overline{Q}_{I,P,B}(i) \cdot BC_{I,P,B}(i) \qquad (6) $$

$$ \overline{Act} = \frac{1}{N} \sum_{i=n-N}^{n-1} Act(i); \quad n \text{ denotes current frame} \qquad (7) $$
In eqns. (2) and (3), the multiplier,
MEP,B(n)/MEP,B, is used for an analogous reason as the
first term in eqn. (1), Act(n)/Act. Namely, the
current frame ME score, MEP,B(n), accounts for an
instant change in motion estimation. MEP,B is the
estimated mean of the ME score of the P- or B-frames,
respectively. The ratio, Act(n)/Act(m), multiplying XI(m),
is to weight the influence of the complexity of the I-
frame by the similarity in activity measurements as the

slice encoding moves further into a GOP. Act(n) is the
activity measurement of the current P- or B-frame, and
Act(m) is the activity measurement of the I-frame within
the current GOP.
During a steady scene or still frames, the
multipliers, MEP,B(n)/MEP,B and Act(n)/Act(m), in (2) and (3)
have values close to unity. Then, given the terms in {·}
equal to unity, the adjustment of the need parameter is
dominated either by NP·XP or NB·XB, depending on
whether the current frame to be coded is a P- or B-
frame, respectively. During a scene change, and when
the current frame is within the first three frames
immediately after an I-frame, Act(n)/Act(m) should still
stay very close to unity. However, the terms in
brackets "[·]" are no longer unity. In (2), K·MEP(n) is
to estimate MEB(n) when the current frame to be coded is
a P-frame. In (3), (1/K)·MEB(n) is to estimate MEP(n) when
the current frame to be coded is a B-frame, where
K = MEB/MEP (the ratio of the mean ME scores). Thus, by
substituting K with MEB/MEP into (2) and (3), two new
need parameter equations, (8) and (9), are obtained for
the first three P/B-frames immediately after the scene
change (SC) I-frame. Once slice encoding passes the
first four frames of a GOP, the ratio, Act(n)/Act(m), may
vary depending on how close the current activity is to
the I-frame activity. The influence of the I-frame
complexity is thereby adjusted.

$$ NP_P(n)_{SC} = \left\{ \frac{Act(n)}{Act(m)} X_I(m) + \frac{ME_P(n)}{\overline{ME}_P} \left( N_P \overline{X}_P + N_B \overline{X}_B \right) \right\} \cdot ADJ \qquad (8) $$

$$ NP_B(n)_{SC} = \left\{ \frac{Act(n)}{Act(m)} X_I(m) + \frac{ME_B(n)}{\overline{ME}_B} \left( N_P \overline{X}_P + N_B \overline{X}_B \right) \right\} \cdot ADJ \qquad (9) $$
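As an informal illustration, the steady-scene P-frame case of eqn. (2), with the bracketed term treated as unity (the form of eqn. (13) before the bounding operators are applied), might be computed as in the following C sketch; the function and argument names are assumptions, not part of the invention's description:

/* Need parameter for a current P-frame in a steady scene:
   (Act(n)/Act(m))*XI(m) + (MEP(n)/mean MEP)*NP*XP + NB*XB, all
   multiplied by the composite adjustment ADJ. */
double need_param_p_steady(double act_n,     /* activity of current frame         */
                           double act_m,     /* activity of I-frame in this GOP   */
                           double xi_m,      /* complexity of that I-frame        */
                           double me_p_n,    /* ME score of current frame         */
                           double me_p_avg,  /* average ME score, past P-frames   */
                           double xp_avg,    /* average complexity, past P-frames */
                           double xb_avg,    /* average complexity, past B-frames */
                           int np, int nb,   /* P- and B-frames per GOP           */
                           double adj)       /* composite ADJ adjustment factor   */
{
    double i_term = (act_n / act_m) * xi_m;
    double p_term = (me_p_n / me_p_avg) * np * xp_avg;
    double b_term = nb * xb_avg;
    return (i_term + p_term + b_term) * adj;
}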
It is desirable to avoid rapid changes in the need
parameter. The dynamic range of the ME score is quite
large. The ME score is zero during still frames, and
in the low 200,000's for a 1920x1080 interlaced (I)
pixel picture with minimum motion. The large dynamic
range can make the need parameter described in eqns.
(2), (3), (8) and (9) unstable. First, if the ME score
stays very low, e.g., approximately one, for a period
of time, then a small variation in the instant ME score
can cause undesirable large swings in the need
parameter. Second, the need parameter may become
infinite as the picture transitions out of the still
frames, i.e., the mean MEP,B is zero while MEP,B(n) > 0, so the
ratio MEP,B(n)/MEP,B goes to infinity.
Similar problems can be found in eqn. (1), as the
picture transitions out of the flat field, since the
frame activity is zero for flat field. A flat field is
a video frame that shows a blank screen with a
"flat"(i.e., constant) luminance level.
However, since the other terms in eqns. (1), (2),
(3), (8) and (9) are already bounded, the problem with
need parameter stability can be simplified to that of
finding the upper and lower bounds of the ratios. Let

us impose two constraints on the need parameter for
finding these bounds, such that NPI,P,B(n) remains finite at all
times, and large swings in the need parameter are also
prevented.
Without losing generality, eqn. (2) is used as an
example in the following paragraph to find the upper
and lower bounds for the ratio. The derivation of the
bounds for eqn. (2) is also applicable to eqns. (1),
(3) , (8) and (9) .
One solution is to impose a non-linear operator,
◊ (eqn. (10)), on the numerator and denominator:

$$ \Diamond \cdot X = \begin{cases} \lambda, & X \le \lambda \\ X, & X > \lambda \end{cases} \qquad (10) $$

Let λ be an arbitrary positive number. Then, the
ratio ◊·MEP(n)/◊·MEP is bounded as follows:

$$ 0 < \frac{\Diamond \cdot ME_P(n)}{\Diamond \cdot \overline{ME}_P} < \frac{\max(ME_P(n))}{\lambda} $$
In addition, choosing a large enough λ can reduce
large swings in the need parameter when both the
instant and the average ME scores are small in
magnitude. Therefore, the two constraints are met with
the additional non-linear operation as well.
Furthermore, one can impose an additional linear
operation, eqn. (11) (where the operator < > denotes a
bounded value), on the ratio ◊·MEP(n)/◊·MEP to obtain
better control over its upper and lower bounds such
that the ratio is upper bounded by a/c, and lower
bounded by c/a.

$$ \langle \Omega \rangle = \frac{a\,\Omega + c}{c\,\Omega + a} \qquad (11) $$
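A small C sketch of the two operators, as reconstructed above, is given below; the reading of eqn. (10) as a lower clamp at λ is inferred from the bound that follows it, so treat this as an interpretation rather than a definitive implementation, and the function names are assumptions:

/* Non-linear operator of eqn. (10): clamp a value from below at
   lambda, so that a ratio of two clamped values stays finite and
   well behaved when both ME scores are small. */
double diamond_op(double x, double lambda)
{
    return (x > lambda) ? x : lambda;
}

/* Linear operation of eqn. (11): map a ratio omega into the interval
   [c/a, a/c], e.g., a = 5 and c = 1 as suggested in the text. */
double bounded_op(double omega, double a, double c)
{
    return (a * omega + c) / (c * omega + a);
}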
For implementation purposes, one can start by
setting a = 5 and c = 1. Theoretically, the ratio
multiplying the complexity of the I-frame should be
very close to unity during the first four frames of a
scene change GOP. One can simplify (8) and (9) by
dropping the ratio. Given the non-linear and linear
operations, and simplification, (1), (2), (3), (8) and
(9) can be rewritten as (12), (13), (14), (15) and
(16), respectively.
$$ NP_I(n) = \frac{\Diamond \cdot Act(n)}{\Diamond \cdot \overline{Act}} \left( X_I(m-1) + N_P \overline{X}_P + N_B \overline{X}_B \right) \cdot ADJ \qquad (12) $$

$$ NP_P(n) = \left\{ \left\langle \frac{\Diamond \cdot Act(n)}{\Diamond \cdot Act(m)} \right\rangle X_I(m) + \left\langle \frac{\Diamond \cdot ME_P(n)}{\Diamond \cdot \overline{ME}_P} \right\rangle N_P \overline{X}_P + N_B \overline{X}_B \right\} \cdot ADJ \qquad (13) $$

$$ NP_B(n) = \left\{ \left\langle \frac{\Diamond \cdot Act(n)}{\Diamond \cdot Act(m)} \right\rangle X_I(m) + N_P \overline{X}_P + \left\langle \frac{\Diamond \cdot ME_B(n)}{\Diamond \cdot \overline{ME}_B} \right\rangle N_B \overline{X}_B \right\} \cdot ADJ \qquad (14) $$

$$ NP_B(n)_{SC} = \left\{ X_I(m) + \left\langle \frac{\Diamond \cdot ME_B(n)}{\Diamond \cdot \overline{ME}_B} \right\rangle \left( N_P \overline{X}_P + N_B \overline{X}_B \right) \right\} \cdot ADJ \qquad (15) $$

$$ NP_P(n)_{SC} = \left\{ X_I(m) + \left\langle \frac{\Diamond \cdot ME_P(n)}{\Diamond \cdot \overline{ME}_P} \right\rangle \left( N_P \overline{X}_P + N_B \overline{X}_B \right) \right\} \cdot ADJ \qquad (16) $$
λ in eqn. (10) is picture resolution-dependent.

Actual experiments are required to optimize the value.
However, half of the ME level picture resolution is
recommended as a starting point.
In practice, with certain scenes, e.g., those
having flashing lights, the need parameter oscillates
per frame with a large dynamic range. In this case, it
is more time critical to receive the additional
bandwidth immediately as opposed to giving away surplus
bandwidth. An adaptive temporal filtering, eqn. (17),
applied to the calculated need parameter helps to
minimize the large oscillations while still preserving
a quick response time for increasing needs in
bandwidth.
$$ NP'(n) = \begin{cases} 0.9 \, NP(n) + 0.1 \, NP(n-1), & NP(n) > NP(n-1) \\ 0.5 \, NP(n) + 0.5 \, NP(n-1), & NP(n) < NP(n-1) \end{cases} \qquad (17) $$
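In C, the filter of eqn. (17) might be sketched as follows (function name assumed):

/* Adaptive temporal filter of eqn. (17): respond quickly when the
   need parameter rises, but average heavily when it falls, to damp
   frame-to-frame oscillation. */
double filter_need_param(double np_cur, double np_prev)
{
    if (np_cur > np_prev)
        return 0.9 * np_cur + 0.1 * np_prev;
    else
        return 0.5 * np_cur + 0.5 * np_prev;
}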
As mentioned previously, one can always bring in
the two variables left out in the initial
implementation for further performance tuning, i.e.,
APLadj, the average pixel level adjustment, and Badj,
the buffer level adjustment.
Discussion of ME scores
Generally, any type of ME score may be used with
the present invention. One suitable ME score is the
sum of the absolute values of the pixel-by-pixel
differences between the current picture to be encoded
and a motion-compensated prediction frame. The
prediction frame could be constructed from a single
previously-encoded picture, as in the P-frame type of

motion estimation, or it could be constructed from two
previously encoded pictures as in the B-frame type of
motion estimation.
In a hierarchical motion estimation, where both
the current picture and reference picture are decimated
in size to produce successively smaller pictures for
each level of the hierarchy, an exhaustive motion
search is usually performed on the first level of the
hierarchy whose picture sizes are the smallest. The
best motion matches for every block on the first level
are then passed onto the next level, where the motion
search may center around the best matches. For each
level, best matches are found for every block and the
ME scores from those matches are summed up for the
entire frame. A hierarchical motion estimation may
have an arbitrary number of levels, and the ME score
from any one level may be used to represent the
complexity of the current picture. The ME score from
the last level of the hierarchy is usually preferred
since it gives a more accurate representation of the
difficulty in coding that picture. However, in many
implementations, the ME score from the last level may
not be available in time, so an earlier-level ME score
is used.
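For illustration only, the basic ME score described above (a sum of absolute pixel differences between the current picture and its motion-compensated prediction) might be sketched in C as follows; the flat-array picture layout and the function name are assumptions:

#include <stdlib.h>

/* ME score as the sum of absolute pixel-by-pixel differences between
   the current picture and a motion-compensated prediction of it. */
long me_score(const unsigned char *current,
              const unsigned char *prediction,
              int num_pixels)
{
    long score = 0;
    int i;
    for (i = 0; i < num_pixels; i++)
        score += labs((long)current[i] - (long)prediction[i]);
    return score;
}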
1.0 Bandwidth Allocation for Encoding Bit rate
The rate control processor collects the latest
need parameters from all the video channels in the stat

mux group. It also has access to the minimum and
maximum bandwidth limits set by the user for each
individual video channel. Prior to allocating encoding
bit rates for each video channel, the rate control
processor 125 sums up all the need parameters and
assigns a need bit rate to each channel in proportion
to the channel's need parameter.
In allocating encoding bit rates, the rate control
processor 125 attempts to honor all minimum bandwidth
limits first. If the sum total of all min. bandwidths
exceeds the total video bandwidth given for the stat
mux group, the rate control processor distributes the
total video bandwidth amongst all the video channels in
proportion to their min. bandwidth limits. Otherwise,
each channel is given its user-requested min.
bandwidth. That is, the human operator or the stat mux
system may externally set a min. bandwidth requirement
for each individual video channel.
The min. bandwidth assigned to any particular
channel is subtracted from the need bit rate previously
calculated for that channel, since this indicates a
portion of the need had been satisfied. After
allocating the min. bandwidths, any remaining video
bandwidth is distributed to all the channels in
proportion to their remaining needs, with the
constraint that no channel can exceed its max.
bandwidth limit.
If there is still video bandwidth left over (which

is possible if one or more channels hit their max.
bandwidth limits), the leftover bandwidth is
distributed to those channels that have not yet hit
their max. bandwidth limit. This distribution is made
according to (the channel's need parameter /
tot_sneed_param). The term tot_sneed_param refers
to the sum of the need parameters belonging to those
channels that have not reached their max. bandwidth
limit.
1.1 A C-language syntax for assigning an encoding
bit rate in accordance with the invention
The syntax should be self-explanatory to those
skilled in the art. The following notation is used in
naming the parameters: avail - available, br - bit
rate, dmin - difference in minimum, hmin - hard
minimum, max - maximum, mem - member, min - minimum,
nom - nominal, num - number, param - parameter, rem -
remaining, req - requested, sav - still available,
sneed - scaled need, tot - total.
(1) Initially assign a nominal bit rate to each
stat mux group member's (channel's) encoding br and
need br:
for (i=0; i<num_mem; i++){
    need_br[i] = nom_br;
    encoding_br[i] = nom_br;
    br_avail = br_avail - nom_br;
}
(2) Calculate the total need parameter:
tot_need_param = 0;
for (i=0; i<num_mem; i++)
    tot_need_param = tot_need_param + need_param[i];
(3) Calculate the need br for each channel by
distributing the available bit rate among the channels
of the statistical group in proportion to their need
parameter.
for (i=0; i<num_mem; i++){
    if (tot_need_param != 0)
        need_br[i] = br_avail * need_param[i]/tot_need_param;
    else
        need_br[i] = 0;
}
(4) Check if the total user min in the group exceeds
the total group bit rate. If so, distribute the
available bit rate among the channels of the
statistical group in proportion to their user min. The
user, such as the stat mux operator, can set a max
(user max) and min (user min) for the encoding bit rate
for each channel. Moreover, a higher-priority channel
may receive higher user max and/or user min values.
tot_min = 0;
for (i=0; i<num_mem; i++)
    tot_min = tot_min + user_min[i];
if (tot_min > br_avail){
    for (i=0; i<num_mem; i++)
        encoding_br[i] = br_avail * user_min[i]/tot_min;
    br_avail = 0;
}
(5) Otherwise, allocate the user minimum requested
to each member's encoding bit rate and adjust the
available bit rate accordingly.
if (br_avail > 0){
    tot_rem_br_req = 0;
    for (i=0; i<num_mem; i++){
        encoding_br[i] = user_min[i];
        br_avail = br_avail - user_min[i];
        need_br[i] = need_br[i] - user_min[i];
        if (need_br[i] < 0)
            need_br[i] = 0;
        tot_rem_br_req = tot_rem_br_req + need_br[i];
    }
}

(6) The remaining available bit rate is
distributed among the members of the statistical group
in proportion to their remaining need.
if (br_avail > 0){
    br_remain = br_avail;
    for (i=0; i<num_mem; i++){
        if (tot_rem_br_req != 0){
            encoding_br[i] = encoding_br[i] + br_avail * need_br[i]/tot_rem_br_req;
            br_remain = br_remain - (br_avail * need_br[i]/tot_rem_br_req);
            if (encoding_br[i] >= user_max[i]){
                br_remain = br_remain + (encoding_br[i] - user_max[i]);
                encoding_br[i] = user_max[i];
            }
        }
    }
(7) Next, distribute the remaining bit rate in
proportion to the scaled need parameter without exceeding
the user-defined maximum bit rate for the channel.
br_left = br_remain;
if (br_remain > 0){
    tot_sneed_param = 0;
    for (i=0; i<num_mem; i++){
        if (encoding_br[i] < user_max[i])
            tot_sneed_param = tot_sneed_param + need_param[i];
    }
    if (tot_sneed_param != 0){
        for (i=0; i<num_mem; i++){
            if (encoding_br[i] < user_max[i]){
                encoding_br[i] = encoding_br[i] + br_remain * need_param[i]/tot_sneed_param;
                br_left = br_left - (br_remain * need_param[i]/tot_sneed_param);
            }
            if (encoding_br[i] > user_max[i]){
                br_left = br_left + (encoding_br[i] - user_max[i]);
                encoding_br[i] = user_max[i];
            }
        }
    }
}
(8) Finally, distribute the remaining bit rate
in proportion to how much room is left in each channel
without exceeding the user-defined maximum bit rate for
the channel.
if (br_remain > 0){
    tot_leftover = 0;
    for (i=0; i<num_mem; i++)
        tot_leftover = tot_leftover + user_max[i] - encoding_br[i];
    if (tot_leftover != 0){
        for (i=0; i<num_mem; i++)
            encoding_br[i] = encoding_br[i] + br_remain * (user_max[i] -
                encoding_br[i])/tot_leftover;
    }
}
} /* end (if br_avail > 0) from (6) */
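By way of a purely hypothetical numerical illustration of the proportional step (3): if three channels report need parameters of 2, 3 and 5 and 20 Mbit/s of bandwidth remain available, their need bit rates become 4, 6 and 10 Mbit/s respectively; steps (4) through (8) then adjust these figures against the user min. and max. limits.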
FIG. 4 illustrates a method for obtaining a need
parameter for an I-picture in accordance with the
present invention. A summary of one possible approach
is shown.
For a current picture which is an I-picture (block
400), a complexity measure is obtained of a previous I-
picture (e.g., in a previous GOP), and average
complexity measures of a number of previous P- and B-
pictures are obtained (block 410). The average
complexity measures are scaled by the number of the
associated picture type in the current GOP (block 420).
An activity level of the current I-picture, and an
average activity level of a number of previous pictures
(typically P- and B-pictures) are obtained (block 430).
The need parameter NPI(n) is obtained by scaling the
values obtained by an activity level ratio (block 440).
Other adjustments, such as for a fade, etc., can also
be applied to the need parameter, if appropriate (block
450), and, finally, an encoding bit rate is allocated
to the current I-picture based on its need parameter
(block 460).

FIG. 5 illustrates a method for obtaining a need
parameter for a P-picture in accordance with the
present invention. A summary of one possible approach
is shown.
For a current picture which is a P-picture (block
500), a complexity measure is obtained of the I-picture
in the current GOP, and average complexity measures of
a number of previous P- and B-pictures are obtained
(block 510). A motion estimation score of the current
picture, and an average motion estimation score of a
number of previous P-pictures are obtained (block 520).
An activity level of the current P-picture, and an
activity level of the I-picture in the current GOP are
obtained (block 530). To obtain the need parameter
NPP(n) for the current P-picture, the complexity of the
I-picture is scaled by a ratio of the activity levels,
the average complexity measure of the previous P-
pictures is scaled by the number of P-pictures in the
current GOP and by a ratio of the motion estimation
scores, and the average complexity measure of the
previous B-pictures is scaled by the number of B-
pictures in the current GOP (block 540). Other
adjustments can also be applied to the need parameter,
if appropriate (block 550), and, finally, an encoding
bit rate is allocated to the current P-picture based on
its need parameter (block 560).
FIG. 6 illustrates a method for obtaining a need
parameter for a B-picture in accordance with the

present invention. A summary of one possible approach
is shown.
For a current picture which is a B-picture (block
600), a complexity measure is obtained of the I-picture
in the current GOP, and average complexity measures of
a number of previous P- and B-pictures are obtained
(block 610). A motion estimation score of the current
picture, and an average motion estimation score of a
number of previous B-pictures are obtained (block 620).
An activity level of the current B-picture, and an
activity level of the I-picture in the current GOP are
obtained (block 630). To obtain the need parameter
NPB(n) for the current B-picture, the complexity of the
I-picture is scaled by a ratio of the activity levels,
the average complexity measure of the previous B-
pictures is scaled by the number of B-pictures in the
current GOP and by a ratio of the motion estimation
scores, and the average complexity measure of the
previous P-pictures is scaled by the number of P-
pictures in the current GOP (block 640). Other
adjustments can also be applied to the need parameter,
if appropriate (block 650), and, finally, an encoding
bit rate is allocated to the current B-picture based on
its need parameter (block 660).
Accordingly, it can be seen that the present
invention provides a statistical multiplexer for coding
and multiplexing multiple channels of digital
television data. A bit rate need parameter is

determined for each encoder in a stat mux group by
scaling the complexities of previous pictures of the
same and different picture types. Scaling factors
based on an activity level, motion estimation score,
and number of pictures of a certain type in a GOP, may
be used. Moreover, the scaling factors may be bounded
based on a linear or non-linear operator to prevent
large variations in the factors. An encoding bit rate
is allocated to each channel based on its need
parameter.
Although the invention has been described in
connection with various preferred embodiments, it
should be appreciated that various modifications and
adaptations may be made thereto without departing from
the scope of the invention as set forth in the claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refers to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Inactive: IPC deactivated 2014-05-17
Inactive: IPC deactivated 2014-05-17
Inactive: IPC deactivated 2014-05-17
Inactive: IPC from PCS 2014-02-01
Inactive: IPC expired 2014-01-01
Inactive: IPC expired 2014-01-01
Inactive: IPC expired 2011-01-01
Inactive: Dead - RFE never made 2006-09-27
Application Not Reinstated by Deadline 2006-09-27
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2006-09-27
Inactive: IPC from MCD 2006-03-12
Inactive: IPC from MCD 2006-03-12
Inactive: IPC from MCD 2006-03-12
Inactive: Abandon-RFE+Late fee unpaid-Correspondence sent 2005-09-27
Inactive: Cover page published 2002-04-02
Application Published (Open to Public Inspection) 2002-03-27
Inactive: First IPC assigned 2000-11-30
Inactive: First IPC assigned 2000-11-30
Inactive: IPC assigned 2000-11-30
Inactive: IPC removed 2000-11-30
Inactive: First IPC assigned 2000-11-30
Letter Sent 2000-10-30
Inactive: Filing certificate - No RFE (English) 2000-10-30
Filing Requirements Determined Compliant 2000-10-30
Application Received - Regular National 2000-10-28

Abandonment History

Abandonment Date Reason Reinstatement Date
2006-09-27

Maintenance Fee

The last payment was received on 2005-06-21

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
Application fee - standard 2000-09-27
Registration of a document 2000-09-27
MF (application, 2nd anniv.) - standard 02 2002-09-27 2002-06-19
MF (application, 3rd anniv.) - standard 03 2003-09-29 2003-06-20
MF (application, 4th anniv.) - standard 04 2004-09-27 2004-06-28
MF (application, 5th anniv.) - standard 05 2005-09-27 2005-06-21
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
GENERAL INSTRUMENT CORPORATION
Past Owners on Record
HANSON ON
JING YANG CHEN
MICHAEL CASTELOES
REBECCA LAM
ROBERT S. NEMIROFF
SIU-WAI WU
VINCENT LIU
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD .



Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Representative drawing 2002-02-28 1 11
Description 2000-09-27 37 1,146
Cover Page 2002-04-02 1 42
Claims 2000-09-27 7 193
Drawings 2000-09-27 6 145
Abstract 2000-09-27 1 19
Courtesy - Certificate of registration (related document(s)) 2000-10-30 1 120
Filing Certificate (English) 2000-10-30 1 163
Reminder of maintenance fee due 2002-05-28 1 111
Reminder - Request for Examination 2005-05-30 1 116
Courtesy - Abandonment Letter (Request for Examination) 2005-12-06 1 166
Courtesy - Abandonment Letter (Maintenance Fee) 2006-11-22 1 175
Fees 2003-06-20 1 31
Fees 2002-06-19 1 36
Fees 2004-06-28 1 35
Fees 2005-06-21 1 29