Patent 2475779 Summary

(12) Patent: (11) CA 2475779
(54) English Title: DIGITAL PORTABLE TERMINAL AND METHOD USING POSITIVE AND NEGATIVE ROUNDING FOR PIXEL VALUE INTERPOLATION IN MOTION COMPENSATION
(54) French Title: TERMINAL NUMERIQUE PORTATIF ET METHODE UTILISANT L'ARRONDISSEMENT POSITIVE ET NEGATIVE POUR L'INTERPOLATION DE VALEURS DE PIXELS DANS LA COMPENSATION DU MOUVEMENT
Status: Deemed expired
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04N 19/625 (2014.01)
  • H04W 88/02 (2009.01)
  • H04N 19/51 (2014.01)
  • G06T 9/00 (2006.01)
(72) Inventors :
  • NAKAYA, YUICHIRO (Japan)
  • NEJIME, YOSHITO (Japan)
(73) Owners :
  • HITACHI LTD. (Japan)
(71) Applicants :
  • HITACHI LTD. (Japan)
(74) Agent: KIRBY EADES GALE BAKER
(74) Associate agent:
(45) Issued: 2005-10-25
(22) Filed Date: 1998-06-08
(41) Open to Public Inspection: 1998-12-09
Examination requested: 2004-08-25
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
9-150656 Japan 1997-06-09

Abstracts

English Abstract

The present invention relates to a digital portable terminal. The terminal comprises an antenna for sending digital signals and an input device for acquiring image information. A frame memory is provided for recording a decoded image of a reference frame. A block matching section estimates motion vectors and synthesizes a predicted image of a current frame by performing motion compensation between the decoded image of the reference frame and an input image of the current frame. A DCT converter is provided for performing DCT conversion of a difference between the input image of the current frame and the predicted image of the current frame to obtain DCT coefficients. A quantizer quantizes the DCT coefficients. A multiplexer is provided for multiplexing information related to the quantized DCT coefficients, the motion vectors and a rounding method used for pixel value interpolation in the motion compensation. The block matching section includes a rounding method determination unit which decides whether a positive rounding method or a negative rounding method is used for pixel value interpolation in the motion compensation. The synthesizing of the predicted image is performed using the decided rounding method and the motion vectors. The antenna sends the multiplexed coded information.


French Abstract

La présente invention concerne un terminal portable numérique. Le terminal se compose d'une antenne pour envoyer des signaux numériques et d'un dispositif d'entrée pour acquérir des informations d'image. Une mémoire d'image est prévue pour enregistrer une image décodée d'une image référence. Une section d'appariement de bloc pour estimer des vecteurs de mouvement et synthétiser une image prédite d'une image actuelle en réalisant une compensation de mouvement entre l'image décodée de l'image référence et une image d'entrée de l'image actuelle. Un convertisseur DCT est prévu pour réaliser une conversion DCT d'une différence entre l'image d'entrée de l'image actuelle et l'image prédite de l'image actuelle pour obtenir des coefficients DCT. Un quantificateur quantifie les coefficients DCT. Un multiplexeur est prévu pour multiplexer des informations concernant les coefficients DCT quantifiés, les vecteurs de mouvement et une méthode d'arrondi utilisée pour l'interpolation de valeur de pixel dans la compensation de mouvement. La section d'appariement de bloc comprend une unité de détermination de méthode d'arrondi qui décide si une méthode d'arrondi positif ou une méthode d'arrondi négatif est utilisée pour l'interpolation de valeur de pixel dans la compensation de mouvement. La synthétisation de l'image prédite est réalisée en utilisant la méthode d'arrondi décidée et les vecteurs de mouvement. L'antenne envoie des informations codées multiplexées.

Claims

Note: Claims are shown in the official language in which they were submitted.




CLAIMS

1. A digital portable terminal comprising:
an antenna for sending digital signals;
an input device for acquiring image information;
a frame memory for recording a decoded image of a
reference frame;
a block matching section for estimating motion
vectors and synthesizing a predicted image of a current
frame by performing motion compensation between the
decoded image of the reference frame and an input image
of the current frame;

a DCT converter for performing DCT conversion of a
difference between the input image of the current frame
and the predicted image of the current frame to obtain
DCT coefficients;

a quantizer for quantizing the DCT coefficients; and

a multiplexer for multiplexing information related
to quantized DCT coefficients, the motion vectors and a
rounding method used for pixel value interpolation in
said motion compensation;

wherein said block matching section includes a
rounding method determination unit which decides whether
a positive rounding method or a negative rounding method
is used for pixel value interpolation in said motion
compensation;

wherein said synthesizing of the predicted image is
performed using the decided rounding method and the
motion vectors; and

wherein said antenna sends multiplexed coded
information.

2. The digital portable terminal according to claim 1,
wherein said input device for acquiring image information
is a camera.

3. A digital signal processing apparatus comprising:

a camera for acquiring image information;

a frame memory for recording a decoded image of a
reference frame;

a block matching section for estimating motion
vectors and synthesizing a predicted image of a current
frame by performing motion compensation between the
decoded image of the reference frame and an input image
of the current frame;

a DCT converter for performing DCT conversion of a
difference between the input image of the current frame
and the predicted image of the current frame to obtain
DCT coefficients;

a quantizer for quantizing the DCT coefficients;

a multiplexer for multiplexing information related
to quantized DCT coefficients, the motion vectors and a
rounding method used for pixel value interpolation in
said motion compensation; and


a memory for storing multiplexed information;

wherein said block matching section includes a
rounding method determination unit which decides whether
a positive rounding method or a negative rounding method
is used for said pixel value interpolation in said motion
compensation; and

wherein said synthesizing of the predicted image is
performed using the decided rounding method and the
motion vectors.

4. A digital portable terminal comprising:

an antenna for sending digital signals;

an input device for acquiring image information;

a monitor for displaying the image information;

a memory for recording a coding program;

a central processing unit (CPU) for executing the
coding program, wherein the coding program comprises:

a code for estimating motion vectors by performing
motion estimation between a decoded image of a reference
frame and an input image of a current frame;

a code for deciding whether a positive rounding
method or a negative rounding method is used for pixel
value interpolation;

a code for synthesizing a predicted image of the
current frame by using the decided rounding method and
information related to the motion vectors;

a code for performing DCT conversion of a difference
between the input image of the current frame and the
predicted image to obtain DCT coefficients;

a code for quantizing converted DCT coefficients;

a code for multiplexing information related to
quantized DCT coefficients, the motion vectors and the
decided rounding method; and

a code for storing multiplexed coded information in
the memory.

5. A digital portable terminal comprising:

an antenna for receiving coded information of
images;
a demultiplexer for extracting motion vector
information and quantized DCT coefficients;

an inverse quantizer for inverse-quantizing the
quantized DCT coefficients to output DCT coefficients;

an inverse DCT converter for inverse-converting the
DCT coefficients to output a differential image;

a synthesizer for synthesizing a prediction image by
motion compensation using a positive rounding method and
a negative rounding method for pixel value interpolation;

an adder for adding the differential image and the
prediction image to output a decoded image; and
a monitor for displaying the decoded image.

6. A digital portable terminal comprising:

an antenna for receiving coded information of
images;

a memory for recording the coded information;

a memory for recording a decoding program for the
coded information;

a central processing unit (CPU) for executing the
decoding program, wherein the decoding program comprises:

a code for extracting motion vector information,
rounding method information, and information related to a
differential image between a decoded image and a
predicted image from the coded information;

a code for synthesizing the prediction image by
motion compensation; and

a code for synthesizing a decoded image by adding
the differential image obtained by an inverse
transformation of the information related to the
differential image to the predicted image;

wherein said rounding method information specifies
one of two values, and one of the two values specifies a
positive rounding method, and the other one of the two
values specifies a negative rounding method, and a
rounding method specified by said rounding information is
used in said motion compensation.

7. The digital portable terminal according to claim 1,
further comprising:
a decoder for reconstructing decoded images from
digital signals received via said antenna; and
a monitor for displaying the decoded image.

8. The digital portable terminal according to claim 7,
wherein the input device for acquiring an input image is
a camera.

9. The digital portable terminal according to claim 4,
wherein the memory further records a decoding program for
execution by the central processing unit (CPU), the
decoding program comprising:

a code for extracting motion vector information,
rounding method information and information related to a
differential image between a decoded image and a
predicted image from the coded information;

a code for synthesizing the prediction image by
motion compensation; and

a code for synthesizing a decoded image by adding
the differential image obtained by an inverse
transformation of the information related to the
differential image to the predicted image;

wherein said rounding method information specifies
one of two values, and one of the two values specifies a
positive rounding method, and the other one of the two
values specifies a negative rounding method, and a

rounding method specified by said rounding information is
used in said motion compensation.

10. The digital portable terminal according to claim 9,
wherein said input device for acquiring image information
is a camera.

11. The digital portable terminal according to claim 1,
wherein:
said positive rounding method is performed in
accordance with the following equations:


Ib=[(La+Lb+1)/2];

Ic=[(La+Lc+1)/2];

Id=[(La+Lb+Lc+Ld+2)/4], and

said negative rounding method is performed in
accordance with the following equations:

Ib=[(La+Lb)/2];

Ic=[(La+Lc)/2]; and

Id=[(La+Lb+Lc+Ld+1)/4],

where La is an intensity value of a first pixel in
the decoded image, Lb is an intensity value of a second
pixel in the decoded image which is horizontally adjacent
to the first pixel, Lc is an intensity value of a third
pixel in the decoded image which is vertically adjacent
to the first pixel, and Ld is an intensity value of a
fourth pixel in the decoded image which is vertically
adjacent to the second pixel and horizontally adjacent to
the third pixel, Ib is an interpolated intensity value at

a midpoint between a position of the first pixel and a
position of the second pixel, Ic is an interpolated
intensity value at a midpoint between the position of the
first pixel and a position of the third pixel, and Id is
an interpolated intensity value of a midpoint between the
position of the first pixel, the position of the second
pixel, the position of the third pixel, and a position of
the fourth pixel.
12. The digital signal processing apparatus according to
claim 3, wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in
the decoded image, Lb is an intensity value of a second
pixel in the decoded image which is horizontally adjacent
to the first pixel, Lc is an intensity value of a third
pixel in the decoded image which is vertically adjacent
to the first pixel, and Ld is an intensity value of a
fourth pixel in the decoded image which is vertically
adjacent to the second pixel and horizontally adjacent to
the third pixel, Ib is an interpolated intensity value at
a midpoint between a position of the first pixel and a
position of the second pixel, Ic is an interpolated
intensity value at a midpoint between the position of the
first pixel and a position of the third pixel, and Id is
an interpolated intensity value of a midpoint between the
position of the first pixel, the position of the second
pixel, the position of the third pixel, and a position of
the fourth pixel.
13. The digital portable terminal according to claim 4,
wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in
the decoded image, Lb is an intensity value of a second
pixel in the decoded image which is horizontally adjacent
to the first pixel, Lc is an intensity value of a third
pixel in the decoded image which is vertically adjacent
to the first pixel, and Ld is an intensity value of a
fourth pixel in the decoded image which is vertically
adjacent to the second pixel and horizontally adjacent to
the third pixel, Ib is an interpolated intensity value at
a midpoint between a position of the first pixel and a
position of the second pixel, Ic is an interpolated
intensity value at a midpoint between the position of the
first pixel and a position of the third pixel, and Id is
an interpolated intensity value of a midpoint between the
position of the first pixel, the position of the second
pixel, the position of the third pixel, and a position of
the fourth pixel.
14. An image decoding method comprising:
receiving an encoded bitstream including information
of P and B frames; and
executing motion compensation by synthesizing a
predicted image of a current frame using motion vector
information included in the encoded bitstream and a
reference image which is a previously decoded image;
wherein said motion compensation includes
calculating intensity values at points where no pixels
actually exist in the reference image by interpolation;
wherein said interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when the current frame is a P
frame; and
wherein said interpolation is done using a
predetermined rounding method which is a positive
rounding method or a negative rounding method when the
current frame is a B frame.
15. The image decoding method according to claim 14,
wherein said predetermined rounding method is a positive
rounding method.
16. The image decoding method according to claim 15,
wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in
the reference image, Lb is an intensity value of a second
pixel in the reference image which is horizontally
adjacent to the first pixel, Lc is an intensity value of
a third pixel in the reference image which is vertically
adjacent to the first pixel, and Ld is an intensity value
of a fourth pixel in the reference image which is
vertically adjacent to the second pixel and horizontally
adjacent to the third pixel, Ib is an interpolated
intensity value at a midpoint between a position of the
first pixel and a position of the second pixel, Ic is an
interpolated intensity value at a midpoint between the
position of the first pixel and a position of the third
pixel, and Id is an interpolated intensity value of a
midpoint between the position of the first pixel, the
position of the second pixel, the position of the third
pixel, and a position of the fourth pixel.
17. An image decoder comprising:
a memory to store a reference image which is a
previously decoded image; and
a synthesizer to receive an encoded bitstream
including information of P and B frames, and execute
motion compensation by synthesizing a predicted image of
a current frame using motion vector information included
in the encoded bitstream and the reference image;
wherein said motion compensation includes
calculating intensity values at points where no pixels
actually exist in the reference image by interpolation;
wherein said interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when the current frame is a P
frame; and
wherein said interpolation is done using a
predetermined rounding method which is a positive
rounding method or a negative rounding method when the
current frame is a B frame.
18. The image decoder according to claim 17, wherein
said predetermined rounding method is a positive rounding
method.
19. The image decoder according to claim 18, wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in
the reference image, Lb is an intensity value of a second
pixel in the reference image which is horizontally
adjacent to the first pixel, Lc is an intensity value of
a third pixel in the reference image which is vertically
adjacent to the first pixel, and Ld is an intensity value
of a fourth pixel in the reference image which is
vertically adjacent to the second pixel and horizontally
adjacent to the third pixel, Ib is an interpolated
intensity value at a midpoint between a position of the
first pixel and a position of the second pixel, Ic is an
interpolated intensity value at a midpoint between the
position of the first pixel and a position of the third
pixel, and Id is an interpolated intensity value of a
midpoint between the position of the first pixel, the position
of the second pixel, the position of the third pixel, and a
position of the fourth pixel.
20. A computer-readable memory for storing statements or
instructions for use in the execution in a computer of the
process of an image decoding method comprising:
receiving an encoded bitstream including information of P
and B frames; and
executing motion compensation by synthesizing a predicted
image of a current frame using motion vector information
included in the encoded bitstream and a reference image which
is a previously decoded image;
wherein said motion compensation includes calculating
intensity values at points where no pixels actually exist in
the reference image by interpolation;
wherein said interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when the current frame is a P frame;
and
wherein said interpolation is done using a predetermined
rounding method which is a positive rounding method or a
negative rounding method when the current frame is a B frame.
21. An image coding method comprising:
estimating motion vectors between an input image to be
coded and a reference image;
synthesizing a prediction image of the input image
using the motion vectors and the reference image;
generating a difference image by calculating a
difference between the input image and the prediction
image; and
outputting coded information of the input image
including information related to the difference image and
the motion vectors;
wherein said synthesizing the prediction image
includes calculating intensity values at points where no
pixels actually exist in the reference image by
interpolation;
wherein said interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when a current frame of the
input image is a P frame; and
wherein said interpolation is done using a
predetermined rounding method which is a positive
rounding method or a negative rounding method when the
current frame of the input image is a B frame.
22. The image coding method according to claim 21,
wherein said predetermined rounding method is a positive
rounding method.
23. The image coding method according to claim 22,
wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in
the reference image, Lb is an intensity value of a second
pixel in the reference image which is horizontally
adjacent to the first pixel, Lc is an intensity value of
a third pixel in the reference image which is vertically
adjacent to the first pixel, and Ld is an intensity value
of a fourth pixel in the reference image which is
vertically adjacent to the second pixel and horizontally
adjacent to the third pixel, Ib is an interpolated
intensity value at a midpoint between a position of the
first pixel and a position of the second pixel, Ic is an
interpolated intensity value at a midpoint between the
position of the first pixel and a position of the third
pixel, and Id is an interpolated intensity value of a
midpoint between the position of the first pixel, the
position of the second pixel, the position of the third
pixel, and a position of the fourth pixel.
24. An image coder comprising:
a memory to store a reference image which is a
previously decoded image; and
a synthesizer to estimate motion vectors between an
input image to be coded and a reference image, to
synthesize a prediction image of the input image using
the motion vectors and the reference image, to generate a
difference image by calculating a difference between the
input image and the prediction image, and to produce
coded information of the input image including
information related to the difference image and the
motion vectors;
wherein the prediction image is synthesized by
calculating intensity values at points where no pixels
actually exist in the reference image by interpolation;
and
wherein the interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when the current frame is a P
frame, and using a predetermined rounding method which is
a positive rounding method or a negative rounding method
when the current frame is a B frame.
25. The image coder according to claim 24, wherein said
predetermined rounding method is a positive rounding
method.
26. The image coder according to claim 25, wherein:
said positive rounding method is performed in
accordance with the following equations:
Ib=[(La+Lb+1)/2];
Ic=[(La+Lc+1)/2];
Id=[(La+Lb+Lc+Ld+2)/4], and
said negative rounding method is performed in accordance
with the following equations:
Ib=[(La+Lb)/2];
Ic=[(La+Lc)/2]; and
Id=[(La+Lb+Lc+Ld+1)/4],
where La is an intensity value of a first pixel in the
reference image, Lb is an intensity value of a second pixel in
the reference image which is horizontally adjacent to the
first pixel, Lc is an intensity value of a third pixel in the
reference image which is vertically adjacent to the first
pixel, and Ld is an intensity value of a fourth pixel in the
reference image which is vertically adjacent to the second
pixel and horizontally adjacent to the third pixel, Ib is an
interpolated intensity value at a midpoint between a position
of the first pixel and a position of the second pixel, Ic is
an interpolated intensity value at a midpoint between the
position of the first pixel and a position of the third pixel,
and Id is an interpolated intensity value of a midpoint
between the position of the first pixel, the position of the
second pixel, the position of the third pixel, and a position
of the fourth pixel.

27. A computer-readable memory for storing statements or
instructions for use in the execution in a computer of the
process of an image coding method comprising:
estimating motion vectors between an input image to be
coded and a reference image;
synthesizing a prediction image of the input image
using the motion vectors and the reference image;
generating a difference image by calculating a
difference between the input image and the prediction
image; and
outputting coded information of the input image
including information related to the difference image and
the motion vectors;
wherein said synthesizing the prediction image
includes calculating intensity values at points where no
pixels actually exist in the reference image by
interpolation;
wherein said interpolation is done according to
information specifying a positive rounding method or a
negative rounding method when a current frame of the
input image is a P frame; and
wherein said interpolation is done using a
predetermined rounding method which is a positive
rounding method or a negative rounding method when the
current frame of the input image is a B frame.
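The positive and negative rounding equations recited in the claims above can be illustrated in code. The following Python sketch is illustrative only and not part of the claims; the function name and the reading of [x] as truncating integer division are assumptions:

```python
def half_pel_values(La, Lb, Lc, Ld, positive=True):
    """Half-pixel interpolated values Ib, Ic, Id from four adjacent
    pixel intensities.  Positive rounding adds +1 to the two-pixel
    averages and +2 to the four-pixel average before truncation;
    negative rounding adds +0 and +1 respectively."""
    r2 = 1 if positive else 0   # offset for the two-pixel averages
    r4 = 2 if positive else 1   # offset for the four-pixel average
    Ib = (La + Lb + r2) // 2    # midpoint of first and second pixel
    Ic = (La + Lc + r2) // 2    # midpoint of first and third pixel
    Id = (La + Lb + Lc + Ld + r4) // 4  # centre of the four pixels
    return Ib, Ic, Id
```

For La=0, Lb=1 the exact pair average is 0.5; positive rounding yields 1 while negative rounding yields 0, which is exactly the systematic half-value difference the two methods trade off.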



Description

Note: Descriptions are shown in the official language in which they were submitted.



CA 02475779 2004-12-30
DIGITAL PORTABLE TERMINAL AND METHOD USING POSITIVE AND
NEGATIVE ROUNDING FOR PIXEL VALUE INTERPOLATION IN MOTION
COMPENSATION
This is a division of co-pending Canadian Patent
Application Serial No. 2,240,118, filed June 8, 1998.
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to an image sequence coding
and decoding method which performs interframe prediction using
quantized values for chrominance or luminance intensity.
Description of Related Art
In high efficiency coding of image sequences, interframe
prediction (motion compensation) by utilizing the similarity
of adjacent frames over time, is known to be a highly
effective technique for data compression. Today's most
frequently used motion compensation method is block matching
with half pixel accuracy, which is used in international
standards H.263, MPEG1, and MPEG2. In this method, the image
to be coded is segmented into blocks and the horizontal and
vertical components of the motion vectors of these blocks are
estimated as integral multiples of half the distance between
adjacent pixels. This process is described using the
following equation:
[Equation 1]
P(x, y) = R(x + ui, y + vi),   (x, y) ∈ Bi,   0 ≤ i < N      ...(1)
where P(x, y) and R(x, y) denote the sample values (luminance or
chrominance intensity) of pixels located at coordinates (x, y)
in the predicted image P of the current frame and the


CA 02475779 2004-08-25
reference image (decoded image of a frame which has been
encoded before the current frame) R, respectively. x and
y are integers, and it is assumed that all the pixels are
located at points where the coordinate values are integers.
Additionally, it is assumed that the sample values of the
pixels are quantized to non-negative integers. N, Bi, and
(ui, vi) denote the number of blocks in the image, the set
of pixels included in the i-th block of the image, and the
motion vectors of the i-th block, respectively.
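For integer-valued motion vectors, Equation 1 can be sketched as follows. This is a minimal illustration; the function name, the row-major block ordering, and the clamping of displaced coordinates at the image border are assumptions not stated in the text:

```python
import numpy as np

def predict_blocks(reference, motion_vectors, block_size=16):
    """Synthesize the predicted image P of Equation 1: for every pixel
    (x, y) in block i, P(x, y) = R(x + ui, y + vi).  motion_vectors[i]
    is (ui, vi) for the i-th block, blocks taken in row-major order."""
    h, w = reference.shape
    predicted = np.zeros_like(reference)
    i = 0
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            ui, vi = motion_vectors[i]
            i += 1
            for y in range(by, min(by + block_size, h)):
                for x in range(bx, min(bx + block_size, w)):
                    # Clamp displaced coordinates to the image border.
                    sx = min(max(x + ui, 0), w - 1)
                    sy = min(max(y + vi, 0), h - 1)
                    predicted[y, x] = reference[sy, sx]
    return predicted
```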
When the values for ui and vi are not integers, it is
necessary to find the intensity value at the point where no
pixels actually exist in the reference image. Currently,
bilinear interpolation using the adjacent four pixels is the
most frequently used method for this process. This
interpolation method is described using the following
equation:
[Equation 2]
R(x + p/d, y + q/d) = ((d - q)((d - p)R(x, y) + pR(x + 1, y))
        + q((d - p)R(x, y + 1) + pR(x + 1, y + 1))) // d²      ...(2)
where d is a positive integer, and p and q are smaller than
d but not smaller than 0. "//" denotes integer division
which rounds the result of normal division (division using
real numbers) to the nearest integer.
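Equation 2 can be sketched as below. The text leaves the tie direction of "//" unspecified; this sketch breaks half values upward (the positive-rounding convention), which is an assumption:

```python
def interpolate(R, x, y, p, q, d):
    """Intensity at the non-integer point (x + p/d, y + q/d) of image R
    (a list of rows), with 0 <= p, q < d, by bilinear interpolation over
    the four surrounding pixels as in Equation 2."""
    num = ((d - q) * ((d - p) * R[y][x] + p * R[y][x + 1])
           + q * ((d - p) * R[y + 1][x] + p * R[y + 1][x + 1]))
    den = d * d
    # '//' rounds the real quotient to the nearest integer;
    # ties are broken upward here (an assumption).
    return (num + den // 2) // den
```

With d = 2 and p = q = 1 this reduces to (R(x,y) + R(x+1,y) + R(x,y+1) + R(x+1,y+1) + 2) // 4, i.e. the half-pel average with positive rounding.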
An example of the structure of an H.263 video encoder
is shown in Fig. 1. As the coding algorithm, H.263 adopts
a hybrid coding method (adaptive interframe/intraframe
coding method) which is a combination of block matching and
DCT (discrete cosine transform). A subtractor 102
calculates the difference between the input image (current
frame base image) 101 and the output image 113 (described
later) of the interframe/intraframe coding selector 119, and
then outputs an error image 103. This error image is
quantized in a quantizer 105 after being converted into DCT
coefficients in a DCT converter 104 and then forms quantized
DCT coefficients 106. These quantized DCT coefficients are
transmitted through the communication channel while at the
same time used to synthesize the interframe predicted image
in the encoder. The procedure for synthesizing the
predicted image is explained next. The above mentioned
quantized DCT coefficients 106 form the reconstructed error
image 110 (same as the reconstructed error image on the
receiver side) after passing through a dequantizer 108 and
an inverse DCT converter 109. This reconstructed error image
and the output image 113 of the interframe/intraframe coding
selector 119 are added at the adder 111, and the decoded image
112 of the current frame (same image as the decoded image
of the current frame reconstructed on the receiver side) is
obtained. This image is stored in a frame memory 114 and
delayed for a time equal to the frame interval. Accordingly,
at the current point, the frame memory 114 outputs the
decoded image 115 of the previous frame. This decoded image
of the previous frame and the original image 101 of the
current frame are input to the block matching section 116
and block matching is performed between these images. In the
block matching process, the original image of the current
frame is segmented into multiple blocks, and the predicted
image 117 of the current frame is synthesized by extracting
the sections most resembling these blocks from the decoded
image of the previous frame. In this process, it is
necessary to estimate the motion between the prior frame and
the current frame for each block. The motion vector for each
block estimated in the motion estimation process is
transmitted to the receiver side as motion vector data 120.
On the receiver side, the same prediction image as on the
transmitter side is synthesized using the motion vector
information and the decoded image of the previous frame.
The prediction image 117 is input along with a "0" signal
118 to the interframe/intraframe coding selector 119. This
switch 119 selects interframe coding or intraframe coding
by selecting either of these inputs. Interframe coding is
performed when the prediction image 117 is selected (this
case is shown in Fig. 2). On the other hand, when the "0"
signal is selected, intraframe coding is performed since the
input image itself is converted to DCT coefficients and
output to the communication channel. In order for the
receiver side to correctly reconstruct the coded image, the
receiver must be informed whether intraframe coding or
interframe coding was performed on the transmitter side.
Consequently, an identifier flag 121 is output to the
communication circuit. Finally, an H.263 coded bit stream
123 is acquired by multiplexing the quantized DCT
coefficients, motion vectors, and the
interframe/intraframe identifier flag information in a
multiplexer 122.
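The closed loop described above, in which the encoder reconstructs exactly the same decoded image as the receiver, can be sketched in miniature. The following Python fragment is only an illustration under heavy simplifying assumptions (scalar values stand in for images, and integer division stands in for DCT plus quantization); the function name is mine, and the comments map each step back to the reference numerals of Fig. 1:

```python
def encode_frame(current, previous_decoded, step=4):
    """Toy hybrid-coding loop: predict, code the error, reconstruct."""
    predicted = previous_decoded                 # block matching stub (116/117)
    error = current - predicted                  # subtractor 102 -> error image 103
    q = error // step                            # DCT 104 + quantizer 105, reduced
                                                 # to integer division for this sketch
    reconstructed_error = q * step               # dequantizer 108 + inverse DCT 109
    decoded = predicted + reconstructed_error    # adder 111 -> decoded image 112
    return q, decoded                            # decoded feeds frame memory 114

q, decoded = encode_frame(100, 90)
print(q, decoded)   # 2 98
```

The point of the sketch is that `decoded` is computed from the quantized data alone, so the encoder's frame memory and the receiver's stay in lockstep.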
The structure of a decoder 200 for receiving the coded
bit stream output from the encoder of Fig. 1 is shown in Fig.
2. The H.263 coded bit stream 217 that is received is
demultiplexed into quantized DCT coefficients 201, motion
vector data 202, and an interframe/intraframe identifier flag
203 in the demultiplexer 216. The quantized DCT
coefficients 201 become a decoded error image 206 after being
processed by an inverse quantizer 204 and inverse DCT
converter 205. This decoded error image is added to the
output image 215 of the interframe/intraframe coding
selector 214 in an adder 207, and the sum of these images is
output as the decoded image 208. The output of the
interframe/intraframe coding selector is switched
according to the interframe/intraframe identifier flag 203.
A prediction image 212 utilized when performing interframe
encoding is synthesized in the prediction image synthesizer
211. In this synthesizer, the position of the blocks in the
decoded image 210 of the prior frame stored in frame memory
209 is shifted according to the motion vector data 202. On
the other hand, for intraframe coding, the interframe
/intraframe coding selector outputs the "0" signal 213 as
is.
SUMMARY OF THE INVENTION
The image encoded by H.263 is comprised of a luminance
plane (Y plane) containing luminance information, and two
chrominance planes (U plane and V plane) containing
chrominance information. At this time, characteristically,
when the image has 2m pixels in the horizontal direction and
2n pixels in the vertical direction (m and n are positive
integers), the Y plane has 2m pixels horizontally and 2n
pixels vertically, while the U and V planes have m pixels
horizontally and n pixels vertically. The low resolution
of the chrominance planes is due to the fact that the human
visual system has a comparatively dull visual faculty with
respect to spatial variations in chrominance. Having such
an image as an input, H.263 performs coding and decoding in
block units referred to as macroblocks. The structure of
a macroblock is shown in Fig. 3. The macroblock is comprised
of three blocks: a Y block, a U block and a V block. The size
of the Y block 301 containing the luminance information is
16 x 16 pixels, and the size of the U block 302 and V block
303 containing the chrominance information is 8 x 8 pixels.
In H.263, half pixel accuracy block matching is
applied to each block. Accordingly, when the estimated
motion vector is defined as (u, v), u and v are both integral
multiples of half the distance between pixels. In other
words, 1/2 is used as the minimum unit. The configuration
of the interpolation method used for the intensity values
(hereafter the intensity values for "luminance" and
"chrominance" are called by the general term "intensity
value") is shown in Fig. 4. When performing the
interpolation described in equation 2, the quotients of
division are rounded off to the nearest integer, and further,
when the quotient has a half integer value (i.e., 0.5 added
to an integer), rounding off is performed to the next integer
in the direction away from zero. In other words, in Fig.
4, when the intensity values for 401, 402, 403, and 404 are
respectively La, Lb, Lc, and Ld (La, Lb, Lc, and Ld are
non-negative integers), the interpolated intensity values
Ia, Ib, Ic, and Id (Ia, Ib, Ic, and Id are non-negative
integers) at positions 405, 406, 407, and 408 are expressed
by the following equation:
[Equation 3]

    Ia = La
    Ib = [(La + Lb + 1) / 2]
    Ic = [(La + Lc + 1) / 2]                  ... (3)
    Id = [(La + Lb + Lc + Ld + 2) / 4]

where "[ ]" denotes truncation to the nearest integer towards
0 (i.e. the fractional part is discarded). The expectation
of the errors caused by this rounding to integers is
estimated as follows. It is assumed that the probability that
the intensity value at each of positions 405, 406, 407, and
408 of Fig. 4 is used is 25 percent. When finding the intensity
value Ia for position 405, the rounding error will clearly
be zero. Also, when finding the intensity value Ib for
position 406, the error will be zero when La+Lb is an even
number, and when it is an odd number the error is 1/2. If the
probability that La+Lb will be an even number and an odd
number is 50 percent each, then the expectation for the error
will be 0 x 1/2 + 1/2 x 1/2 = 1/4. Further, when
finding the intensity value Ic for position 407, the
expectation for the error is 1/4, as for Ib. When finding
the intensity value Id for position 408, the errors when the
residual of La+Lb+Lc+Ld divided by four is 0, 1, 2, and 3
are respectively 0, -1/4, 1/2, and 1/4. If we assume that
the probability that the residual is 0, 1, 2, and 3 is all
equal (i.e. 25 percent), the expectation for the error is
0 x 1/4 - 1/4 x 1/4 + 1/2 x 1/4 + 1/4 x 1/4 = 1/8.
As described above, assuming that the probabilities
that the intensity values at positions 405 - 408 are used
are all equal, the final expectation for the error is 0 x
1/4 + 1/4 x 1/4 + 1/4 x 1/4 + 1/8 x 1/4 = 5/32.
This indicates that each time motion compensation is
performed by means of block matching, an error of 5/32 occurs
in the pixel intensity value. Generally in low rate coding,
a sufficient number of bits cannot be used for the encoding
of the interframe error difference, so that the quantization
step size for the DCT coefficients is prone to be large.
Accordingly, errors occurring due to motion compensation are
corrected only when they are very large. When interframe
encoding is performed continuously without performing
intraframe coding under such an environment, the errors tend
to accumulate and cause bad effects on the reconstructed
image.
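The derivation above can be checked numerically. The following Python sketch (the function names are illustrative, not from the patent) implements the positive-rounding interpolation of Equation 3 with integer arithmetic, and averages the rounding error over all residues, assuming the four positions 405 - 408 are used with probability 1/4 each:

```python
from fractions import Fraction
from itertools import product

def interp_positive(La, Lb, Lc, Ld):
    """Half-pel interpolation with positive rounding (Equation 3).
    For non-negative operands, integer division with the +1/+2 bias
    rounds half-integer quotients away from zero."""
    Ia = La
    Ib = (La + Lb + 1) // 2
    Ic = (La + Lc + 1) // 2
    Id = (La + Lb + Lc + Ld + 2) // 4
    return Ia, Ib, Ic, Id

def expected_error():
    """Average rounding error over all residues, weighting the four
    half-pel positions (405-408) with probability 1/4 each."""
    combos = list(product(range(4), repeat=4))
    total = Fraction(0)
    for La, Lb, Lc, Ld in combos:
        Ia, Ib, Ic, Id = interp_positive(La, Lb, Lc, Ld)
        err = (Fraction(Ia - La)
               + (Ib - Fraction(La + Lb, 2))
               + (Ic - Fraction(La + Lc, 2))
               + (Id - Fraction(La + Lb + Lc + Ld, 4)))
        total += err / 4            # each position used with probability 1/4
    return total / len(combos)

print(expected_error())             # 5/32
```

Enumerating residues 0..3 for each pixel makes the parity of La+Lb and the residual of La+Lb+Lc+Ld uniform, so the enumeration reproduces the 5/32 expectation derived in the text.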
Just as explained above, the number of pixels is about
half in both the vertical and horizontal direction on the
chrominance planes. Therefore, for the motion vectors of the
U block and V block, half the value of the motion vector for
the Y block is used for the vertical and horizontal
components. Since the horizontal and vertical components
of the motion vector for the Y block are
integral multiples of 1/2, the motion vector components for
the U and V blocks will appear as integral multiples of 1/4
(quarter pixel accuracy) if ordinary division is implemented.
However, due to the high computational complexity of the
intensity interpolation process for motion vectors with
quarter pixel accuracy, the motion vectors for U and V blocks
are rounded to half pixel accuracy in H.263. The rounding
method utilized in H.263 is as follows. According to the
definition described above, (u, v) denotes the motion vector
of the macroblock (which is equal to the motion vector for
the Y block). Assuming that r is an integer and s is a
non-negative integer smaller than 4, u/2 can be rewritten
as u/2 = r + s/4. When s is 0 or 2, no rounding is required
since u/2 is already an integral multiple of 1/2. However,
when s is equal to 1 or 3, the value of s is rounded to 2.
By increasing the possibility that s takes the value of 2
using this rounding method, the filtering effect of motion
compensation can be emphasized. When the probabilities that
the value of s prior to rounding is 0, 1, 2, and 3 are all
25 percent, the probabilities that s will be 0 or 2 after
rounding will respectively be 25 percent and 75 percent. The
above explained process related to the horizontal component
u of the motion vector is also applied to the vertical
component v. Accordingly, in the U block and V block, the
probability of using the intensity value of the 401 position
is 1/4 x 1/4 = 1/16, the probability of using the
intensity value of the 402 and 403 positions is 1/4
x 3/4 = 3/16 for each, and the probability of using the intensity
value of position 404 is 3/4 x 3/4 = 9/16. By utilizing
the same method as above, the expectation for the error of
the intensity value is 0 x 1/16 + 1/4 x 3/16 + 1/4
x 3/16 + 1/8 x 9/16 = 21/128. Just as explained above
for the Y block, when interframe encoding is continuously
performed, the problem of accumulated errors occurs.
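As a sketch of the chrominance rounding and the resulting 21/128 expectation (Python; expressing the exact chrominance component u/2 as an integer number of quarter pels is my encoding choice, not the patent's notation):

```python
from fractions import Fraction

def round_chroma(k):
    """Round a chrominance MV component to half-pel accuracy as in H.263.
    k is the exact value u/2 in quarter-pel units, i.e. u/2 = r + s/4
    with k = 4r + s; s in {1, 3} is forced to 2."""
    r, s = divmod(k, 4)
    if s in (1, 3):
        s = 2
    return 4 * r + s            # result is always an even number of quarter pels

# After rounding, s is 0 with probability 1/4 and 2 with probability 3/4
# (assuming s uniform before rounding); combining the horizontal and
# vertical components gives the position probabilities from the text:
probs = {0: Fraction(1, 4), 2: Fraction(3, 4)}
errs = {(0, 0): Fraction(0),          # position 401: integer sample
        (0, 2): Fraction(1, 4),       # position 402
        (2, 0): Fraction(1, 4),       # position 403
        (2, 2): Fraction(1, 8)}       # position 404
expected = sum(probs[sh] * probs[sv] * errs[sh, sv]
               for sh in probs for sv in probs)
print(expected)                        # 21/128
```

The `errs` table reuses the per-position expectations (0, 1/4, 1/4, 1/8) derived earlier for the Y plane.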
As related above, for image sequence coding and
decoding methods in which interframe prediction is performed
and luminance or chrominance intensity is quantized, the
problem of accumulated rounding errors occurs. This rounding
error is generated when the luminance or chrominance
intensity value is quantized during the generation of the
interframe prediction image.
In view of the above problems, it is therefore an object
of this invention to improve the quality of the
reconstructed image by preventing error accumulation.
In order to achieve the above object, the accumulation
of errors is prevented by limiting the occurrence of errors
or performing an operation to cancel out errors that have
occurred.
In accordance with one aspect of the present
invention there is provided a digital portable terminal
comprising: an antenna for sending digital signals; an
input device for acquiring image information; a frame
memory for recording a decoded image of a reference
frame; a block matching section for estimating motion
vectors and synthesizing a predicted image of a current
frame by performing motion compensation between the
decoded image of the reference frame and an input image
of the current frame; a DCT converter for performing DCT
conversion of a difference between the input image of the
current frame and the predicted image of the current
frame to obtain DCT coefficients; a quantizer for
quantizing the DCT coefficients; and a multiplexer for
multiplexing information related to quantized DCT
coefficients, the motion vectors and a rounding method
used for pixel value interpolation in said motion
compensation; wherein said block matching section
includes a rounding method determination unit which
decides whether a positive rounding method or a negative
rounding method is used for pixel value interpolation in
said motion compensation, wherein said synthesizing of
the predicted image is performed using the decided
rounding method and the motion vectors; and wherein said
antenna sends multiplexed coded information.
In accordance with another aspect of the present
invention there is provided a digital signal processing
apparatus comprising: a camera for acquiring image
information; a frame memory for recording a decoded image
of a reference frame; a block matching section for
estimating motion vectors and synthesizing a predicted
image of a current frame by performing motion
compensation between the decoded image of the reference
frame and an input image of the current frame; a DCT
converter for performing DCT conversion of a difference
between the input image of the current frame and the
predicted image of the current frame to obtain DCT
coefficients; a quantizer for quantizing the DCT
coefficients; a multiplexer for multiplexing information
related to quantized DCT coefficients, the motion vectors
and a rounding method used for pixel value interpolation
in said motion compensation; and a memory for storing
multiplexed information; wherein said block matching
section includes a rounding method determination unit
which decides whether a positive rounding method or a
negative rounding method is used for said pixel value
interpolation in said motion compensation; and wherein
said synthesizing of the predicted image is performed
using the decided rounding method and the motion vectors.
In accordance with yet another aspect of the present
invention there is provided a digital portable terminal
comprising: an antenna for sending digital signals; an
input device for acquiring image information; a monitor
for displaying the image information; a memory for
recording a coding program; and a central processing unit
(CPU) for executing the coding program, wherein the
coding program comprises: a code for estimating motion
vectors by performing motion estimation between a decoded
image of a reference frame and an input image of a
current frame; a code for deciding whether a positive
rounding method or a negative rounding method is used for
pixel value interpolation; a code for synthesizing a
predicted image of the current frame by using the decided
rounding method and information related to the motion
vectors; a code for performing DCT conversion of a
difference between the input image of the current frame
and the predicted image to obtain DCT coefficients; a
code for quantizing converted DCT coefficients; a code
for multiplexing information related to quantized DCT
coefficients, the motion vectors and the decided rounding
method; and a code for storing multiplexed coded
information in the memory.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be described in
conjunction with the invention described in co-pending
Canadian Patent Application Serial No. 2,240,118, filed
on June 8, 1998, with the aid of the accompanying
drawings, in which:
Figure 1 is a block diagram showing the layout of
the H.263 image encoder.
Figure 2 is a block diagram showing the layout of
the H.263 image decoder.
Figure 3 is a drawing showing the structure of the
macro block.
Figure 4 is a drawing showing the interpolation
process of intensity values for block matching with half
pixel accuracy.
Figure 5 is a drawing showing a coded image
sequence.
Figure 6 is a block diagram showing a software image
encoding device.
Figure 7 is a block diagram showing a software image
decoding device.
Figure 8 is a flow chart showing an example of
processing in the software image encoding device.
Figure 9 is a flow chart showing an example of the
coding mode decision processing for the software image
encoding device.
Figure 10 is a flow chart showing an example of
motion estimation and motion compensation processing in
the software image encoding device.
Figure 11 is a flow chart showing the processing in
the software image decoding device.
Figure 12 is a flow chart showing an example of
motion compensation processing in the software image
decoding device.
Figure 13 is a drawing showing an example of a storage
medium on which an encoded bit stream generated by an encoding
method that outputs bit streams including I, P+ and P- frames
is recorded.
Figure 14 is a set of drawings showing specific
examples of devices using an encoding method where P+ and
P- frames coexist.
Figure 15 is a drawing showing an example of a storage
medium on which an encoded bit stream generated by an encoding
method that outputs bit streams including I, B, P+, and P-
frames is recorded.
Figure 16 is a block diagram showing an example of a
block matching unit included in a device using an encoding
method where P+ and P- frames coexist.
Figure 17 is a block diagram showing the prediction
image synthesizer included in a device for decoding bit
streams encoded by an encoding method where P+ and P- frames
coexist.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
First, the circumstances in which the accumulated rounding
errors related in the "Prior Art" occur must be considered.
An example of an image sequence encoded by coding methods
which can perform both unidirectional prediction and
bidirectional prediction, such as MPEG-1, MPEG-2 and H.263,
is shown in Fig. 5. An image 501 is a frame coded by means
of intraframe coding and is referred to as an I frame. In
contrast, images 503, 505, 507, and 509 are called P frames and
are coded by unidirectional interframe coding using the
previous I or P frame as the reference image. Accordingly,
when for instance encoding image 505, image 503 is used as
the reference image and interframe prediction is performed.
Images 502, 504, 506 and 508 are called B frames, and
bidirectional interframe prediction is performed utilizing
the previous and subsequent I or P frame. The B frame is
characterized by not being used as a reference image when
interframe prediction is performed. Since motion
compensation is not performed in I frames, the rounding error
caused by motion compensation will not occur. In contrast,
not only is motion compensation performed in the P frames,
but the P frame is also used as a reference image by other
P or B frames, so that it may be a cause leading to accumulated
rounding errors. In the B frames, on the other hand, motion
compensation is performed, so that the effect of accumulated
rounding errors appears in the reconstructed image. However,
due to the fact that B frames are not used as reference images,
B frames cannot be a source of accumulated rounding errors.
Thus, if accumulated rounding errors can be prevented in the
P frames, then the bad effects of rounding errors can be
alleviated in the overall image sequence. In H.263 a frame
for coding a P frame and a B frame together exists and is called a
PB frame (for instance, frames 503 and 504 can both be encoded
as a PB frame). If the combined two frames are viewed as
separate frames, then the same principle as above can be
applied. In other words, if countermeasures are taken
against rounding errors for the P frame part within a PB frame,
then the accumulation of errors can be prevented.
Rounding errors occur during interpolation of
intensity values when a value obtained from normal division
(division whose operation result is a real number) is a half
integer (0.5 added to an integer) and this result is then
rounded up to the next integer in the direction away from
zero. For instance, when division by 4 is performed to find
an interpolated intensity value, the rounding
errors for the cases when the residual is 1 and 3 have equal
absolute values but different signs. Consequently, the
rounding errors caused by these two cases are canceled when
the expectation for the rounding errors is calculated (in
more general words, when division by a positive integer d'
is performed, the rounding errors caused by the cases when
the residual is t and d'-t are cancelled). However, when
the residual is 2, in other words when the result of normal
division is a half integer, the rounding error cannot be
canceled and leads to accumulated errors. To solve this
problem, a method that allows the usage of two rounding
methods can be used. The two rounding methods used here are
a rounding method that rounds half integers away from 0, and
a rounding method that rounds half integers towards 0. By
combining the usage of these two rounding methods, the
rounding errors can be canceled. Hereafter, the rounding
method that rounds the result of normal division to the
nearest integer and rounds half integer values away from 0
is called "positive rounding". Additionally, the rounding
method that rounds the result of normal division to the
nearest integer and rounds half integer values towards 0 is
called "negative rounding". The process of positive
rounding used in block matching with half pixel accuracy is
shown in Equation 3. When negative rounding is used instead,
this equation can be rewritten as shown below.
[Equation 4]

    Ia = La
    Ib = [(La + Lb) / 2]
    Ic = [(La + Lc) / 2]                      ... (4)
    Id = [(La + Lb + Lc + Ld + 1) / 4]
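A quick numerical check (a Python sketch; the helper names are mine) confirms that Equation 4 produces an error expectation of equal magnitude and opposite sign to the 5/32 derived above for positive rounding:

```python
from fractions import Fraction
from itertools import product

def interp_negative(La, Lb, Lc, Ld):
    """Half-pel interpolation with negative rounding (Equation 4):
    for non-negative operands, plain integer division truncates
    half-integer quotients toward zero."""
    return (La,
            (La + Lb) // 2,
            (La + Lc) // 2,
            (La + Lb + Lc + Ld + 1) // 4)

def expected_error_negative():
    """Average rounding error over all residues, weighting the four
    half-pel positions with probability 1/4 each (as in the text)."""
    combos = list(product(range(4), repeat=4))
    total = Fraction(0)
    for La, Lb, Lc, Ld in combos:
        Ia, Ib, Ic, Id = interp_negative(La, Lb, Lc, Ld)
        err = (Fraction(Ia - La)
               + (Ib - Fraction(La + Lb, 2))
               + (Ic - Fraction(La + Lc, 2))
               + (Id - Fraction(La + Lb + Lc + Ld, 4)))
        total += err / 4
    return total / len(combos)

print(expected_error_negative())   # -5/32, the opposite of positive rounding
```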
Hereafter, motion compensation methods that perform
positive and negative rounding for the synthesis of
interframe prediction images are called "motion
compensation using positive rounding" and "motion
compensation using negative rounding", respectively.
Furthermore, for P frames which use block matching with half
pixel accuracy for motion compensation, a frame that uses
positive rounding is called a "P+ frame" and a frame that
uses negative rounding is called a "P- frame" (under this
definition, the P frames in H.263 are all P+ frames). The
expectations for the rounding errors in P+ and P- frames have
equal absolute values but different signs. Accordingly,
the accumulation of rounding errors can be prevented when
P+ frames and P- frames are alternately located along the
time axis. In the example in Fig. 5, if the frames 503 and
507 are set as P+ frames and the frames 505 and 509 are set
as P- frames, then this method can be implemented. The
alternate occurrence of P+ frames and P- frames leads to the
usage of a P+ frame and a P- frame in the bidirectional
prediction for B frames. Generally, the average of the
forward prediction image (i.e. the prediction image
synthesized by using frame 503 when frame 504 in Fig. 5 is
being encoded) and the backward prediction image (i.e. the
prediction image synthesized by using frame 505 when frame
504 in Fig. 5 is being encoded) is frequently used for
synthesizing the prediction image for B frames. This means
that using a P+ frame (which has a positive value for the
expectation of the rounding error) and a P- frame (which has
a negative value for the expectation of the rounding error)
in bidirectional prediction for a B frame is effective in
canceling out the effects of rounding errors. Just as
related above, the rounding process in the B frame will not
be a cause of error accumulation. Accordingly, no problem
will occur even if the same rounding method is applied to
all the B frames. For instance, no serious degradation of
decoded images is caused even if motion compensation using
positive rounding is performed for all of the B frames 502,
504, 506, and 508 in Fig. 5. Preferably only one type of
rounding is performed for a B frame, in order to simplify
the B frame decoding process.
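Numerically, the cancellation in a B frame works out as follows (a sketch using the Y-plane expectations derived earlier; the ±5/32 figures come from that analysis, and the averaging of the two prediction images is the common case described above):

```python
from fractions import Fraction

bias_p_plus = Fraction(5, 32)     # expected rounding error of a P+ reference
bias_p_minus = Fraction(-5, 32)   # expected rounding error of a P- reference

# Bidirectional prediction averages the forward and backward prediction
# images, so the opposite biases of a P+/P- reference pair cancel out:
b_frame_bias = (bias_p_plus + bias_p_minus) / 2
print(b_frame_bias)   # 0
```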
A block matching section 1600 of an image encoder
according to the above described motion compensation method
utilizing multiple rounding methods is shown in Fig. 16.
Numbers identical to those in other drawings indicate the
same part. By substituting the block matching section 116
of Fig. 1 with 1600, multiple rounding methods can be used.
Motion estimation processing between the input image 101 and
the decoded image of the previous frame is performed in a
motion estimator 1601. As a result, motion information 120
is output. This motion information is utilized in the
synthesis of the prediction image in a prediction image
synthesizer 1603. A rounding method determination device
1602 determines whether to use positive rounding or negative
rounding as the rounding method for the frame currently being
encoded. Information 1604 relating to the rounding method
that was determined is input to the prediction image
synthesizer 1603. In this prediction image synthesizer
1603, a prediction image 117 is synthesized and output based
on the rounding method determined by means of information
1604. In the block matching section 116 in Fig. 1, there
are no items equivalent to 1602 and 1604 of Fig. 16, and the
prediction image is synthesized only by positive rounding.
Also, the rounding method 1605 determined at the block
matching section can be output, and this information can then
be multiplexed into the bit stream and be transmitted.
A prediction image synthesizer 1700 of an image decoder
which can decode bit streams generated by a coding method
using multiple rounding methods is shown in Fig. 17.
Numbers identical to those in other drawings indicate the
same part. By substituting the prediction image
synthesizer 211 of Fig. 2 with 1700, multiple rounding methods
can be used. In the rounding method determination device
1701, the rounding method appropriate for prediction image
synthesis in the decoding process is determined. In order
to carry out decoding correctly, the rounding method
selected here must be the same as the rounding method that
was selected for encoding. For instance, the following rule
can be shared between the encoder and decoder: when the
current frame is a P frame and the number of P frames
(including the current frame) counted from the most recent
I frame is odd, then the current frame is a P+ frame. When
this number is even, then the current frame is a P- frame.
If the rounding method determination device on the encoding
side (for instance, 1602 in Fig. 16) and the rounding method
determination device 1701 conform to this common rule, then
the images can correctly be decoded. The prediction image
is synthesized in the prediction image synthesizer 1703
using motion information 202, the decoded image 210 of the prior
frame, and information 1702 related to the rounding method
determined as just described. This prediction image 212 is
output and then used for the synthesis of the decoded image.
As an alternative to the above mentioned case, a case where
the information related to the rounding method is
multiplexed in the transmitted bit stream can also be
considered (such a bit stream can be generated at the encoder
by outputting the information 1605 related to the rounding
method from the block matching section depicted in Fig. 16).
In such a case, the rounding method determination device 1701
is not used, and information 1704 related to the rounding method
extracted from the encoded bit stream is used at the
prediction image synthesizer 1703.
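The shared rule described above can be stated compactly. This Python sketch (the function name is illustrative) is the convention that both the encoder-side device 1602 and the decoder-side device 1701 would have to agree on:

```python
def rounding_method(p_count_since_i):
    """p_count_since_i: number of P frames, including the current one,
    counted from the most recent I frame. An odd count means a P+ frame
    (positive rounding); an even count means a P- frame (negative)."""
    return 'positive' if p_count_since_i % 2 == 1 else 'negative'

# The first P frame after an I frame uses positive rounding, the second
# negative, and so on, alternating along the time axis:
print(rounding_method(1), rounding_method(2))   # positive negative
```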
Besides the image encoder and the image decoder
utilizing the custom circuits and custom chips of the
conventional art as shown in Fig. 1 and Fig. 2, this invention
can also be applied to software image encoders and software
image decoders utilizing general-purpose processors. A
software image encoder 600 and a software image decoder 700
are shown in Fig. 6 and Fig. 7. In the software image encoder
600, an input image 601 is first stored in the input frame
memory 602, and the general-purpose processor 603 loads
information from here and performs encoding. The program
for driving this general-purpose processor is loaded from
a storage device 608, which can be a hard disk, floppy disk,
etc., and stored in a program memory 604. This general-
purpose processor also uses a process memory 605 to perform
the encoding. The encoding information output by the
general-purpose processor is temporarily stored in the
output buffer 606 and then output as an encoded bit stream
607.
A flowchart for the encoding software (a recording
medium readable by computer) is shown in Fig. 8. The process
starts in 801, and the value 0 is assigned to variable N in
802. Next, in 803 and 804, the value 0 is assigned to N when
the value for N is 100. N is a counter for the number of
frames: 1 is added for each frame whose processing is
complete, and values from 0 to 99 are allowed when performing
coding. When the value for N is 0, the current frame is an
I frame. When N is an odd number, the current frame is a
P+ frame, and when it is an even number other than 0, the
current frame is a P- frame. When the upper limit for the
value of N is 99, it means that one I frame is coded after
99 P frames (P+ frames or P- frames) are coded. By always
inserting one I frame in a certain number of coded frames,
the following benefits can be obtained: (a) error accumulation
due to a mismatch between encoder and decoder processing can be
prevented (for instance, a mismatch in the computation of
DCT); and (b) the processing load for acquiring the
reproduced image of the target frame from the coded data
(random access) is reduced. The optimal N value varies when
the encoder performance or the environment where the encoder
is used are changed. It does not mean, therefore, that the
value of N must always be 100. The process for determining
the rounding method and coding mode for each frame is
performed in 805, and the flowchart with details of this
operation is shown in Fig. 9. First of all, whether N is
0 or not is checked in 901. If N is 0, then 'I' is output
as distinction information of the prediction mode to the
output buffer in 902. This means that the image to be coded
will be coded as an I frame. Here, "output to the output
buffer" means that after being stored in the output buffer,
the information is output to an external device as a portion
of the coded bit stream. When N is not 0, then whether N
is an odd or even number is identified in 904. When N is
an odd number, '+' is output to the output buffer as the
distinction information for the rounding method in 905, and
the image to be coded will be coded as a P+ frame. On the
other hand, when N is an even number, '-' is output to the
output buffer as the distinction information for the
rounding method in 906, and the image to be coded will be
coded as a P- frame. The process again returns to Fig. 8,
where after determining the coding mode in 805, the input
image is stored in frame memory A in 806. The frame
memory A referred to here signifies a portion of the memory
zone (for instance, the memory zone maintained in the process
memory 605 in Fig. 6) of the software encoder. In 807, it is
checked whether the frame currently being coded is an I frame.
When it is not identified as an I frame, motion estimation and
motion compensation are performed in 808. The flowchart in
Fig. 10 shows details of this process performed in 808.
First of all, in 1001, motion estimation is performed between
the images stored in frame memories A and B (just as written
in the final part of this paragraph, the decoded image of
the prior frame is stored in frame memory B). The motion
vector for each block is found, and this motion vector is
sent to the output buffer. Next, in 1002, whether or not
the current frame is a P+ frame is checked. When the current
frame is a P+ frame, the prediction image is synthesized in
1003 utilizing positive rounding and this prediction image
is stored in frame memory C. On the other hand, when the
current frame is a P- frame, the prediction image is
synthesized in 1004 utilizing negative rounding and this
prediction image is stored in frame memory C. Next, in
1005, the differential image between frame memories A and
C is found and stored in frame memory A. Here, the process
again returns to Fig. 8. Prior to starting the processing
in 809, the input image is stored in frame memory A when the
current frame is an I frame, and the differential image
between the input image and the prediction image is stored
in frame memory A when the current frame is a P frame (P+
or P- frame). In 809, DCT is applied to the image stored
in frame memory A, and the DCT coefficients calculated here
are sent to the output buffer after being quantized. In 810,
inverse quantization is performed on the quantized DCT
coefficients and inverse DCT is applied. The image obtained
by applying inverse DCT is stored in frame memory B. Next,
in 811, it is checked again whether the current frame is
an I frame. When the current frame is not an I frame, the
images stored in frame memories B and C are added and the result
is stored in frame memory B. The coding process of a frame
ends here, and the image stored in frame memory B before going
into 813 is the reconstructed image of this frame (this image
is identical to the one obtained at the decoding side).
In 813, it is checked whether the frame whose coding has just
finished is the final frame in the sequence. If this is true,
the coding process ends. If this frame is not the final frame,
1 is added to N in 814, and the process again returns to 803
and the coding process for the next frame starts.
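The encoder control flow of Figs. 8 - 10 described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function names, the I-frame interval, and the integer half-pel formulas ((a + b + 1)//2 for positive rounding, (a + b)//2 for negative rounding) are assumptions made for the example.

```python
def rounding_method_for(n):
    """Implicit rule of Fig. 9: an odd frame number N gives a P+ frame
    ('+' is written to the output buffer in 905), an even N a P- frame
    ('-' is written in 906)."""
    return '+' if n % 2 == 1 else '-'

def interpolate_half_pel(a, b, rounding):
    """Half-pel interpolation of two neighbouring integer pixel values.
    Assumed formulas: positive rounding rounds x.5 up, negative
    rounding rounds x.5 down."""
    return (a + b + 1) // 2 if rounding == '+' else (a + b) // 2

def encode_sequence(num_frames, i_frame_interval=10):
    """Toy sketch of the per-frame mode decisions of Figs. 8 - 9.
    The I-frame interval is an assumption; the chunk above does not
    specify where I frames are inserted. Returns the header symbols
    written to the output buffer."""
    out = []
    for n in range(num_frames):
        if n % i_frame_interval == 0:
            out.append('I')          # coded as an I frame
        elif n % 2 == 1:
            out.append('P+')         # 905: positive rounding
        else:
            out.append('P-')         # 906: negative rounding
    return out
```

Because P+ and P- frames alternate, the positive and negative rounding errors of successive interframe predictions tend to cancel instead of accumulating.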
A software decoder 700 is shown in Fig. 7. After the
coded bit stream 701 is temporarily stored in the input
buffer 702, this bit stream is then loaded into the
general-purpose processor 703. The program for driving this
general-purpose processor is loaded from a storage device
708, which can be a hard disk, floppy disk, etc., and stored
in a program memory 704. This general-purpose processor
also uses a processing memory 705 to perform the decoding. The
decoded image obtained by the decoding process is
temporarily stored in the output frame memory 706 and then
sent out as the output image 707.
A flowchart of the decoding software for the software
decoder 700 shown in Fig. 7 is shown in Fig. 11. The process
starts in 1101, and it is checked in 1102 whether input
information is present. If there is no input information,
the decoding process ends in 1103. When input information
is present, distinction information of the prediction mode
is input in 1104. The word "input" used here means that the
information stored in the input buffer (for instance 702 of
Fig. 7) is loaded by the general-purpose processor. In 1105,
it is checked whether the encoding mode distinction
information is "I". When not "I", the distinction
information for the rounding method is input in 1106 and synthesis
of the interframe prediction image is performed in 1107. A
flowchart showing details of the operation in 1107 is shown
in Fig. 12. In 1201, a motion vector is input for each block.
Then, in 1202, it is checked whether the distinction
information for the rounding method loaded in 1106 is a "+".
When this information is "+", the frame currently being
decoded is a P+ frame. In this case, the prediction image
is synthesized using positive rounding in 1203, and the
prediction image is stored in frame memory D. Here, frame
memory D signifies a portion of the memory zone of the
software decoder (for instance, this memory zone is obtained
in the processing memory 705 in Fig. 7). When the
distinction information of the rounding method is not "+",
the current frame being decoded is a P- frame. The
prediction image is synthesized using negative rounding in
1204 and this prediction image is stored in frame memory
D. At this point, if a P+ frame is decoded as a P- frame
due to some type of error, or conversely if a P- frame is
decoded as a P+ frame, the correct prediction image is not
synthesized in the decoder and the quality of the decoded
image deteriorates. After synthesizing the prediction
image, the operation returns to Fig. 11 and the quantized
DCT coefficients are input in 1108. Inverse quantization and
inverse DCT are then applied to these coefficients and the
resulting image is stored in frame memory E. In 1109, it
is checked again whether the frame currently being decoded
is an I frame. If the current frame is not an I frame, the images
stored in frame memories D and E are added in 1110 and the
resulting sum image is stored in frame memory E. The image
stored in frame memory E before starting the process in 1111
is the reconstructed image. This image stored in frame
memory E is output to the output frame memory (for instance,
706 in Fig. 7) in 1111, and then output from the decoder as
the reconstructed image. The decoding process for a frame
is completed here and the process for the next frame starts
by returning to 1102.
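The decoder-side parsing of Figs. 11 - 12 (prediction-mode information first, then rounding distinction information for P frames only) can be sketched as follows. The symbol stream and names are illustrative assumptions, not the patent's bit-stream syntax.

```python
def decode_headers(stream):
    """Toy sketch of the header parsing in Figs. 11 - 12: for each
    frame, read the prediction-mode symbol (1104); for a P frame,
    also read the rounding symbol (1106) that selects positive ('+')
    or negative ('-') rounding for prediction-image synthesis.
    `stream` is a flat list of symbols."""
    frames = []
    it = iter(stream)
    for mode in it:
        if mode == 'I':
            frames.append(('I', None))    # no rounding info for I frames
        else:
            rounding = next(it)           # 1106: rounding info follows
            frames.append(('P', rounding))
    return frames
```

If a transmission error flips the rounding symbol, the decoder synthesizes the prediction image with the wrong rounding method, which is exactly the quality degradation the text warns about.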
When software based on the flowcharts shown in Figs.
8 - 12 is run in the software image encoders or decoders,
the same effect as when custom circuits and custom chips are
utilized is obtained.
A storage medium (recording medium) on which the bit stream
generated by the software encoder 601 of Fig. 6 is
recorded is shown in Fig. 13. It is assumed that the
algorithms shown in the flowcharts of Figs. 8 - 10 are used
in the software encoder. Digital information is recorded
concentrically on a recording disk 1301 capable of recording
digital information (for instance, magnetic disks, optical
disks, etc.). A portion 1302 of the information recorded on
this digital disk includes: prediction mode distinction
information 1303, 1305, 1308, 1311, and 1314; rounding
method distinction information 1306, 1309, 1312, and 1315;
and motion vector and DCT coefficient information 1304, 1307,
1310, 1313, and 1316. Information representing 'I' is
recorded in 1303, 'P' is recorded in 1305, 1308, 1311, and
1314, '+' is recorded in 1306 and 1312, and '-' is recorded
in 1309 and 1315. In this case, 'I' and '+' can be
represented by a single bit of 0, and 'P' and '-' can be
represented by a single bit of 1. Using this representation,
the decoder can correctly interpret the recorded information
and the correct reconstructed image is synthesized. By
storing a coded bit stream in a storage medium using the method
described above, the accumulation of rounding errors is
prevented when the bit stream is read and decoded.
A storage medium on which the bit stream of the coded data
of the image sequence shown in Fig. 5 is recorded is shown
in Fig. 15. The recorded bit stream includes information
related to P+, P-, and B frames. In the same way as in 1301
of Fig. 13, digital information is recorded concentrically
on a recording disk 1501 capable of recording digital
information (for instance, magnetic disks, optical disks,
etc.). A portion 1502 of the digital information recorded
on this digital disk includes: prediction mode distinction
information 1503, 1505, 1508, 1510, and 1513; rounding
method distinction information 1506 and 1512; and motion
vector and DCT coefficient information 1504, 1507, 1509,
1511, and 1514. Information representing 'I' is recorded
in 1503, 'P' is recorded in 1505 and 1510, 'B' is recorded
in 1508 and 1513, '+' is recorded in 1506, and '-' is recorded
in 1512. In this case, 'I', 'P' and 'B' can be represented
respectively by the two-bit values 00, 01, and 10, and '+' and
'-' can be represented respectively by the one-bit values 0 and
1. Using this representation, the decoder can correctly
interpret the recorded information and the correct
reconstructed image is synthesized. In Fig. 15, information
related to frame 501 (I frame) in Fig. 5 is 1503 and 1504,
information related to frame 502 (B frame) is 1508 and 1509,
information related to frame 503 (P+ frame) is 1505 through 1507,
information related to frame 504 (B frame) is 1513 and 1514,
and information related to frame 505 (P- frame) is 1510 through
1512. When image sequences are coded using B frames,
the transmission order and display order of frames are
usually different. This is because the previous and
subsequent reference images need to be coded before the
prediction image for the B frame is synthesized.
Consequently, in spite of the fact that frame 502 is
displayed before frame 503, information related to frame 503
is transmitted before information related to frame 502. As
described above, there is no need to use multiple rounding
methods for B frames, since motion compensation in B frames
does not cause accumulation of rounding errors. Therefore,
as shown in this example, information that specifies
rounding methods (e.g. '+' and '-') is not transmitted for
B frames. Thus, for instance, even if only positive rounding
is applied to B frames, the problem of accumulated rounding
errors does not occur. By storing coded bit streams
containing information related to B frames in a storage medium
in the way described above, the occurrence of accumulated
rounding errors can be prevented when this bit stream is read
and decoded.
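The bit assignment described for Fig. 15 ('I', 'P' and 'B' as the two-bit values 00, 01 and 10; '+' and '-' as the one-bit values 0 and 1, with no rounding bit for I and B frames) can be sketched as a small helper. The function name and dictionary layout are illustrative, not part of the patent.

```python
# Assumed mapping taken from the Fig. 15 example in the text.
MODE_BITS = {'I': '00', 'P': '01', 'B': '10'}
ROUND_BITS = {'+': '0', '-': '1'}

def header_bits(mode, rounding=None):
    """Bits written for one frame header.  The rounding bit is appended
    only for P frames; I and B frames carry no rounding information,
    as the text explains for B frames."""
    bits = MODE_BITS[mode]
    if mode == 'P' and rounding is not None:
        bits += ROUND_BITS[rounding]
    return bits
```

So a P+ frame header costs three bits ('01' + '0') while an I or B frame header costs only the two mode bits.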
Specific examples of coders and decoders using the
coding method described in this specification are shown in
Fig. 14. The image coding and decoding method can be
utilized by installing image coding and decoding software
into a computer 1401. This software is recorded in some kind
of storage medium (CD-ROM, floppy disk, hard disk, etc.) 1412,
loaded into a computer and then used. Additionally, the
computer can be used as an image communication terminal by
connecting the computer to a communication line. It is also
possible to install the decoding method described in this
specification into a player device 1403 that reads and
decodes the coded bit stream recorded in a storage medium 1402.
In this case, the reconstructed image signal can be displayed
on a television monitor 1404. The device 1403 can be used
only for reading the coded bit stream, and in this case, the
decoding device can be installed in the television monitor
1404. It is well known that digital data transmission can
be realized using satellites and terrestrial waves. A
decoding device can also be installed in a television
receiver 1405 capable of receiving such digital
transmissions. Also, a decoding device can be
installed inside a set-top box 1409 connected to a
satellite/terrestrial wave antenna, or a cable 1408 of a
cable television system, so that the reconstructed images
can be displayed on a television monitor 1410. In this case,
the decoding device can be incorporated in the television
monitor rather than in the set-top box, as in the case of
1404. The layout of a digital satellite broadcast system
is shown in 1413, 1414 and 1415. The video information in
the coded bit stream is transmitted from a broadcast station
1413 to a communication or broadcast satellite 1414. The
satellite receives this information, sends it to a home 1415
having equipment for receiving satellite broadcast programs,
and the video information is reconstructed and displayed in
this home using devices such as a television receiver or a
set-top box. Digital image communication using mobile
terminals 1406 has recently attracted considerable
attention, due to the fact that image communication at very
low bit rates has become possible. Digital portable
terminals can be categorized into the following three types:
a transceiver having both an encoder and decoder; a
transmitter having only an encoder; and a receiver having
only a decoder. An encoding device can be installed in a
video camera recorder 1407. The camera can also be used just
for capturing the video signal, and this signal can be
supplied to a custom encoder 1411. All of the devices or
systems shown in this drawing can be equipped with the coding
and/or decoding method described in this specification. By
using this coding and/or decoding method in these devices
or systems, images of higher quality compared with those
obtained using conventional technologies can be obtained.
The following variations are clearly included within
the scope of this invention.
(i) A prerequisite of the above-described principle
was the use of block matching as a motion compensation method.
However, this invention is further capable of being applied
to all image sequence coding and decoding methods in which
motion compensation is performed by taking a value for the
vertical and horizontal components of the pixel motion
vector that is other than an integer multiple of the sampling
period in the vertical and horizontal directions of the pixel,
and then finding, by interpolation, the intensity value of
a position where the sample value is not present. Thus, for
instance, the global motion compensation listed in Japanese
Patent Application No. Hei 08-060572 and the warping
prediction listed in Japanese Patent Application No. Hei
08-249601 are applicable to the method of this invention.
(ii) The description of the invention only mentioned
the case where a value that is an integral multiple of 1/2 was taken
for the horizontal and vertical components of the motion
vector. However, this invention is also generally
applicable to methods in which integral multiples of 1/d (d
is a positive integer and also an even number) are allowed
for the horizontal and vertical components of the motion
vector. However, when d becomes large, the divisor for
division in bilinear interpolation (square of d, see
Equation 2) also becomes large, so that in contrast, the
probability that results from normal division reach a value
of 0.5 becomes low. Accordingly, when performing only
positive rounding, the absolute value of the expectation of
rounding errors becomes small and the bad effects caused by
accumulated errors become less conspicuous. Also
applicable to the method of this invention is a motion
compensation method where, for instance, the d value is
variable, both positive rounding and negative rounding are
used when d is smaller than a fixed value, and only positive
rounding or only negative rounding is used when the value
of d is larger than a fixed value.
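The claim in (ii) that a larger d weakens the bias of positive-only rounding can be checked with a small enumeration. The model below is an idealization introduced only for illustration: it assumes every remainder modulo the divisor d*d is equally likely. Under that assumption the average error of positive rounding comes out to 1/(2*d*d), which shrinks as d grows.

```python
from fractions import Fraction

def positive_rounding_bias(d):
    """Average error of positive rounding (round half up) over all
    remainders modulo the divisor d*d used in bilinear interpolation,
    assuming each remainder is equally likely (an idealization, not
    a statement from the patent)."""
    div = d * d
    total = Fraction(0)
    for rem in range(div):
        exact = Fraction(rem, div)           # fractional part before rounding
        rounded = (rem + div // 2) // div    # positive rounding to 0 or 1
        total += rounded - exact             # signed rounding error
    return total / div
```

For half-pel accuracy (d = 2) this gives a bias of 1/8, and for d = 4 only 1/32, matching the text's observation that the expectation of the rounding error becomes small when d is large.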
(iii) As mentioned in the prior art, when DCT is
utilized as an error coding method, the adverse effects from
accumulated rounding errors are prone to appear when the
quantization step size of the DCT coefficients is large. However,
a method is also applicable to the invention in which, when
the quantization step size of the DCT coefficients is larger than
a threshold value, both positive rounding and negative
rounding are used, and when the quantization step size of the
DCT coefficients is smaller than the threshold value,
only positive rounding or only negative rounding is used.
(iv) Comparing cases where error accumulations occur on the
luminance plane with cases where error accumulations occur
on the chrominance plane, the bad effects on the
reconstructed images are generally more serious in the case
of error accumulations on the chrominance plane. This is
due to the fact that an overall change in the image color
is more conspicuous than a slight darkening or lightening
of the image. However, a method
is also applicable to this invention in which both positive
rounding and negative rounding are used for the chrominance
signal, and only positive rounding or only negative rounding is
used for the luminance signal.
As described in the description of related art, 1/4
pixel accuracy motion vectors obtained by halving the 1/2
pixel accuracy motion vectors are rounded to 1/2 pixel
accuracy in H.263. However, by adding certain changes to this
method, the absolute expectation value for rounding errors
can be reduced. In H.263, which was mentioned in the prior
art, a value which is half the horizontal or vertical
component of the motion vector for the luminance plane is
expressed as r + s / 4 (r is an integer, s is an integer less
than 4 and not smaller than 0), and when s is 1 or 3, a rounding
operation is performed to obtain a 2. This operation can
be changed as follows: when s is 1, a rounding operation is
performed to obtain a 0, and when s is 3, a 1 is added to
r to make s a 0. By performing these operations, the number
of times that the intensity values at positions 406 - 408
in Fig. 4 need to be calculated is definitely reduced (the
probability that the horizontal and vertical components of
the motion vector will be integers becomes high), so that
the absolute expectation value for the rounding error
becomes small. However, even if the size of
the error occurring in this method can be limited, the
accumulation of errors cannot be completely prevented.
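The modified quarter-pel rounding described above (s = 1 rounds down to 0; s = 3 carries a 1 into r) can be sketched next to the H.263-style rule it replaces. Values are in quarter-pel units, and the function names are illustrative.

```python
def h263_round_to_half_pel(v4):
    """H.263-style rule from the text: a value in quarter-pel units
    (v4 / 4 = r + s/4); when s is 1 or 3, it is rounded to 2, i.e. a
    half-pel position."""
    r, s = divmod(v4, 4)
    if s in (1, 3):
        s = 2
    return 4 * r + s

def modified_round_to_half_pel(v4):
    """Modified rule from the text: s = 1 is rounded to 0, and for
    s = 3 a 1 is added to r and s becomes 0.  Components therefore
    land on integer positions more often, shrinking the expectation
    of the rounding error."""
    r, s = divmod(v4, 4)
    if s == 1:
        s = 0
    elif s == 3:
        r, s = r + 1, 0
    return 4 * r + s
```

For example, 5/4 and 7/4 both round to the half-pel value 6/4 under the H.263-style rule, whereas the modified rule sends them to the integer positions 4/4 and 8/4 respectively.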
(v) The invention described in this specification is
applicable to a method that obtains the final interframe
prediction image by averaging the prediction images obtained
by different motion compensation methods. For example, in
the method described in Japanese Patent Application No. Hei
8-2616, interframe prediction images obtained by the
following two methods are averaged: block matching in which
a motion vector is assigned to each 16x16 pixel block; and
block matching in which a motion vector is assigned to each
8x8 pixel block. In this method, rounding is also performed
when calculating the average of the two prediction images.
When only positive rounding is continuously performed in
this averaging operation, a new type of rounding error
accumulates. This problem can be solved by using multiple
rounding methods for this averaging operation. In this
method, negative rounding is performed in the averaging
operation when positive rounding is performed in block
matching. Conversely, positive rounding is used for the
averaging when negative rounding is used for block matching.
By using different rounding methods for averaging and block
matching, the rounding errors from two different sources are
cancelled within the same frame.
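The cancellation idea of (v) can be sketched as follows, assuming the usual integer formulas for a rounded average of two values ((a + b + 1)//2 rounds half up, (a + b)//2 rounds half down); the function names and list-based pixel layout are illustrative.

```python
def avg(a, b, rounding):
    """Average of two integer pixel values with selectable rounding:
    positive ('+') rounds x.5 up, negative ('-') rounds x.5 down."""
    return (a + b + 1) // 2 if rounding == '+' else (a + b) // 2

def combined_prediction(p16, p8, block_rounding):
    """Average the 16x16 and 8x8 block-matching predictions (given
    here as flat pixel lists), using the rounding method OPPOSITE to
    the one used in block matching, as the text suggests, so the two
    rounding biases tend to cancel within the same frame."""
    averaging_rounding = '-' if block_rounding == '+' else '+'
    return [avg(a, b, averaging_rounding) for a, b in zip(p16, p8)]
```

If positive rounding were used in both block matching and averaging, every x.5 result would be pushed upward twice per frame, which is the "new type of rounding error" the text describes.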
(vi) When utilizing a method that alternately locates
P+ frames and P- frames along the time axis, the encoder or
the decoder needs to determine whether the currently
processed P frame is a P+ frame or a P- frame. The following
is an example of such an identification method: a counter counts
the number of P frames after the most recently coded or
decoded I frame, and the current P frame is a P+ frame when
the number is odd, and a P- frame when the number is even
(this method is referred to as an implicit scheme). There
is also a method, for instance, that writes into the header
section of the coded image information, information to
identify whether the currently coded P frame at the encoder
is a P+ frame or a P- frame (this method is referred to as
an explicit scheme). Compared with the implicit method,
this method is well able to withstand transmission errors,
since there is no need to count the number of P frames.
Additionally, the explicit method has the following
advantages: as described in "Description of the Related Art",
past encoding standards (such as MPEG-1 or MPEG-2) use only
positive rounding for motion compensation. This means, for
instance, that the motion estimation/motion compensation
devices (for example, equivalent to 106 in Fig. 1) for
MPEG-1/MPEG-2 on the market are not compatible with coding
methods that use both P+ frames and P- frames. It is assumed
that there is a decoder which can decode bit streams
generated by a coding method that uses P+ frames and P- frames.
In this case, if the decoder is based on the above-mentioned
implicit method, then it will be difficult to develop an
encoder that generates bit streams that can be correctly
decoded by the above-mentioned decoder, using the
above-mentioned motion estimation/compensation device for
MPEG-1/MPEG-2. However, if the decoder is based on the
above-mentioned explicit method, this problem can be solved. An
encoder using an MPEG-1/MPEG-2 motion estimation/motion
compensation device can continuously send P+ frames by
continuously writing rounding method distinction
information indicating positive rounding into the frame
information header. When this is performed, a decoder based
on the explicit method can correctly decode the bit stream
generated by this encoder. Of course, it is more
likely in such a case that the accumulation of rounding errors
occurs, since only P+ frames are present. However, error
accumulation is not a serious problem in cases where the
encoder uses only small values as the quantization step size
for the DCT coefficients (an example of such coders is a
custom encoder used only for high-rate coding). In addition
to this interoperability with past standards, the
explicit method further has the following advantages: (a)
the equipment cost for high-rate custom encoders, and for encoders
not prone to rounding error accumulation due to frequent
insertion of I frames, can be reduced by installing only
positive or negative rounding as the pixel value rounding
method for motion compensation; and (b) the above encoders
not prone to rounding error accumulation have the advantage
that there is no need to decide whether to code the current
frame as a P+ or P- frame, and the processing is simplified.
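The implicit and explicit schemes of (vi) can be contrasted in a few lines; the header field name is an illustrative assumption, not the patent's syntax.

```python
def implicit_rounding(p_count_since_i):
    """Implicit scheme: a counter counts P frames since the most
    recently coded or decoded I frame; an odd count means a P+ frame,
    an even count a P- frame."""
    return '+' if p_count_since_i % 2 == 1 else '-'

def explicit_rounding(frame_header):
    """Explicit scheme: the rounding method is read from information
    written into the frame header (the dict key 'rounding' is an
    assumed name).  No P-frame counting is needed, so a lost frame
    cannot desynchronize the rounding choice."""
    return frame_header['rounding']
```

The implicit scheme saves header bits but breaks down if a P frame is lost (the parity flips for every following frame), which is why the text calls the explicit method better able to withstand transmission errors.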
(vii) The invention described in this specification
is applicable to coding and decoding methods that apply
filtering accompanied by rounding to the interframe
prediction images. For instance, in the international
standard H.261 for image sequence coding, a low-pass filter
(called a loop filter) is applied to block signals whose
motion vectors are not 0 in interframe prediction images.
Also, in H.263, filters can be used to smooth out
discontinuities on block boundaries (blocking artifacts).
All of these filters perform weighted averaging of pixel
intensity values, and rounding is then performed on the
averaged intensity values. Even for these cases, selective
use of positive rounding and negative rounding is effective
for preventing error accumulation.
(viii) Besides I P+ P- P+ P- ..., various methods for
mixing P+ frames and P- frames, such as I P+ P+ P- P- P+ P+ ...
or I P+ P- P- P+ P+ ..., are applicable to the method of this
invention. For instance, using a random number generator
that outputs 0 and 1 each at a probability of 50 percent,
the encoder can code a P+ and P- frame when the output is
0 and 1, respectively. In any case, the less the difference
in probability that P+ frames and P- frames occur in a certain
period of time, the less the rounding error accumulation is
prone to occur. Further, when the encoder is allowed to mix
P+ frames and P- frames by an arbitrary method, the encoder
and decoder must operate based on the explicit method and
not the implicit method described above. Accordingly,
the explicit method is superior when viewed from the
perspective of allowing flexible configuration of the
encoder and decoder.
(ix) The invention described in this specification does
not limit the pixel value interpolation method to bilinear
interpolation. Interpolation methods for intensity values
can generally be described by the following equation:
[Equation 5]
R(x+r, y+s) = T( Σ_{j=-∞..∞} Σ_{k=-∞..∞} h(r-j, s-k) R(x+j, y+k) )    ... (5)
where r and s are real numbers, h(r, s) is a function
used for the interpolation, and T(z) is a function
for rounding the real number z. The definitions of R(x,
y), x, and y are the same as in Equation 4. Motion
compensation utilizing positive rounding is performed when
T(z) is a function representing positive rounding, and
motion compensation utilizing negative rounding is
performed when T(z) is a function representing negative rounding.
This invention is applicable to interpolation methods that
can be described using Equation 5. For instance, bilinear
interpolation can be described by defining h(r, s) as shown
below.
[Equation 6]
h(r, s) = (1 - |r|)(1 - |s|),  0 ≤ |r| ≤ 1 and 0 ≤ |s| ≤ 1,    ... (6)
        = 0,  otherwise.
However, if for instance h(r, s) is defined as shown
below,
[Equation 7]
h(r, s) = 1 - |r| - |s|,  0 ≤ |r| + |s| ≤ 1 and rs < 0,
        = 1 - |r|,  |r| ≥ |s|, |r| ≤ 1, rs ≥ 0,    ... (7)
        = 1 - |s|,  |s| > |r|, |s| ≤ 1, rs ≥ 0,
        = 0,  otherwise,
then an interpolation method different from bilinear
interpolation is implemented but the invention is still
applicable.
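Equation 5 with the bilinear kernel of Equation 6 can be sketched directly. The concrete formulas chosen for T(z) here (floor(z + 0.5) for positive rounding, ceil(z - 0.5) for negative rounding, for nonnegative z) are assumptions consistent with the rounding behaviour described in the text, not formulas quoted from it.

```python
import math

def T(z, rounding):
    """Rounding operator of Equation 5 for nonnegative z: positive
    rounding sends x.5 up (floor(z + 0.5)); negative rounding sends
    x.5 down (ceil(z - 0.5)).  Assumed formulas."""
    return math.floor(z + 0.5) if rounding == '+' else math.ceil(z - 0.5)

def h_bilinear(r, s):
    """Bilinear interpolation kernel of Equation 6."""
    if abs(r) <= 1 and abs(s) <= 1:
        return (1 - abs(r)) * (1 - abs(s))
    return 0.0

def interpolate(img, x, y, r, s, rounding):
    """Equation 5 with the bilinear kernel: a weighted sum of the
    four pixels around (x, y) for 0 <= r, s <= 1, then rounding by T.
    `img` is a dict mapping (x, y) to an integer intensity value."""
    acc = 0.0
    for j in (0, 1):
        for k in (0, 1):
            acc += h_bilinear(r - j, s - k) * img[(x + j, y + k)]
    return T(acc, rounding)
```

At the quarter positions where the exact sum ends in .5, positive and negative rounding give results that differ by 1, which is precisely the per-sample discrepancy whose accumulation the P+/P- alternation is designed to cancel.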
(x) The invention described in this specification does
not limit the coding method for error images to DCT (discrete
cosine transform). For instance, the wavelet transform (for
example, M. Antonini, et al., "Image Coding Using Wavelet
Transform", IEEE Trans. Image Processing, vol. 1, no. 2,
April 1992) and the Walsh-Hadamard transform (for example,
A. N. Netravali and B. G. Haskell, "Digital Pictures",
Plenum Press, 1988) are also applicable to this invention.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status


Title Date
Forecasted Issue Date 2005-10-25
(22) Filed 1998-06-08
(41) Open to Public Inspection 1998-12-09
Examination Requested 2004-08-25
(45) Issued 2005-10-25
Deemed Expired 2013-06-10

Abandonment History

There is no abandonment history.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Request for Examination $800.00 2004-08-25
Registration of a document - section 124 $100.00 2004-08-25
Registration of a document - section 124 $100.00 2004-08-25
Application Fee $400.00 2004-08-25
Maintenance Fee - Application - New Act 2 2000-06-08 $100.00 2004-08-25
Maintenance Fee - Application - New Act 3 2001-06-08 $100.00 2004-08-25
Maintenance Fee - Application - New Act 4 2002-06-10 $100.00 2004-08-25
Maintenance Fee - Application - New Act 5 2003-06-09 $200.00 2004-08-25
Maintenance Fee - Application - New Act 6 2004-06-08 $200.00 2004-08-25
Maintenance Fee - Application - New Act 7 2005-06-08 $200.00 2005-05-05
Final Fee $300.00 2005-08-09
Maintenance Fee - Patent - New Act 8 2006-06-08 $200.00 2006-04-28
Maintenance Fee - Patent - New Act 9 2007-06-08 $200.00 2007-04-11
Maintenance Fee - Patent - New Act 10 2008-06-09 $250.00 2008-03-27
Maintenance Fee - Patent - New Act 11 2009-06-08 $250.00 2009-03-20
Maintenance Fee - Patent - New Act 12 2010-06-08 $250.00 2010-04-01
Maintenance Fee - Patent - New Act 13 2011-06-08 $250.00 2011-05-11
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
HITACHI LTD.
Past Owners on Record
NAKAYA, YUICHIRO
NEJIME, YOSHITO
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document
Description 
Date
(yyyy-mm-dd) 
Number of pages   Size of Image (KB) 
Claims 2004-08-25 19 687
Drawings 2004-08-25 13 357
Cover Page 2004-10-18 1 51
Representative Drawing 2004-10-15 1 14
Abstract 2004-08-25 1 42
Description 2004-08-25 40 1,813
Drawings 2004-12-30 13 345
Claims 2004-12-30 19 675
Description 2004-12-30 40 1,809
Representative Drawing 2005-10-06 1 16
Cover Page 2005-10-06 1 55
Cover Page 2005-11-15 2 197
Correspondence 2004-10-29 1 13
Prosecution-Amendment 2004-11-19 2 50
Assignment 2004-08-25 3 97
Prosecution-Amendment 2004-12-30 6 195
Prosecution-Amendment 2005-03-03 1 42
Prosecution-Amendment 2005-08-09 1 42
Correspondence 2005-08-09 1 43
Correspondence 2005-11-02 1 40
Prosecution-Amendment 2005-11-15 2 161