Patent 2949914 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2949914
(54) English Title: METHOD FOR ENCODING, COMPRESSED IMAGES IN PARTICULAR, IN PARTICULAR BY "RANGE CODER" OR ARITHMETIC COMPRESSION
(54) French Title: PROCEDE POUR ENCODER, NOTAMMENT DES IMAGES COMPRESSEES, NOTAMMENT PAR "RANGE CODER" OU COMPRESSION ARITHMETIQUE
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H03M 7/40 (2006.01)
(72) Inventors :
  • GERVAIS, THAN MARC-ERIC (France)
  • LOUBET, BRUNO (France)
  • BESSOU, NICOLAS (France)
  • GUIMIOT, YVES (France)
  • PETIT FILS, MICKAEL (France)
  • ROQUES, SEBASTIEN (France)
(73) Owners :
  • COLIN, JEAN-CLAUDE (France)
(71) Applicants :
  • COLIN, JEAN-CLAUDE (France)
(74) Agent:
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2014-05-14
(87) Open to Public Inspection: 2014-11-20
Examination requested: 2019-05-13
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/FR2014/000106
(87) International Publication Number: WO2014/184452
(85) National Entry: 2016-11-10

(30) Application Priority Data:
Application No. Country/Territory Date
1354479 France 2013-05-17

Abstracts

English Abstract

Method for encoding a string of symbols using several models of the arithmetic or range coder type and comprising steps where: - each model is associated with a belonging criterion, - the string is traversed so as to determine, for each symbol, the encoding model to which it belongs as a function of the criteria; then, - a probability of occurrence of each symbol in the corresponding model is determined; then, - the string is traversed while encoding each symbol successively; and, - a file is constructed on the basis of the code thus obtained.


French Abstract

Procédé pour encoder une suite de symboles utilisant plusieurs modèles du type arithmétique ou du range coder et comprenant des étapes où : - chaque modèle est associé à un critère d'appartenance, - on parcourt la suite pour déterminer, pour chaque symbole, le modèle d'encodage auquel il appartient, en fonction des critères; puis, - on détermine une probabilité d'occurrence de chaque symbole dans le modèle correspondant; puis, - on parcourt la suite en encodant chaque symbole successivement; et, - on constitue un fichier à partir du code ainsi obtenu.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
1. Entropic binary coding method for a series (S) of symbols (Sn) using at least two models (m), each model being associated with a belonging criterion, comprising steps during which:
- said series is run through in order, for each symbol, to determine the encoding model (m) to which it belongs, according to said belonging criteria; then
- for each model and for each symbol, a probability (P) of occurrence of the symbol in the model is determined; then
- said series is run through once again by encoding each symbol successively according to the model to which it belongs; and
- the models or information enabling them to be reconstituted and the binary code thus obtained are stored.
2. Encoding method according to claim 1, characterised in that the series is first run through in order to determine the criteria of belonging to each of the models.
3. Encoding method according to one of claims 1 or 2, characterised in that the belonging of a current symbol (Sn) to a model is determined according to a belonging function mm(n) calculated from one or more symbols (Sn-4 - Sn-1) preceding said current symbol in the series.
4. Encoding method according to one of claims 1 to 3, characterised in that each symbol is a number, preferably a number in base 10.
5. Encoding method according to one of claims 2 and 3, characterised in that the belonging function is the mean (mm) of absolute values of a given number of reference symbols, preferably four reference symbols (Sn-4 - Sn-1), immediately preceding said current symbol in the series.

6. Encoding method according to claim 4, characterised in that, for calculating the belonging function, the list is preceded by a sufficient number of arbitrary symbols, the value of each preferably being zero.
7. Method according to one of claims 5 or 6, characterised in that the criterion of belonging of a current symbol to one of the models is a lower bound l(m) of the range covered by the model and the comparison of the mean (mm) of the absolute values of the symbols preceding said current symbol.
8. Method according to claim 7, characterised in that, the bounds of each of the models being stored in an increasing order, the difference between two successive bounds increases when the value of said bounds increases.
9. Encoding method according to claim 7 or 8, characterised in that, in order to determine the bound l(m):
- the mean of the values of the symbols in the series is calculated; then
- the value of the difference between the maximum value and the mean on one hand and between the mean and the minimum value on the other hand among the values of symbols in the series are calculated, and a distance (D) equal to the maximum of these two is deduced therefrom; then
- a deviation (DV) equal to the mean of the absolute values of the differences between each element of the signal and the mean of the series is calculated; then
- a spacing (E) is calculated according to the formula:
  E = ln(D / DV) / ln(2)
then
- the bound l(m) between the moving averages is calculated for each of the successive models (m) in accordance with the following formula:
  l(m) = D * (m / number of models)^E

10. Method for compressing a medium of the image, video or sound type, characterised in that it uses an encoding method according to one of claims 4 to 9.
11. Image compression method according to claim 10, characterised in that it is applied to compressed symbols of said image, each corresponding to a box of a matrix, the sequence being formed by putting said symbols in a line.
12. Compression method according to claim 11, characterised in that, for putting the symbols in a line, each row is run through in a first direction and then the following row, if applicable, in the opposite direction to the first.
13. Entropic binary decoding method for a series of symbols using at least two models and encoded by means of a method according to one of the preceding claims, characterised in that:
- each of the models used on encoding is extracted;
- the criteria for belonging to each of these models are extracted and recalculated, and
- the same belonging criteria are used as on encoding for decoding each symbol by means of the model used on encoding.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHOD FOR ENCODING, COMPRESSED IMAGES IN PARTICULAR, IN
PARTICULAR BY "RANGE CODER" OR ARITHMETIC COMPRESSION
The present invention relates mainly to the field of entropic binary encoding, in particular encoding using encoding models of the "range coder" type, that is to say using ranges, or of the arithmetic type.
Encoders of the range coder or arithmetic type make it possible to encode a series of symbols without loss. These symbols may be of any type, in particular alphanumeric characters or punctuation characters. In the case of methods for compressing an image, the symbols are numbers resulting from the prior compression of said image, for example by a differential compression or a wavelet compression generally preceded by colorimetric transformation.
A so-called entropic binary encoding makes it possible to reduce the number of bits necessary for encoding a signal, here represented by the series of symbols to be encoded, without loss on the content thereof. The level of reduction depends on the probability of occurrence of the symbols in the signal. In particular, so-called "arithmetic" and "range coder" encodings use probability models in which each symbol is associated with a probability. The theoretical number of bits necessary for encoding the symbol is, in the context of an encoder of the "range coder" or arithmetic coding type, -log2(P), where P is the probability of occurrence of this symbol in the signal.
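As a minimal illustration of this cost formula (a sketch, not part of the patent text; the function name is arbitrary), the theoretical bit count for a symbol of probability P can be computed directly:

import math

def bit_cost(p):
    # Theoretical number of bits needed to encode one symbol of probability p.
    return -math.log2(p)

# For example, a symbol with probability 0.5 costs 1 bit, one with
# probability 0.0625 costs 4 bits.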
An encoder of the range coder or arithmetic coder type must always have, during the encoding or decoding of a symbol, a probability model comprising one or more symbols, as well as the probability of occurrence thereof, including at least the current symbol. The probability of the symbol is used to encode it. To encode the same signal, several probability models are possible. The most suitable is the model for which the signal is the most compressed, that is to say for which the code resulting from the encoding has the lowest weight.

For a binary coding to be efficient, it is necessary for:
- the decoded data to be identical to the input data;
- the weight of the compressed data to be as small as possible;
- the encoding time to be as short as possible;
- the decoding time to be as short as possible.
This depends on two main factors:
- the encoding in itself, which must, for a given probability P, encode the symbol on a number of bits as close as possible to -log2(P), while restoring the same symbol on decoding if the same model is supplied to it;
- the computation of the models, which must make it possible to supply the model most suited to each of the symbols, while being as quick as possible.
It is thus possible to use one model for the entire signal. Then a compression level close to Shannon entropy is obtained. On the other hand, some encoders use completely adaptive models, such as the PPMd method, from the English "Prediction by Partial Matching, escape method D". In this case, the models are established as the encoding progresses. These encoders make it possible to obtain multiple models, suitable for each symbol, but the processing periods are much longer.
The aim of the invention is to propose an encoding method that makes it possible to obtain a code the weight of which is lower than that generally obtained by means of a single model, and where the processing periods are shorter than those generally obtained with multiple models.
In the context of multimedia compression, it is found that, after transformation of the signal, for example by so-called wavelet methods, the low values are generally close to one another, just as the highest values are close to one another.

According to the invention, such an entropic binary coding method for a series of symbols using at least two models, each model being associated with a belonging criterion, comprises steps during which:
- said series is run through in order, for each symbol, to determine the encoding model to which it belongs, according to said belonging criteria; then
- for each model and for each symbol, a probability of occurrence of the symbol in the model is determined; then
- said series is run through once again by encoding each symbol successively according to the model to which it belongs; and
- the models or information enabling them to be reconstituted and the binary code thus obtained are stored.
Advantageously, the series is run through in advance in order to determine the criteria of belonging to each of the models.
Preferably, the belonging of a current symbol to a model is determined according to a belonging function calculated from one or more symbols preceding said current symbol in the series. Each symbol being able to be a number, preferably a number in base 10, the belonging function may be the mean of the absolute values of a given number of reference symbols, preferably four reference symbols, immediately preceding said current symbol in the series.
For calculating the belonging function, the list is advantageously preceded by a sufficient number of arbitrary symbols, the value of each preferably being zero.
The criterion of belonging to one of the models may be a lower bound for a range covered by the model and the comparison of the mean of the preceding symbols with said bound. The bounds of each of the models being stored in an increasing order, the difference between two successive bounds advantageously increases when the value of said bounds increases. To determine the bound of each model it is possible to:

- calculate the mean of the values of the symbols in the series; then
- calculate the value of the difference between the maximum value and the mean on one hand and between the mean and the minimum value on the other hand among the values of symbols in the series, and deduce therefrom a distance equal to the maximum of these two; then
- calculate a deviation equal to the mean of the absolute values of the differences between each element of the signal and the mean of the series; then
- calculate a spacing according to the formula:
  Spacing = ln(Distance / Deviation) / ln(2)
then
- calculate the bound between the moving averages for each of the successive models in accordance with the following formula:
  l(m) = Distance * (m / number of models)^Spacing
The invention also relates to a method for compressing a medium of the image, video or sound type, characterised in that it uses an encoding method according to the invention. A method for compressing an image applies preferentially to compressed symbols of the image, each corresponding to a box in a matrix, the series being formed by putting said symbols in a line. For putting the symbols in a line, it is possible to run through each row in a first direction and then the following row, if applicable, in the opposite direction to the first.
According to another subject matter of the invention, an entropic binary decoding method for a series of symbols using at least two models and encoded by means of a method according to the invention is characterised in that:
- each of the models used on encoding is extracted;
- the criteria for belonging to each of these models are extracted and recalculated, and
- the same belonging criteria are used as on encoding for decoding each symbol by means of the model used on encoding.
Several embodiments of the invention will be described below, by way of non-limitative examples, with reference to the accompanying drawings, in which:
- figure 1 illustrates a layer of an image to which a method according to the invention is applied;
- figure 2 illustrates a sub-matrix, referred to as level-3 LH, of coefficients, in base 10, resulting from a wavelet transformation of the image of figure 1 followed by quantisation and rounding;
- figure 3 illustrates the method for putting the coefficients of the matrix of figure 2 in a line, so as to form a series of symbols to be processed by the method according to the invention;
- figure 4 illustrates a table giving, for each model used, the value of a corresponding lower bound;
- figure 5 is a graphical representation of the table in figure 4;
- figure 6 is a table showing all the values in base 10 corresponding to the symbols of the series and, for each value, its number of occurrences in the series and therefore its probability in the context of a single model;
- figure 7 is a graphical representation of the table in figure 6; and
- figure 8 is a table showing, for each value, its number of occurrences in each of the models.
To illustrate an example of a method according to the invention, an original image is used, the pixels of which are disposed in 320 columns and 240 rows and encoded with three components R (red), G (green) and B (blue). This image then underwent a colorimetric transformation of the Y, Cb, Cr type. Figure 1 illustrates, in the form of an image, the Y luminance component resulting from the colorimetric transformation.

A two-dimensional CDF 5/3 wavelet transformation using fixed-point numbers is first of all applied to the image 1. Figure 2 illustrates a matrix LHQ corresponding to a so-called level-3 LH sub-matrix resulting from this wavelet transformation, to which a quantisation by means of a coefficient equal to 3.53 was next applied, and then rounding to the closest integer. This wavelet transformation is effected for each level in two dimensions: a vertical pass and then a horizontal pass. The vertical wavelet transformation generates a so-called detail matrix, or H matrix, and a so-called approximation matrix, or L matrix. The application of a horizontal wavelet pass to the L matrix generates an LH detail matrix and an LL approximation matrix. The application of a vertical wavelet pass to the H matrix generates two detail matrices HL and HH. New wavelet levels are then applied recursively to successive approximation matrices LL. Thus the level-3 LH matrix is the type-LH matrix obtained during the third level of wavelets. Once the LH matrix is obtained, it is quantised by a factor 3.53 and then its values are rounded in order to obtain the LHQ matrix.
The LHQ matrix comprises 40 columns and 30 rows, that is to say 1200 values, each corresponding to a symbol to be encoded. To apply the processing according to the invention, the 1200 values are put in a line, that is to say a series S of the 1200 values is formed. In the example illustrated, the putting in a line is done as illustrated in figure 3, the first row of the LHQ matrix being run through from left to right, then the second from right to left, so that, in the series S, the last value of the first row of the LHQ matrix precedes the last value of the second row. More generally, a row being run through in one direction, the following row is run through in the opposite direction. In this way a signal formed by the series of symbols Sn is obtained, n being an integer varying from 1 to N, with N = 1200, each symbol Sn having a value denoted V(n).
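A minimal sketch of this alternating-direction flattening, in Python (the function name and the small example matrix are illustrative, not taken from the patent):

def flatten_boustrophedon(matrix):
    # Concatenate the rows, reversing the reading direction on every other
    # row, so that the end of one row is adjacent to the end of the next.
    series = []
    for i, row in enumerate(matrix):
        series.extend(row if i % 2 == 0 else reversed(row))
    return series

# Example: [[1, 2, 3], [4, 5, 6]] becomes [1, 2, 3, 6, 5, 4].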
In the example illustrated, in order to determine the models to be applied to the signal S, an analysis of this signal is first of all made.

First of all an arithmetic mean M of all the values of the signal is computed, in accordance with the formula:
  M = ( V(1) + V(2) + ... + V(N) ) / N = -0.07833
The minimum value Min[V(n)] and the maximum value Max[V(n)] are then determined, that is to say, in the example illustrated:
  Min[V(n)] = -42
  Max[V(n)] = 35
A distance D is deduced from this, where D is equal to the maximum of the value of the difference between the mean M and the minimum value of the signal Min[V(n)] on the one hand and the value of the difference between the maximum value of the signal Max[V(n)] and the mean M on the other hand; that is to say:
  D = Distance = max(Mean - Minimum; Maximum - Mean)
    = max((-0.07833) - (-42); 35 - (-0.07833))
    = 41.9216
Next a deviation DV is calculated, that is to say a mean dispersion of the values around the mean M. This dispersion is calculated as the mean of the absolute values of the differences between the values V(n) of each symbol Sn of the signal and the mean M; that is to say, in the example illustrated:
  DV = Deviation = ( |V(1) - M| + ... + |V(N) - M| ) / N = 1.2934
Next a spacing E between the models is calculated. In the example illustrated, this spacing is calculated in accordance with the formula:
  E = Spacing = ln(Distance / Deviation) / ln(2) = ln(41.9216 / 1.2934) / ln(2) = 5.0183
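This analysis step can be summarised in a short Python sketch (variable names are illustrative; applied to the 1200 LHQ values it would reproduce the figures quoted above, up to rounding):

import math

def analyse(values):
    # Mean, distance, deviation and spacing of the signal, as defined above.
    n = len(values)
    mean = sum(values) / n
    distance = max(mean - min(values), max(values) - mean)
    deviation = sum(abs(v - mean) for v in values) / n
    spacing = math.log(distance / deviation) / math.log(2)
    return mean, distance, deviation, spacing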

Advantageously, the wider the signal to be coded, the larger the number of models. It is an input parameter that can depend on the quantity of information present in the signal. In the example illustrated, it has been chosen to use five models.
For each model, numbered from 0 to 4, a lower bound is defined from which the symbol can belong to this model. Preferably, the smaller the variations relating to the model, the closer the thresholds are to each other. In this way the following formula is defined to calculate, in the context of the example, the lower bounds of each model:
  l(m) = Distance * (m / number of models)^Spacing
where m is the number of one model among the 5, m taking the integer values 0 to 4.
The lower bounds thus calculated are listed in the table Tab1, depicted in figure 4 and illustrated on the graph in figure 5.
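Assuming the analyse() sketch above, the lower bounds of the five models can be computed as follows (a sketch of the l(m) formula, not the patent's own code):

def model_bounds(distance, spacing, number_of_models=5):
    # Lower bound l(m) = distance * (m / number_of_models) ** spacing for
    # m = 0 .. number_of_models - 1; successive bounds grow further apart.
    return [distance * (m / number_of_models) ** spacing
            for m in range(number_of_models)]

With distance = 41.9216 and spacing = 5.0183 this yields bounds that grow increasingly far apart, so low-amplitude symbols are split over several fine-grained models while high amplitudes share coarser ones.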
In order to associate each symbol Sn with a model, it is necessary to define a belonging criterion that can be compared with the previously calculated bounds l(m). Furthermore, it is necessary for this criterion to be identical on encoding and decoding, so that the same model is applied to the same symbol, so that the restored value of this symbol during decoding is the same as its initial value. For this purpose, the belonging criterion is chosen as a function of the values of the symbols preceding the current symbol, that is to say the one to be encoded or decoded. The encoding and decoding taking place without loss, the values preceding the current symbol will be identical on compression and decompression. Thus applying the same belonging criterion to these same values on encoding and on decoding will allocate the same model to the current symbol.
In the example illustrated, the values after wavelets are assumed to be centred or almost centred on zero. This is because the mean M is substantially equal to zero. Because of this, the function determining the belonging criterion chosen is a mean of the absolute values of the four symbols immediately preceding the current symbol, rounded to four decimals. The number of preceding symbols used may be different, but sufficient to limit the influence of a value that deviates excessively from the others, which would give rise to an unwanted change in model, and advantageously a power of 2, to facilitate its binary notation. The number of decimals may also be different from 4, but advantageously a power of 2, to facilitate its binary notation.
The criterion of belonging to a model, in the example illustrated, is therefore determined by the formula:
  mm(n) = ( |V(n-1)| + |V(n-2)| + ... + |V(n-T)| ) / T
where the size of the moving average T is equal to 4, that is to say the number of preceding symbols taken into account for calculation thereof, n' varying from 1 to 4.
This makes it possible to select a suitable model for each symbol Sn. The moving average mm(n) is calculated with a given precision set by a parameter, here four decimals, that is identical on encoding and decoding.
Each symbol Sn of value V(n) belongs to the largest model m the lower bound l(m) of which is less than or equal to the moving average of the preceding absolute values mm(n): l(m) <= mm(n).
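A Python sketch of this selection rule, assuming the bounds are stored in increasing order (function and variable names are illustrative):

def moving_average(values, n, window=4, decimals=4):
    # Mean of the absolute values of the `window` symbols preceding index n,
    # rounded to a fixed precision (identical on encoding and decoding).
    return round(sum(abs(values[n - k]) for k in range(1, window + 1)) / window,
                 decimals)

def select_model(mm_n, bounds):
    # Largest model m whose lower bound l(m) is less than or equal to mm(n);
    # `bounds` is assumed sorted in increasing order, with bounds[0] == 0.
    model = 0
    for m, lower in enumerate(bounds):
        if lower <= mm_n:
            model = m
    return model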
The table Tab2, illustrated in figure 6, presents the number of occurrences of each value V(n) in the signal S. Figure 7 is the graphical representation thereof. It will be noted that the zero value is over-represented therein. The use of a single model would therefore be particularly unsuitable.

The table Tab3, illustrated in figure 8, presents the number of occurrences of each value V(n) in each model m.
At the end of the selection of a model m for each symbol Sn, an encoder of the "range coder" or arithmetic type is applied to the values of this model. For this purpose the number of occurrences of each symbol Sn in this model m is first of all calculated and a probability of appearance of this symbol in this model m is deduced therefrom.
For encoding, the signal is run through in the direction of increasing indices n, as defined previously with reference to figure 3. For each symbol, the model to which it belongs is determined, in accordance with the belonging criterion defined previously. Next this model is used in the chosen encoder, for example an encoder of the "range coder" or arithmetic type. For each model it is also possible to choose an encoder of a type different from that chosen for another model.
The first symbols to be encoded being preceded by a number of symbols that is insufficient for calculating the belonging criterion, the signal is preceded by a number of arbitrary values sufficient for this calculation. Thus, in the example illustrated, the signal is preceded by four values, arbitrarily chosen so as to be zero; these values make it possible to calculate the belonging criterion of the first four symbols S1-S4 to be encoded.
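Putting the previous sketches together, a hypothetical first pass over the padded signal could assign a model to every symbol and accumulate the per-model occurrence counts needed by the range or arithmetic coder (the binary coder itself is not shown; names are illustrative):

from collections import Counter

def assign_models_and_count(series, bounds, window=4):
    # Prefix the series with `window` arbitrary zero values, assign each real
    # symbol a model from the moving average of its predecessors, and count
    # occurrences per model (from which the per-model probabilities follow).
    padded = [0] * window + list(series)
    models, counts = [], [Counter() for _ in bounds]
    for n in range(window, len(padded)):
        m = select_model(moving_average(padded, n, window), bounds)
        models.append(m)
        counts[m][padded[n]] += 1
    return models, counts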
Advantageously, a file F is created containing the binary code C obtained by the encoding of the signal S. To enable decoding of the binary code C, all the information necessary for decoding is disposed in a header of the file, in particular, in the example illustrated:
- the number of models;
- the mean M;
- the deviation DV;
- the number of preceding elements to be used for calculating the moving average mm, in the form of a power of two; and
- the precision to be used for calculating the moving average, in the form of a power of two.
For decoding, the bounds of the models are recovered, simply by reading or recalculating, and then the belonging of a symbol Sn to be decoded to a model is determined in the same way as for encoding, from the symbols previously decoded without loss, and then the model found is used for decoding the symbol.
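A sketch of that decoding loop, where decode_one() stands in for the range or arithmetic decoding of one symbol with the probabilities of model m (an assumed helper, not defined in the patent), and the other helpers are the sketches above:

def decode(code, bounds, length, decode_one, window=4):
    # Same arbitrary zero prefix as on encoding; the model for each symbol is
    # recomputed from symbols already decoded, then decode_one(code, m)
    # restores the next symbol with the probabilities of model m.
    decoded = [0] * window
    for n in range(window, window + length):
        m = select_model(moving_average(decoded, n, window), bounds)
        decoded.append(decode_one(code, m))
    return decoded[window:]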
In the same way as on encoding, the first symbols to be decoded being preceded by an insufficient number of symbols for calculating the belonging criterion, the code corresponding to the encoded signal S is preceded by a number of arbitrary values sufficient for this calculation. These values are identical to those used for encoding. Thus, in the example illustrated, the code is preceded by four values, arbitrarily chosen so as to be zero; these values make it possible to calculate the belonging criterion of the first four symbols S1-S4 to be decoded.
The values of theoretical weights are calculated with the following hypotheses:
- the weight of the models and other header information necessary for decoding the signal is ignored; and
- the binary encoder used is a perfect encoder encoding each symbol Sn according to its probability P(Sn) in -log2(P(Sn)) bits, P(Sn) being its probability according to the model provided.
For the theoretical calculation the following notations are used:
- P(Sn): the probability of the symbol, equal to the number of occurrences of this symbol divided by the number of symbols for this model;
- N(Sn): the number of occurrences of the symbol Sn.

Thus each symbol encountered will in theory weigh -log2(P(Sn)), and all the occurrences of a given symbol in the context of a given model will therefore weigh:
  -N(Sn) x log2(P(Sn)).
In the case in question, the symbols lie between -42 and 35, and the theoretical weight of all the symbols in the context of the single model illustrated by the table Tab2 is:
  P = - Σ N(Sn) * log2(P(Sn)), the sum being taken over the values Sn from -42 to 35.
With the above values, a weight of P_single model = 1941 bits is obtained.
In the context of the separation into 5 models, the same formula is used, calculating the probability with respect to the number of symbols of the current model. The following is then obtained:
- Model 0: 300 bits;
- Model 1: 92 bits;
- Model 2: 505 bits;
- Model 3: 588 bits;
- Model 4: 34 bits.
The total weight is therefore the sum of the weights of the symbols encoded with each of these models, that is to say:
  P_5 models = 300 + 92 + 505 + 588 + 34 = 1519 bits.
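A sketch of this weight comparison, reusing the per-model occurrence counters from the earlier sketch (the 1941-bit and 1519-bit figures depend on the actual image data, which is not reproduced here):

import math

def model_weight(counts):
    # Theoretical weight of one model: -sum over its symbols of N(s) * log2(P(s)),
    # with P(s) = N(s) / total number of symbols assigned to that model.
    total = sum(counts.values())
    return -sum(n * math.log2(n / total) for n in counts.values())

def total_weight(per_model_counts):
    # Sum of the weights of the symbols encoded with each model.
    return sum(model_weight(c) for c in per_model_counts if c)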
According to these calculation hypotheses, 422 bits have been gained with respect to the single model. The greater the number of symbols to be encoded, the greater the predictable gain. Furthermore, the more different symbols there are in a signal, the more advantageous it may be to increase the number of models used.
Naturally the invention is not limited to the examples that have just been
described.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Administrative Status

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2014-05-14
(87) PCT Publication Date 2014-11-20
(85) National Entry 2016-11-10
Examination Requested 2019-05-13
Dead Application 2023-11-16

Abandonment History

Abandonment Date Reason Reinstatement Date
2022-11-16 FAILURE TO PAY APPLICATION MAINTENANCE FEE
2023-06-30 Appointment of Patent Agent

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document - section 124 $100.00 2016-11-10
Reinstatement of rights $200.00 2016-11-10
Application Fee $200.00 2016-11-10
Maintenance Fee - Application - New Act 2 2016-05-16 $50.00 2016-11-10
Maintenance Fee - Application - New Act 3 2017-05-15 $50.00 2017-05-12
Maintenance Fee - Application - New Act 4 2018-05-14 $50.00 2018-05-14
Request for Examination $400.00 2019-05-13
Maintenance Fee - Application - New Act 5 2019-05-14 $100.00 2019-05-14
Maintenance Fee - Application - New Act 6 2020-05-14 $100.00 2020-05-08
Maintenance Fee - Application - New Act 7 2021-05-14 $100.00 2021-11-10
Late Fee for failure to pay Application Maintenance Fee 2021-11-10 $150.00 2021-11-10
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
COLIN, JEAN-CLAUDE
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Examiner Requisition 2020-05-22 4 185
Amendment 2020-09-21 19 1,149
Claims 2020-09-21 5 188
Examiner Requisition 2021-03-24 5 218
Amendment 2021-07-22 19 665
Claims 2021-07-22 5 187
Examiner Requisition 2022-02-11 4 192
Amendment 2022-06-13 19 680
Interview Record with Cover Letter Registered 2022-06-01 2 15
Claims 2022-06-13 5 265
Abstract 2016-11-10 1 13
Claims 2016-11-10 3 95
Drawings 2016-11-10 4 144
Description 2016-11-10 12 453
Representative Drawing 2016-11-10 1 5
Representative Drawing 2016-12-16 1 5
Cover Page 2016-12-16 1 38
Request for Examination 2019-05-13 2 63
Patent Cooperation Treaty (PCT) 2016-11-10 1 42
Patent Cooperation Treaty (PCT) 2016-11-21 2 44
International Search Report 2016-11-10 22 718
Amendment - Abstract 2016-11-10 2 80
Declaration 2016-11-10 2 108
National Entry Request 2016-11-10 9 257
Correspondence 2016-11-28 4 222