Patent 2581466 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 2581466
(54) English Title: A METHOD AND A SYSTEM FOR AUTOMATIC EVALUATION OF DIGITAL FILES
(54) French Title: METHODE ET SYSTEME D'EVALUATION AUTOMATIQUE DES FICHIERS NUMERIQUES
Status: Deemed expired
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 16/65 (2019.01)
  • G10L 25/54 (2013.01)
(72) Inventors :
  • DESBIENS, JOCELYN (Canada)
(73) Owners :
  • HITLAB ULC (Not Available)
(71) Applicants :
  • WEBHITCONTEST INC. (Canada)
(74) Agent: LAVERY, DE BILLY, LLP
(74) Associate agent:
(45) Issued: 2014-01-28
(22) Filed Date: 2007-03-12
(41) Open to Public Inspection: 2008-09-12
Examination requested: 2010-03-09
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract

There is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted.


French Abstract

There is provided a method for the automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set containing files from the reference database and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted.

Claims

Note: Claims are shown in the official language in which they were submitted.



CLAIMS

1. A method for automatic ranking of target files according to a predefined scheme, comprising the steps of:
building a database of reference files already ranked according to the predefined scheme;
for each target file:
i) determining a neighborhood of the target file among the reference files in the database of reference files, and forming a training set comprising reference files of this neighborhood, versus which neighborhood as a whole the target file is to be assessed, wherein said step of forming a training set comprises extracting a feature vector of the target file and finding n closest neighbors of the feature vector of the target file among feature vectors in the database of reference files, and wherein said finding n closest neighbors comprises using one of: i) Euclidean distance, ii) cosine distance and iii) Jensen-Shannon distribution similarity;
ii) building a test set from features of the target file;
iii) dynamically generating a learning model from the training set, the learning model defining a correlation between the reference files in the training set and a rank thereof according to the predefined scheme; and
iv) applying the learning model to the test set;
whereby a rank corresponding to the target file is predicted according to the predefined scheme.

2. The method of claim 1, further comprising storing the predicted rank in a result database.

3. The method of any one of claims 1 and 2, wherein said step of building a database of reference files comprises collecting files previously ranked according to the predefined scheme, under a digital format; obtaining feature vectors of each of the collected files; and storing the feature vectors in a database of reference files.

4. The method of claim 3, wherein said step of building a database of reference files further comprises storing a rank, defined according to the predefined scheme, of each of the reference files in a score database.

5. The method of claim 3, wherein said step of obtaining feature vectors of each of the collected files comprises extracting, from the collected files, a number of features to yield reference feature vectors.

6. The method of claim 3, wherein said step of storing the feature vectors in a database of reference files comprises storing the feature vectors along with information about the corresponding reference files.

7. The method of any one of claims 1 to 6, wherein said step of forming a training set comprising files from the database of reference files and building a test set from features of the target file further comprises reducing the dimensionality of the training set and reducing the dimensionality of the test set.

8. The method of claim 7, wherein said steps of reducing the dimensionality are done by using one of: i) Principal Component Analysis (PCA) and ii) Singular Value Decomposition (SVD).

9. The method of claim 7, wherein said steps of reducing the dimensionality are done by a non-linear regression technique.

10. The method of claim 7, wherein said steps of reducing the dimensionality are done by one of: Neural Networks, Support Vector Machines, Generalized Additive Model, Classification and Regression Tree, Multivariate Adaptive Regression Splines, Hierarchical Mixture of Experts and Supervised Principal Component Analysis.

11. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises using closest neighbors of the target file in the database of reference files.

12. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises using the n closest neighbors of the target file's feature vector among the feature vectors in the database of reference files.

13. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises reducing the dimension of a set formed of the closest neighbors of the target file in the database of reference files.

14. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises reducing the dimension of a set formed of the closest neighbors of the target file in the database of reference files.

15. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises applying a Support Vector Model.

16. The method of any one of claims 1 to 10, wherein said step of dynamically generating a learning model comprises applying a Support Vector Model to the n closest neighbors of the target file's feature vector in the database of reference files.

17. The method of any one of claims 1 to 16, further comprising discarding the learning model after prediction for the target file.

18. The method of any one of claims 1 to 17, wherein said step of building a training set comprises rebuilding the training set as new ranked files appear in the database of reference files.

19. The method of any one of claims 1 to 18, wherein said step of forming a training set comprises finding new closest neighbors in the database of reference files as new reference files appear in the database of reference files.

20. The method of any one of claims 1 to 19, wherein said step of forming a training set comprises updating the closest neighbors as new reference files appear in the database of reference files.

21. The method of any one of claims 1 to 20, wherein said step of generating a learning model comprises automatically generating a learning model based on a dynamic neighborhood of the target file as represented by the training set.

22. The method of any one of claims 1 to 21, wherein the target files are song files, the reference files are songs previously ranked according to the predefined scheme, and the target files are assessed according to the previously ranked songs.

Description

Note: Descriptions are shown in the official language in which they were submitted.




TITLE OF THE INVENTION

A method and a system for automatic evaluation of digital files
FIELD OF THE INVENTION

[0001] The present invention relates to a method and a system for automatic evaluation of digital files. More specifically, the present invention is concerned with a method for dynamic hit scoring.

BACKGROUND OF THE INVENTION

[0002] A number of file classification or prediction methods have been developed over the years.

[0003] Li et al. (US 2004/0231498) present a method for music classification comprising extracting features of a target file; extracting features of a training set; and classifying music signals.

[0004] Blum et al. (US 5,918,223) describe a method for classifying and ranking the similarity between individual audio files, comprising supplying sets containing the features of classes of sound to a training algorithm yielding a set of vectors for each class of sound; submitting a target audio file to the same training algorithm to obtain a vector for the target file; and calculating the correlation distance between the vector for the target file and the vectors of each class, whereby the class which has the smallest distance to the target file is the class assigned to the target file.


[0005] Alcade et al. (US 7,081,579, US 2006/0254411) teach a method and system for music recommendation, comprising the steps of providing a database of references, and extracting features of a target file to determine its parameter vector using an FFT analysis method. The distance between the target file's parameter vector and each file's parameter vector in the database of references is then determined, so as to score the target file according to its distance to each file of the database of references via a linear regression method.

[0006] Foote et al. (US 2003/0205124), Platt et al. (US 2006/0107823) and Flannery et al. (US 6,545,209) present methods for classifying music according to similarity using a distance measure.

[0007] Gang et al. (US 2003/0089218) disclose a method for predicting musical preferences of a user, comprising the steps of building a first set of information relative to a catalog of musical selections; building a second set of information relative to the tastes of the user; and combining the information of the second set with the information of the first set to provide an expected rating for every song in the catalog.

[0008] There is a need in the art for a method for dynamic hit scoring.

SUMMARY OF THE INVENTION

[0009] More specifically, there is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted.

[0010] There is further provided a method for automatic evaluation of songs, comprising the steps of building a database of hit songs; for each song to be evaluated, forming a training set comprising songs from the database of hit songs and building a test set from features of the song to be evaluated; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a score corresponding to the song to be evaluated is predicted.

[0011] Other objects, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of embodiments thereof, given by way of example only with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
[0012] In the appended drawings:

[0013] Figure 1 is a flow chart of an embodiment of a method according to an aspect of the present invention; and

[0014] Figure 2 illustrates a class separating hyperplane in a Support Vector Model technique used in the method of Figure 1.


DESCRIPTION OF EMBODIMENTS OF THE INVENTION

[0015] An embodiment of the method according to an aspect of the present invention generally comprises an analysis step (step 100) and a dynamic scoring step (step 200).

[0016] The method will be described herein in the case of music files, for example, in relation to the flowchart of Figure 1.

[0017] In the analysis step (step 100), a database of reference files is built. In the case of music files, the database of reference files comprises hit songs, for example.

[0018] A number of files of songs identified as hits, such as MP3 files or files in another digital format, are gathered, and numerical features that represent each one of them are extracted to form n-dimensional vectors of numerical features, referred to as feature vectors, as is well known in the art.

[0019] A number of features, including for example timbre, rhythm, melody and frequency, are extracted from the files to yield feature vectors corresponding to each one of them. In one hit scoring method, for example, 84 features were extracted.
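Purely as an illustration of this extraction step, the sketch below computes a fixed-length feature vector from an audio file. The patent names neither a library nor the 84 features; the librosa calls and the particular descriptors below are assumptions, and the resulting vector is shorter than the 84-feature vector mentioned above.

```python
# Illustrative sketch only: the patent specifies neither a library nor the
# exact 84 features. librosa and the descriptors below are assumptions.
import numpy as np
import librosa

def extract_feature_vector(path: str) -> np.ndarray:
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)        # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # melody/harmony
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    # Summarize the time-varying descriptors into one fixed-length vector.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        chroma.mean(axis=1),
        centroid.mean(axis=1),
    ])
```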

[0020] The feature vectors are stored in a database along with relevant information, such as, for example, the artist's name, genre, etc. (112). Each MP3 file is rated according to a predefined scheme, and the rating is also stored in a database (113).


[0021] The reference files, here exemplified as hit song MP3 files, are selected according to a predefined rating scheme. In the case of hit songs, scoring may originate from a number of sources, including, for example, compilation of top 50 rankings, sales, air play, etc.

[0022] For each target file, i.e. each song to be assessed in the present example, numerical features that represent the target file are extracted to form corresponding feature vectors (114).

[0023] The dynamic scoring step (step 200) generally comprises a learning phase and a predicting phase.

[0024] In the learning phase, the files from the reference database against which the target file will be assessed are selected into a training set. The training set is built by finding the n closest feature vectors to the target file's feature vector in the database of feature vectors of the hits (116). The distance/similarity between the target file's feature vector and each feature vector of the database of hits may be determined by using the Euclidean distance, the cosine distance or the Jensen-Shannon distribution similarity, as well known to people in the art.
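A minimal sketch of this step (116), assuming SciPy's distance functions; note that the Jensen-Shannon measure expects vectors that behave like probability distributions, which raw feature vectors may not satisfy without normalization.

```python
# Sketch of training-set construction (116): find the n reference vectors
# closest to the target's feature vector under one of the three measures
# named in the patent. SciPy is an assumption.
import numpy as np
from scipy.spatial.distance import euclidean, cosine, jensenshannon

def nearest_neighbors(target: np.ndarray, references: np.ndarray,
                      n: int, metric: str = "euclidean") -> np.ndarray:
    measures = {"euclidean": euclidean, "cosine": cosine,
                "jensen-shannon": jensenshannon}
    dist = measures[metric]
    d = np.array([dist(target, ref) for ref in references])
    return np.argsort(d)[:n]  # row indices of the n closest reference files
```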

[0025] The training set is then simplified by reducing its dimension (118), by using either Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), for example, or non-linear regression techniques known in the art such as (but not limited to): Neural Networks, Support Vector Machines, Generalized Additive Model, Classification and Regression Tree, Multivariate Adaptive Regression Splines, Hierarchical Mixture of Experts and Supervised Principal Component Analysis.
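As a sketch of step 118, assuming scikit-learn, PCA (one of the techniques listed above) can be fitted on the training set and kept, so that the same projection can later be applied to the test set in step 142:

```python
# Sketch of dimensionality reduction (118) with PCA; scikit-learn is an
# assumption. X_train holds the feature vectors of the n nearest references.
import numpy as np
from sklearn.decomposition import PCA

def reduce_training_set(X_train: np.ndarray, n_components: int):
    pca = PCA(n_components=n_components)    # n_components <= min(n, d)
    X_reduced = pca.fit_transform(X_train)  # rows projected onto principal axes
    return pca, X_reduced  # fitted PCA is reused on the test set (step 142)
```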


[0026] PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used for dimensionality reduction in a data set while retaining those characteristics of the data set that contribute most to its variance, by keeping lower-order principal components and ignoring higher-order ones. Such low-order components often contain the "most important" aspects of the data, but this is not necessarily the case, depending on the application.

[0027] The main idea behind principal component analysis is to represent multidimensional data with a smaller number of variables while retaining the main features of the data. It is inevitable that some features of the data will be lost by reducing the dimensionality. It is hoped that the lost features are comparable to "noise" and do not tell much about the underlying population.

[0028] PCA is used to project multidimensional data onto a lower-dimensional space, retaining as much of the variability of the data as possible. This technique is widely used in many areas of applied statistics, naturally so, since interpretation and visualization in a lower-dimensional space are easier than in a higher-dimensional one. In particular, if the dimensionality can be reduced to two or three, plots and visual representations may be used to try to find some structure in the data.


[0029] PCA is one of the techniques used for dimension reduction, as will now be briefly described.

[0030] Suppose $M$ is an m-by-n matrix whose entries come from the field $K$, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization of the form

$$ M = U \Sigma V^* $$

where $U$ is an m-by-m unitary matrix over $K$, the matrix $\Sigma$ is m-by-n with nonnegative numbers on the diagonal and zeros off the diagonal, and $V^*$ denotes the conjugate transpose of $V$, an n-by-n unitary matrix over $K$. Such a factorization is called a singular value decomposition of $M$.

[0031] The matrix $V$ thus contains a set of orthonormal "input" or "analysing" basis vector directions for $M$. The matrix $U$ contains a set of orthonormal "output" basis vector directions for $M$. The matrix $\Sigma$ contains the singular values, which can be thought of as scalar "gain controls" by which each corresponding input is multiplied to give a corresponding output.

[0032] A common convention is to order the values $\Sigma_{i,i}$ in non-increasing fashion. In this case, the diagonal matrix $\Sigma$ is uniquely determined by $M$ (though the matrices $U$ and $V$ are not).

[0033] Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the principal component $w_1$ of a data set $x$ can be defined as:

$$ w_1 = \arg\max_{\|w\|=1} \operatorname{var}\{w^T x\} = \arg\max_{\|w\|=1} E\{(w^T x)^2\} $$

[0034] With the first $k-1$ components, the $k$-th component can be found by subtracting the first $k-1$ principal components from $x$:

$$ \hat{x}_{k-1} = x - \sum_{i=1}^{k-1} w_i w_i^T x $$

and by substituting this as the new data set in which to find a principal component:

$$ w_k = \arg\max_{\|w\|=1} E\{(w^T \hat{x}_{k-1})^2\} $$
[0035] The PCA transform is therefore equivalent to finding the singular value decomposition of the data matrix $X$,

$$ X = W \Sigma V^T, $$

and then obtaining the reduced-space data matrix $Y$ by projecting $X$ down into the reduced space defined by only the first $L$ singular vectors, $W_L$:

$$ Y = W_L^T X = \Sigma_L V_L^T $$

[0036] The matrix $W$ of singular vectors of $X$ is equivalently the matrix $W$ of eigenvectors of the matrix of observed covariance $C = X X^T$:

$$ X X^T = W \Sigma^2 W^T $$
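The equivalence stated in paragraphs [0030] to [0036] can be checked numerically; the small NumPy sketch below, with variables in rows and observations in columns as above, is purely illustrative.

```python
# Numerical check of [0030]-[0036]: the eigenvalues of X X^T are the squared
# singular values of X, so PCA can be computed through the SVD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 40))            # 5 variables, 40 observations
X = X - X.mean(axis=1, keepdims=True)   # zero empirical mean, as in [0033]

W, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = W diag(s) V^T
eigvals = np.linalg.eigvalsh(X @ X.T)[::-1]        # eigenvalues, descending
assert np.allclose(eigvals, s**2)                  # X X^T = W S^2 W^T

L = 2                         # keep only the first L principal axes
Y = W[:, :L].T @ X            # reduced-space data, Y = W_L^T X
```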

[0037] It is often the case that different variables have completely different scalings. For example, one variable may have been measured in meters and another in centimeters (by design or accident). The eigenvalues of the matrix are scale dependent: if one column of the data matrix $X$ is multiplied by some scale factor $s$, then the variance of this variable is increased by $s^2$, and this variable can dominate the whole covariance matrix and hence all the eigenvalues and eigenvectors. It is necessary to take precautions when dealing with the data. If it is possible to bring all data to the same scale using some underlying physical properties, then it should be done. If the scale of the data is unknown, then it is better to use the correlation matrix instead of the covariance matrix; this is in general the recommended option in many statistical packages.
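In practice, the precaution described above amounts to standardizing each variable before PCA, which is equivalent to working with the correlation matrix rather than the covariance matrix; a brief sketch, assuming scikit-learn:

```python
# Standardizing to unit variance before PCA is equivalent to performing PCA
# on the correlation matrix rather than the covariance matrix.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pca_on_correlation = make_pipeline(StandardScaler(), PCA(n_components=3))
# X_reduced = pca_on_correlation.fit_transform(X_train)
```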

[0038] It should be noted that, since scale affects the eigenvalues and eigenvectors, the interpretations of the principal components derived by these two methods can be completely different. In real-life applications, care should be taken when using the correlation matrix, since outliers in the observations can affect the covariance and hence the correlation matrix. It is recommended to use a robust estimate of the covariance (in a simple case, by rejecting outliers). When using robust estimates, the covariance matrix may not be non-negative definite, and some eigenvalues might be negative. In many applications this is not important, since only the principal components corresponding to the largest eigenvalues are of interest.

[0039] In either case, the number of significant variables (principal axes or singular axes) is kept to a minimum. There are many recommendations for the selection of dimension, as follows.



[0040] i) The proportion of variance: if the first two components account for 70%-90% or more of the total variance, then further components might be irrelevant (see the problem with scaling above).

[0041] ii) Components below a certain level can be rejected. If components have been calculated using a correlation matrix, those components with variance less than 1 are often rejected. This can be dangerous, however: if one variable is almost independent of the others, it might give rise to a component with variance less than 1, and that does not mean the component is uninformative.

[0042] iii) If the uncertainty (usually expressed as a standard deviation) of the observations is known, then components with variances less than that can certainly be rejected.

[0043] iv) If a scree plot (the plot of the eigenvalues, or variances of the principal components, against their indices) shows an elbow, then components with variances less than this elbow can be rejected.

[0044] According to a cross-validation technique, one value of the observations, $x_{ij}$, is removed and then predicted using the principal components, and this is done for all data points. If adding a component does not improve the prediction power, then this component can be rejected. This technique is computer intensive.
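Recommendation i) above is straightforward to automate; the sketch below, assuming scikit-learn and an illustrative 90% threshold, returns the smallest number of components whose cumulative explained variance reaches the threshold.

```python
# Pick the number of components by the proportion-of-variance rule (i).
import numpy as np
from sklearn.decomposition import PCA

def n_components_for_variance(X: np.ndarray, threshold: float = 0.90) -> int:
    pca = PCA().fit(X)                                    # fit all components
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, threshold) + 1)
```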

[0045] PCA was described above as a technique, in step 118, for reducing the dimensionality of the learning set feature space, the learning set comprising the nearest neighbors of the target file.


[0046] Based on these n closest feature vectors, a learning model is dynamically generated (130), using, for example, the well-known algorithm called Support Vector Model (SVM), as will now be described, for example with the MCubixTM software developed by Diagnos Inc.

[0047] SVM is a supervised learning algorithm that has proven itself an efficient and accurate text classification technique. Like other supervised machine learning algorithms, an SVM works in two steps. In the first step, the training step, it learns a decision boundary in input space from preclassified training data. In the second step, the classification step, it classifies input vectors according to the previously learned decision boundary. A single support vector machine can only separate two classes: a positive class (y = +1) and a negative class (y = -1).
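The two-step workflow just described can be sketched with scikit-learn's SVC standing in for the SVM implementation (the patent itself refers to the MCubixTM software); the data below is synthetic and purely illustrative.

```python
# Training step then classification step, on synthetic two-class data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(-1, 0.5, (20, 2)),   # negative class, y = -1
                     rng.normal(+1, 0.5, (20, 2))])  # positive class, y = +1
y_train = np.array([-1] * 20 + [+1] * 20)

clf = SVC(kernel="linear", C=1.0)    # finite C: soft-margin SVM
clf.fit(X_train, y_train)            # training step: learn w and b
print(clf.support_vectors_)          # the examples that define the hyperplane
print(clf.predict([[0.8, 1.2]]))     # classification step: sgn(w.x + b)
```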

[0048] In the training step, the following problem is solved. A set of training examples $S_l = \{(x_1, y_1), (x_2, y_2), \ldots, (x_l, y_l)\}$ of size $l$ from a fixed but unknown distribution $p(x, y)$ describing the learning task is given. The term-frequency vectors $x_i$ represent documents and $y_i = \pm 1$ indicates whether a document has been labeled with the positive class or not. The SVM aims to find a decision rule $h: x \mapsto \{-1, +1\}$ that classifies the documents as accurately as possible based on the training set $S_l$.

[0049] A hypothesis space is given by the functions $f(x) = \operatorname{sgn}(w \cdot x + b)$, where $w$ and $b$ are parameters that are learned in the training step and which determine the class separating hyperplane, shown in Figure 2. Computing this hyperplane is equivalent to solving the following optimization problem:


$$ \text{minimize:} \quad V(w, b, \xi) = \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i $$

$$ \text{subject to:} \quad y_i (w \cdot x_i + b) \geq 1 - \xi_i, \qquad \xi_i \geq 0, \quad i = 1, \ldots, l $$

[0050] The constraints require that all training examples are classified correctly, allowing for some outliers symbolized by the slack variables $\xi_i$. If a training example lies on the wrong side of the hyperplane, the corresponding $\xi_i$ is greater than 0. The factor $C$ is a parameter that allows for trading off training error against model complexity. In the limit $C \to \infty$ no training error is allowed. This setting is called a hard margin SVM. A classifier with finite $C$ is also called a soft margin Support Vector Machine. Instead of solving the above optimization problem directly, it is easier to solve the following dual optimization problem:

$$ \text{minimize:} \quad W(\alpha) = -\sum_{i=1}^{l} \alpha_i + \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} y_i y_j \alpha_i \alpha_j (x_i \cdot x_j) $$

$$ \text{subject to:} \quad \sum_{i=1}^{l} y_i \alpha_i = 0, \qquad 0 \leq \alpha_i \leq C, \quad i = 1, \ldots, l $$

[0051] All training examples with $\alpha_i > 0$ at the solution are called support vectors. The support vectors are situated right at the margin (see the solid circle and squares in Figure 2) and define the hyperplane. The definition of a hyperplane by the support vectors is especially advantageous in high dimensional feature spaces because a comparatively small number of parameters (the $\alpha_i$ in the sum above) is required.


[0052] SVMs have been introduced within the context of statistical learning theory and structural risk minimization. In these methods one solves convex optimization problems, typically quadratic programs. Least Squares Support Vector Machines (LS-SVM) are reformulations of standard SVMs. LS-SVMs are closely related to regularization networks and Gaussian processes, but additionally emphasize and exploit primal-dual interpretations. Links to kernel versions of classical pattern recognition algorithms, such as kernel Fisher discriminant analysis, and extensions to unsupervised learning, recurrent networks and control also exist.

[0053] In order to make an LS-SVM model, two hyper-parameters are needed: a regularization parameter $\gamma$, determining the trade-off between fitting error minimization and smoothness, and, at least in the common case of the RBF kernel, the bandwidth $\sigma$. These two hyper-parameters are automatically computed by doing a grid search over the parameter space and picking the minimum. This procedure iteratively zooms in on the candidate optimum.
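LS-SVM is not available in common Python libraries, so the sketch below uses scikit-learn's RBF-kernel SVR as a stand-in: its parameter C plays the role of the regularization parameter and its kernel coefficient corresponds to the bandwidth. The grids are illustrative assumptions.

```python
# Grid search over the two hyper-parameters, as described in [0053].
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    "C": np.logspace(-2, 3, 6),      # regularization (role of gamma in LS-SVM)
    "gamma": np.logspace(-4, 1, 6),  # RBF coefficient (role of the bandwidth)
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=3)
# search.fit(X_train_reduced, y_train)   # picks the best pair on the grid
# model = search.best_estimator_
```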

[0054] Once the learning model is thus generated (130), in the predicting phase (300), a test set is built from the features of the target file (140), and the dimensionality of the test set feature space is reduced (142) as known in the art, by using a technique such as Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), keeping the same number of significant variables (principal axes or singular axes) as the number used in the learning set, as described hereinabove.

[0055] Then, the learning model generated in step 130 is applied to the test set, so as to determine a value corresponding to the target song (150). The rating of the target file is based on the test set and the learning set, the target file being assessed relative to the training set.
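A sketch of this predicting phase, reusing the illustrative helpers from the earlier sketches; the same fitted projection is applied to the test set so that the number of components matches the learning set.

```python
# Predicting phase: build the test set (140), reduce it (142), score it (150).
import numpy as np

def predict_score(model, pca, target_path: str) -> float:
    test_set = extract_feature_vector(target_path).reshape(1, -1)  # step 140
    test_reduced = pca.transform(test_set)                         # step 142
    return float(model.predict(test_reduced)[0])                   # step 150
```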

[0056] A storing phase may further comprise storing the predicted values in a result database.

[0057] The learning model is discarded after prediction for the target file (160), before the method is applied to another file to be evaluated (170).

[0058] As new files (hit songs) appear in the database of reference files, the training set is rebuilt by updating the closest neighbours, and the hyper-parameters are automatically updated, resulting in a dynamic scoring method.

[0059] As people in the art will appreciate, the present method allows automatic learning on a dynamic neighbourhood.
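Tying the sketches together, the per-target flow implied by paragraphs [0057] to [0059] might look as follows: a fresh neighborhood, projection and model are built for each target file, used once, and discarded, so every prediction reflects the current contents of the reference database. All helpers are the illustrative functions defined above, and SVR stands in for the patent's Support Vector Model.

```python
# One dynamic scoring pass per target file; the model is then discarded (160).
import numpy as np
from sklearn.svm import SVR

def score_target(target_path: str, ref_vectors: np.ndarray,
                 ref_ranks: np.ndarray, n: int, n_components: int) -> float:
    target = extract_feature_vector(target_path)
    idx = nearest_neighbors(target, ref_vectors, n)         # dynamic neighborhood
    pca, X_train = reduce_training_set(ref_vectors[idx], n_components)
    model = SVR(kernel="rbf").fit(X_train, ref_ranks[idx])  # per-target model
    return predict_score(model, pca, target_path)           # then discard model
```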

[0060] As exemplified hereinabove, the method may be used for pre-selecting songs in the context of a hit contest, for example, typically based on the popularity of the songs.

[0061] Depending on the nature of the scale used for evaluation, the present adaptive method may be applied to evaluate a range of types of files (i.e. compression formats, natures of files, etc.) with increased accuracy in highly non-linear fields, by providing a dynamic learning phase.

[0062] Although the present invention has been described hereinabove by way of embodiments thereof, it may be modified without departing from the nature and teachings of the subject invention as defined in the appended claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title Date
Forecasted Issue Date 2014-01-28
(22) Filed 2007-03-12
(41) Open to Public Inspection 2008-09-12
Examination Requested 2010-03-09
(45) Issued 2014-01-28
Deemed Expired 2020-03-12

Abandonment History

There is no abandonment history.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $200.00 2007-03-12
Registration of a document - section 124 $100.00 2007-03-22
Maintenance Fee - Application - New Act 2 2009-03-12 $100.00 2009-03-06
Registration of a document - section 124 $100.00 2009-05-05
Request for Examination $800.00 2010-03-09
Maintenance Fee - Application - New Act 3 2010-03-12 $100.00 2010-03-09
Registration of a document - section 124 $100.00 2010-03-31
Maintenance Fee - Application - New Act 4 2011-03-14 $100.00 2011-01-18
Maintenance Fee - Application - New Act 5 2012-03-12 $200.00 2012-02-03
Maintenance Fee - Application - New Act 6 2013-03-12 $200.00 2013-03-06
Final Fee $300.00 2013-11-05
Maintenance Fee - Patent - New Act 7 2014-03-12 $200.00 2014-03-05
Maintenance Fee - Patent - New Act 8 2015-03-12 $200.00 2015-02-27
Maintenance Fee - Patent - New Act 9 2016-03-14 $200.00 2016-03-07
Maintenance Fee - Patent - New Act 10 2017-03-13 $450.00 2018-03-12
Maintenance Fee - Patent - New Act 11 2018-03-12 $250.00 2018-03-12
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
HITLAB ULC
Past Owners on Record
DESBIENS, JOCELYN
WEBHITCONTEST INC.
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents






Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Abstract 2007-03-12 1 12
Description 2007-03-12 15 487
Claims 2007-03-12 6 167
Drawings 2007-03-12 2 31
Representative Drawing 2008-08-19 1 8
Cover Page 2008-08-29 2 38
Claims 2012-10-17 4 138
Cover Page 2013-12-27 1 35
Correspondence 2007-05-28 1 18
Assignment 2007-03-12 5 170
Assignment 2010-03-31 4 149
Maintenance Fee Payment 2018-03-12 1 33
Correspondence 2007-03-22 2 66
Prosecution-Amendment 2010-03-09 1 30
Assignment 2009-05-05 7 170
Fees 2009-03-06 1 46
Prosecution-Amendment 2012-10-17 7 231
Prosecution-Amendment 2012-05-16 2 75
Returned mail 2019-05-21 2 83
Correspondence 2013-05-10 1 30
Correspondence 2013-11-05 1 39