Patent 2318502 Summary

(12) Patent: (11) CA 2318502
(54) English Title: N-TUPLE OR RAM BASED NEURAL NETWORK CLASSIFICATION SYSTEM AND METHOD
(54) French Title: PROCEDE ET SYSTEME DE CLASSIFICATION DANS UN RESEAU NEURONAL ORIENTE RAM OU N-LIGNE
Status: Term Expired - Post Grant Beyond Limit
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 15/76 (2006.01)
(72) Inventors :
  • JORGENSEN, THOMAS MARTINI (Denmark)
  • LINNEBERG, CHRISTIAN (Denmark)
(73) Owners :
  • INTELLIX A/S
(71) Applicants :
  • INTELLIX A/S (Denmark)
(74) Agent: DEETH WILLIAMS WALL LLP
(74) Associate agent:
(45) Issued: 2008-10-07
(86) PCT Filing Date: 1999-02-02
(87) Open to Public Inspection: 1999-08-12
Examination requested: 2004-01-30
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/DK1999/000052
(87) International Publication Number: WO 99/40521
(85) National Entry: 2000-07-13

(30) Application Priority Data:
Application No. Country/Territory Date
0162/98 (Denmark) 1998-02-05
98201910.1 (European Patent Office (EPO)) 1998-06-09

Abstracts

English Abstract


The invention relates to a system and a method of training a computer classification system which can be defined by a network comprising a number of n-tuples or Look Up Tables (LUTs), with each n-tuple or LUT comprising a number of rows corresponding to at least a subset of possible classes and comprising columns being addressed by signals or elements of sampled training input data examples, each column being defined by a vector having cells with values, the method comprising determining the column vector cell values based on one or more training sets of training input data examples for different classes so that at least part of the cells comprise or point to information based on the number of times the corresponding cell address is sampled from one or more sets of training input examples, and determining weight cell values corresponding to one or more column vector cells being addressed or sampled by the training examples to thereby allow weighting of one or more column vector cells of positive value during a classification process, said weight cell values being determined based on the information of at least part of the determined column vector cell values and by use of at least part of the training set(s) of input examples. A second aspect of the invention is a system and a method for determining, in a computer classification system, weight cell values corresponding to one or more column vector cells being addressed by the training examples, wherein the determination is based on the information of at least part of the determined vector cell values, said determination allowing weighting of column vector cells having a positive value or a non-positive value. Finally the invention provides a method and a system for classifying input data examples into a plurality of classes using the computer classification systems.


French Abstract

The invention concerns a system and a method of training a computer classification system which can be defined by a network comprising a number of n-tuples or look up tables (LUTs), each n-tuple or LUT comprising a number of rows corresponding to at least a subset of the possible classes and comprising columns addressed by signals or elements of sampled training input data examples, each column being defined by a vector having cells with values. The method consists in determining the column vector cell values on the basis of one or more training sets of input data examples for different classes so that at least part of the cells comprise or point to information based on the number of times the corresponding cell address is sampled from one or more sets of training input examples, and in determining weight cell values corresponding to one or more column vector cells addressed or sampled by the training examples so as to allow weighting of one or more positive column vector cells during the classification process, said weight values being determined on the basis of the information of at least part of the determined column vector cell values and by use of at least part of the training set(s) of input examples. In another variant, the invention concerns a system and a method for determining, in a neural classification system, weight cell values corresponding to one or more determined vector cells addressed by the training examples. This determination is based on the information of at least part of the determined vector cell values and it allows weighting of column vector cells whose value is positive or negative. Finally, the invention concerns a method and a system for classifying input data examples into a plurality of classes by means of computer classification systems.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS:
1. A method of training a computer classification system which can be defined
by a
network comprising a number of n-tuples or Look Up Tables (LUTs), with each n-
tuple or LUT comprising a number of rows corresponding to at least a subset of
possible classes and further comprising a number of columns being addressed by
signals or elements of sampled training input data examples, each column being
defined by a vector having cells with values, said method comprising
determining
the column vector cell values based on one or more training sets of input data
examples for different classes so that at least part of the cells comprise or
point to
information based on the number of times the corresponding cell address is
sampled from one or more sets of training input examples, and determining
weight
cell values corresponding to one or more column vector cells being addressed
or
sampled by the training examples to thereby allow weighting of one or more
column
vector cells of positive value during a classification process, said weight
cell values
being determined based on the information of at least part of the determined
column
vector cell values and by use of at least part of the training set(s) of input
examples.
2. A method of training a computer classification system which can be defined
by a
network comprising a number of n-tuples or Look Up Tables (LUTs), with each n-
tuple or LUT comprising a number of rows corresponding to at least a subset of
possible classes and further comprising a number of columns being addressed by
signals or elements of sampled training input data examples, each column being
defined by a vector having cells with values, said method comprising
determining
the column vector cell values based on one or more training sets of input data
examples for different classes so that at least part of the cells comprise or
point to
information based on the number of times the corresponding cell address is
sampled from one or more sets of training input examples, and determining
weight
cell values corresponding to at least a subset of the column vector cells to
thereby
allow boosting of one or more column vector cells during a classification
process,
said weight cell values being determined based on the information of at least
part of
the determined column vector cell values and by use of at least part of the
training
set(s) of input examples.

3. A method according to claim 2, wherein the weight cell values are
determined so
as to allow suppressing of one or more column vector cells during a
classification
process.
4. A method according to any of the claims 1-3, wherein the determination of
the
weight cell values allows weighting of one or more column vector cells having
a
positive value (greater than 0) and one or more column vector cells having a
non-
positive value (lesser than or equal to 0).
5. A method according to any of the claims 1-4, wherein the weight cell values
are
arranged in weight vectors corresponding to at least part of the column
vectors.
6. A method of determining weight cell values in a computer classification
system
which can be defined by a network comprising a number of n-tuples or Look Up
Tables (LUTs), with each n-tuple or LUT comprising a number of rows
corresponding to at least a subset of possible classes and further comprising
a
number of column vectors with at least part of said column vectors having
corresponding weight vectors, each column vector being addressed by signals or
elements of a sampled training input data example and each column vector and
weight vector having cells with values being determined based on one or more
training sets of input data examples for different classes, said method
comprising
determining the column vector cell values based on the training set(s) of
input
examples so that at least part of said values comprise or point to information
based
on the number of times the corresponding cell address is sampled from the
set(s) of
training input examples, and determining weight vector cell values
corresponding to
one or more column vector cells based on the information of at least part of
the
determined column vector cell values and by use of at least part of the
training set(s)
of input examples, said determination allowing weighting of column vector
cells
having a positive value (greater than 0) and column vector cells having a non-
positive value (lesser than or equal to 0).
7. A method according to any of the claims 1-6, wherein determination of the
weight
cells allows weighting of any column vector cell.

8. A method according to any of the claims 1-7, wherein the weight cells are
arranged in weight vectors and the determination of the weight cell values
comprises initialising one or more sets of weight vectors corresponding to at
least
part of the column vectors, and adjusting weight vector cell values of at
least part of
the weight vectors based on the information of at least part of the determined
column vector cell values and by use of at least part of the training set(s)
of input
examples.
9. A method according to any of the claims 1-8, wherein at least part of the
column
cell values are determined as a function of the number of times the
corresponding
cell address is sampled from the set(s) of training input examples.
10. A method according to any of the claims 1-9, wherein the maximum column
vector value is 1, but at least part of the values have an associated value
being a
function of the number of times the corresponding cell address is sampled from
the
training set(s) of input examples.
11. A method according to any of the claims 8-10, wherein the column vector
cell
values are determined and stored in storing means before the adjustment of the
weight vector cell values.
12. A method according to any of the claims 1-11, wherein the determination of
the
column vector cell values comprises the training steps of
a) applying a training input data example of a known class to the
classification
network, thereby addressing one or more column vectors,
b) incrementing, preferably by one, the value or vote of the cells of the
addressed
column vector(s) corresponding to the row(s) of the known class, and
c) repeating steps (a)-(b) until all training examples have been applied to
the
network.
13. A method according to any of the claims 5-12, wherein all column vectors
have
corresponding weight vectors.

14. A method according to any of the claims 8-13, wherein the initialisation
of the
weight vectors comprises setting all weight vector cell values to a
predetermined
constant value, said predetermined value preferably being 1.
15. A method according to any of the claims 8-13, wherein the initialisation
of the
weight vectors comprises setting each weight vector cell to a predetermined
specific
cell value.
16. A method according to any of the claims 8-15, wherein the adjustment of
the
weight vector cell values comprises the steps of determining a global quality
value
based on at least part of the weight and column vector cell values,
determining if the
global quality value fulfils a required quality criterion, and adjusting at
least part of
the weight cell values until the global quality criterion is fulfilled.
17. A method according to any of the claims 8-15, wherein the adjustment of the
weight cell values comprises the steps of
a) selecting an input data example from the training set(s),
b) determining a local quality value corresponding to the sampled training input
example, the local quality value being a function of at least part of the addressed
weight and column vector cell values,
c) determining if the local quality value fulfils a required local quality criterion
and, if not, adjusting one or more of the addressed weight vector cell values,
d) selecting a new input example from a predetermined number of examples of the
training set(s),
e) repeating the local quality test steps (b)-(d) for all the predetermined training
input examples,
f) determining a global quality value based on at least part of the weight and
column vectors being addressed during the local quality test,
g) determining if the global quality value fulfils a required global quality
criterion, and,
h) repeating steps (a)-(g) until the global quality criterion is fulfilled.
18. A method according to claim 17, wherein steps (b)-(d) are carried out for
all
examples of the training set(s).
19. A method according to any of the claims 16-18, wherein the global and/or
the
local quality criterion is changed during the adjustment iteration process.
20. A method according to any of the claims 16-19, wherein the adjustment
iteration
process is stopped if the global quality criterion is not fulfilled after a
given number of
iterations.
21. A method according to any of the claims 16-20, wherein the adjusted weight
cell
values are stored after each adjustment, and wherein the determination of the
global
quality value further is followed by separately storing the hereby obtained
weight cell
values or classification system configuration values if the determined global
quality
value is closer to fulfil the global quality criterion than the global quality
value
corresponding to previously separately stored weight cell values or
configuration
values.
22. A method of classifying input data examples into at least one of a
plurality of
classes using a computer classification system configured according to a
method of
any of the claims 1-21, whereby the column vector cell values and the
corresponding weight vector cell values are determined for each n-tuple or LUT
based on one or more training sets of input data examples, said method
comprising
a) applying an input data example to be classified to the configured
classification network thereby addressing column vectors and
corresponding weight vectors in the set of n-tuples or LUTs,
b) selecting a class thereby addressing specific rows in the set of n-tuples
or LUTs,
c) determining an output value as a function of values of addressed
weight cells,
d) repeating steps (b)-(c) until an output has been determined for all
classes,
e) comparing the calculated output values, and
f) selecting the class or classes having maximum output value.
23. A method according to claim 22, wherein the output value further is
determined
as a function of values of addressed column cells.
24. A method according to claim 23, wherein said output value is determined as
a
first summation of all the addressed weight vector cell values corresponding
to
column vector cell values greater than or equal to a predetermined value, said
predetermined value preferably being 1.
25. A method according to claim 23, wherein said step of determining an output
value comprises determining a first summation of all the addressed weight
vector
cell values corresponding to column vector cell values greater than or equal
to a
predetermined value, determining a second summation of all the addressed
weight
vector cell values, and determining the output value by dividing the first
summation
by the second summation.
26. A system for training a computer classification system which can be
defined by
a network comprising a stored number of n-tuples or Look Up Tables (LUTs),
with
each n-tuple or LUT comprising a number of rows corresponding to at least a
subset
of possible classes and further comprising a number of columns being addressed
by
signals or elements of sampled training input data examples, each column being
defined by a vector having cells with values, said system comprising input
means for
receiving training input data examples of known classes, means for sampling
the
received input data examples and addressing column vectors in the stored set
of n-
tuples or LUTs, means for addressing specific rows in the set of n-tuples or
LUTs,
said rows corresponding to a known class, storage means for storing determined
n-
tuples or LUTs, means for determining column vector cell values so as to
comprise
or point to information based on the number of times the corresponding cell
address
is sampled from the training set(s) of input examples, and means for
determining
weight cell values corresponding to one or more column vector cells being
addressed or sampled by the training examples to thereby allow weighting of
one or
more column vector cells of positive value during a classification process,
said
weight cell values being determined based on the information of at least part
of the
determined column vector cell values and by use of at least part of the
training set(s)
of input examples.
27. A system for training a computer classification system which can be
defined by
a network comprising a stored number of n-tuples or Look Up Tables (LUTs),
with
each n-tuple or LUT comprising a number of rows corresponding to at least a
subset
of possible classes and further comprising a number of columns being addressed
by
signals or elements of sampled training input data examples, each column being
defined by a vector having cells with values, said system comprising input
means for
receiving training input data examples of known classes, means for sampling
the
received input data examples and addressing column vectors in the stored set
of n-
tuples or LUTs, means for addressing specific rows in the set of n-tuples or
LUTs,
said rows corresponding to a known class, storage means for storing determined
n-
tuples or LUTs, means for determining column vector cell values so as to
comprise
or point to information based on the number of times the corresponding cell
address
is sampled from the training set(s) of input examples, and means for
determining
weight cell values corresponding to at least a subset of the column vector
cells to
thereby allow boosting of one or more column vector cells during a
classification
process, said weight cell values being determined based on the information of
at
least part of the determined column vector cell values and by use of at least
part of
the training set(s) of input examples.

28. A system according to claim 27, wherein means for determining the weight
cell
values is adapted to determine these values so as to allow suppressing of one
or
more column vector cells during a classification process.
29. A system according to any of the claims 26-28, wherein the means for
determining the weight cell values is adapted to determine these values so as
to
allow weighting of one or more column vector cells having a positive value
(greater
than 0) and one or more column vector cells having a non-positive value
(lesser
than or equal to 0).
30. A system according to any of the claims 26-29, wherein the means for
determining the weight cell values is adapted to determine these values so
that the
weight cell values are arranged in weight vectors corresponding to at least
part of
the column vectors.
31. A system for determining weight cell values of a classification network
which can
be defined by a stored number of n-tuples or Look Up Tables (LUTs), with each
n-
tuple or LUT comprising a number of rows corresponding to at least a subset of
the
number of possible classes and further comprising a number of column vectors
with
at least part of said column vectors having corresponding weight vectors, each
column vector being addressed by signals or elements of a sampled training
input
data example and each column vector and weight vector having cell values being
determined during a training process based on one or more sets of training
input
data examples, said system comprising: input means for receiving training
input
data examples of known classes, means for sampling the received input data
examples and addressing column vectors and corresponding weight vectors in the
stored set of n-tuples or LUTs, means for addressing specific rows in the set
of n-
tuples or LUTs, said rows corresponding to a known class, storage means for
storing determined n-tuples or LUTs, means for determining column vector cell
values so as to comprise or point to information based on the number of times
the
corresponding cell address is sampled from the training set(s) of input
examples,
and means for determining weight vector cell values corresponding to one or
more
column vector cells based on the information of at least part of the
determined
column vector cell values and by use of at least part of the training set(s)
of input
examples, said determination allowing weighting of one or more column vector
cells
having a positive value (greater than 0) and one or more column vector cells
having
a non-positive value (lesser than or equal to 0).
32. A system according to any of the claims 26-31, wherein the means for
determining the weight cell values is adapted to allow weighting of any column
vector cell.
33. A system according to any of the claims 26-32, wherein the means for
determining the weight cell values comprises means for initialising one or
more sets
of weight vectors corresponding to at least part of the column vectors, and
means
for adjusting weight vector cell values of at least part of the weight vectors
based on
the information of at least part of the determined column vector cell values
and by
use of at least part of the training set(s) of input examples.
34. A system according to any of the claims 26-33, wherein the means for
determining the column vector cell values is adapted to determine these values
as a
function of the number of times the corresponding cell address is sampled from
the
set(s) of training input examples.
35. A system according to any of the claims 26-33, wherein the means for
determining the column vector cell values is adapted to determine these values
so
that the maximum value is 1, but at least part of the values have an
associated
value being a function of the number of times the corresponding cell address
is
sampled from the training set(s) of input examples.
36. A system according to any of the claims 26-35, wherein, when a training
input
data example belonging to a known class is applied to the classification
network
thereby addressing one or more column vectors, the means for determining the
column vector cell values is adapted to increment the value or vote of the
cells of the
addressed column vector(s) corresponding to the row(s) of the known class,
said
value preferably being incremented by one.
37. A system according to any of the claims 26-36, wherein all column vectors
have
corresponding weight vectors.

38. A system according to any of the claims 33-37, wherein the means for
initialising
the weight vectors is adapted to setting all weight vector cell values to a
predetermined constant value, said predetermined value preferably being one.
39. A system according to any of the claims 33-38, wherein the means for
initialising
the weight vectors is adapted to setting each weight vector cell to a
predetermined
specific value.
40. A system according to any of the claims 33-39, wherein the means for
adjusting
the weight vector cell values is adapted to determine a global quality value
based on
at least part of the weight and column vector cell values, determine if the
global
quality value fulfils a required global quality criterion, and adjust at least
part of the
weight cell values until the global quality criterion is fulfilled.
41. A system according to any of the claims 33-39, wherein the means for adjusting
the weight vector cell values is adapted to
a) determine a local quality value corresponding to a sampled training
input example, the local quality value being a function of at least part of
the addressed weight and column vector cell values,
b) determine if the local quality value fulfils a required local quality
criterion,
c) adjust one or more of the addressed weight vector cell values if the
local quality criterion is not fulfilled,
d) repeat the local quality test for a predetermined number of training
input examples,
e) determine a global quality value based on at least part of the weight
and column vectors being addressed during the local quality test,
f) determine if the global quality value fulfils a required global quality
criterion, and,
g) repeat the local and the global quality test and associated weight
adjustments until the global quality criterion is fulfilled.
42. A system according to claim 40 or 41, wherein the means for adjusting the
weight vector cell values is adapted to stop the iteration process if the
global quality
criterion is not fulfilled after a given number of iterations.
43. A system according to claim 40 or 41, wherein the means for storing n-
tuples or
LUTs comprises means for storing adjusted weight cell values and separate
means
for storing best so far weight cell values, said means for adjusting the
weight vector
cell values further being adapted to replace previously separately stored best
so far
weight cell values with obtained adjusted weight cell values if the determined
global
quality value is closer to fulfil the global quality criterion than the global
quality value
corresponding to previously separately stored best so far weight values.
44. A system for classifying input data examples into at least one of a
plurality of
classes, said system comprising: storage means for storing a number or set of
n-
tuples or Look Up Tables (LUTs) with each n-tuple or LUT comprising a number
of
rows corresponding to at least a subset of the number of possible classes and
further comprising a number of column vectors with corresponding weight
vectors,
each column vector being addressed by signals or elements of a sampled input
data
example and each column vector and weight vector having cells with values
being
determined during a training process based on one or more sets of training
input
data examples, said system further comprising: input means for receiving an
input
data example to be classified, means for sampling the received input data
example
and addressing columns and corresponding weight vectors in the stored set of n-
tuples or LUTs, means for addressing specific rows in the set of n-tuples or
LUTs,
said rows corresponding to a specific class, means for determining an output
value
as a function of addressed weight cells, and means for comparing calculated
output
values corresponding to all classes and selecting the class or classes having
maximum output value.
45. A system according to claim 44, wherein the output value further is
determined
as a function of values of addressed column cells.

46. A system according to claim 44 or 45, wherein the output determining means
comprises means for producing a first summation of all the addressed weight
vector
cell values corresponding to a specific class and corresponding to column
vector cell
values greater than or equal to a predetermined value.
47. A system according to claim 46, wherein the output determining means
further
comprises means for producing a second summation of all the addressed weight
vector cell values corresponding to a specific class, and means for
determining the
output value by dividing the first summation by the second summation.
48. A system according to any of the claims 44-47, wherein the cell values of
the
column vectors and the weight vectors have been determined by use of a
training
system according to any of the systems of claims 26-43.
49. A system according to any of the claims 44-47, wherein the cell values of
the
column vectors and the weight vectors have been determined during a training
process according to any of the methods of claims 1-21.

Description

Note: Descriptions are shown in the official language in which they were submitted.


N-TUPLE OR RAM BASED NEURAL NETWORK CLASSIFICATION SYSTEM AND
METHOD
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to n-tuple or RAM based neural network
classification systems and, more particularly, to n-tuple or RAM based
classification
systems having weight vectors with element values being determined during a
training process.
2. Description of the Prior Art
A known way of classifying objects or patterns represented by electric signals
or
binary codes and, more precisely, by vectors of signals applied to the inputs
of
neural network classification systems lies in the implementation of a so-
called
learning or training phase. This phase generally consists of the configuration
of a
classification network that fulfils a function of performing the envisaged
classification
as efficiently as possible by using one or more sets of signals, called
learning or
training sets, where the membership of each of these signals in one of the classes in
classes in
which it is desired to classify them is known. This method is known as
supervised
learning or learning with a teacher.
A subclass of classification networks using supervised learning are networks
using
memory-based learning. Here, one of the oldest memory-based networks is the "n-
tuple network" proposed by Bledsoe and Browning (Bledsoe, W.W. and Browning,
I.,
1959, "Pattern recognition and reading by machine", Proceedings of the Eastern
Joint Computer Conference, pp. 225-232) and more recently described by
Morciniec
and Rohwer (Morciniec, M. and Rohwer, R., 1996, "A theoretical and experimental
account of n-tuple classifier performance", Neural Comp., pp. 629-642).
One of the benefits of such a memory-based system is a very fast computation
time,
both during the learning phase and during classification. For the known types of
n-tuple networks, which are also known as "RAM networks" or "weightless neural
networks", learning may be accomplished by recording features of patterns in a
random-access memory (RAM), which requires just one presentation of the
training
set(s) to the system.
The training procedure for a conventional RAM based neural network is
described
by Jorgensen (co-inventor of this invention) et al. (Jorgensen, T.M.,
Christensen, S.
S. and Liisberg, C., 1995, "Cross-validation and information measures for RAM
based neural networks", Proceedings of the Weightless Neural Network Workshop
WNNW95 (Kent at Canterbury, UK) ed. D. Bisset, pp. 76-81) where it is described
how the RAM based neural network may be considered as comprising a number of
Look Up Tables (LUTs). Each LUT may probe a subset of a binary input data
vector.
In the conventional scheme the bits to be used are selected at random. The
sampled bit sequence is used to construct an address. This address corresponds
to
a specific entry (column) in the LUT. The number of rows in the LUT
corresponds to
the number of possible classes. For each class the output can take on the
values 0
or 1. A value of 1 corresponds to a vote on that specific class. When
performing a
classification, an input vector is sampled, the output vectors from all LUTs
are
added, and subsequently a winner-takes-all decision is made to classify the
input
vector. In order to perform a simple training of the network, the output
values may
initially be set to 0. For each example in the training set, the following
steps should
then be carried out:
Present the input vector and the target class to the network, for all LUTs
calculate
their corresponding column entries, and set the output value of the target
class to 1
in all the "active" columns.
By use of such a training strategy it may be guaranteed that each training
pattern
always obtains the maximum number of votes. As a result such a network makes
no
misclassification on the training set, but ambiguous decisions may occur.
Here, the
generalisation capability of the network is directly related to the number of
input bits
for each LUT. If a LUT samples all input bits then it will act as a pure
memory device
and no generalisation will be provided. As the number of input bits is reduced
the
generalisation is increased at the expense of an increasing number of ambiguous
decisions. Furthermore, the classification and generalisation performances of
a LUT
are highly dependent on the actual subset of input bits probed. The purpose of
an
"intelligent" training procedure is thus to select the most appropriate
subsets of input
data.
Jorgensen et al. further describes what is named a "cross validation test"
which
suggests a method for selecting an optimal number of input connections to use
per
LUT in order to obtain a low classification error rate with a short overall
computation
time. In order to perform such a cross validation test it is necessary to
obtain
knowledge of the actual number of training examples that have visited or
addressed
the cell or element corresponding to the addressed column and class. It is
therefore
suggested that these numbers are stored in the LUTs. It is also suggested by
Jorgensen et al. how the LUTs in the network can be selected in a more optimal
way by successively training new sets of LUTs and performing a cross validation test
on each LUT. Thus, it is known to have a RAM network in which the LUTs are
selected by presenting the training set to the system several times.
In an article by Jorgensen (co-inventor of this invention) (Jorgensen, T.M.,
"Classification of handwritten digits using a RAM neural net architecture", February
1997, International Journal of Neural Systems, Vol. 8, No. 1, pp. 17-25) it is
suggested how the class recognition of a RAM based network can be further
improved by extending the traditional RAM architecture to include what is
named
"inhibition". This method deals with the problem that in many situations two
different
classes might only differ in a few of their features. In such a case, an
example
outside the training set has a high risk of sharing most of its features with
an
incorrect class. So, in order to deal with this problem it becomes necessary
to
weight different features differently for a given class. Thus, a method is
suggested
where the network includes inhibition factors for some classes of the
addressed
columns. Here, a confidence measure is introduced, and the inhibition factors
are
calculated so that the confidence after inhibition corresponds to a desired
level.
The result of the preferred inhibition scheme is that all addressed LUT cells
or
elements that would be set to 1 in the simple system are also set to 1 in the
modified
version, but in the modified version column cells being set to 1 may further
comprise
information of the number of times the cell has been visited by the training
set.
However, some of the cells containing 0's in the simple system will have their
contents changed to negative values in the modified network. In other words,
the
conventional network is extended so that inhibition from one class to another
is
allowed.
In order to encode negative values into the LUT cells, one bit per cell or element,
as in a traditional RAM network, is not sufficient. Thus, it is preferred to use
one byte per cell with values below 128 being used to represent different
negative
values, whereas values above 128 are used for storing information concerning
the
number of training examples that have visited or addressed the cell. When
classifying an object the addressed cells having values greater than or equal
to 1
may then be counted as having the value 1.
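
One plausible reading of this one-byte cell encoding is sketched below; the symmetric offset of 128 and the helper names are assumptions, as the passage does not fix the exact mapping:

```python
OFFSET = 128  # assumed split point: bytes below 128 encode negative values,
              # bytes above 128 carry the visit count

def encode(value):
    """Map a logical cell value (negative inhibition value or positive visit
    count) to a single stored byte."""
    b = value + OFFSET
    if not 0 <= b <= 255:
        raise ValueError("cell value outside the one-byte range")
    return b

def decode(byte):
    """Recover the logical cell value from the stored byte."""
    return byte - OFFSET

def vote(byte):
    """When classifying, addressed cells with a value greater than or equal
    to 1 are counted as having the value 1."""
    return 1 if decode(byte) >= 1 else 0
```
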
By using inhibition, the cells of the LUTs are given different values which
might be
considered a sort of "weighting". However, it is only cells which have not
been
visited by the training set that are allowed to be suppressed by having their
values
changed from 0 to a negative value. There is no boosting of cells having
positive
values when performing classification of input data. Thus, very well performing
LUTs or columns of LUTs might easily be drowned out by the rest of the network.
Thus, there is a need for a RAM classification network which allows a very
fast
training or learning phase and subsequent classification, but which at the
same time
allows real weights to both boost and suppress cell values of LUT columns in
order
to obtain a proper generalisation ability of the sampled number of input bits
based
on access information of the training set. Such a RAM based classification
system is
provided according to the present invention.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a
method for
training a computer classification system which can be defined by a network
comprising a number of n-tuples or Look Up Tables (LUTs), with each n-tuple or
LUT comprising a number of rows corresponding to at least a subset of possible
classes and further comprising a number of columns being addressed by signals
or
elements of sampled training input data examples, each column being defined by
a
vector having cells with values, said method comprising determining the column
vector cell values based on one or more training sets of input data examples
for
different classes so that at least part of the cells comprise or point to
information
based on the number of times the corresponding cell address is sampled from
one
or more sets of training input examples, and determining weight cell values
corresponding to one or more column vector cells being addressed or sampled
by
the training examples.
According to a second aspect of the present invention there is provided a
method of
determining weight cell values in a computer classification system which can
be
defined by a network comprising a number of n-tuples or Look Up Tables (LUTs),
with each n-tuple or LUT comprising a number of rows corresponding to at least
a
subset of possible classes and further comprising a number of column vectors
with
at least part of said column vectors having corresponding weight vectors, each
column vector being addressed by signals or elements of a sampled training
input
data example and each column vector and weight vector having cells with values
being determined based on one or more training sets of input data examples for
different classes, said method comprising determining the column vector cell
values
based on the training set(s) of input examples so that at least part of said
values
comprise or point to information based on the number of times the
corresponding
cell address is sampled from the set(s) of training input examples, and
determining
weight vector cell values corresponding to one or more column vector cells.
Preferably, the weight cell values are determined based on the information of
at
least part of the determined column vector cell values and by use of at least
part of
the training set(s) of input examples. According to the present invention the
training
input data examples may preferably be presented to the network as input signal
vectors.
It is preferred that determination of the weight cell values is performed so
as to allow
weighting of one or more column vector cells of positive value and/or to allow
allow
boosting of one or more column vector cells during a classification process.
Furthermore, or alternatively, the weight cell values may be determined so as
to
allow suppressing of one or more column vector cells during a classification
process.

The present invention also provides a method wherein the determination of the
weight cell values allows weighting of one or more column vector cells having
a
positive value (greater than 0) and one or more column vector cells having a
non-
positive value (lesser than or equal to 0). Preferably, the determination of
the weight
cells allows weighting of any column vector cell.
In order to determine or calculate the weight cell values, the determination
of these
values may comprise initialising one or more sets of weight cells
corresponding to at
least part of the column cells, and adjusting at least part of the weight cell
values
based on the information of at least part of the determined column cell values
and
by use of at least part of the training set(s) of input examples. When
determining the
weight cell values it is preferred that these are arranged in weight vectors
corresponding to at least part of the column vectors.
In order to determine or adjust the weight cell values according to the
present
invention, the column cell values should be determined. Here, it is preferred
that at
least part of the column cell values are determined as a function of the
number of
times the corresponding cell address is sampled from the set(s) of training
input
examples. Alternatively, the information of the column cells may be determined
so
that the maximum column cell value is 1, but at least part of the cells have
an
associated value being a function of the number of times the corresponding
cell
address is sampled from the training set(s) of input examples. Preferably, the
column vector cell values
are determined and stored in storing means before the adjustment of the weight
vector cell values.
According to the present invention, a preferred way of determining the column
vector cell values may comprise the training steps of
a) applying a training input data example of a known class to the
classification network, thereby addressing one or more column
vectors,

b) incrementing, preferably by one, the value or vote of the cells of the
addressed column vector(s) corresponding to the row(s) of the known
class, and
c) repeating steps (a)-(b) until all training examples have been applied to
the network.
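
A minimal sketch of training steps (a)-(c), reusing the toy LUT layout from the earlier sketch (names and data layout remain assumptions):

```python
def train_columns(luts, training_examples, n_classes):
    """Determine column vector cell values as visit counts."""
    for x, known_class in training_examples:            # step (a)
        for lut in luts:
            addr = tuple(x[i] for i in lut["bits"])     # addressed column
            col = lut["table"].setdefault(addr, [0] * n_classes)
            col[known_class] += 1                       # step (b): increment by one
    return luts                                         # step (c): all examples applied
```
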
However, it should be understood that the present invention also covers
embodiments where the information of the column cells is determined by
alternative
functions of the number of times the cell has been addressed by the input
training
set(s). Thus, the cell information does not need to comprise a count of all
the times
the cell has been addressed, but may for example comprise an indication of
when
the cell has been visited zero times, once, more than once, and/or twice and
more
than twice and so on.
So far it has been mentioned that weight cell values may be determined for one
or
more column cells, but in a preferred embodiment all column vectors have
corresponding weight vectors.
When initialising weight cell values according to embodiments of the present
invention, the initialisation may comprise setting each weight cell value to a
predetermined specific cell value. These values may be different for different
cells,
but all weight cell values may also be set to a predetermined constant value.
Such a
value may be 0 or 1, but other values may be preferred.
In order to determine the weight cell values, it is preferred to adjust these
values,
which adjustment process may comprise one or more iteration steps. The
adjustment of the weight cell values may comprise the steps of determining a
global
quality value based on at least part of the weight and column vector cell
values,
determining if the global quality value fulfils a required quality criterion,
and adjusting
at least part of the weight cell values until the global quality criterion is
fulfilled.
The adjustment process may also include determination of a local quality value
for
each sampled training input example, with one or more weight cell adjustments
being performed if the local quality value does not fulfil a specified or
required local
quality criterion for the selected input example. As an example the adjustment of
the weight cell values may comprise the steps of
a) selecting an input example from the training set(s),
b) determining a local quality value corresponding to the sampled training
input example, the local quality value being a function of at least part of
the addressed weight and column cell values,
c) determining if the local quality value fulfils a required local quality
criterion and, if not, adjusting one or more of the addressed weight vector
cell values,
d) selecting a new input example from a predetermined number of
examples of the training set(s),
e) repeating the local quality test steps (b)-(d) for all the predetermined
training input examples,
f) determining a global quality value based on at least part of the weight
and column vectors being addressed during the local quality test,
g) determining if the global quality value fulfils a required global quality
criterion, and,
h) repeating steps (a)-(g) until the global quality criterion is fulfilled.
Preferably, steps (b)-(d) of the above mentioned adjustment process
may be carried out for all examples of the training set(s).
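
The iteration in steps (a)-(h) can be outlined as follows. The quality functions, acceptance tests and adjustment rule are passed in as parameters, since the text deliberately leaves their exact form open; every name below is an assumption:

```python
def adjust_weights(examples, local_quality, local_ok, adjust,
                   global_quality, global_ok, max_iterations=100):
    """Outline of adjustment steps (a)-(h); max_iterations reflects the option
    of stopping after a given number of iterations."""
    for _ in range(max_iterations):             # step (h): repeat (a)-(g)
        for example in examples:                # steps (a), (d), (e)
            q_local = local_quality(example)    # step (b)
            if not local_ok(q_local):           # step (c): test, and if not
                adjust(example)                 # fulfilled, adjust weights
        q_global = global_quality()             # step (f)
        if global_ok(q_global):                 # step (g)
            return True                         # global criterion fulfilled
    return False                                # criterion not met in time
```
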
The local and/or global quality value may be defined as functions of at least
part of
the weight and/or column cells. Correspondingly, the global and/or the local
quality
criterion may also be functions of the weight and/or column cells. Thus, the
quality
criterion or criteria need not be a predetermined constant threshold value,
but may
be changed during the adjustment iteration process. However, the present
invention
also covers embodiments in which the quality criterion or criteria is/are
given by
constant threshold values.
It should be understood that when adjusting the weight cell values by use of
one or
more quality values each with a corresponding quality criterion, it may be
preferred
to stop the adjustment iteration process if a quality criterion is not
fulfilled after a
given number of iterations.
It should also be understood that during the adjustment process the adjusted
weight
cell values are preferably stored after each adjustment, and when the
adjustment
process includes the determination of a global quality value, the step of
determination of the global quality value may further be followed by
separately
storing the hereby obtained weight cell values or classification system
configuration
values if the determined global quality value is closer to fulfil the global
quality
criterion than the global quality value corresponding to previously separately
stored
weight cell values or configuration values.
A main reason for training a classification system according to an embodiment
of the
present invention is to obtain a high confidence in a subsequent
classification
process of an input example of an unknown class.
Thus, according to a further aspect of the present invention, there is also
provided a
method of classifying input data examples into at least one of a plurality of
classes
using a computer classification system configured according to any of the
above
described methods of the present invention, whereby the column cell values and
the
corresponding weight cell values are determined for each n-tuple or LUT based
on
one or more training sets of input data examples, said method comprising
a) applying an input data example to be classified to the configured
classification network thereby addressing column vectors and
corresponding weight vectors in the set of n-tuples or LUTs,
b) selecting a class thereby addressing specific rows in the set of n-tuples
or LUTs,
c) determining an output value as a function of values of addressed
weight cells,
d) repeating steps (b)-(c) until an output has been determined for all
classes,
e) comparing the calculated output values, and
f) selecting the class or classes having maximum output value.

When classifying an unknown input example, several functions may be used
for
determining the output values from the addressed weight cells. However, it is
preferred that the parameters used for determining the output value includes
both
values of addressed weight cells and addressed column cells. Thus, as an
example,
the output value may be determined as a first summation of all the addressed
weight
cell values corresponding to column cell values greater than or equal to a
predetermined value. In another preferred embodiment, the step of determining
an
output value comprises determining a first summation of all the addressed
weight
cell values corresponding to column cell values greater than or equal to a
predetermined value, determining a second summation of all the addressed
weight
cell values, and determining the output value by dividing the first summation
by the
second summation. The predetermined value may preferably be set to 1.
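
As a sketch of this preferred output computation (argument names and data layout are assumptions):

```python
def class_output(addressed_pairs, threshold=1):
    """addressed_pairs holds one (column cell value, weight cell value) pair
    per LUT for the selected class row. The first summation covers weight
    cells whose column cell meets the threshold; the second covers all
    addressed weight cells."""
    first = sum(w for c, w in addressed_pairs if c >= threshold)
    second = sum(w for _, w in addressed_pairs)
    return first / second if second else 0.0

def classify_weighted(per_class_pairs):
    """per_class_pairs maps each class to its list of addressed (column,
    weight) pairs; returns the class or classes having maximum output value."""
    outputs = {cls: class_output(pairs) for cls, pairs in per_class_pairs.items()}
    best = max(outputs.values())
    return [cls for cls, out in outputs.items() if out == best]
```
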
The present invention also provides training and classification systems
according to
the above described methods of training and classification.
Thus, according to the present invention there is provided a system for
training a
computer classification system which can be defined by a network comprising a
stored number of n-tuples or Look Up Tables (LUTs), with each n-tuple or LUT
comprising a number of rows corresponding to at least a subset of possible
classes
and further comprising a number of columns being addressed by signals or
elements of sampled training input data examples, each column being defined by
a
vector having cells with values, said system comprising input means for
receiving
training input data examples of known classes, means for sampling the received
input data examples and addressing column vectors in the stored set of n-
tuples or
LUTs,
means for addressing specific rows in the set of n-tuples or LUTs, said rows
corresponding to a known class, storage means for storing determined n-tuples
or
LUTs,
means for determining column vector cell values so as to comprise or point to
information based on the number of times the corresponding cell address is
sampled from the training set(s) of input examples, and means for determining
weight cell values corresponding to one or more column vector cells being
addressed or sampled by the training examples.
The present invention also provides a system for determining weight cell
values of a
classification network which can be defined by a stored number of n-tuples or
Look
Up Tables (LUTs), with each n-tuple or LUT comprising a number of rows
corresponding to at least a subset of the number of possible classes and
further
comprising a number of column vectors with at least part of said column
vectors
having corresponding weight vectors, each column vector being addressed by
signals or elements of a sampled training input data example and each column
vector and weight vector having cell values being determined during a training
process based on one or more sets of training input data examples, said system
comprising: input means for receiving training input data examples of known
classes, means for sampling the received input data examples and addressing
column vectors and corresponding weight vectors in the stored set of n-tuples
or
LUTs, means for addressing specific rows in the set of n-tuples or LUTs, said
rows
corresponding to a known class, storage means for storing determined n-tuples
or
LUTs, means for determining column vector cell values so as to comprise or
point to
information based on the number of times the corresponding cell address is
sampled from the training set(s) of input examples, and means for determining
weight vector cell values corresponding to one or more column vector cells.
Here, it is preferred that the means for determining the weight cell values is
adapted
to determine these values based on the information of at least part of the
determined column vector cell values and by use of at least part of the
training set(s)
of input examples.
Preferably, the means for determining the weight cell values is adapted to
determine
these values so as to allow weighting of one or more column cells of positive
value
and/or to allow boosting of one or more column cells during a classification
process.
The determining means may furthermore, or alternatively, be adapted to
determine
the weight cell values so as to allow suppressing of one or more column vector
cells
during a classification process.
According to an embodiment of the present invention the weight determining
means
may be adapted to determine the weight cell values so as to allow weighting of
one
or more column vector cells having a positive value (greater than 0) and one
or
more column vector cells having a non-positive value (lesser than or equal to
0).
Preferably, the means may further be adapted to determine the weight cell
values so
as to allow weighting of any column cell. It is also preferred that the means
for
determining the weight cell values is adapted to determine these values so
that the
weight cell values are arranged in weight vectors corresponding to at least
part of
the column vectors.
In order to determine the weight cell values according to a preferred
embodiment of
the present invention, the means for determining the weight cell values may
comprise means for initialising one or more sets of weight vectors
corresponding to
at least part of the column vectors, and means for adjusting weight vector
cell values
of at least part of the weight vectors based on the information of at least
part of the
determined column vector cell values and by use of at least part of the
training set(s)
of input examples.
As already discussed above the column cell values should be determined in
order to
determine the weight cell values. Here, it is preferred that the means for
determining
the column vector cell values is adapted to determine these values as a
function of
the number of times the corresponding cell address is sampled from the set(s)
of
training input examples. Alternatively, the means for determining the column
vector
cell values may be adapted to determine these cell values so that the maximum
value is 1, but at least part of the cells have an associated value being a
function of
the number of times the corresponding cell address is sampled from the
training
set(s) of input examples.
According to an embodiment of the present invention it is preferred that when
a
training input data example belonging to a known class is applied to the
classification network thereby addressing one or more column vectors, the
means
for determining the column vector cell values is adapted to increment the
value or
vote of the cells of the addressed column vector(s) corresponding to the
row(s) of
the known class, said value preferably being incremented by one.
In order to initialise the weight cells according to an embodiment of the
invention, it
is preferred that the means for initialising the weight vectors is adapted to
setting the
weight cell values to one or more predetermined values.
For the adjustment process of the weight cells it is preferred that the means
for
adjusting the weight vector cell values is adapted to determine a global
quality value
based on at least part of the weight and column vector cell values, determine
if the
global quality value fulfils a required global quality criterion, and adjust
at least part
of the weight cell values until the global quality criterion is fulfilled.
As an example of a preferred embodiment according to the present invention,
the
means for adjusting the weight vector cell values may be adapted to
a) determine a local quality value corresponding to a sampled training input
example, the local quality value being a function of at least part of the addressed
weight and column vector cell values,
b) determine if the local quality value fulfils a required local quality criterion,
c) adjust one or more of the addressed weight vector cell values if the local quality
criterion is not fulfilled,
d) repeat the local quality test for a predetermined number of training input
examples,
e) determine a global quality value based on at least part of the weight and column
vectors being addressed during the local quality test,
f) determine if the global quality value fulfils a required global quality criterion, and,
g) repeat the local and the global quality test until the global quality criterion is
fulfilled.
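The steps a) to g) above describe a generic iterate-until-quality loop. As a minimal
illustrative sketch of that control flow (all function and parameter names below are
our own placeholders, not the patent's; the concrete quality tests and adjustment
rules are given in the examples of the detailed description):

    def train_weights(examples, local_quality_ok, adjust_weights,
                      global_quality, global_criterion_ok, max_iterations=100):
        """Generic control flow of steps a)-g); the quality tests and the
        adjustment rule are supplied by the caller."""
        for _ in range(max_iterations):
            for x in examples:                    # steps a)-b): local quality test
                if not local_quality_ok(x):
                    adjust_weights(x)             # step c): adjust addressed weights
            # steps d)-f): after a pass over the examples, test the global
            # quality value against the required criterion
            if global_criterion_ok(global_quality()):
                return True                       # step g): criterion fulfilled
        return False                              # stop after a given number of iterations

Returning False when the iteration budget is exhausted corresponds to the stopping
behaviour described next.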
The means for adjusting the weight vector cell values may further be adapted
to
stop the iteration process if the global quality criterion is not fulfilled
after a given
number of iterations. In a preferred embodiment, the means for storing n-
tuples or
LUTs comprises means for storing adjusted weight cell values and separate
means
for storing best so far weight cell values or best so far classification
system
configuration values. Here, the means for adjusting the weight vector cell
values
may further be adapted to replace previously separately stored best so far
weight
cell values with obtained adjusted weight cell values if the determined global
quality
value is closer to fulfilling the global quality criterion than the global quality
value
corresponding to previously separately stored best so far weight values. Thus,
even
if the system should not be able to fulfil the global quality criterion within
a given
number of iterations, the system may always comprise the "best so far" system
configuration.
According to a further aspect of the present invention there is also provided
a
system for classifying input data examples of unknown classes into at least
one of a
plurality of classes, said system comprising: storage means for storing a
number or
set of n-tuples or Look Up Tables (LUTs) with each n-tuple or LUT comprising a
number of rows corresponding to at least a subset of the number of possible
classes
and further comprising a number of column vectors with corresponding weight
vectors, each column vector being addressed by signals or elements of a
sampled
input data example and each column vector and weight vector having cells with
values being determined during a training process based on one or more sets of
training input data examples, said system further comprising: input means for
receiving an input data example to be classified, means for sampling the
received
input data example and addressing columns and corresponding weight vectors in
the stored set of n-tuples or LUTs, means for addressing specific rows in the
set of
n-tuples or LUTs, said rows corresponding to a specific class, means for
determining an output value as a function of addressed weight cells, and means
for
comparing calculated output values corresponding to all classes and selecting
the
class or classes having maximum output value.

According to a preferred embodiment of the classification system of the
present
invention, the output determining means comprises means for producing a first
summation of all the addressed weight vector cell values corresponding to a
specific
class and corresponding to column vector cell values greater than or equal to
a
predetermined value. It is also preferred that the output determining means
further
comprises means for producing a second summation of all the addressed weight
vector cell values corresponding to a specific class, and means for
determining the
output value by dividing the first summation by the second summation.
It should be understood that it is preferred that the cell values of the
column and
weight vectors of the classification system according to the present invention
are
determined by use of a training system according to any of the above described
systems. Accordingly, these cell values may be determined during a training
process according to any of the above described methods.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention and in order to show how
the
same may be carried into effect, reference will now be made by way of example
to
the accompanying drawings in which:
Fig. 1 shows a block diagram of a RAM classification network with Look Up
Tables
(LUTs),
Fig. 2 shows a detailed block diagram of a single Look Up Table (LUT)
according to
an embodiment of the present invention,
Fig. 3 shows a block diagram of a computer classification system according to
the
present invention,
Fig. 4 shows a flow chart of a learning process for LUT column cells according
to an
embodiment of the present invention,

Fig. 5 shows a flow chart of a learning process for weight cells according to
a first
embodiment of the present invention,
Fig. 6 shows a flow chart of a learning process for weight cells according to
a
second embodiment of the present invention, and
Fig. 7 shows a flow chart of a classification process according to the present
invention.
DETAILED DESCRIPTION OF THE INVENTION
In the following a more detailed description of the architecture and concept
of a
classification system according to the present invention will be given
including an
example of a training process of the column cells of the architecture and an
example
of a classification process. Furthermore, different examples of learning
processes
for weight cells according to embodiments of the present invention are
described.
Notation
The notation used in the following description and examples is as follows:
$X$: The training set.
$\bar{x}$: An example from the training set.
$N_X$: Number of examples in the training set $X$.
$\bar{x}_j$: The j'th example from a given ordering of the training set $X$.
$\bar{y}$: A specific example (possibly outside the training set).
$C$: Class label.
$C(\bar{x})$: Class label corresponding to example $\bar{x}$ (the true class).
$C_W$: Winner class obtained by classification.
$C_R$: Runner-up class obtained by classification.
$\Lambda(\bar{x})$: Leave-one-out cross-validation classification for example $\bar{x}$.
$N_C$: Number of training classes corresponding to the maximum number of
rows in a LUT.

$\Omega$: Set of LUTs (each LUT may contain only a subset of all possible
address columns, and the different columns may register only subsets
of the existing classes).
$N_{LUT}$: Number of LUTs.
$N_{COL}$: Number of different columns that can be addressed in a specific LUT
(LUT dependent).
$S_C$: The set of training examples labelled class $C$.
$w_{iC}$: Weight for the cell addressed by the i'th column and the C'th class.
$v_{iC}$: Entry counter for the cell addressed by the i'th column and the C'th class.
$a_i(\bar{y})$: Index of the column in the i'th LUT being addressed by example $\bar{y}$.
$\bar{v}$: Vector containing all $v_{iC}$ elements of the LUT network.
$\bar{w}$: Vector containing all $w_{iC}$ elements of the LUT network.
$Q_L(\bar{v},\bar{w},\bar{x},X)$: Local quality function.
$Q_G(\bar{v},\bar{w},X)$: Global quality function.
Description of architecture and concept
In the following references are made to Fig. 1, which shows a block diagram of
a
RAM classification network with Look Up Tables (LUTs), and Fig. 2, which shows
a
detailed block diagram of a single Look Up Table (LUT) according to an
embodiment
of the present invention.
A RAM-net or LUT-net consists of a number of Look Up Tables (LUTs) (1.3). Let
the
number of LUTs be denoted $N_{LUT}$. An example of an input data vector $\bar{y}$ to be
classified may be presented to an input module (1.1) of the LUT network. Each LUT
LUT
may sample a part of the input data, where different numbers of input signals
may
be sampled for different LUTs (1.2) (in principle it is also possible to have
one LUT
sampling the whole input space). The outputs of the LUTs may be fed (1.4) to
an
output module (1.5) of the RAM classification network.

In Fig. 2 it is shown that for each LUT the sampled input data (2.1) of the
example
presented to the LUT-net may be fed into an address selecting module (2.2).
The
address selecting module (2.2) may from the input data calculate the address
of one
or more specific columns (2.3) in the LUT. As an example, let the index of the
column in the i'th LUT being addressed by an input example $\bar{y}$ be calculated as
$a_i(\bar{y})$. The number of addressable columns in a specific LUT may be denoted
$N_{COL}$, and varies in general from one LUT to another. The information stored in a
specific row of a LUT may correspond to a specific class $C$ (2.4). The maximum number of
rows may then correspond to the number of classes, $N_C$. In a preferred
embodiment, every column within a LUT contains two sets of cells. The number
of
cells within each set corresponds to the number of rows within the LUT. The
first set
of cells may be denoted column vector cells and the cell values may correspond
to
class specific entry counters of the column in question. The other set of
cells may be
denoted weight cells or weight vector cells with cell values which may
correspond to
weight factors, each of which may be associated with one entry counter value
or
column vector cell value. The entry counter value for the cell addressed by the i'th
column and class $C$ is denoted $v_{iC}$ (2.5). The weight value for the cell addressed by
the i'th column and class $C$ is denoted $w_{iC}$ (2.6).
The $v_{iC}$- and $w_{iC}$-values of the activated LUT columns (2.7) may be fed
(1.4) to the
output module (1.5), where a vote number may be calculated for each class and
where finally a winner-takes-all (WTA) decision may be performed.
Let $\bar{x} \in X$ denote an input data example used for training and let $\bar{y}$ denote an input
data example not belonging to the training set. Let $C(\bar{x})$ denote the class to which
$\bar{x}$ belongs. The class assignment given to the example $\bar{y}$ is then obtained by
calculating a vote number for each class. The vote number obtained for class $C$ is
calculated as a function of the $v_{iC}$ and $w_{iC}$ numbers addressed by the example $\bar{y}$:

$$\mathrm{VoteNo}(C,\bar{y}) = \mathrm{Function}\big(v_{a_1(\bar{y}),C},\, w_{a_1(\bar{y}),C},\, \ldots,\, v_{a_{N_{LUT}}(\bar{y}),C},\, w_{a_{N_{LUT}}(\bar{y}),C}\big)$$
From the calculated vote numbers the winner class, $C_W$, can be obtained as:

$$C_W = \underset{C}{\arg\max}\;\mathrm{VoteNo}(C,\bar{y}), \qquad 1 \le C \le N_C.$$
An example of a sensible choice of $\mathrm{VoteNo}(C,\bar{y})$ is the following expression:

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\;\Theta_1\!\Big(\sum_{\bar{x}\in S_C}\delta_{a_i(\bar{x}),a_i(\bar{y})}\Big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$

where $\delta_{ij}$ is Kronecker's delta ($\delta_{ij} = 1$ if $i = j$ and 0 otherwise), and

$$\Theta_t(z) = \begin{cases} 1 & \text{if } z \ge t \\ 0 & \text{if } z < t \end{cases}$$
$\Omega$ describes the set of LUTs making up the whole LUT network. $S_C$ denotes the set
of training examples labelled class $C$. Note that the inner sum is simply the entry
counter, $\sum_{\bar{x}\in S_C}\delta_{a_i(\bar{x}),a_i(\bar{y})} = v_{a_i(\bar{y}),C}$. The special case with all $w_{iC}$-values
set to 1 gives the traditional LUT network,

$$C_W = \underset{C}{\arg\max}\;\sum_{i\in\Omega}\Theta_1\big(v_{a_i(\bar{y}),C}\big).$$
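To make the addressing and voting concrete, the following is a small self-contained
sketch under assumed conventions (random n-tuple sampling of a bit vector, nested
Python lists for the $v$ and $w$ cells); none of the names or sizes below come from the
patent:

    import random

    random.seed(0)
    N_BITS, N_TUPLE, N_LUT, N_CLASSES = 16, 4, 8, 3

    # Each LUT samples a fixed n-tuple of input-bit positions (1.2).
    tuples = [random.sample(range(N_BITS), N_TUPLE) for _ in range(N_LUT)]
    N_COL = 2 ** N_TUPLE

    def address(i, example):
        """a_i(y): read LUT i's sampled bits of `example` as a column index."""
        return sum(example[b] << k for k, b in enumerate(tuples[i]))

    # v[i][col][C]: entry counter cells; w[i][col][C]: weight cells.
    # All weights 1.0 reduces the vote to the traditional LUT network.
    v = [[[0] * N_CLASSES for _ in range(N_COL)] for _ in range(N_LUT)]
    w = [[[1.0] * N_CLASSES for _ in range(N_COL)] for _ in range(N_LUT)]

    def theta(t, z):
        """Threshold function Theta_t(z): 1 if z >= t, else 0."""
        return 1.0 if z >= t else 0.0

    def vote_no(example, cls):
        """VoteNo(C, y): weighted sum of firing counters over the addressed
        columns, normalised by the sum of the addressed weights."""
        cols = [address(i, example) for i in range(N_LUT)]
        num = sum(w[i][c][cls] * theta(1, v[i][c][cls]) for i, c in enumerate(cols))
        den = sum(w[i][c][cls] for i, c in enumerate(cols))
        return num / den if den else 0.0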
Figure 3 shows an example of a block diagram of a computer classification
system
according to the present invention. Here a source such as a video camera or a
database provides an input data signal or signals (3.0) describing the example
to be
classified. These data are fed to a pre-processing module (3.1) of a type
which can
extract features, reduce, and transform the input data in a predetermined
manner.
An example of such a pre-processing module is an FFT board (Fast Fourier
Transform). The transformed data are then fed to a classification unit (3.2)
comprising a RAM network according to the present invention. The
classification unit
(3.2) outputs a ranked classification list which might have associated
confidences.
The classification unit can be implemented by using software to programme a

standard Personal Computer or programming a hardware device, e.g. using
programmable gate arrays combined with RAM circuits and a digital signal
processor. These data can be interpreted in a post-processing device (3.3),
which
could be a computer module combining the obtained classifications with other
relevant information. Finally, the result of this interpretation is fed to an
output device
(3.4) such as an actuator.
Initial training of the architecture
The flow chart of Fig. 4 illustrates a one pass learning scheme or process for
the
determination of the column vector entry counter or cell distribution, the $v_{iC}$-distribution
(4.0), according to an embodiment of the present invention, which may be
described
as follows:
1. Initialise all entry counters or column vector cells by setting the cell values,
$v_{iC}$, to zero and initialise the weight values, $w_{iC}$. This could be performed by
setting all weight values to a constant factor, or by choosing random values
from within a specific range (4.1).
2. Present the first training input example, $\bar{x}$, from the training set $X$ to the
network (4.2, 4.3).
3. Calculate the columns addressed for the first LUT (4.4, 4.5).
4. Add 1 to the entry counters in the rows of the addressed columns that
correspond to the class label of $\bar{x}$ (increment in all LUTs) (4.6).
5. Repeat step 4 for the remaining LUTs (4.7, 4.8).
6. Repeat steps 3-5 for the remaining training input examples (4.9, 4.10). The
number of training examples is denoted $N_X$.
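A sketch of this one-pass procedure, under the same illustrative layout as the earlier
sketch (LUTs as random n-tuples over a bit vector, synthetic labelled examples; all
names are ours, not the patent's):

    import random

    random.seed(1)
    N_BITS, N_TUPLE, N_LUT, N_CLASSES = 16, 4, 8, 3
    tuples = [random.sample(range(N_BITS), N_TUPLE) for _ in range(N_LUT)]
    N_COL = 2 ** N_TUPLE

    def address(i, example):
        return sum(example[b] << k for k, b in enumerate(tuples[i]))

    # Step 1: entry counters cleared, weights set to a constant factor (4.1).
    v = [[[0] * N_CLASSES for _ in range(N_COL)] for _ in range(N_LUT)]
    w = [[[1.0] * N_CLASSES for _ in range(N_COL)] for _ in range(N_LUT)]

    # Steps 2-6: a single pass over N_X synthetic training examples; each
    # example adds one vote in the row of its class in every addressed column.
    training_set = [([random.randint(0, 1) for _ in range(N_BITS)], j % N_CLASSES)
                    for j in range(30)]
    for example, true_class in training_set:
        for i in range(N_LUT):                          # steps 3-5: every LUT
            v[i][address(i, example)][true_class] += 1  # step 4: add one vote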
Classification of an unknown input example
When the RAM network of the present invention has been trained to thereby
deter-
mine values for the column cells and the weight cells whereby the LUTs may be
defined, the network may be used for classifying an unknown input data
example.

In a preferred example according to the present invention, the classification
is per-
formed by determining the class having a maximum vote number, VoteNo, where
VoteNo is given by the expression

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$

If the denominator is zero, VoteNo can be defined to be 0.
Thus, the classification example may be described as follows by reference to
Figs. 1
and 2:
• Present an unknown input example, $\bar{y}$, to the network (1.1).
• For all LUTs calculate the columns $a_i(\bar{y})$ addressed by $\bar{y}$ (2.3).
• For each class (corresponding to a specific row in each of the addressed
columns) produce the sum (sum_1) of $w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)$. The $\Theta_1$ term
implies that components are only included if $v_{a_i(\bar{y}),C} \ge 1$ (1.5).
• For each class (corresponding to a specific row in each of the addressed
columns) produce the sum (sum_2) of $w_{a_i(\bar{y}),C}$ (1.5).
• Calculate the output value corresponding to class $C$ as Out(C) = sum_1 / sum_2
(1.5).
• Choose the class (or classes) that maximise Out(C) (1.5).
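The last three steps amount to a winner-takes-all over the per-class outputs. A small
illustrative helper (vote_no is assumed to behave like the earlier sketch; the
stand-in lambda below is purely for demonstration):

    def classify(example, n_classes, vote_no):
        """Return the class (or classes) maximising Out(C) = VoteNo(C, example)."""
        outs = [vote_no(example, c) for c in range(n_classes)]
        best = max(outs)
        return [c for c, out in enumerate(outs) if out == best]

    # Toy usage with a hard-wired stand-in vote function (class 2 always wins):
    print(classify([0, 1, 1], 3, lambda ex, c: (0.2, 0.5, 0.9)[c]))  # -> [2]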
Figure 7 shows a block diagram of the operation of a computer classification
system
in which a classification process (7.0) is performed. The system acquires one
or
more input signals (7.1) using e.g. an optical sensor system. The obtained
input
data are pre-processed (7.2) in a pre-processing module, e.g. a low-pass
filter, and
presented to a classification module (7.3) which according to an embodiment of
the
invention may be a LUT-network. The output data from the classification module
is
then post-processed in a post-processing module (7.4), e.g. a CRC algorithm
calculating a cyclic redundancy check sum, and the result is forwarded to an
output
device (7.5), which could be a monitor screen.

Weight adjustments
Usually the initially determined weight cell values will not present the
optimal choice
of values. Thus, according to a preferred embodiment of the present invention,
an
optimisation or adjustment of the weight values should be performed.
In order to select or adjust the weight values to improve the performance of
the
classification system, it is suggested according to an embodiment of the
invention to
define proper quality functions for measuring the performance of the weight
values.
Thus, a local quality function $Q_L(\bar{v},\bar{w},\bar{x},X)$ may be defined, where $\bar{v}$ denotes a
vector containing all $v_{iC}$ elements of the LUT network, and $\bar{w}$ denotes a vector
containing all $w_{iC}$ elements of the LUT network. The local quality function may give a
confidence measure of the output classification of a specific example $\bar{x}$. If the
quality value does not satisfy a given criterion (possibly dynamically changed during
the iterations), the weights $\bar{w}$ are adjusted to make the quality value satisfy or come
closer to satisfying the criterion (if possible).
Furthermore, a global quality function $Q_G(\bar{v},\bar{w},X)$ may be defined. The global
quality function may measure the performance of the input training set as a whole.
Fig. 5 shows a flow chart for weight cell adjustment or learning according to
the
present invention. The flow chart of Fig. 5 illustrates a more general
adjustment or
learning process, which may be simplified for specific embodiments.
Example 1
The vote number function for an input example $\bar{y}$ is given as

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$

With this definition of the VoteNo() function a leave-one-out cross-validation
classification for an input example $\bar{x}$ of the training set may be calculated as:

$$\Lambda(\bar{x}) = \underset{C}{\arg\max}\;\frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C}\,\Theta_{1+\delta_{C,C(\bar{x})}}\big(v_{a_i(\bar{x}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C}}$$

This expression is actually explained above except for the factor $\Theta_{1+\delta_{C,C(\bar{x})}}\big(v_{a_i(\bar{x}),C}\big)$,
which is equal to $\Theta_1\big(v_{a_i(\bar{x}),C}\big)$ if $C \ne C(\bar{x})$ and equal to $\Theta_2\big(v_{a_i(\bar{x}),C}\big)$ if
$C = C(\bar{x})$. $\Theta_2\big(v_{a_i(\bar{x}),C}\big)$ is only 1 if $v_{a_i(\bar{x}),C} \ge 2$, else it is 0. This simply
assures that an example cannot obtain contributions from itself when calculating the
leave-one-out cross-validation.
Let the local quality function calculated for the example $\bar{x}$ be defined as:

$$Q_L(\bar{v},\bar{w},\bar{x},X) = \delta_{C(\bar{x}),\Lambda(\bar{x})}$$

Here $Q_L$ is 0 if $\bar{x}$ generates a cross-validation error, else $Q_L$ is 1. So if $Q_L = 0$
then weight changes are made.
Let the global quality function be defined as:

$$Q_G(\bar{v},\bar{w},X) = \sum_{\bar{x}\in X} Q_L(\bar{v},\bar{w},\bar{x},X)$$

This global quality function measures the number of examples from the training set
$X$ that would be correctly classified if they were left out of the training set, as each
term in the sum over the training set is 1 if $C(\bar{x}) = \Lambda(\bar{x})$ and else 0. The global
quality criterion may be to satisfy $Q_G \ge \epsilon N_X$, where $\epsilon$ is a parameter determining
the fraction of training examples demanded to be correctly classified in a leave-one-out
cross-validation test.
An updating scheme for improving $Q_G$ can be implemented by the following rules:

For all input examples $\bar{x}$ of the training set with a wrong cross-validation
classification ($\Lambda(\bar{x}) \ne C(\bar{x})$), adjust the weights by:

$$w^{new}_{a_i(\bar{x}),C(\bar{x})} = \max\Big(w^{old}_{a_i(\bar{x}),C(\bar{x})} + k\,\Theta_2\big(v_{a_i(\bar{x}),C(\bar{x})}\big) - k\,\big(1 - \Theta_2\big(v_{a_i(\bar{x}),C(\bar{x})}\big)\big),\; 0\Big)$$

where $k$ is a small constant. A feasible choice of $k$ could be one tenth of the mean of
the absolute values of the $w_{iC}$ values.

This updating rule implies that $w^{new}_{a_i(\bar{x}),C(\bar{x})} = w^{old}_{a_i(\bar{x}),C(\bar{x})} + k$ if
$v_{a_i(\bar{x}),C(\bar{x})} \ge 2$ and that $w^{new}_{a_i(\bar{x}),C(\bar{x})} = w^{old}_{a_i(\bar{x}),C(\bar{x})} - k$ if
$v_{a_i(\bar{x}),C(\bar{x})} < 2$. The max() function ensures that the weights cannot become negative.
Referring now to Fig. 5, the weight update or adjustment steps of example 1 may be
described as:

• Initialise all $w_{iC}$ values to zero (5.1).
• Loop through all examples in the training set (5.2, 5.10, 5.3).
• Calculate the local quality value for each example (can the example be
correctly classified if excluded from the training set?) (5.4, 5.5).
• If yes, proceed with the next example (5.10). If not, increase the addressed
weights of the "true" class if the corresponding column cells add a
positive contribution to the VoteNo() function (i.e. if $v_{a_i(\bar{x}),C(\bar{x})} \ge 2$) and
decrease the weights of the "true" class otherwise (i.e. if $v_{a_i(\bar{x}),C(\bar{x})} < 2$)
(5.6-5.9).
• Calculate the global quality value. If the quality is the highest obtained
hitherto, store the LUT network (5.11).
• Repeat until the global quality value is satisfactory or another exit condition is
fulfilled (5.12, 5.13).
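As a concrete illustration of the example 1 rule, a minimal sketch (the flat list of
(v, w) pairs is our own simplification of "the cells addressed in the row of the true
class"):

    def adjust_true_class_weights(addressed_cells, k):
        """Example 1 rule: w += k where v >= 2, w -= k where v < 2, floored at 0.
        addressed_cells: (v, w) pairs addressed in the row of the true class."""
        return [max(w + (k if v >= 2 else -k), 0.0) for v, w in addressed_cells]

    # Cells whose counters are supported by other examples (v >= 2) are boosted:
    print(adjust_true_class_weights([(3, 1.0), (1, 1.0), (0, 0.05)], 0.1))
    # -> [1.1, 0.9, 0.0]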
Example 2
Let the vote number function for an input example $\bar{y}$ be given as

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$
For the true class $C(\bar{x})$ sum the $w_{a_i(\bar{x}),C(\bar{x})}$ values for a given $v$-value using the
function

$$\mathrm{Hist}(l, C(\bar{x})) = \sum_{i\in\Omega} \delta_{l,\,v_{a_i(\bar{x}),C(\bar{x})}}\; w_{a_i(\bar{x}),C(\bar{x})}$$
The parameter $l$ runs over the possible values of $v_{iC}$, $0 \le v_{iC} \le N_X$.
A confidence Conf between the winning class, $C_W$, and the runner-up class $C_R$ may
then be defined as:

$$\mathrm{Conf} = \begin{cases} \mathrm{VoteNo}(C_W,\bar{x}) - \mathrm{VoteNo}(C_R,\bar{x}) & \text{if } C_W = C(\bar{x}) \\ 0 & \text{if } C_W \ne C(\bar{x}) \end{cases}$$
A value $m$ may be determined by the function:

$$m = \max\Big\{ n \;\Big|\; \sum_{l=0}^{n} \mathrm{Hist}(l, C(\bar{x})) \le \mathrm{Conf} \Big\}$$

The upper limit of the summation index $n$ can vary from 1 to the maximum $v_{iC}$ value
within the $\bar{v}$ vector. The expression states that $m$ is chosen as the largest value of
$n$ for which $\sum_{l=0}^{n} \mathrm{Hist}(l, C(\bar{x})) \le \mathrm{Conf}$.
A local quality function may now be defined as:

$$Q_L(\bar{v},\bar{w},\bar{x},X) = \begin{cases} m - m_{thresh} & \text{if } C_W = C(\bar{x}) \\ 0 & \text{if } C_W \ne C(\bar{x}) \end{cases}$$

where $m_{thresh}$ is a threshold constant. If $Q_L < 0$ then the weights $\bar{w}$ are updated to
make $Q_L$ increase, by adjusting the weights on the runner-up class, $C_R$:

$$w^{new}_{a_i(\bar{x}),C_R} = w^{old}_{a_i(\bar{x}),C_R} - k_1\,\Theta_1\big(v_{a_i(\bar{x}),C_R}\big) + k_2\,\big(1 - \Theta_1\big(v_{a_i(\bar{x}),C_R}\big)\big)$$

This updating rule implies that $w^{new}_{a_i(\bar{x}),C_R} = w^{old}_{a_i(\bar{x}),C_R} - k_1$ if
$v_{a_i(\bar{x}),C_R} \ge 1$ and that $w^{new}_{a_i(\bar{x}),C_R} = w^{old}_{a_i(\bar{x}),C_R} + k_2$ if $v_{a_i(\bar{x}),C_R} < 1$.
The global quality criterion may be based on two quality functions:

$$Q_{G1}(\bar{v},\bar{w},X) = \sum_{\bar{x}\in X} \delta_{\Lambda(\bar{x}),C(\bar{x})}$$

and

$$Q_{G2}(\bar{v},\bar{w},X) = \sum_{\bar{x}\in X} \Theta_0\big(Q_L(\bar{v},\bar{w},\bar{x},X)\big)$$

Here $\Theta_0(Q_L)$ is 1 if $Q_L \ge 0$ and 0 if $Q_L < 0$. $Q_{G1}$ measures the number of examples
from the training set that can pass a leave-one-out cross-validation test and
$Q_{G2}$ measures the number of examples that can pass the local quality criterion.
These two quality functions can then be combined into one quality function based
on the following Boolean expression (a true expression is given the value 1 and a
false expression is given the value 0):

$$Q_G(\bar{v},\bar{w},X) = \big(Q_{G1}(\bar{v},\bar{w},X) \ge \epsilon_1 N_X\big) \wedge \big(Q_{G2}(\bar{v},\bar{w},X) \ge \epsilon_2 N_X\big)$$

Here $\epsilon_1$ and $\epsilon_2$ are two parameters determining the fractions of training examples
demanded to pass a leave-one-out cross-validation test and the local quality
criterion, respectively. If both of these criteria are passed, the global quality
criterion is passed, in which case $Q_G(\bar{v},\bar{w},X)$ is 1, otherwise it is 0.

With reference to Fig. 5, the weight updating or adjustment steps of example 2
may
be described as:
• Initialise all $w_{iC}$ values to zero (5.1).
• Loop through all examples in the training set (5.2, 5.10, 5.3).
• Calculate the local quality value for each example (5.4) (Does the
example have sufficient "support"? (5.5)).
• If yes, process the next example (5.10); if not, decrease the weights associated
with voting for the runner-up class and increase the weights associated
with cells having $v_{a_i(\bar{x}),C_R} < 1$ (5.6-5.9).
• Calculate the global quality value. If the quality is the highest obtained
hitherto, store the network (5.11).
• Repeat until the global quality value is satisfactory or another exit condition is
fulfilled (5.12, 5.13).
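A minimal sketch of the example 2 runner-up update as reconstructed above (the
flat list of (v, w) pairs is our simplification of the addressed cells):

    def adjust_runner_up_weights(addressed_cells, k1, k2):
        """Example 2 rule for the runner-up class: w -= k1 where v >= 1
        (cells actually voting for C_R), w += k2 where v < 1."""
        return [w - k1 if v >= 1 else w + k2 for v, w in addressed_cells]

    print(adjust_runner_up_weights([(2, 1.0), (0, 1.0)], 0.1, 0.05))  # [0.9, 1.05]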
Example 3
Again the vote number function for an input example $\bar{y}$ is given as

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$
A local quality function $Q_L(\bar{v},\bar{w},\bar{x},X)$ is defined as a measure of a vote confidence
for an input training example $\bar{x}$. For an example $\bar{x}$ the confidence Conf between
the true class, $C(\bar{x})$, and the runner-up class $C_R$ may be determined as:

$$\mathrm{Conf}(\bar{x}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C(\bar{x})}\,\Theta_1\big(v_{a_i(\bar{x}),C(\bar{x})}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C(\bar{x})}} \;-\; \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C_R}\,\Theta_1\big(v_{a_i(\bar{x}),C_R}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C_R}}$$
The confidence can be zero stating that the runner up class has a vote level
equal to
that of the true class (if one or more classes have a vote level equal to that
of the

true class we will define one of the classes different from the true one as
the runner
up class). The local quality function may now be defined as:
$$Q_L(\bar{v},\bar{w},\bar{x},X) = \mathrm{Conf}(\bar{x}).$$
A threshold value may be determined for the calculated local quality value, and if
$Q_L < Q_{thresh}$ then the weights are updated to make $Q_L$ increase. A possible value of
$Q_{thresh}$ would be 0.1, stating that the difference between the vote level of the true
class and that of the runner-up class should be at least 10% of the maximum vote
level.
The weights may be updated by adjusting the weights for the runner-up class, $C_R$:

$$w^{new}_{a_i(\bar{x}),C_R} = w^{old}_{a_i(\bar{x}),C_R}\,\big(1 - k\,\Theta_1\big(v_{a_i(\bar{x}),C_R}\big)\big)$$

where $k$ is a small constant, and adjusting the weights for the true class:

$$w^{new}_{a_i(\bar{x}),C(\bar{x})} = w^{old}_{a_i(\bar{x}),C(\bar{x})}\,\big(1 + k\,\Theta_1\big(v_{a_i(\bar{x}),C(\bar{x})}\big)\big)$$

The small constant $k$ determines the relative change in the weights to be adjusted.
One possible choice would be $k = 0.05$.
Again the number of cross-validation errors is a possible global quality measure:

$$Q_G(\bar{v},\bar{w},X) = \sum_{\bar{x}\in X} \delta_{\Lambda(\bar{x}),C(\bar{x})}$$

The global quality criterion may be to satisfy $Q_G \ge \epsilon N_X$, where $\epsilon$ is a parameter
determining the fraction of training examples demanded to be correctly classified in
a leave-one-out cross-validation test.
With reference to Fig. 5, the weight updating or adjustment steps of example 3
may
be described as:
• Initialise all $w_{iC}$ values to zero (5.1).
• Loop through all examples in the training set (5.2, 5.10, 5.3).

• Calculate the local quality value for each example (5.4) (Can the example
be correctly classified if excluded from the training set and at the same
time have sufficient "support"? (5.5)).
• If yes, process the next example; if not, update the weights associated with
cells voting for the runner-up class and update the weights associated
with cells voting for the true class (5.6-5.9) in order to increase the vote
level on the true class and decrease the vote level on the runner-up class.
• Calculate the global quality value. If the quality is the highest obtained
hitherto, store the network (5.11).
• Repeat until the global quality value is satisfactory or another exit condition is
fulfilled (5.12, 5.13).
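A minimal sketch of the example 3 multiplicative updates with the suggested
$k = 0.05$ (flat (v, w) pairs again stand in for the addressed cells):

    K = 0.05  # the relative step size suggested above

    def update_example3(runner_up_cells, true_class_cells):
        """Shrink runner-up weights and grow true-class weights wherever the
        addressed counter fires (v >= 1); cells with v < 1 are left unchanged."""
        shrunk = [w * (1 - K * (1 if v >= 1 else 0)) for v, w in runner_up_cells]
        grown = [w * (1 + K * (1 if v >= 1 else 0)) for v, w in true_class_cells]
        return shrunk, grown

    print(update_example3([(2, 1.0)], [(3, 1.0), (0, 1.0)]))
    # -> ([0.95], [1.05, 1.0])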
Example 4
Again the vote number function for an input example $\bar{y}$ is defined as

$$\mathrm{VoteNo}(C,\bar{y}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{y}),C}}$$
The vote levels obtained for a training example when performing a cross-validation
test are then:

$$\mathrm{VoteNo}_{CV}(C,\bar{x}) = \frac{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C}\,\Theta_{1+\delta_{C,C(\bar{x})}}\big(v_{a_i(\bar{x}),C}\big)}{\displaystyle\sum_{i\in\Omega} w_{a_i(\bar{x}),C}}$$
Again the runner-up class obtained using $\mathrm{VoteNo}(C,\bar{x})$ may be denoted $C_R$ (if one
or more classes have a vote level equal to that of the true class we will define one of
the classes different from the true one as the runner-up class).
The local quality function $Q_L(\bar{v},\bar{w},\bar{x},X)$ may now be defined by a Boolean
expression:

$$Q_L(\bar{v},\bar{w},\bar{x},X) = \big(\mathrm{VoteNo}_{CV}(C(\bar{x}),\bar{x}) > k_1\big) \wedge \big(\mathrm{VoteNo}_{CV}(C_R,\bar{x}) < k_2\big) \wedge \big(\Lambda(\bar{x}) = C(\bar{x})\big)$$
where $k_1$ and $k_2$ are two constants between 0 and 1 with $k_1 > k_2$. If all three criteria
($\mathrm{VoteNo}_{CV}(C(\bar{x}),\bar{x}) > k_1$, $\mathrm{VoteNo}_{CV}(C_R,\bar{x}) < k_2$, and $\Lambda(\bar{x}) = C(\bar{x})$)
are satisfied, then $Q_L(\bar{v},\bar{w},\bar{x},X)$ is 1, otherwise it is 0. The first two criteria
correspond to demanding the vote level of the true class in a leave-one-out
cross-validation test to be larger than $k_1$ and the vote level of the runner-up class
to be below $k_2$, with level $k_1$ being larger than level $k_2$. The VoteNo() function
used in this example will have a value between 0 and 1 if we restrict the weights to
have positive values, in which case a possible choice of $k$ values is $k_1$ equal to 0.9
and $k_2$ equal to 0.6.
If the given criterion for the local quality value given by $Q_L(\bar{v},\bar{w},\bar{x},X)$ is not
satisfied, then the weights $w_{iC}$ are updated to satisfy, if possible, the criterion for
$Q_L$, by adjusting the weights on the runner-up class, $C_R$:

$$w^{new}_{a_i(\bar{x}),C_R} = w^{old}_{a_i(\bar{x}),C_R}\,\big(1 - k_3\,\big(2\,\Theta_1\big(v_{a_i(\bar{x}),C_R}\big) - 1\big)\big)$$

where $k_3$ is a small constant, and adjusting the weights $w_{a_i(\bar{x}),C(\bar{x})}$ on the true
class:

$$w^{new}_{a_i(\bar{x}),C(\bar{x})} = \max\Big(w^{old}_{a_i(\bar{x}),C(\bar{x})} + k_4\,\big(2\,\Theta_1\big(v_{a_i(\bar{x}),C(\bar{x})}\big) - 1\big),\; 0\Big)$$

$k_3$ determines the relative change in the weights to be adjusted for the runner-up
class. One possible choice would be $k_3 = 0.1$. A feasible choice of $k_4$ could be one
tenth of the mean of the absolute values of the $w_{iC}$ values.
A suitable global quality function may be defined by the summation of the local
quality values for all the training input examples:

$$Q_G(\bar{v},\bar{w},X) = \sum_{\bar{x}\in X} Q_L(\bar{v},\bar{w},\bar{x},X).$$

The global quality criterion may be to satisfy $Q_G \ge \epsilon N_X$, where $\epsilon$ is a parameter
determining the fraction of training examples demanded to pass the local quality
test.
With reference to Fig. 5, the weight updating or adjustment steps of example 4
may
be described as:
• Initialise all $w_{iC}$ values to zero (5.1).
• Loop through all examples in the training set (5.2, 5.10, 5.3).
• Calculate the local quality for each example (5.4) (Can the example be
correctly classified if excluded from the training set and at the same time
have sufficient vote "support"? (5.5)).
• If yes, process the next example; if not, update the weights associated with
cells voting for the runner-up class and update the weights associated
with cells voting for the true class (5.6-5.9) in order to increase the vote
level on the true class and decrease the vote level on the runner-up class.
• Calculate the global quality function. If the quality is the highest obtained
hitherto, store the network (5.11).
• Repeat until the global quality value is satisfactory or another exit condition is
fulfilled (5.12, 5.13).
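The example 4 local quality test is just a three-way Boolean conjunction; a minimal
sketch with the suggested $k_1 = 0.9$ and $k_2 = 0.6$ (parameter names are ours):

    def local_quality_ok(vote_cv_true, vote_cv_runner_up, loo_class, true_class,
                         k1=0.9, k2=0.6):
        """Q_L of example 4: true-class CV vote above k1, runner-up CV vote
        below k2, and the leave-one-out classification must be correct."""
        return (vote_cv_true > k1) and (vote_cv_runner_up < k2) \
            and (loo_class == true_class)

    print(local_quality_ok(0.95, 0.40, 1, 1))  # True: all three criteria hold
    print(local_quality_ok(0.80, 0.40, 1, 1))  # False: true-class vote too low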
Example 5
In this example the vote number function for an input example $\bar{y}$ is given as

$$\mathrm{VoteNo}(C,\bar{y}) = \sum_{i\in\Omega} w_{a_i(\bar{y}),C}\,\Theta_1\big(v_{a_i(\bar{y}),C}\big)$$
The local quality function and the threshold criterion are now defined so that the
answer to the question "is $Q_{local}$ OK?" will always be no. Thus, the local quality
function may be defined as:

$$Q_L = \mathrm{FALSE}$$
With this definition all training examples will be used for adjustment, as the
answer to (5.5) will always be no.
The weight updating rule is:

$$w_{a_i(\bar{x}),C} = f_\alpha\big(v_{a_i(\bar{x}),C}\big)$$

where $f_\alpha(z)$ is defined as:

$$f_\alpha(z) = \begin{cases} 2 & \text{if } z \ge \alpha \\ 1 & \text{if } z < \alpha \end{cases}$$

and $\alpha$ is the iteration number.
The global quality function for the $\alpha$'th iteration may be defined as:

$$Q_G^\alpha = \sum_{\bar{x}\in X} \delta_{\Lambda(\bar{x},\alpha),C(\bar{x})}$$

where

$$\Lambda(\bar{x},\alpha) = \underset{C}{\arg\max}\;\sum_{i\in\Omega} f_\alpha\big(v_{a_i(\bar{x}),C}\big)\,\Theta_{1+\delta_{C,C(\bar{x})}}\big(v_{a_i(\bar{x}),C}\big)$$
With reference to Fig. 5, the weight updating or adjustment steps of example 5
may
be described as:
• Initialise all $w_{iC}$ values to zero (5.1).
• Loop through all examples in the training set (5.2, 5.10, 5.3).
• Calculate the local quality for each example (5.4) (in this example it will
always be false, i.e. it will not fulfil the quality criterion).
• If $Q_L$ = TRUE (5.5), proceed with the next example (it will never be true for
this example), else set the addressed weights using $f_\alpha\big(v_{a_i(\bar{x}),C}\big)$, which
depends on the actual iteration (5.6-5.9).

• Calculate the global quality value. If the quality value is the highest
obtained hitherto, store the network (5.11).
• Repeat until the last iteration (5.12, 5.13).
Thus, the above described example 5 fits into the flowchart structure shown in Fig. 5.
However, as the answer to (5.5) is always no, the weight assignment procedure
can be simplified in the present case as described below with reference to Fig. 6,
which shows the flow chart of a simplified weight cell adjustment process
according to the present invention:
A number of schemes, indexed by $\alpha$, for setting the $w_{iC}$ values may be defined as
follows (6.1, 6.6, 6.7):

Scheme $\alpha$: (6.2)
for all LUTs do:
    for all i do:
        for all C do:
            $w_{iC} = f_\alpha(v_{iC})$
For each scheme $\alpha$ a global quality function for the classification performance may
be calculated (6.3). One possibility for the global quality function is to calculate a
cross-validation error:

$$Q_G^\alpha = \sum_{\bar{x}\in X} \delta_{\Lambda(\bar{x},\alpha),C(\bar{x})}$$

where

$$\Lambda(\bar{x},\alpha) = \underset{C}{\arg\max}\;\sum_{i\in\Omega} f_\alpha\big(v_{a_i(\bar{x}),C}\big)\,\Theta_{1+\delta_{C,C(\bar{x})}}\big(v_{a_i(\bar{x}),C}\big)$$

The network having the best quality value $Q_G^\alpha$ may then be stored (6.4, 6.5).
Here, it should be understood that another number of iterations may be
selected and
other suitable global quality functions may be defined.
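A minimal sketch of this simplified Fig. 6 procedure: every scheme $\alpha$ rewrites the
whole weight table as $f_\alpha(v)$ and the best-scoring scheme is kept. The two-valued
f_alpha below follows the reconstruction above, and the quality callback is a
stand-in for $Q_G^\alpha$; both are assumptions, not the patent's reference code:

    def f_alpha(z, alpha):
        """Two-valued weighting of a counter (per the reconstruction above)."""
        return 2.0 if z >= alpha else 1.0

    def best_scheme(v, quality, alphas):
        """Set w = f_alpha(v) for each scheme alpha, score the resulting
        network, and keep the best; v is nested as v[lut][column][class]."""
        best = None
        for alpha in alphas:
            w = [[[f_alpha(z, alpha) for z in col] for col in lut] for lut in v]
            q = quality(w)
            if best is None or q > best[0]:
                best = (q, alpha, w)
        return best

    v = [[[0, 3], [2, 1]]]                  # one LUT, two columns, two classes
    toy_quality = lambda w: w[0][0][1] + w[0][1][0]   # illustrative score only
    print(best_scheme(v, toy_quality, [1, 2, 3])[1])  # scheme with best score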

The foregoing description of preferred exemplary embodiments of the invention
has
been presented for the purpose of illustration and description. It is not
intended to be
exhaustive or to limit the invention to the precise form disclosed, and
obviously
many modifications and variations are possible in light of the present
invention to
those skilled in the art. All such modifications which retain the basic
underlying
principles disclosed and claimed herein are within the scope of this
invention.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refers to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Inactive: IPC expired 2023-01-01
Inactive: IPC expired 2023-01-01
Inactive: Expired (new Act pat) 2019-02-02
Inactive: IPC expired 2019-01-01
Maintenance Request Received 2018-01-24
Maintenance Request Received 2017-01-23
Maintenance Request Received 2016-01-22
Maintenance Request Received 2015-01-29
Maintenance Request Received 2014-01-28
Maintenance Request Received 2013-01-11
Grant by Issuance 2008-10-07
Inactive: Cover page published 2008-10-06
Pre-grant 2008-07-21
Inactive: Final fee received 2008-07-21
Notice of Allowance is Issued 2008-01-23
Notice of Allowance is Issued 2008-01-23
Inactive: IPC removed 2008-01-23
Inactive: First IPC assigned 2008-01-23
Inactive: IPC assigned 2008-01-23
Inactive: IPC assigned 2008-01-23
Inactive: IPC removed 2008-01-23
Inactive: IPC assigned 2008-01-23
Letter Sent 2008-01-23
Inactive: Approved for allowance (AFA) 2008-01-15
Inactive: IPC from MCD 2006-03-12
Inactive: IPC from MCD 2006-03-12
Inactive: IPRP received 2004-08-20
Amendment Received - Voluntary Amendment 2004-04-08
Letter Sent 2004-02-20
Request for Examination Received 2004-01-30
Request for Examination Requirements Determined Compliant 2004-01-30
All Requirements for Examination Determined Compliant 2004-01-30
Inactive: Entity size changed 2002-01-30
Letter Sent 2001-12-05
Letter Sent 2001-09-06
Letter Sent 2001-09-06
Inactive: Transfer information requested 2001-09-05
Inactive: Single transfer 2001-08-01
Inactive: Transfer information requested 2001-07-24
Inactive: Correspondence - Transfer 2001-06-26
Inactive: Courtesy letter - Evidence 2001-04-25
Inactive: Courtesy letter - Evidence 2001-04-24
Inactive: Single transfer 2001-03-22
Inactive: Cover page published 2000-10-20
Inactive: First IPC assigned 2000-10-18
Inactive: Courtesy letter - Evidence 2000-10-10
Inactive: Notice - National entry - No RFE 2000-10-05
Application Received - PCT 2000-10-03
Application Published (Open to Public Inspection) 1999-08-12

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2008-01-08

Note: If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
INTELLIX A/S
Past Owners on Record
CHRISTIAN LINNEBERG
THOMAS MARTINI JORGENSEN
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.



Document Description | Date (yyyy-mm-dd) | Number of pages | Size of Image (KB)
Representative drawing 2000-10-19 1 12
Description 2000-07-12 34 1,380
Abstract 2000-07-12 1 81
Claims 2000-07-12 12 560
Drawings 2000-07-12 7 106
Representative drawing 2008-09-18 1 13
Reminder of maintenance fee due 2000-10-04 1 110
Notice of National Entry 2000-10-04 1 193
Request for evidence or missing transfer 2001-07-15 1 108
Courtesy - Certificate of registration (related document(s)) 2001-09-05 1 137
Courtesy - Certificate of registration (related document(s)) 2001-09-05 1 136
Reminder - Request for Examination 2003-10-05 1 112
Acknowledgement of Request for Examination 2004-02-19 1 174
Commissioner's Notice - Application Found Allowable 2008-01-22 1 164
Correspondence 2000-10-04 1 15
PCT 2000-07-12 10 401
Correspondence 2001-04-24 1 25
Correspondence 2001-07-23 1 23
Fees 2003-01-20 1 35
Fees 2002-01-13 1 38
Fees 2001-01-23 1 35
Fees 2004-01-29 1 37
PCT 2000-07-13 4 215
Fees 2005-01-30 1 33
Fees 2006-02-01 1 32
Fees 2007-02-01 1 34
Fees 2008-01-07 1 34
Correspondence 2008-07-20 1 36
Fees 2009-01-29 1 34
Fees 2009-12-20 1 37
Fees 2011-01-05 1 38
Fees 2012-01-25 1 38
Fees 2013-01-10 1 39
Fees 2014-01-27 1 38
Fees 2015-01-28 1 40
Maintenance fee payment 2016-01-21 1 40
Maintenance fee payment 2017-01-22 1 39
Maintenance fee payment 2018-01-23 1 42