Patent 2826177 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 2826177
(54) English Title: SYSTEMS AND METHODS FOR IMAGE-TO-TEXT AND TEXT-TO-IMAGE ASSOCIATION
(54) French Title: SYSTEMES ET PROCEDES POUR UNE ASSOCIATION IMAGE-TEXTE ET TEXTE-IMAGE
Status: Expired and beyond the Period of Reversal
Bibliographic Data
(51) International Patent Classification (IPC):
(72) Inventors :
  • TAIGMAN, YANIV (Israel)
  • HIRSCH, GIL (Israel)
  • SHOCHAT, EDEN (Israel)
(73) Owners :
  • FACEBOOK, INC.
(71) Applicants :
  • FACEBOOK, INC. (United States of America)
(74) Agent:
(74) Associate agent:
(45) Issued: 2017-08-08
(86) PCT Filing Date: 2011-03-31
(87) Open to Public Inspection: 2012-08-09
Examination requested: 2015-10-01
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/IL2011/000287
(87) International Publication Number: WO 2012/104830
(85) National Entry: 2013-08-01

(30) Application Priority Data:
Application No. Country/Territory Date
61/439,021 (United States of America) 2011-02-03

Abstracts

English Abstract

A computerized system for classifying facial images of persons including a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.


French Abstract

L'invention porte sur un système informatisé pour classer des images faciales de personnes, comprenant un évaluateur informatisé pour attributs d'image faciale, affectant des valeurs représentant une image faciale à plusieurs attributs faciaux distincts parmi des attributs faciaux distincts de l'image faciale, les valeurs étant représentées par des adjectifs, et un classificateur informatisé qui classe l'image faciale conformément aux différents attributs faciaux distincts parmi les attributs faciaux distincts.

Claims

Note: Claims are shown in the official language in which they were submitted.


Claims
1. A computerized system for classifying facial images of persons comprising:
a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives, wherein said evaluator comprises:
a database comprising a multiplicity of stored facial images, and a multiplicity of stored values, each of said stored facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith; and
an adjective-based comparator, comparing a facial image with said multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute-wise and adjective-wise with said multiplicity of stored facial images; and
a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes,
wherein in response to said comparing, said facial images not found to match in said multiplicity of stored facial images are stored in said database as one of said multiplicity of stored facial images.

2. A computerized system for classifying facial images of persons according to claim 1 and also comprising:
facial attribute statistic reporting functionality providing statistical information derived from said multiplicity of stored values.

3. A computerized system for classifying facial images of persons according to claim 1 and wherein said adjective-based comparator queries said database in an adjective-wise manner.

4. A computerized system for classifying facial images of persons according to any of claims 1 - 3 and also comprising a computerized identifier operative in response to an output from said computerized classifier for identifying at least one stored facial image corresponding to said output.

5. A computerized system for classifying facial images of persons according to claim 4 and wherein said computerized identifier is operative for generating a ranked list of stored facial images corresponding to said output.

6. A computerized system for classifying facial images of persons according to any of claims 1 - 5 and also comprising a social network interface for making available information from a social network to said computerized facial image attribute-wise evaluator.

7. A computerized system for classifying facial images of persons according to any of claims 1 - 6 and also comprising face model generation functionality operative to generate a face model corresponding to said facial image.

8. A computerized system for classifying facial images of persons according to claim 4 and also comprising face model generation functionality operative to generate a face model corresponding to said facial image and wherein said computerized identifier employs said face model.
9. A computerized method for classifying facial images of persons comprising:
assigning, by a computerized facial image attribute-wise evaluator, values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives, wherein said evaluator comprises a database comprising a multiplicity of stored facial images, and a multiplicity of stored values, each of said stored facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith; and
classifying, by a computerized classifier, said facial image in accordance with said plural ones of said discrete facial attributes; and
comparing, by an adjective-based comparator, a facial image with said multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute-wise and adjective-wise with said multiplicity of stored facial images, and
in response to said comparing, storing, by said computerized facial image attribute-wise evaluator, said facial images not found to match in said multiplicity of stored facial images in said database as one of said multiplicity of stored facial images.

10. A computerized method for classifying facial images of persons according to claim 9 and also comprising:
providing, by a computerized provider, statistical information derived from said multiplicity of stored values.

11. A computerized method for classifying facial images of persons according to claim 9 and wherein said comparing queries a database in an adjective-wise manner.

12. A computerized method for classifying facial images of persons according to any of claims 9 - 11 and also comprising identifying, by a computerized identifier, at least one stored facial image corresponding to an output of said classifying.

13. A computerized method for classifying facial images of persons according to claim 12 and wherein said identifying is operative for generating a ranked list of stored facial images corresponding to said output.

14. A computerized method for classifying facial images of persons according to any of claims 9 - 13 and also comprising making available, by a social network interface, information from a social network to said computerized facial image attribute-wise evaluator.

15. A computerized method for classifying facial images of persons according to any of claims 9 - 14 and also comprising generating, by a computerized face model generator, a face model corresponding to said facial image.

16. A computerized method for classifying facial images of persons according to claim 12 and also comprising generating, by a computerized face model generator, a face model corresponding to said facial image and where said identifying employs said face model.
17. A method for generating a computerized facial image attribute-wise evaluator, capable of assigning values, each represented by an adjective, to plural ones of discrete facial attributes of a facial image, the method comprising:
gathering a multiplicity of facial images and a multiplicity of values, each facial image having at least one facial image attribute, at least some of the discrete facial attributes having at least one of said values, characterized by an adjective, associated therewith;
comparing a facial image with said multiplicity of gathered facial images by comparing said plurality of discrete facial attributes of said facial image, attribute-wise and adjective-wise with said multiplicity of gathered facial images;
in response to said comparing, gathering as one of said multiplicity of facial images said facial images not found to match in said multiplicity of stored facial images; and
generating a function operative to receive a facial image to be evaluated and to utilize results of said gathering for assigning values to plural ones of discrete facial attributes of said facial image to be evaluated, said values being represented by adjectives.

18. A method for generating a computerized facial image attribute-wise evaluator according to claim 17 and wherein said gathering comprises:
collecting a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith from publicly available sources; and
employing crowdsourcing to enhance correspondence between adjectives and facial attributes appearing in said multiplicity of facial images.

19. A method for generating a computerized facial image attribute-wise evaluator according to claim 18 and wherein said crowdsourcing comprises:
employing multiple persons who view ones of said multiplicity of facial images and said adjectives and indicate their views as to the degree of correspondence between said adjectives and said facial attributes in said ones of said multiplicity of images.

20. A method for generating a computerized facial image attribute-wise evaluator according to any of claims 17 - 19 and wherein said values are numerical values.
21. A system for recognizing user reaction to at least one stimulus comprising:
a computerized facial image attribute-wise evaluator, assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives, wherein said evaluator comprises:
a database comprising a multiplicity of stored facial images, and a multiplicity of stored values, each of said stored facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith; and
an adjective-based comparator, comparing a facial image with said multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute-wise and adjective-wise with said multiplicity of stored facial images; and
a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes,
wherein in response to said comparing, said facial images not found to match in said multiplicity of stored facial images are stored in said database as one of said multiplicity of stored facial images.

22. A system for recognizing user reaction to at least one stimulus according to claim 21 and also comprising a computerized attribute comparator comparing said plural ones of said discrete facial attributes prior to and following application of said at least one stimulus.

23. A method for recognizing user reaction to at least one stimulus comprising:
assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives, wherein said assigning includes:
storing a multiplicity of facial images and a multiplicity of stored values, each of said stored facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith; and
comparing a facial image with said multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute-wise and adjective-wise with said multiplicity of stored facial images; and
classifying said facial image in accordance with said plural ones of said discrete facial attributes,
wherein in response to said comparing, said facial images not found to match in said multiplicity of stored facial images are stored as one of said multiplicity of stored facial images.

24. A method for recognizing user reaction to at least one stimulus according to claim 23 and also comprising comparing said plural ones of said discrete facial attributes prior to and following application of said at least one stimulus.

Description

Note: Descriptions are shown in the official language in which they were submitted.


SYSTEMS AND METHODS FOR IMAGE-TO-TEXT AND TEXT-TO-IMAGE ASSOCIATION

FIELD OF THE INVENTION

The present invention relates generally to image-to-text and text-to-image association.

BACKGROUND OF THE INVENTION

The following patents and patent publications are believed to represent the current state of the art:

US Patent Nos.: 4,926,491; 5,164,992; 5,963,670; 6,292,575; 6,301,370; 6,819,783; 6,944,319; 6,990,217; 7,274,822 and 7,295,687; and

US Published Patent Application Nos.: 2006/0253491; 2007/0237355 and 2009/0210491.

SUMMARY OF THE INVENTION

The present invention seeks to provide improved systems and methodologies for image-to-text and text-to-image association. There is thus provided in accordance with a preferred embodiment of the present invention a computerized system for classifying facial images of persons including a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

In accordance with a preferred embodiment of the present invention, the computerized facial attribute-wise evaluator includes a database including a multiplicity of stored values corresponding to a plurality of facial images, each of the facial images having at least some of the plurality of discrete facial attributes, at least some of the discrete facial attributes having the values, represented by adjectives, associated therewith.

Preferably, the system also includes facial attribute statistic reporting functionality providing statistical information derived from the multiplicity of stored values.

Preferably, the computerized facial attribute-wise evaluator includes a database including a multiplicity of stored facial images, and a multiplicity of stored values, each of the stored facial images having at least some of the plurality of discrete facial attributes, at least some of the discrete facial attributes having the values, represented by adjectives, associated therewith, and an adjective-based comparator, comparing a facial image with the multiplicity of stored facial images by comparing the plurality of discrete facial attributes of the facial image, attribute- and adjective-wise with the multiplicity of stored facial images. Preferably, the adjective-based comparator queries the database in an adjective-wise manner.
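By way of a non-limiting illustration only, an adjective-wise query of such a database might be sketched as follows; the record schema, attribute names and function names below are assumptions made for illustration and are not prescribed by the patent.

    # Illustrative sketch: an adjective-wise query over an in-memory store of
    # facial-attribute records. Each record maps a discrete attribute
    # (e.g. "hair color") to an adjective value (e.g. "dark").

    from typing import Dict, List

    Record = Dict[str, str]  # attribute name -> adjective

    def adjective_wise_query(store: List[Record], query: Record) -> List[int]:
        """Return indices of stored records whose adjectives match the query,
        attribute by attribute, for every attribute present in the query."""
        matches = []
        for idx, record in enumerate(store):
            if all(record.get(attr) == adj for attr, adj in query.items()):
                matches.append(idx)
        return matches

    store = [
        {"hair color": "dark", "face shape": "oval", "facial hair": "bearded"},
        {"hair color": "fair", "face shape": "round", "facial hair": "clean-shaven"},
    ]
    print(adjective_wise_query(store, {"hair color": "dark", "facial hair": "bearded"}))  # [0]
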
Preferably, the system also includes a computerized identifier operative in response to an output from the computerized classifier for identifying at least one stored facial image corresponding to the output. Preferably, the computerized identifier is operative for generating a ranked list of stored facial images corresponding to said output.

Preferably, the system also includes a social network interface for making available information from a social network to the computerized facial image attribute-wise evaluator. Preferably, the system also includes face model generation functionality operative to generate a face model corresponding to the facial image. Preferably, the computerized identifier employs the face model.

There is also provided in accordance with another preferred embodiment of the present invention a computerized method for classifying facial images of persons including assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and classifying the facial image in accordance with the plural ones of the discrete facial attributes.

In accordance with a preferred embodiment of the present invention, each of the facial images has at least some of the plurality of discrete facial attributes and at least some of the discrete facial attributes have the values, represented by adjectives, associated therewith. Preferably, the method also includes providing statistical information derived from the multiplicity of stored values.

Preferably, each of the stored facial images has at least some of the plurality of discrete facial attributes, and at least some of the discrete facial attributes have the values, represented by adjectives, associated therewith, and the method preferably also includes comparing a facial image with a multiplicity of stored facial images by comparing the plurality of discrete facial attributes of the facial image, attribute- and adjective-wise with the multiplicity of stored facial images. Preferably, the comparing queries a database in an adjective-wise manner.

Preferably, the method also includes identifying at least one stored facial image corresponding to an output of the classifying. Preferably, the identifying is operative for generating a ranked list of stored facial images corresponding to the output. Preferably, the method also includes making available information from a social network to the computerized facial image attribute-wise evaluator. Preferably, the method also includes face model generation operative to generate a face model corresponding to the facial image. Preferably, the identifying employs the face model.
There is further provided in accordance with yet another preferred embodiment of the present invention a system for registration of persons in a place including a facial image/person identification acquisition subsystem acquiring at least one facial image and at least one item of personal identification of a person, and a computerized subsystem receiving the at least one facial image and the at least one item of personal identification of the person, the computerized subsystem including face model generation functionality operative to generate a face model corresponding to the at least one facial image and image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of the facial image, and a database which stores information and the values of facial attributes for a plurality of the persons.

Preferably, the system also includes attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image and thereby to identify a particular individual utilizing the face model. Preferably, the computerized subsystem also includes a value combiner operative to combine the face model and the collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Preferably, the system also includes a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to the computerized subsystem, and the computerized subsystem is preferably operative to create a face model corresponding to the subsequent facial image, assign values represented by adjectives to a plurality of facial attributes of the subsequent facial image, and identify a corresponding stored facial image and thereby the subsequent facial image as a particular individual, at least one item of personal identification relating to whom is stored in the database.

Preferably, the value combiner is employed to combine the face model and the collection of values corresponding to the subsequent facial image and thereby to identify the particular individual. Preferably, the at least one item of personal identification of the person is obtained from pre-registration data.

Preferably, the system also includes a social network interface for making available information from a social network to the computerized subsystem.
Preferably, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with the subsystem. Additionally or alternatively, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with the subsystem.

Preferably, the system is embodied in a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.
There is further provided in accordance with yet another preferred embodiment of the present invention a system for recognizing repeated presence of persons in a place including a facial image/person identification acquisition subsystem acquiring at least one facial image of a person, and a computerized subsystem receiving the at least one facial image, the computerized subsystem including face model generation functionality operative to generate a face model corresponding to the at least one facial image, and image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of the facial image, and a database which stores information and the values of facial attributes for a plurality of the persons.

Preferably, the computerized subsystem also includes attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image associated with a particular individual, utilizing the face model. Preferably, the computerized subsystem also includes a value combiner operative to combine the face model and the collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values.

Preferably, the system also includes a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to the computerized subsystem, and the computerized subsystem is preferably operative to create a face model corresponding to the subsequent facial image, assign values represented by adjectives to a plurality of facial attributes of the subsequent facial image, and identify a corresponding stored facial image and thereby the subsequent facial image as being that of a particular individual, for recognizing repeated presence of that particular person.

Preferably, the value combiner is employed to combine the face model and the collection of values corresponding to the subsequent facial image thereby to recognize repeated presence of a person. Preferably, the system also includes a repeat presence statistics generator employing the face models and the collections of values to generate attribute-wise statistics regarding persons repeatedly present at a place. Preferably, the system also includes a social network interface for making available information from a social network to the computerized subsystem.
Preferably, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with the subsystem. Additionally or alternatively, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with the subsystem.

Preferably, the system is embodied in a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.
There is yet further provided in accordance with yet still another preferred embodiment of the present invention a method for generating a computerized facial image attribute-wise evaluator, capable of assigning values, each represented by an adjective, to plural ones of discrete facial attributes of a facial image, the method including gathering a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith, and generating a function operative to receive a facial image to be evaluated and to utilize results of the gathering for assigning values to plural ones of discrete facial attributes of the facial image to be evaluated, the values being represented by adjectives.

Preferably, the gathering includes collecting a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith from publicly available sources, and employing crowdsourcing to enhance correspondence between adjectives and facial attributes appearing in the multiplicity of facial images. Preferably, the crowdsourcing includes employing multiple persons who view ones of the multiplicity of facial images and the adjectives and indicate their views as to the degree of correspondence between the adjectives and the facial attributes in the ones of the multiplicity of images. Preferably, the values are numerical values.
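Purely as an illustration of the crowdsourcing step described above, the numerical values might be obtained by averaging rater scores per image/adjective pair, as in the following sketch; the rating scale, agreement threshold and identifiers are assumptions for illustration only.

    # Illustrative sketch (assumed details): aggregating crowdsourced judgements
    # of how well an adjective matches a facial attribute in an image. Raters
    # score each (image, adjective) pair; the mean score becomes the numerical
    # value retained for training the evaluator.

    from collections import defaultdict
    from statistics import mean

    # (image_id, adjective) -> ratings on a 0..1 scale from multiple raters
    ratings = defaultdict(list)
    ratings[("img_001", "bearded")] += [1.0, 0.9, 1.0]
    ratings[("img_001", "round-faced")] += [0.2, 0.4, 0.1]
    ratings[("img_002", "fair-haired")] += [0.8, 0.7]

    # Keep only pairs on which raters broadly agree the adjective applies.
    labels = {
        pair: mean(scores)
        for pair, scores in ratings.items()
        if mean(scores) >= 0.5
    }
    print(labels)  # {('img_001', 'bearded'): 0.9666..., ('img_002', 'fair-haired'): 0.75}
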
There is also provided in accordance with another preferred embodiment of the present invention a system for recognizing user reaction to at least one stimulus including a computerized facial image attribute-wise evaluator, assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

Preferably, the system also includes a computerized attribute comparator comparing the plural ones of the discrete facial attributes prior to and following application of the at least one stimulus.

There is further provided in accordance with yet another preferred embodiment of the present invention a method for recognizing user reaction to at least one stimulus including assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and classifying the facial image in accordance with the plural ones of the discrete facial attributes.

Preferably, the method also includes comparing the plural ones of the discrete facial attributes prior to and following application of the at least one stimulus.

There is further provided in accordance with yet another preferred embodiment of the present invention a computerized system for classifying persons including a relationship coefficient generator which generates relationship coefficients representing the probability of a person to be in a particular context at a particular time, and a computerized classifier which classifies the person in accordance with the plural ones of the relationship coefficients.

Preferably, the context is one of a geographic location and an event. Preferably, the relationship coefficients include a value and a decay function. Preferably, the decay function is a linear function. Alternatively, the decay function is an exponential function.

Preferably, the context is one of a hierarchy of hierarchical contexts. Preferably, relationship coefficients of contexts of a hierarchy of contexts are interdependent. Preferably, the relationship coefficient generator is operative in a case where multiple persons have been together in at least a first context to generate interdependent relationship coefficients between the multiple persons in a second context.

Preferably, the system also includes a computerized classifier which classifies facial images in accordance with plural ones of discrete facial attributes.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:

Figs. 1A, 1B and 1C are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with a preferred embodiment of the present invention;

Figs. 2A and 2B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with another preferred embodiment of the present invention;

Figs. 3A and 3B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Figs. 4A, 4B and 4C are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Figs. 5A and 5B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Fig. 6 is a simplified illustration of a user satisfaction monitoring system employing image-to-text association in accordance with yet another preferred embodiment of the present invention;

Fig. 7 is a simplified illustration of an image/text/image database generation methodology useful in building a database employed in the systems of Figs. 1A - 6;

Fig. 8 is a simplified flow chart illustrating a training process for associating adjectives with images;

Fig. 9 is a simplified flow chart illustrating the process of training a visual classifier;

Fig. 10 is a simplified flow chart illustrating a process for retrieving adjectives associated with an image;

Fig. 11 is a simplified flow chart illustrating a process for retrieving images associated with one or more adjectives; and

Fig. 12 is a simplified flow chart illustrating a process for retrieving facial images similar to a first image.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Reference is now made to Figs. 1A, 1B and 1C, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with a preferred embodiment of the present invention. The system of Figs. 1A - 1C preferably includes a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

As seen in Fig. 1A, on January 1, Mr. Jones, a customer of the AAA Department Store, enters the store and registers as a valued customer of the store at a registration stand 100. The registration stand preferably includes a computer 102 connected to a store computer network, and a digital camera 104 connected to computer 102. The valued customer registration process includes entering personal identification details of the customer, such as his full name, and capturing a facial image 108 of the customer by digital camera 104. Alternatively, personal identification details of the customer may be retrieved, for example, from a pre-existing personal social network account of the customer. Alternatively, the customer may register as a valued customer over the internet from a remote location.
The personal identification details and facial image 108 are transmitted to a computerized person identification system 110 which preferably includes face model generation functionality 112, image-to-attributes mapping functionality 114, attributes-to-image mapping functionality 116 and a value combiner 117. Computerized person identification system 110 also preferably includes a valued customer database 118 which stores registration details and values of facial attributes of all registered customers. It is appreciated that database 118 may be any suitable computerized information store.

Face model generation functionality 112 is operative to generate a face model 120 which corresponds to facial image 108. It is appreciated that face model generation functionality 112 may employ any suitable method of face model generation known in the art. As seen in Fig. 1A, face model 120 generated by face model generation functionality 112 and corresponding to facial image 108 is stored in database 118 as one of the attributes of Mr. Jones.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 114 is operative to assign values represented by adjectives 122 to a plurality of facial attributes of facial image 108. The adjectives 122 representing the facial attributes may include, for example, adjectives describing hair color, nose shape, skin color, face shape, and the type and presence or absence of facial hair. As seen in Fig. 1A, adjectives generated by attributes mapping functionality 114 which correspond to facial image 108 are stored in database 118 as values of attributes of Mr. Jones.
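Purely for illustration, the record that image-to-attributes mapping functionality 114 might produce for facial image 108 could resemble the following; the schema, the attribute names and the adjective values are invented for illustration and are not fixed by the patent.

    # Illustrative sketch of a stored record for Mr. Jones, pairing a face-model
    # vector with adjective-valued discrete facial attributes.

    mr_jones_record = {
        "name": "Mr. Jones",
        "face_model": [0.12, -0.34, 0.56],   # placeholder face-model vector
        "attributes": {
            "hair color": "dark",
            "nose shape": "aquiline",
            "skin color": "fair",
            "face shape": "oval",
            "facial hair": "bearded",
        },
    }
    # Both the face model and the adjective-valued attributes would be stored
    # in database 118 as attributes of Mr. Jones.
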
Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 116 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 117 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.
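A non-limiting sketch of such combining and matching follows; the specific similarity rule (squared distance on the face-model vector plus an adjective-agreement fraction) is an assumption made for illustration, not the patent's prescribed matching rule.

    # Illustrative sketch (assumed representation): combine a face-model vector
    # with adjective-valued attributes into one collection of values, then match
    # it against stored collections.

    from typing import Dict, List, Tuple

    Combined = Tuple[List[float], Dict[str, str]]

    def combine(face_model: List[float], attributes: Dict[str, str]) -> Combined:
        return (face_model, attributes)

    def match_score(a: Combined, b: Combined) -> float:
        model_a, attrs_a = a
        model_b, attrs_b = b
        # Numeric similarity: negative squared distance between face models.
        model_term = -sum((x - y) ** 2 for x, y in zip(model_a, model_b))
        # Adjective similarity: fraction of shared attributes with equal adjectives.
        shared = set(attrs_a) & set(attrs_b)
        adj_term = sum(attrs_a[k] == attrs_b[k] for k in shared) / max(len(shared), 1)
        return model_term + adj_term

    def identify(probe: Combined, database: Dict[str, Combined]) -> str:
        return max(database, key=lambda name: match_score(probe, database[name]))

    db = {"Mr. Jones": combine([0.12, -0.34, 0.56], {"hair color": "dark"})}
    probe = combine([0.10, -0.30, 0.50], {"hair color": "dark"})
    print(identify(probe, db))  # Mr. Jones
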
Turning now to Fig. 1B, it is seen that on a later date, such as on January 17, a customer enters the AAA Department Store and a digital camera 150, mounted at the entrance to the store, captures a facial image 152 of the customer. Facial image 152 is transmitted to computerized person identification system 110 where a face model 160 corresponding to facial image 152 is preferably generated by face model generation functionality 112. Additionally, values 162 represented by adjectives are preferably assigned to a plurality of facial attributes of facial image 152 by image-to-attributes mapping functionality 114.

As shown in Fig. 1B, face model 160 and adjectives 162 are preferably combined by value combiner 117 into a combined collection of values, which is compared to the collections of values stored in database 118, and are found to match the face model and adjectives assigned to Mr. Jones, thereby identifying the person portrayed in facial image 152 captured by camera 150 as being Mr. Jones. It is appreciated that the collection of values combined by value combiner 117 and which are compared to the collections of values stored in database 118 may be any subset of face model 160 and adjectives 162.

Turning now to Fig. 1C, it is shown that, for example, upon identifying the customer who has entered the store as Mr. Jones, who is a registered valued customer, the manager is notified by system 110 that a valued customer has entered the store, and the manager therefore approaches Mr. Jones to offer him a new product at a discount.
Reference is now made to Figs. 2A and 2B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with another preferred embodiment of the present invention. As seen in Fig. 2A, on a particular day such as January 1, a customer of the AAA Department Store enters the store and a digital camera 200 mounted at the entrance to the store captures a facial image 202 of the customer. Facial image 202 is transmitted to a computerized person identification system 210 which preferably includes face model generation functionality 212, image-to-attributes mapping functionality 214, attributes-to-image mapping functionality 216 and a value combiner 217. Computerized person identification system 210 also preferably includes a customer database 218, which preferably stores values of facial attributes of all customers who have ever entered the store, and a visit counter 219 which preferably tracks the number of accumulated visits that each particular customer has made to the store. It is appreciated that database 218 may be any suitable computerized information store.

Face model generation functionality 212 is operative to generate a face model 220, which corresponds to facial image 202. It is appreciated that face model generation functionality 212 may employ any suitable method of face model generation known in the art. As seen in Fig. 2A, face model 220 generated by face model generation functionality 212 and corresponding to facial image 202 is stored in database 218 as one of the attributes of the customer of facial image 202.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 214 is operative to assign values represented by adjectives 222 to a plurality of facial attributes of facial image 202. The adjectives 222 representing the facial attributes may include, for example, adjectives describing age group, gender, ethnicity, face shape, mood and general appearance.
Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 216 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual. It is appreciated that the collection of values may also include non-physical characteristics of the customer's appearance, such as clothing type and color, which may be used to identify an individual within a short period of time in a case where current values of facial attributes are not available.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 217 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

As seen in Fig. 2A, face model 220 and adjectives 222 are preferably combined by value combiner 217 into a combined collection of values, which is compared to the collections of values stored in database 218, and are found to match the face model and adjectives corresponding to a returning customer. Therefore, the visit counter 219 of the customer is incremented. It is appreciated that the collection of values combined by value combiner 217 and which are compared to the collections of values stored in database 218 may be any subset of face model 220 and adjectives 222.

Alternatively, if the combined collection of values generated by value combiner 217 is not found to match any of the collections of values stored in database 218, the combined collection of values generated by value combiner 217 and facial image 202 are preferably stored in database 218 as representing a new customer, and the counter 219 of the new customer is initialized to 1.
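The register-or-increment behavior around visit counter 219 described above might be sketched as follows; the matching threshold, the customer-identifier scheme and the function names are assumptions for illustration only.

    # Illustrative sketch: if the combined values match a stored customer,
    # increment that customer's visit counter; otherwise store the values as a
    # new customer with a counter of 1.

    def record_visit(database, visit_counts, combined_values, matcher, threshold=0.8):
        """database: customer_id -> stored combined values;
        visit_counts: customer_id -> accumulated visits;
        matcher(a, b) -> similarity score in [0, 1]."""
        for customer_id, stored in database.items():
            if matcher(combined_values, stored) >= threshold:
                visit_counts[customer_id] += 1      # returning customer
                return customer_id
        new_id = f"customer_{len(database) + 1}"    # previously unseen customer
        database[new_id] = combined_values
        visit_counts[new_id] = 1
        return new_id
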
Turning now to Fig. 2B, it is shown that at closing time, such as at 5:00 PM on January 1, the manager of the store preferably receives a first report 230 from system 210 which includes a segmentation of customers who have entered the store over the course of January 1. The segmentation may be according to any of the adjectives stored in database 218, such as gender, age group, ethnicity and mood. Report 230 also preferably includes information regarding the number of previous visits that were made to the store by the customers of January 1.

Additionally, the manager of the store may also receive a second report 234 from system 210 which includes a segmentation of returning customers who have entered the store over the course of January 1. The segmentation may be according to any of the adjectives stored in database 218, such as gender, age group, ethnicity and mood. It is appreciated that reports 230 and 234 may be useful, for example, for planning targeted marketing campaigns, or for evaluating the success of previously executed marketing campaigns.
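By way of illustration only, the segmentation behind reports 230 and 234 might be computed as in the following sketch; the visitor record fields and example data are invented for illustration.

    # Illustrative sketch: group the day's visitors by any stored adjective
    # (gender, age group, ethnicity, mood) and, for the returning-customer
    # report, keep only visitors whose visit count exceeds one.

    from collections import Counter

    visitors = [
        {"gender": "female", "age group": "adult", "mood": "cheerful", "visits": 3},
        {"gender": "male", "age group": "senior", "mood": "neutral", "visits": 1},
        {"gender": "female", "age group": "adult", "mood": "neutral", "visits": 2},
    ]

    def segment(records, adjective):
        return Counter(r[adjective] for r in records)

    report_230 = segment(visitors, "age group")                      # all visitors
    report_234 = segment([v for v in visitors if v["visits"] > 1], "age group")
    print(report_230)  # Counter({'adult': 2, 'senior': 1})
    print(report_234)  # Counter({'adult': 2})
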
Reference is now made to Figs. 3A and 3B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. As seen in Fig. 3A, on a particular day such as January 1, a customer of the AAA Department Store enters the store and browses merchandise in the store's toy department. A digital camera 250 mounted in the toy department captures a facial image 252 of the customer. As shown in Fig. 3A, additional digital cameras are preferably mounted throughout the various departments of the store.

Facial image 252 is transmitted to a computerized person identification system 260 which includes face model generation functionality 262, image-to-attributes mapping functionality 264, attributes-to-image mapping functionality 266 and a value combiner 267. Computerized person identification system 260 also preferably includes a customer database 268, which preferably stores values of facial attributes of all customers who have entered the store during the day, and information indicating which of the store's departments each customer visited. It is appreciated that database 268 may be any suitable computerized information store.

Face model generation functionality 262 is operative to generate a face model 270, which corresponds to facial image 252. It is appreciated that face model generation functionality 262 may employ any suitable method of face model generation known in the art. As seen in Fig. 3A, face model 270 generated by face model generation functionality 262 and corresponding to facial image 252 is stored in database 268 as one of the attributes of the customer of facial image 252.
In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 264 is operative to assign values represented by adjectives 272 to a plurality of facial attributes of facial image 252. The adjectives 272 representing the facial attributes may include, for example, adjectives describing age group, gender, ethnicity, face shape, mood and general appearance. As seen in Fig. 3A, adjectives generated by attributes mapping functionality 264 which correspond to facial image 252 are stored in database 268 as values of attributes of the customer of facial image 252.

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 266 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual. It is appreciated that the collection of values may also include non-physical characteristics of the customer's appearance, such as clothing type and color, which may be used to identify an individual within a short period of time in a case where current values of facial attributes are not available.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 267 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Additionally, system 260 records in database 268 that the department which the customer has visited is the toy department.
Turning now to Fig. 3B, it is shown that at closing time, such as at 5:00 PM on January 1, the manager of the store preferably receives a report 280 from system 260 which includes a segmentation of customers who have entered the store's toy department over the course of January 1. The segmentation may be according to any of the adjectives stored in database 268, such as gender, age group, ethnicity and mood. It is appreciated that report 280 may be useful, for example, for planning targeted marketing campaigns, or for evaluating the success of previously executed marketing campaigns.
Reference is now made to Figs. 4A, 4B and 4C, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. As shown in Fig. 4A, on January 1, a potential attendee registers to attend the florists' annual conference, preferably via a computer 300. As part of the registration process, the potential attendee is preferably prompted to enter personal identification details, such as his full name, and to upload at least one facial image 302 of himself. Alternatively, the potential attendee may choose to import personal identification details and one or more facial images, for example, from a pre-existing personal social network account.

The personal identification details and facial image 302 are transmitted to a computerized conference registration system 310 which preferably includes face model generation functionality 312, image-to-attributes mapping functionality 314, attributes-to-image mapping functionality 316 and a value combiner 317. Computerized conference registration system 310 also preferably includes a database 318 which stores registration details and values of facial attributes of all registered attendees. It is appreciated that database 318 may be any suitable computerized information store.
Face model generation functionality 312 is operative to generate a face model 320, which corresponds to facial image 302. It is appreciated that face model generation functionality 312 may employ any suitable method of face model generation known in the art. As seen in Fig. 4A, face model 320 generated by face model generation functionality 312 and corresponding to facial image 302 is stored in database 318 as one of the attributes of potential attendee Mr. Jones.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 314 is operative to assign values represented by adjectives 322 to a plurality of facial attributes of facial image 302. The adjectives representing the facial attributes may include, for example, adjectives describing hair color, nose shape, skin color, face shape, and the type and presence or absence of facial hair. As seen in Fig. 4A, adjectives generated by attributes mapping functionality 314, which correspond to facial image 302, are stored in database 318 as values of attributes of potential attendee Mr. Jones.
Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 316 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 317 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.
Turning now to Fig. 4B, it is seen that on a later date, such as on January 17, an attendee enters the florists' annual conference and approaches a registration booth 330 on the conference floor. Registration booth 330 includes a digital camera 332 which captures a facial image 334 of the attendee. Facial image 334 is transmitted to computerized conference registration system 310 where a face model 340 corresponding to facial image 334 is preferably generated by face model generation functionality 312. Additionally, values 342, represented by adjectives, are preferably assigned to a plurality of facial attributes of facial image 334 by image-to-attributes mapping functionality 314.

As shown in Fig. 4B, face model 340 and values 342 are preferably combined by value combiner 317 into a combined collection of values, which is compared to the collections of values stored in database 318, and are found to match the face model and values assigned to Mr. Jones, thereby identifying the person portrayed in facial image 334 captured by camera 332 as being Mr. Jones. It is appreciated that the collection of values combined by value combiner 317 and which are compared to the collections of values stored in database 318 may be any subset of face model 340 and adjectives 342. Upon being identified as Mr. Jones, the attendee's registration is completed and the attendee is welcomed by the conference staff.

Turning now to Fig. 4C, it is shown that while attending the conference, attendees who wish to be introduced to other attendees allow other attendees to capture a facial image 350 of them, using, for example, a digital camera embedded in a mobile communicator device 352. Mobile communicator devices 352 of conference attendees are granted access to computerized conference registration system 310 via a computer network. It is appreciated that the computer network may be, for example, a local computer network or the internet.
Additionally or alternatively, an attendee may access computerized conference registration system 310 to register new, currently unregistered attendees to the conference, by capturing a facial image of the new attendee and transmitting the facial image, preferably together with associated personal identification information, to registration system 310.

Upon capturing image 350 of a conference attendee, mobile communicator device 352 transmits image 350 over the computer network to computerized conference registration system 310 where a face model 360 corresponding to facial image 350 is preferably generated by face model generation functionality 312. Additionally, values 362 represented by adjectives are preferably assigned to a plurality of facial attributes of facial image 350 by image-to-attributes mapping functionality 314.

As shown in Fig. 4C, face model 360 and values 362 are combined by value combiner 317 into a combined collection of values, which is compared to the collections of values stored in database 318, and are found to match the face model and values assigned to Mr. Jones, thereby identifying the person portrayed in facial image 350 captured by mobile communicator device 352 as being Mr. Jones. It is appreciated that the collection of values combined by value combiner 317 and which are compared to the collections of values stored in database 318 may be any subset of face model 360 and adjectives 362. Notification of the identification of the attendee portrayed in image 350 as Mr. Jones is transmitted by computerized conference registration system 310 back to mobile communicator device 352, which notification enables the operator of mobile communicator device 352 to know that he is approaching Mr. Jones.
Reference is now made to Figs. 5A and 5B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. In the embodiment of Figs. 5A and 5B, a relationship coefficient which measures the relationship between a person and a context is employed. The context may be, for example, a geographic location or an event, and the relationship coefficient comprises a value and a predefined decay function. A single person may have a relationship coefficient with multiple contexts simultaneously. The relationship coefficient can be used, for example, to predict the probability of a person being at a given location at a particular time.

The decay function may be any mathematical function. For example, the decay function for a geographical location may be a linear function, representing the tendency of a person to gradually and linearly distance himself from the location over time. The decay function for a one-time event may be, for example, an exponential decay function.

While a person is within a particular context, the current value of the generated relationship coefficient between the person and the context is set to be high. Each time the person is repeatedly sighted within the context, the value of the relationship coefficient is increased, potentially in an exponential manner.
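By way of a non-limiting illustration, a relationship coefficient holding a value and a decay function, with sightings raising the value, might be sketched as follows; the time constants, the doubling growth rule and the class interface are assumptions made for illustration only.

    # Illustrative sketch (assumed formulas): a relationship coefficient with a
    # value and a decay function. Linear decay suits a geographic location,
    # exponential decay a one-time event; each repeated sighting raises the value.

    import math
    import time

    class RelationshipCoefficient:
        def __init__(self, decay, value=1.0):
            self.base_value = value
            self.decay = decay                 # maps elapsed seconds to [0, 1]
            self.last_sighting = time.time()

        def current_value(self, now=None):
            elapsed = (now or time.time()) - self.last_sighting
            return self.base_value * self.decay(elapsed)

        def register_sighting(self):
            # A repeated sighting within the context strengthens the relationship.
            self.base_value *= 2.0             # assumed growth rule
            self.last_sighting = time.time()

    linear = lambda t: max(0.0, 1.0 - t / (30 * 24 * 3600))   # fades over ~30 days
    exponential = lambda t: math.exp(-t / (7 * 24 * 3600))    # ~one-week time constant

    cafe = RelationshipCoefficient(decay=linear)         # geographic location
    conference = RelationshipCoefficient(decay=exponential)  # one-time event
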
It is appreciated that contexts may be hierarchical. For example, a
geographic location may be within a larger geographical area such as a city or
a country.
Therefore, a person who has a relationship coefficient with a particular
geographic
location will also have a lower relationship coefficient with all other
geographical
locations hierarchical thereto, which decreases as a function of the distance
between the
particular geographic location and the related hierarchical geographic
locations.
It is also appreciated that relationship coefficient of different people may
be at least partially interdependent. For example, a first person who has been
sighted
together with a second person at multiple locations at multiple times will be
assigned a
relatively high relationship coefficient to a new location where the second
person has
been sighted.
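This interdependence might be sketched as follows; the affinity formula and its cap are assumptions made for illustration:

```python
def co_sighting_coefficient(partner_value, joint_sightings, cap=0.9):
    """Coefficient assigned to a first person for a new location where a
    frequently co-sighted second person has been seen; the implied affinity
    grows with the number of joint sightings (illustrative formula)."""
    affinity = joint_sightings / (joint_sightings + 1.0)  # approaches 1.0
    return min(affinity, cap) * partner_value
```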
As seen in Fig. 5A, on a particular day such as January 1, 2011, a diner
dines at Café Jaques, which is in close proximity to the Eiffel Tower in Paris,
France. A
friend of the diner captures a facial image 400 of the diner using a digital
camera which
is part of a handheld mobile device 402 and registers the sighting of the
diner by
transmitting facial image 400 together with an associated time and location
over the
internet to a computerized person identification system 410. The location may
be
provided, for example, by a GPS module provided with device 402.
Alternatively, the
location may be retrieved, for example, from a social network. Using the
associated
time and location, a relationship coefficient which relates the diner to the
location is
generated as described hereinabove.

Computerized person identification system 410 includes face model
generation functionality 412, image-to-attributes mapping functionality 414,
attributes-
to-image mapping functionality 416 and a value combiner 417. Computerized
person
identification system 410 also preferably includes a sightings database 418
which
preferably stores values of facial attributes of all persons who have been
sighted and
registered, together with an associated time and location. It is appreciated
that database
418 may be any suitable computerized information store.
Face model generation functionality 412 is operative to generate a face
model 420, which corresponds to facial image 400. It is appreciated that face
model
generation functionality 412 may employ any suitable method of face model
generation
known in the art. As seen in Fig. 5A, face model 420 generated by face model
generation functionality 412 and corresponding to facial image 400 is stored
in database
418 as one of the attributes of the individual of facial image 400.
In accordance with a preferred embodiment of the present invention,
image-to-attributes mapping functionality 414 is operative to assign values
represented
by adjectives 422 to a plurality of facial attributes of facial image 400. The
adjectives
422 representing the facial attributes may include, for example, adjectives
describing
age group, gender, ethnicity, face shape, mood and general appearance. As seen
in Fig.
5A, adjectives generated by attributes mapping functionality 414 which
correspond to
facial image 400 are stored in database 418 as values of attributes of the
individual of
facial image 400. Additionally, the time and location associated with facial
image 400
are also stored in database 418.
Further in accordance with a preferred embodiment of the present
invention, attributes-to-image mapping functionality 416 is operative to
utilize a
collection of values of facial attributes to identify a corresponding stored
facial image,
and thereby to identify a particular individual. It is appreciated that the
collection of
values may also include non-facial characteristics of the individual's
appearance such
as clothing type and color which may be used to identify an individual within
a short
period of time in a case where current values of facial attributes are not
available.
Yet further in accordance with a preferred embodiment of the present
invention, value combiner 417 preferably is operative to combine a face model
and a
collection of values of facial attributes into a combined collection of values
which can
be matched to a corresponding stored collection of values, and thereby to
identify a
particular individual.
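A minimal sketch of the combine-and-match step, assuming the face model and the attribute values are both available as numerical vectors; the per-part weighting and the use of Euclidean distance are assumptions, and any subset of the values may be used, as noted above:

```python
import numpy as np

def combine(face_model_vec, attribute_values, weights=(1.0, 1.0)):
    # Concatenate the face-model vector and the attribute values into one
    # combined collection of values; the per-part weights are an assumption.
    w_face, w_attr = weights
    return np.concatenate([w_face * np.asarray(face_model_vec, dtype=float),
                           w_attr * np.asarray(attribute_values, dtype=float)])

def best_match(query, database):
    # Compare the combined vector against every stored collection of values
    # and return the identity whose stored vector lies nearest.
    best_id, best_distance = None, float("inf")
    for person_id, stored in database.items():
        distance = np.linalg.norm(query - np.asarray(stored, dtype=float))
        if distance < best_distance:
            best_id, best_distance = person_id, distance
    return best_id, best_distance
```

In practice a threshold on the best distance would decide whether a match is reported at all.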
Turning now to Fig. 5B, it is shown that on a later date, such as on
February 1, 2011, a diner dines at Café Jaques which is in close proximity to
the Eiffel
Tower in Paris, France. A bystander captures a facial image 450 of the diner
using a
digital camera which is part of a handheld mobile device 452 and registers the
sighting
of the diner by transmitting facial image 450 together with an associated time
and
location over the internet to a computerized person identification system 410,
where a
face model 460, corresponding to facial image 450, is preferably generated by
face
model generation functionality 412. Additionally, values 462 represented by
adjectives
are preferably assigned to a plurality of facial attributes of facial image
450 by image-
to-attributes mapping functionality 414.
As shown in Fig. 5B, face model 460, values 462 and the time and
location associated with facial image 450 are preferably combined by value
combiner
417 into a combined collection of values, which is compared to the collections
of values
stored in database 418, and are found to match the combined values assigned to
the
diner who was last seen at the Eiffel Tower on January 1, 2011. It is
appreciated that the
collection of values combined by value combiner 417 and which are compared to
the
collections of values stored in database 418 may be any subset of face model
460 and
adjectives 462. Notification of the identification of the diner portrayed in
image 450 is
transmitted over the internet by computerized person identification system 410
back to
mobile communicator device 452.
It is a particular feature of this embodiment of the present invention that
the relationship coefficient which relates the diner to the location may also
be used as an
attribute value which increases the reliability of the identification of the
diner.
It is a particular feature of the present embodiment of the current
invention that the combination of the values of the facial attributes
associated with a
facial image together with additional information such as a particular
location
frequented by an individual is operative to more effectively identify
individuals at the
particular location or at related locations, such as at other locations which
are in close
proximity to the particular location.
It is another particular feature of the present embodiment of the current
invention that identification of individuals is not limited to precise
identification of particular individuals based on personal identification
information such as first and last name, but rather also includes
identification of individuals according to facial attributes and aggregation
of behavioral information pertaining to the individuals.
Reference is now made to Fig. 6, which is a simplified illustration of a
user satisfaction monitoring system employing image-to-text association in
accordance
with yet another preferred embodiment of the present invention. As shown in
Fig. 6, a
viewer uses a multimedia viewing device 480 to view computerized content 482.
It is
appreciated that device 480 may be, for example, a television device or a
computer.
Content 482 may be, for example, a video clip, a movie or an advertisement.
A digital camera 484 connected to multimedia viewing device 480
preferably captures a facial image 486 of the viewer at predefined intervals
such as, for
example, every few seconds, and preferably transmits images 486 over the
internet to an
online computerized content satisfaction monitoring system 490. Alternatively,
images
486 may be monitored, stored and analyzed by suitable functionality embedded
in
device 480.
Preferably, system 490 includes image-to-attributes mapping
functionality 492 and a viewer expressions database 494. It is appreciated
that database
494 may be any suitable computerized information store.
In accordance with a preferred embodiment of the present invention,
image-to-attributes mapping functionality 492 is operative to assign a value
represented
by an adjective 496 to the expression of the viewer as captured in facial
images 486, and
to store adjectives 496 in database 494. Adjectives 496 may include, for
example,
"happy", "sad", "angry", "content" and "indifferent". It is appreciated that
adjectives
496 stored in database 494 may be useful, for example, for evaluating the
effectiveness
of content 482.
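The capture-classify-store loop described above might look as follows; the camera interface, the expression classifier and the storage object are all hypothetical stand-ins:

```python
import time

def monitor_viewer(camera, classify_expression, store, content_id,
                   interval_seconds=5.0):
    """Sketch of the Fig. 6 flow: capture a facial image at predefined
    intervals, map the viewer's expression to an adjective and record it
    against the content being viewed (all collaborators are assumed)."""
    while True:
        image = camera.capture()                 # hypothetical camera API
        adjective = classify_expression(image)   # e.g. "happy", "indifferent"
        store.append({"content": content_id,
                      "time": time.time(),
                      "expression": adjective})
        time.sleep(interval_seconds)
```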
Reference is now made to Fig. 7, which is a simplified illustration of an
image/text/image database generation methodology useful in building a database
employed in the systems of Figs. 1A - 6. As shown in Fig. 7, a plurality of
images 500
are collected from an image repository 502 which is publicly available on the
internet,
by a computerized person identification training system 510. Image repository
502 may
be, for example, a publicly available social network or textual search engine
which
associates text with images appearing on the same page as the images or on one
or more
nearby pages. Preferably, one or more associated characteristics are provided
by the
image repository with each of images 500. The characteristics may include, for
example, a name, age or age group, gender, general appearance and mood, and
are
generally subjective and are associated with the images by the individuals who
have
publicized the images or by individuals who have tagged the publicized images
with
comments which may include such characteristics.
Computerized person identification training system 510 first analyzes
each of the characteristics associated with each of images 500 and translates
each such
suitable characteristic to an attribute value. For each such value, system 510
then sends
each of images 500 and its associated attribute value to a crowdsourcing
provider, such
as Amazon Mechanical Turk, where a plurality of individuals voice their
opinion as to
the level of correspondence of each image with its associated attribute value.
Upon
receiving the crowdsourcing results for each image-attribute value pair,
system 510
stores those attribute values which received a generally high correspondence
level with
their associated image in a database 520.
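The filtering step might be sketched as follows; the vote structure and the 0.8 threshold are assumptions, since the text requires only a generally high correspondence level:

```python
def filter_by_crowd_agreement(pairs, crowd_votes, threshold=0.8):
    """Keep only the image/attribute-value pairs whose crowdsourced level of
    correspondence with their image is high enough to store in the database.

    pairs       -- iterable of (image_id, attribute, value) tuples
    crowd_votes -- hypothetical mapping: pair -> list of 0/1 worker votes
    """
    kept = []
    for pair in pairs:
        votes = crowd_votes.get(pair, [])
        if votes and sum(votes) / len(votes) >= threshold:
            kept.append(pair)
    return kept
```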
Reference is now made to Fig. 8, which is a simplified flow chart
illustrating a training process for associating adjectives with images. As
seen in Fig. 8,
an adjective defining a facial attribute is chosen by the system from a list
of adjectives
to be trained, and one or more publicly available textual search engines are
preferably
employed to retrieve images which are associated with the adjective.
Additionally, one
or more publicly available textual search engines are preferably employed to
retrieve
images which are associated with one or more translations of the adjective in
various
languages. The list of adjectives may be compiled, for example, by collecting
adjectives
from a dictionary.
A visual face detector is employed to identify those retrieved images
which include a facial image. Crowdsourcing is then preferably employed to
ascertain,
based on a majority vote, which of the facial images correspond to the
adjective. The
adjective and corresponding facial images are then used to train a visual
classifier, as
described hereinbelow with regard to Fig. 9. The visual classifier is then
employed to
associate the adjective with an additional set of facial images, and
crowdsourcing is
further employed to ascertain the level of correspondence of each of the
additional set of
facial images with the adjective, the results of which are used to further
train the visual
classifier. It is appreciated that additional cycles of crowdsourcing and
training of the
visual classifier may be employed to further refine the accuracy of the visual
classifier,
until a desired level of accuracy is reached. After the training of the visual
classifier, the
classifier is added to a bank of attribute functions which can later be used
by the system
to classify facial images by adjectives defining facial attributes.
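The overall loop of Fig. 8 might be sketched as follows; every helper is a hypothetical stand-in: search_images queries public search engines (including translations of the adjective), has_face runs a visual face detector, crowd_verify collects majority-vote judgments, and fit trains the classifier of Fig. 9:

```python
def train_adjective_classifier(adjective, search_images, has_face,
                               crowd_verify, fit, target_accuracy=0.9,
                               max_rounds=3):
    """Sketch of the Fig. 8 training loop (all helpers are assumed)."""
    images = [im for im in search_images(adjective) if has_face(im)]
    positives = crowd_verify(adjective, images)     # majority-vote filtering
    classifier, accuracy = fit(adjective, positives)
    rounds = 0
    while accuracy < target_accuracy and rounds < max_rounds:
        # Let the current classifier propose an additional set of facial
        # images, have the crowd grade their correspondence, and retrain.
        candidates = [im for im in search_images(adjective)
                      if has_face(im) and classifier(im)]
        positives += crowd_verify(adjective, candidates)
        classifier, accuracy = fit(adjective, positives)
        rounds += 1
    return classifier   # added to the bank of attribute functions
```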
Reference is now made to Fig. 9, which is a simplified flow chart
illustrating the process of training a visual classifier. As shown in Fig. 9,
for each
adjective, the results of the crowdsourcing process described hereinabove with
regard to
Fig. 8 are employed to generate two collections of images. A first, "positive"
collection
includes images which have been ascertained to correspond to the adjective,
and a
second, "negative" collection includes images which have not been ascertained
to
correspond to the adjective.
The images of both the positive and the negative collection are then
normalized to compensate for varying 2-dimensional and 3-dimensional
alignment and
differing illumination, thereby transforming each of the images into a
canonical image.
The canonical images are then converted into canonical numerical vectors, and
a
classifier is learned from a training set comprising pairs of positive and
negative
numerical vectors using a supervised classifier, such as a Support Vector
Machine
(SVM).
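Assuming the canonical numerical vectors have already been computed, the final training step maps directly onto a standard SVM implementation; a minimal sketch using scikit-learn:

```python
import numpy as np
from sklearn.svm import SVC

def train_visual_classifier(positive_vectors, negative_vectors):
    """Learn a classifier from the canonical numerical vectors of the
    crowd-verified positive and negative image collections (Fig. 9)."""
    X = np.vstack([np.asarray(positive_vectors),
                   np.asarray(negative_vectors)])
    y = np.concatenate([np.ones(len(positive_vectors)),
                        np.zeros(len(negative_vectors))])
    clf = SVC(kernel="linear", probability=True)   # kernel choice is assumed
    clf.fit(X, y)
    return clf
```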
Reference is now made to Fig. 10, which is a simplified flow chart
illustrating a process for retrieving adjectives associated with an image. As
shown in
Fig. 10, the image is first analyzed to detect and crop a facial image which
is a part of
the image. The facial image is then converted to a canonical numerical vector
by
normalizing the image to compensate for varying 2-dimensional and 3-
dimensional
pose-alignment and differing illumination. The bank of attribute functions
described
hereinabove with regard to Fig. 8 is then applied to the numerical vector, and
the value
returned from each attribute function is recorded in a numerical vector which
represents
the adjectives associated with the facial image.
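A sketch of this pipeline, with the detection and normalization steps represented by hypothetical helpers and the bank of attribute functions as a mapping from adjective to trained function:

```python
import numpy as np

def adjectives_for_image(image, detect_and_crop, to_canonical_vector,
                         attribute_bank):
    """Sketch of the Fig. 10 flow (detect_and_crop and to_canonical_vector
    are assumed stand-ins for the steps described in the text)."""
    face = detect_and_crop(image)         # detect and crop the facial image
    vector = to_canonical_vector(face)    # pose and illumination normalization
    # Record the value returned by each attribute function; the resulting
    # vector represents the adjectives associated with the facial image.
    return np.array([fn(vector) for fn in attribute_bank.values()])
```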

Reference is now made to Fig. 11, which is a simplified flow chart
illustrating a process for retrieving images from a pre-indexed database of
images,
which are associated with one or more adjectives. As shown in Fig. 11, a
textual query
for images having adjectives associated therewith is first composed. Using
Natural
Language Processing (NLP), adjectives are extracted from the textual query.
The system
then retrieves images from a previously processed database of facial images
which are
best-matched to the adjectives extracted from the query, preferably by using
Latent
Dirichlet Allocation (LDA). The retrieved facial images are ordered by the
level of
correlation of their associated numerical vectors to the adjectives extracted
from the
query, and the resulting ordered facial images are provided as output of the
system.
Reference is now made to Fig. 12, which is a simplified flow chart
illustrating a process for retrieving facial images which are similar to a
first image. As
shown in Fig. 12, the first image is first analyzed to detect and crop a
facial image
which is a part of the image. The facial image is then converted to a
canonical numerical
vector by normalizing the image to compensate for varying 2-dimensional and 3-
dimensional pose-alignment and differing illumination. The bank of attribute
functions
described hereinabove with regard to Fig. 8 is then applied to the numerical
vector, and
the value returned from each attribute function is recorded in a numerical
vector which
represents the adjectives associated with the facial image.
A previously indexed database comprising numerical vectors of images,
such as a KD tree, is searched using a similarity function, such as Euclidean
distance, to
find a collection of numerical vectors which represent images which closely
match the
numerical vector of the first image.
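A minimal sketch of this nearest-neighbor search using SciPy's KD-tree; in practice the tree would be built once when the database is indexed rather than per query:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_similar(query_vector, indexed_vectors, image_ids, k=5):
    """Search previously indexed numerical vectors for the images closest to
    the query vector under Euclidean distance (Fig. 12)."""
    tree = cKDTree(np.asarray(indexed_vectors))
    distances, indices = tree.query(np.asarray(query_vector), k=k)
    return [(image_ids[i], d)
            for i, d in zip(np.atleast_1d(indices), np.atleast_1d(distances))]
```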
It will be appreciated by persons skilled in the art that the present
invention is not limited by what has been particularly shown and described
hereinabove.
Rather the scope of the present invention includes both combinations and
subcombinations of the various features described hereinabove as well as
modifications
thereof which would occur to persons skilled in the art upon reading the
foregoing
description and which are not in the prior art.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refers to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Time Limit for Reversal Expired 2023-10-03
Letter Sent 2023-03-31
Letter Sent 2022-10-03
Letter Sent 2022-03-31
Inactive: IPC expired 2022-01-01
Letter Sent 2021-03-31
Revocation of Agent Requirements Determined Compliant 2020-09-22
Revocation of Agent Request 2020-07-13
Common Representative Appointed 2019-10-30
Common Representative Appointed 2019-10-30
Revocation of Agent Request 2019-04-25
Revocation of Agent Requirements Determined Compliant 2019-04-25
Inactive: IPC expired 2019-01-01
Grant by Issuance 2017-08-08
Inactive: Cover page published 2017-08-07
Pre-grant 2017-06-27
Inactive: Final fee received 2017-06-27
Notice of Allowance is Issued 2017-06-15
Letter Sent 2017-06-15
Notice of Allowance is Issued 2017-06-15
Inactive: Approved for allowance (AFA) 2017-06-01
Inactive: Q2 passed 2017-06-01
Amendment Received - Voluntary Amendment 2017-01-18
Inactive: S.30(2) Rules - Examiner requisition 2016-10-11
Inactive: Report - No QC 2016-10-07
Inactive: Office letter 2016-08-17
Inactive: Office letter 2016-08-17
Revocation of Agent Requirements Determined Compliant 2016-06-16
Revocation of Agent Request 2016-06-16
Inactive: Office letter 2016-06-03
Revocation of Agent Request 2016-05-26
Letter Sent 2015-10-15
Request for Examination Received 2015-10-01
Request for Examination Requirements Determined Compliant 2015-10-01
All Requirements for Examination Determined Compliant 2015-10-01
Inactive: Cover page published 2013-10-15
Letter Sent 2013-09-18
Letter Sent 2013-09-18
Inactive: First IPC assigned 2013-09-17
Inactive: IPC assigned 2013-09-17
Application Received - PCT 2013-09-16
Inactive: Notice - National entry - No RFE 2013-09-16
Inactive: IPC assigned 2013-09-16
Inactive: First IPC assigned 2013-09-16
Correct Applicant Request Received 2013-08-14
Inactive: Single transfer 2013-08-02
Inactive: Reply to s.37 Rules - PCT 2013-08-02
National Entry Requirements Determined Compliant 2013-08-01
Application Published (Open to Public Inspection) 2012-08-09

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2017-03-07

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
FACEBOOK, INC.
Past Owners on Record
EDEN SHOCHAT
GIL HIRSCH
YANIV TAIGMAN
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.



Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Description 2013-07-31 26 1,405
Claims 2013-07-31 12 471
Abstract 2013-07-31 1 73
Representative drawing 2013-09-16 1 31
Drawings 2013-07-31 18 882
Description 2017-01-17 26 1,390
Claims 2017-01-17 6 220
Representative drawing 2017-07-06 1 28
Notice of National Entry 2013-09-15 1 194
Courtesy - Certificate of registration (related document(s)) 2013-09-17 1 102
Courtesy - Certificate of registration (related document(s)) 2013-09-17 1 102
Acknowledgement of Request for Examination 2015-10-14 1 174
Commissioner's Notice - Application Found Allowable 2017-06-14 1 164
Commissioner's Notice - Maintenance Fee for a Patent Not Paid 2021-05-11 1 536
Commissioner's Notice - Maintenance Fee for a Patent Not Paid 2022-05-11 1 551
Courtesy - Patent Term Deemed Expired 2022-11-13 1 536
Commissioner's Notice - Maintenance Fee for a Patent Not Paid 2023-05-11 1 550
PCT 2013-07-31 24 1,652
Correspondence 2013-08-13 3 107
Correspondence 2013-08-01 2 51
Fees 2013-07-31 1 49
Request for examination 2015-09-30 1 48
Correspondence 2016-05-25 16 886
Courtesy - Office Letter 2016-06-02 2 50
Request for Appointment of Agent 2016-06-02 1 36
Correspondence 2016-06-15 16 814
Courtesy - Office Letter 2016-08-16 15 733
Courtesy - Office Letter 2016-08-16 15 732
Examiner Requisition 2016-10-10 3 192
Amendment / response to report 2017-01-17 9 312
Final fee 2017-06-26 1 45