Patent 2635068 Summary

(12) Patent Application: (11) CA 2635068
(54) English Title: EMULATING COSMETIC FACIAL TREATMENTS WITH DIGITAL IMAGES
(54) French Title: EMULATION DE TRAITEMENTS ESTHETIQUES DU VISAGE AVEC DES IMAGES NUMERIQUES
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 5/00 (2006.01)
(72) Inventors :
  • AARABI, PARHAM (Canada)
(73) Owners :
  • AARABI, PARHAM (Canada)
(71) Applicants :
  • AARABI, PARHAM (Canada)
(74) Agent:
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2008-06-13
(41) Open to Public Inspection: 2009-03-18
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
60/994,146 United States of America 2007-09-18

Abstracts

English Abstract

An initial digital image of a person's face is transformed into a final digital image in which facial wrinkles and minor anomalies are reduced. A region of the initial image is selected in which facial wrinkles and minor anomalies are to be reduced. Pixels in the selected region are processed by selecting a target pixel, selecting a set of pixels, preferably randomly, that immediately surround the target pixel, examining the set to determine a maximum pixel brightness value, and adjusting the brightness value of the target pixel to correspond to the maximum value. The process is repeated with different target pixels until substantially all pixels within the selected region have been processed. Methods are also provided for transforming facial features to emulate the effects of cosmetic procedures, and various methods are applied to the operation of a digital camera to produce aesthetically enhanced images.


Claims

Note: Claims are shown in the official language in which they were submitted.



THE EMBODIMENTS OF AN INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method of converting an initial digital image of a person's face
into a final digital image in which facial wrinkles are reduced, the method
comprising:

selecting a region of the initial image in which facial wrinkles are
to be reduced;

processing pixels in the selected region, the processing comprising
(a) selecting a target pixel within the selected region,

(b) selecting a set of pixels within the selected region that immediately
surround the target pixel,

(c) examining the set to determine a maximum pixel brightness value
for the set,

(d) adjusting the brightness value of the target pixel to correspond to
the maximum value if the maximum value exceeds the brightness value of the
target pixel, and

(e) repeating steps (a) through (d) with different target pixels until
substantially all pixels within the selected region have been processed.
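The pixel-processing loop of steps (a) through (e) can be sketched as follows. This is a hypothetical Python illustration only; the claim specifies no implementation, and the dictionary-of-positions image representation, the function name `reduce_wrinkles`, and the `window`/`samples` parameters are assumptions introduced for the sketch (the random subset reflects claim 2, the square subregion reflects claims 3 and 4).

```python
import random

def reduce_wrinkles(image, region, window=5, samples=8):
    """Brighten dark (wrinkle) pixels toward the local brightness maximum.

    image  : dict mapping (x, y) -> brightness value
    region : iterable of (x, y) target positions to process
    """
    result = dict(image)
    half = window // 2
    for (x, y) in region:
        # (b) candidate neighbours in a square subregion centred on the target
        neighbours = [(x + dx, y + dy)
                      for dx in range(-half, half + 1)
                      for dy in range(-half, half + 1)
                      if (dx, dy) != (0, 0) and (x + dx, y + dy) in image]
        if not neighbours:
            continue
        # preferably a random subset of the surrounding pixels (claim 2)
        chosen = random.sample(neighbours, min(samples, len(neighbours)))
        # (c) maximum brightness over the chosen set
        peak = max(image[p] for p in chosen)
        # (d) only raise the target pixel; never darken it
        if peak > image[(x, y)]:
            result[(x, y)] = peak
    return result
```

Because wrinkles appear as locally dark pixels, raising each pixel toward its neighbourhood maximum lightens the creases while leaving already-bright skin unchanged.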

2. The method of claim 1 in which the set is randomly selected from
among the pixels immediately surrounding the target pixel.

3. The method of claim 1 in which the selecting of the set of pixels in
step (b) comprises:

selecting a subregion of the selected region that is substantially
square and substantially centered about the target pixel; and,

selecting the set from among the pixels in the subregion.


4. The method of claim 3 in which the subregion is selected to have a
width in pixels of about 1% to about 10% of the width of the face in pixels.

5. The method of claim 4 in which the set is randomly selected from
among pixels within the subregion.

6. The method of claim 1 comprising:

detecting one or more facial features within the selected region of
the initial digital image, the one or more facial features being one or more
of the
group consisting of eyes, nose, mouth, eyebrows and hair; and,

superimposing the detected one or more facial features in the initial
image onto the final image.

7. The method of claim 1 comprising smoothing the final image after
the processing of pixels, the smoothing comprising:

further processing pixels in the selected region, the further
processing comprising

(f) selecting a target pixel within the selected region,

(g) selecting a set of pixels located within the selected region that
immediately surround the target pixel,

(h) examining the set to determine an average pixel brightness value
for the set,

(i) adjusting the brightness value of the target pixel to correspond to
the average value, and

(j) repeating steps (f) through (i) with different target pixels until
substantially all pixels within the selected region have been further
processed.
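The averaging pass of steps (f) through (j) differs from the first pass only in replacing the maximum with a mean, applied unconditionally. A minimal sketch, under the same assumed dictionary representation as before (the name `smooth_region` is hypothetical):

```python
def smooth_region(image, region, window=3):
    """Set each target pixel to the average brightness of the pixels
    immediately surrounding it (claim 7, steps (f)-(j))."""
    result = dict(image)
    half = window // 2
    for (x, y) in region:
        neighbours = [image[(x + dx, y + dy)]
                      for dx in range(-half, half + 1)
                      for dy in range(-half, half + 1)
                      if (dx, dy) != (0, 0) and (x + dx, y + dy) in image]
        if neighbours:
            # (h)-(i) unconditional adjustment to the neighbourhood average
            result[(x, y)] = sum(neighbours) / len(neighbours)
    return result
```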


8. The method of claim 7 comprising:

detecting one or more facial features within the selected region of
the initial digital image, the one or more facial features being one or more
of the
group consisting of eyes, nose, mouth, eyebrows and hair; and,

superimposing the detected one or more facial features of the initial
image onto the final digital image after the further processing.

9. The method of claim 1 comprising producing a smoothed copy of
the initial image, the production of the smoothed copy comprising:

further processing pixels in the selected region, the further
processing comprising

(f) selecting a target pixel within the selected region,

(g) selecting a set of pixels located within the selected region that
immediately surround the target pixel,

(h) examining the set thereby to determine an average pixel brightness
value for the set,

(i) adjusting the brightness value of the target pixel to correspond to
the average pixel brightness value, and

(j) repeating steps (f) through (i) with different target pixels until
substantially all pixels within the selected region have been further
processed.
10. The method of claim 9 comprising mixing the smoothed copy of
the initial image with the final image thereby to smooth the final image.

11. The method of claim 10 comprising:

detecting one or more facial features within the selected region of
the initial digital image, the one or more facial features being one or more
of the


group consisting of eyes, nose, mouth, eyebrows and hair; and,

superimposing the detected one or more facial features of the initial
image onto the final digital image after the smoothing of the final image.

12. The method of claim 1 adapted to produce a digital image having a
region corresponding to the selected region of the initial image in which
locations
of the wrinkles are identified, the method comprising, for each pixel in the
corresponding region of the digital image adjusting the colour value of
each
pixel by the difference between the colour value of the corresponding pixel in
the
initial image and the colour value of the pixel in the final image.

13. The method of claim 1 adapted to produce in the final digital image
a facial feature that undergoes a desired transformation in size, position or
both
from its orientation in the initial image, the initial and final digital
images being
formed of coloured pixels located at predetermined positions on a grid, the
method comprising:

identifying the location of the facial feature in the selected region;
calculating new pixel positions for pixels in the initial digital image
that define the located feature thereby to implement the desired
transformation
and incidentally defining one or more stray pixels that are not located at the
grid
positions; and,

incorporating the colour values of the stray pixels into colour
values associated with pixels located at nearby grid positions.

14. The method of claim 13 in which the incorporating of the colour
values of the stray pixels comprises for each of the grid positions:

identifying a set of stray pixels whose pixel positions are within


one grid spacing unit from the predetermined grid position;
handling the set of stray pixels by:

(a) ignoring the set if the set is empty;

(b) if the set consists of only one member, substituting the colour value
of the one member for the colour value associated with the predetermined grid
position; or,

(c) if the set consists of two or more members, combining the colour
values of the two or more members and substituting the combined colour value
for
the colour value associated with the grid position.

15. The method of claim 14 in which the combining of colour values in
step (c) comprises forming a weighted average of the colour values of the set
members in which the colour value of each of the set members is multiplied by
a
scaling factor that varies substantially inversely with the relative distance
of the
set member from the grid position.
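The three-way handling of stray pixels in claims 14 and 15 can be sketched per grid position as follows. The dictionary-free representation and the function name `resolve_grid_value` are assumptions for illustration; the claims do not prescribe a data structure.

```python
import math

def resolve_grid_value(grid_pos, strays, original):
    """Assign a colour value to one grid position from nearby stray pixels.

    grid_pos : (x, y) integer grid position
    strays   : list of ((fx, fy), colour) stray pixels within one grid unit
    original : colour currently associated with the grid position
    """
    if not strays:                # (a) empty set: leave the position unchanged
        return original
    if len(strays) == 1:          # (b) single member: substitute its colour
        return strays[0][1]
    # (c) weighted average, each weight varying inversely with the stray
    #     pixel's distance from the grid position (claim 15)
    gx, gy = grid_pos
    weights = [1.0 / max(math.hypot(fx - gx, fy - gy), 1e-9)
               for (fx, fy), _ in strays]
    total = sum(weights)
    return sum(w * c for w, (_, c) in zip(weights, strays)) / total
```

Two strays at equal distances thus contribute equally, while a stray almost on top of the grid position dominates the average.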

16. The method of claim 1 adapted for use with a digital camera,
comprising:

the preliminary steps of capturing the initial digital image with the
camera and automatically identifying the location of the person's face in the
captured digital image to serve as the selected region;

thereafter automatically executing steps (a) through (e) of claim 1.
17. The method as claimed in claims 1 to 16 adapted to restore a colour
balance to the final picture, comprising:

(i) producing a smoothed copy of the initial digital image;
(ii) producing a smoothed copy of the final digital image;


(iii) adjusting the colour value of each pixel of the final digital
image by adding an adjustment factor corresponding to the difference between
the
colour value of the corresponding pixel in the smoothed copy of the initial
less
the colour value of the corresponding pixel in the smoothed copy of the final
image.
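Step (iii) of the colour-balance restoration is a per-pixel offset: each final-image value is shifted by the difference between the two smoothed copies. A sketch, assuming the same dictionary-style image representation used above (the name `restore_colour_balance` is hypothetical):

```python
def restore_colour_balance(final, smooth_initial, smooth_final):
    """Shift each final-image pixel by (smoothed initial - smoothed final),
    per step (iii) of claim 17.

    All three arguments are dicts mapping (x, y) -> colour value.
    """
    return {pos: value + (smooth_initial[pos] - smooth_final[pos])
            for pos, value in final.items()}
```

Because the brightness-maximizing pass tends to lighten the region overall, this offset pulls the low-frequency colour of the final image back toward that of the original.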

18. A method of manipulating an initial digital image containing a
human face so as to produce a final digital image in which a facial feature
undergoes a desired transformation in size, position or both, the initial and
final
digital images being formed of coloured pixels located at predetermined
positions
on a grid, the method comprising:

identifying the location of the facial feature in the initial digital
image;

calculating new pixel positions for pixels in the initial digital image
that define the located feature thereby to implement the desired
transformation
and incidentally defining one or more stray pixels that are not located at the
grid
positions; and,

incorporating the colour values of the stray pixels into colour
values associated with pixels located at nearby grid positions.

19. The method of claim 18 in which the incorporating of the colour
values of the stray pixels comprises for each of the grid positions:

identifying a set of stray pixels whose pixel positions are within
one grid spacing unit from the predetermined grid position;

handling the set of stray pixels by:
(a) ignoring the set if the set is empty;


(b) if the set consists of only one member, substituting the colour value
of the one member for the colour value associated with the predetermined grid
position; or,

(c) if the set consists of two or more members, combining the colour
values of the two or more members and substituting the combined colour value
for
the colour value associated with the grid position.

20. The method of claim 19 in which the combining of colour values in
step (c) comprises forming a weighted average of the colour values of the set
members in which the colour value of each of the set members is multiplied by
a
scaling factor that varies substantially inversely with the relative distance
of the
set member from the grid position.

21. The method of claim 18 in which the location of the facial features
is in a selected region that comprises a wrinkle or minor facial anomaly, the
method comprising the preliminary steps of:

processing pixels in the selected region, the processing comprising
(a) selecting a target pixel within the selected region,

(b) selecting a set of pixels within the selected region that immediately
surround the target pixel,

(c) examining the set to determine a maximum pixel brightness value
for the set,

(d) adjusting the brightness value of the target pixel to correspond to
the maximum value if the maximum value exceeds the brightness value of the
target pixel, and

(e) repeating steps (a) through (d) with different target pixels until


substantially all pixels within the selected region have been processed.

22. The method of claim 21 adapted to restore a colour balance to the
final picture, comprising:

producing a smoothed copy of the initial digital image;
producing a smoothed copy of the final digital image;

adjusting the colour value of each pixel of the final digital image
by adding an adjustment factor corresponding to the difference between the
colour
value of the corresponding pixel in the smoothed copy of initial image less
the
colour value of the corresponding pixel in the smoothed copy of the final
image.
23. The method of any one of claims 18 through 22 in which the step
of calculating new pixel positions incidentally produces empty grid positions
which are not proximate to stray pixels, the method further comprising for
each
such empty grid position interpolating the colour values of adjacent non-empty
grid positions and assigning the interpolated colour values to the empty grid
position.
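The interpolation of claim 23 fills an empty grid position from its adjacent non-empty neighbours. A minimal sketch assuming 4-connected neighbours and simple averaging (the claim says only "interpolating", so both choices, and the name `fill_empty_position`, are assumptions):

```python
def fill_empty_position(grid, pos):
    """Interpolate a colour for an empty grid position from its
    non-empty 4-neighbours (claim 23)."""
    x, y = pos
    values = [grid[p]
              for p in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
              if p in grid]
    # average of whatever adjacent grid positions hold colour values
    return sum(values) / len(values) if values else None
```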

24. The method of any one of claims 18 through 22 adapted for use
with a digital camera, in which the identifying of the location of the facial
feature
in the initial digital image comprises automatically identifying the location
of the
person's face in the captured digital image and automatically identifying the
location of one or more of the facial features within the location of the
person's
face.




Description

Note: Descriptions are shown in the official language in which they were submitted.



CA 02635068 2008-06-13

EMULATING COSMETIC FACIAL TREATMENTS
WITH DIGITAL IMAGES

FIELD OF THE INVENTION

The invention relates generally to manipulation of digital images, and
more specifically, to alteration of a digital image of a person's face so as
to simulate
the effects of various cosmetic procedures, such as lip augmentation, eyebrow
lifts,
cheek lifts, nose reduction, skin rejuvenation, and overall facelifts.

DESCRIPTION OF THE PRIOR ART

Prior art is known that deals with modification of digital images of
the human face to simulate the effects of plastic surgery, laser treatments or
other
cosmetic procedures. In that regard, reference is made to the inventor's prior
U.S.
patent application published on November 8, 2007 under serial number
11/744,668.
The earlier patent application describes both prior and new methods for
detecting a
face and detecting facial features within a face, including automatic and
semi-automatic techniques. It also describes automatic and semi-automatic
procedures
and associated means for visualizing the results of a facelift or various
modifications of particular facial features. It also provides methods for
transferring
facial features in one digital image of a face into another digital image of a
face,
such features including eyes, eyebrows, nose, mouth, lips or hair. It also
provides

methods for performing a virtual facelift on a digital image using various
blending
and colouring features to produce a more realistic appearance in the final
image. It
also provides user interfaces for selecting features to be transformed and
selecting
the degrees of such transformations. Such transformations include forehead
lifts,
eyebrow lifts, below-eye lifts, inter-brow lifts, outer cheek lifts, inner
cheek lifts, lip




augmentation, and jaw restoration or lifting. Among other things the prior
patent
application describes user interfaces for a computer system that permit
selection of
one or more facial features and the degree to which a selected transformation
affects
the selected facial features. The teachings of the earlier published U.S.
patent

application and references to which it refers are recommended to the reader,
as prior
art techniques will not be retaught in this specification.

The present specification is concerned with more specific matters
pertaining to digital image processing, such as retexturing of skin to lessen
facial
wrinkles and minor facial anomalies and managing the transformation of facial

features to more realistically reflect the effects of various cosmetic
procedures.
BRIEF SUMMARY OF THE INVENTION

In one aspect, the invention provides a method of converting an initial
digital image of a person's face into a final digital image in which facial
wrinkles
and minor facial anomalies are reduced. The method comprises selecting a
region of

the initial image in which facial wrinkles and minor facial anomalies are to
be
reduced, and processing pixels in the selected region as follows. A target
pixel
within the selected region is selected together with a set of pixels in the
selected
region that immediately surround the target pixel. The set is then examined to
determine a maximum pixel brightness value for the set. The brightness value
of the

target pixel is then set to correspond to the maximum value if that value
exceeds the
brightness value of the target pixel. The pixel processing steps are then
repeated
with different target pixels until substantially all pixels within the
selected region
have been processed.

In another aspect the invention provides a method of manipulating an



initial digital image containing a human face so as to produce a final digital
image in
which a facial feature undergoes a desired transformation in one or more
aspects
such as size, shape and position. The initial and final digital images are
formed of
coloured pixels located only at predetermined positions on a grid, which is
normally

the case with most display systems. The method involves identifying the
location of
the facial feature in the initial digital image, and then calculating new
pixel positions
for those pixels in the initial digital image that define the located feature.
The new
calculated pixel positions implement the desired transformation but
incidentally
define one or more stray pixels that are not located at the grid positions.
The colour

values of the stray pixels are then incorporated into colour values associated
with
pixels located at nearby grid positions, producing a more life-like
appearance. This
method will often be used in tandem with the method above to reduce wrinkles
that
might be associated with the part of the face being repositioned, for example,
to
remove laugh lines when performing a virtual cheek lift.

Other aspects of the invention will be apparent from a description of
preferred embodiments and will be more specifically defined in the appended
claims. For purposes of this specification, the term "pixel" should be
understood as
denoting a position in a display grid and a colour value that typically
identifies the
intensity of different colours (typically red, green and blue commonly
identified with

the acronym "RGB") applied at the particular grid position. The term
"brightness
value" as used in respect of a pixel should be understood as corresponding to
the
sum of the individual RGB values associated with the pixel.
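Under the specification's own definition, the brightness value used throughout the claims reduces to a one-line computation (the function name is introduced only for illustration):

```python
def brightness(pixel):
    """Brightness of an RGB pixel as defined in this specification:
    the sum of the individual red, green and blue channel values."""
    r, g, b = pixel
    return r + g + b
```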

DESCRIPTION OF THE DRAWINGS

The invention will be better understood with reference to drawings


in which:

Figs. 1a and 1b diagrammatically illustrate a digital image of a
face respectively before and after a skin retexturing process;

Fig. 2 diagrammatically illustrates a pixel grid associated with the
drawings of figs. 1a and 1b;

Fig. 3 is an enlarged view of the region designated 3 in fig. 2
further detailing pixels and grid positions;

Fig. 4 is a flowchart illustrating the principal steps in a virtual skin
retexturing process;

Fig. 5 is a flowchart illustrating the principal steps in a skin and
facial feature smoothing process;

Fig. 6 is a flowchart illustrating how a facial feature can be
contracted, expanded or simply moved;

Fig. 7a is a simplified example of how pixel positions are

recalculated resulting in stray pixels, which are shown as empty circles, and
empty grid positions, which are also shown as empty circles, and fig. 7b shows
how proper grid positions are ultimately achieved;

Figs. 8a and 8b diagrammatically illustrate the digital representation
of a face before and after an eyebrow lift is performed;

Fig. 9 graphically illustrates a mapping function used to calculate
pixel positions and their displacement to give effect to the eyebrow lift
apparent
from figs. 8a and 8b;

Figs. 10a and 10b diagrammatically illustrate a digital image of a
face before and after a virtual lip augmentation is performed;


Fig. 11 graphically illustrates a mapping function used to calculate
pixel positions surrounding the user's lips and their displacement to give
effect to
the augmentation of the lips apparent from figs. 10a and 10b;

Fig. 12 diagrammatically illustrates a digital image of a face before
a virtual cheek lift is performed with the affected area shown within a
stippled
rectangle;

Fig. 13 graphically illustrates a mapping function used to calculate
new positions for the pixels defining the cheek areas of fig. 12;

Figs. 14a and 14b diagrammatically illustrate a digital image of a
face before and after performance of a virtual nose reduction;

Fig. 15 graphically illustrates a mapping function used to calculate
new positions for the pixels defining the nose area of fig. 14a;

Fig. 16 diagrammatically illustrates a camera incorporating the
invention and the images produced by the camera from an initial image to a
final
aesthetically enhanced image;

Fig. 17 diagrammatically illustrates the camera of fig. 16 in greater
detail;

Fig. 18 diagrammatically illustrates certain internal components of
the camera in greater detail; and,

Fig. 19 is a flowchart diagrammatically illustrating the novel
functions performed by the camera.

It should be understood that the graphic mapping functions
illustrated herein have been stripped of excess pixels to facilitate
illustration. The
reason is that the density of the dotted lines if drawn to scale (72 dots per
inch


being typical in most computer monitors) does not readily permit reproduction
or
understanding.

DESCRIPTION OF PREFERRED EMBODIMENTS

The skin retexturing method of the invention will be described with
reference to fig. 1a, which shows a face 10 before a facelifting operation,
and fig.
1b, which shows the face 12 after a virtual facelift has been performed. The
retexturing method requires repeated selection of a target pixel whose
intensity
will be adjusted and a surrounding set of pixels whose colour brightness
values
will be assessed and assigned to the target pixel. In the expanded view of
fig. 3,

which shows a typical pixel arrangement for such purposes, the target pixel is
designated TP while the member pixels of the surrounding set have been labeled
M1-M8. It should be noted that the grid of fig. 3 has been grossly minimized and
exaggerated for purposes of illustration. In practice, an array of pixels n x
n might
typically be chosen, with n being approximately 1% to 10% of the width of the

face in pixels for best results. A total of n pixels will generally suffice to
assess
the brightness value to be assigned to the target pixel TP. As well, to make
the
results appear more realistic, the set of n pixels is preferably randomly
selected.

The method used to reduce wrinkles and minor skin anomalies is
illustrated in the flow chart of fig. 4. First, a region to be retextured is
selected at
step 14. In many instances, the region will be the entirety of the face and
may

include all facial features such as the lips, nose, eyes, eyebrows and hair,
as might
be appropriate with a complete facelift. The affected region in fig. 1a has
been
shown in a rectangle of stippled outline labeled with reference number 16.
If
smaller regions are to be smoothed, the region may be restricted to the
forehead,

below the eyes or beside the eyes, or simply the region surrounding the cheeks, if
a
cheek lift is contemplated. Referring back to the flowchart of fig. 4, the target
pixel
TP to be lightened is selected at step 18 and a set of surrounding pixels
whose
members are labeled M1-M8 is selected at step 20. The pixels forming the
boundary

of the superposed grid are normally several rows and columns thick, although
not
so illustrated, and are not treated as target pixels. At step 22 in fig. 4,
the set of
pixels M1-M8 is checked for maximum pixel brightness, namely, the
maximum sum of the R, G and B values associated with the pixels. At step 24,
the brightness value of the target pixel TP is set to the maximum brightness
value

derived from inspection of the set. At step 26, a check is made to determine
whether all target pixels within the selected region have been processed. If
not,
the method returns to step 18 to select another target pixel, followed by step
20 in
which a new set of surrounding pixels is selected, and the new target pixel is
assigned the maximum brightness value derived from the new set of surrounding

pixels. Once all target pixels within the boundary of the selected region have
been
processed in this manner, the results can be seen in the face 12 of fig. 1b.
Laugh
lines that extend downward from either side of the nose have been reduced in
thickness. The largely horizontal lines below the eyes have been eliminated
(though small remnants may often remain), and the vertical wrinkles between
the

eyes have been reduced as well. As suggested in the inventor's previous
published U.S. patent application, a dialog box can be used to set process
parameters, and choices for wrinkle reduction may vary from mild reduction,
through medium reduction, to substantially complete removal. Although not specifically
illustrated, other minor facial anomalies such as moles or scars are reduced
or


eliminated in the same manner as the wrinkles.
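The retexturing loop of figure 4 can be sketched in Python with NumPy. This is a minimal illustration, not the patented implementation: the function name, the 8-pixel sample size, the 2-unit sampling radius, and the choice to scale the target's RGB triple up to the maximum brightness (the specification only fixes the target's brightness value, the sum of its RGB components) are all assumptions.

```python
import numpy as np

def brightness(px):
    # "Brightness value" per the specification: the sum of the R, G and B values.
    return int(px[0]) + int(px[1]) + int(px[2])

def retexture(image, top, left, bottom, right, n=8, radius=2, rng=None):
    """Sketch of the fig. 4 loop: each target pixel in the selected region is
    raised to the maximum brightness found in a randomly chosen set of n
    nearby pixels. Scaling the RGB triple to reach that brightness is one
    interpretation; the specification only fixes the target's brightness."""
    rng = rng or np.random.default_rng(0)
    out = image.astype(np.float64).copy()
    h, w, _ = image.shape
    for y in range(top, bottom):
        for x in range(left, right):
            # Randomly select n surrounding pixels within `radius` grid units.
            ys = np.clip(y + rng.integers(-radius, radius + 1, n), 0, h - 1)
            xs = np.clip(x + rng.integers(-radius, radius + 1, n), 0, w - 1)
            max_b = max(brightness(image[yy, xx]) for yy, xx in zip(ys, xs))
            b = brightness(image[y, x])
            if 0 < b < max_b:
                out[y, x] *= max_b / b  # lighten, washing out dark wrinkle pixels
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because dark wrinkle pixels are almost always neighboured by brighter skin pixels, the loop lightens wrinkles while leaving uniformly bright skin untouched.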

The method of fig. 4 then continues below decision box 26.

The interim result at this stage is an image referred to as F(x,y), which is
simply a
three-dimensional array specifying the R, G and B colour values at any pixel

position given by integer coordinates x and y. To provide a more realistic
image,
the image F(x,y) is preferably smoothed at subroutine 28, which is detailed in
the
flowchart of fig. 5. The subroutine 28 bears some similarity to the steps of
the
procedure of fig. 4. In step 30 of fig. 5, the region of interest has already
been
selected as the entirety of the face, although this can be selected according
to the

region of the face to be treated. Once again, at step 32, an initial target
pixel is
selected within the bounding box of the region being treated. A set of pixels
surrounding the target pixel is then selected at step 34, and once again, to
encourage a realistic representation the set of pixels may be randomly
selected.
At step 36, the average brightness value of the pixel set is determined, and
at step

38, the brightness value of the target pixel is set to that average value.
This
process is repeated at step 40 until all target pixels in the selected region
have
been exhausted with such processing, producing a smoothed image Fs(x,y). To
produce more realistic results, the facial features from the original digital
image
are then superimposed on the smooth image Fs(x,y) at step 42. This may be done

with a conventional blending mask that emphasizes the original pixels central to the
region
being processed and the smoothed pixels at the periphery of the
facial
region being processed.
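The fig. 5 smoothing subroutine differs from the retexturing loop only in assigning the average, rather than the maximum, brightness of the random pixel set. A sketch, with the same illustrative assumptions about names, sample size, and radius:

```python
import numpy as np

def smooth_region(image, top, left, bottom, right, n=8, radius=2, seed=0):
    """Sketch of the fig. 5 subroutine: each target pixel is replaced by the
    average colour of a randomly selected set of n surrounding pixels.
    (Parameter names and the sampling radius are illustrative assumptions.)"""
    rng = np.random.default_rng(seed)
    src = image.astype(np.float64)
    out = src.copy()
    h, w, _ = image.shape
    for y in range(top, bottom):
        for x in range(left, right):
            ys = np.clip(y + rng.integers(-radius, radius + 1, n), 0, h - 1)
            xs = np.clip(x + rng.integers(-radius, radius + 1, n), 0, w - 1)
            out[y, x] = src[ys, xs].mean(axis=0)  # average colour of the set
    return np.clip(out, 0, 255).astype(np.uint8)
```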

We return again to the flow chart of fig. 4, where the last step 42
(superimposing facial features) has just been completed. One shortcoming

associated with the image F(x, y) is that the pixels are too bright and can
look
somewhat artificial. A recolouring process is then initiated within the
remaining
steps of the method of fig. 4. First, the initial image I(x,y) is smoothed at
step 44
using the process in fig. 5 or alternatively a conventional blending mask to

produce a smooth copy Is(x,y). A new target pixel (x,y) is set at step 46, and
at
step 48 the colour value of the target pixel F(x,y) of the excessively bright
image
is then combined with the difference between the colour values of the smooth
digital image Is(x,y) and the colour value of the smooth digital image
Fs(x,y).
The difference between corresponding pixels in Is(x,y) and Fs(x,y) typically

consists of negative colour values, which effectively reduce the colour
intensity of
the pixel F(x,y). This process is repeated at step 50 until all pixels within
the
selected region are similarly processed and effectively exhausted.

An exemplary use of such a recolouring process will be described. Assume that
the original unaltered image is denoted as I(x,y), an image that at each x,y

coordinate has a red, green, and blue value. In other words, I(11,15) might
for
example be [10,55,185] where 10 is the red value, 55 is the green value, and
185
is the blue value component of the pixel at location (11,15). The retextured
version of this image will be denoted as F(x,y). Although F(x,y) has reduced
wrinkles and facial anomalies, it is brighter in many regions thereby making
it

look unrealistic. In order to adjust the colour of F(x,y) to make it more like
that of
I(x,y) without reintroducing wrinkles, we need to introduce the concept of
average
or blurred image. The image F(x,y) is blurred/smoothed to get image Fs(x,y),
and
the image I(x,y) is blurred/smoothed to get image Is(x,y). Blurring consists
of
changing each pixel to the average of its nearby pixels, or alternatively by
filtering


each image by a two dimensional blurring/smoothing filter. At each pixel, the
colour difference between the two blurred images represents the colour
imbalance
that has resulted from the retexturing. Accordingly, one can adjust the
image
F(x,y) to get the re-coloured and re-textured image Fnew(x,y) as follows:

Fnew(x,y)=F(x,y)+Is(x,y)-Fs(x,y). If not already apparent, it should be noted
that
the above operation is performed separately on each of the red, green, and
blue
pixel values. The resultant value of the final digital image Fnew(x,y) will
have a
very close colour composition to that of I(x,y) (the original image) but the
wrinkles and anomalies are reduced. This is exactly the result wanted to

simulate facial aesthetics treatments on photos.
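The recolouring identity Fnew(x,y) = F(x,y) + Is(x,y) - Fs(x,y) can be exercised directly. In this sketch, a simple box blur stands in for whatever blurring/smoothing filter an implementation actually uses, and the helper names are assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    # Simple blur: each pixel becomes the average of its k-by-k neighbourhood.
    pad = k // 2
    padded = np.pad(img.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def recolour(F, I):
    """Fnew(x,y) = F(x,y) + Is(x,y) - Fs(x,y), applied separately to each of
    the red, green and blue channels, as set out in the specification."""
    Fs = box_blur(F)   # smoothed copy of the retextured image
    Is = box_blur(I)   # smoothed copy of the original image
    Fnew = F.astype(np.float64) + Is - Fs
    return np.clip(Fnew, 0, 255).astype(np.uint8)
```

If F equals I, the correction term Is - Fs vanishes and the image is returned unchanged; if F is uniformly brighter than I, the output is pulled back to I's colour composition, which is the behaviour the text describes.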

Various facial cosmetic procedures can be visualized as a
transformation of particular facial features. A nose reduction, for example,
consists essentially of a horizontal compression that shrinks the nose. A lip
augmentation on the other hand can be viewed as a vertical stretching that

increases the height of the lips. An eyebrow lift can be viewed similarly as
an
upward stretching/displacement of the eyebrows. A cheek lift (as part of a
facelift
or as a partial facelift) can be viewed as an upward stretching of the cheeks.
A
facelift can be viewed as an upward vertical stretching of the cheeks thereby
simulating the effects of dermal fillers or a surgical facelift. A set of
shared

techniques can be used to give effect to such transformations. Since a
prospective
patient might expect to see a reduction of wrinkles as part of such a
procedure, for
example with a cheek lift, the transformation of facial features is often
accompanied with a retexturing of the skin at least in the region surrounding
the
feature to be altered.


The general method for transforming various facial features is
illustrated in the flow chart of fig. 6. First, the location of the feature to
be altered
is identified at step 52 (using well-known prior art techniques). This will
generally result in a grid box being located about the region in which the
facial

feature of interest is located. At step 54, new pixel positions are calculated
for
those pixels composing the feature (excluding pixels at bounding boxes
outwardly
positioned relative to the feature). These pixel calculations may implement a
contraction of the feature (as in nose reduction), expansion of a feature (as
in lip
augmentation) or general movement of a feature (as in a cheek lift). Pixel

mapping functions may be used for such purposes, and mapping functions for
various alterations to facial features are discussed below. Once again, the
procedure must contend with stray pixels and potentially empty grid positions.

At step 56 of the flowchart of fig. 6, a loop is entered in which the
fixed grid positions are analyzed to determine as in step 58 whether stray
pixels
(shown as empty circles) are proximate to the current grid position being

analyzed. A count is taken at step 60 of the number of members in the set of
stray pixels proximate to the grid position in issue. In this embodiment, to
be
considered proximate to a grid position, a stray pixel must be within 1 grid
unit
horizontally or within 1 grid unit vertically relative to the grid position in
issue.

The method then branches according to the size of this set of stray pixels. If
there
is only a single member in the set, then the colour values for the single
member
are substituted for those of the adjacent grid position, as at step 62. If
multiple
stray pixels (two or more) are proximate to a particular grid position, the

brightness value of the pixel at the grid position is replaced as at step 64
with a

weighted average of the surrounding stray pixels, the weighting or scaling
factors
being selected to vary inversely with the distance of each stray pixel from
the grid
position. If the stray pixel set is empty, then the grid position is
unaffected unless
the grid position is empty due to prior recalculation of its pixel to a new
location,

and this emptiness is checked at step 66. Since no stray pixels are available
to set
the brightness value of the empty pixel at the grid position, interpolation
among
other grid positions is used at step 68 to assign a value. This process is
repeated
until all stray pixels have been processed and all empty grid positions are
filled.
Thereafter, the stray pixels are simply ignored.
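Steps 56 to 68 of fig. 6 can be sketched as follows, assuming a sparse dictionary mapping integer (x, y) grid positions to RGB triples. The data layout, the function names, and the strict "less than 1 unit" proximity test are illustrative choices, not taken from the specification:

```python
import math

def apply_mapping(pixels, mapping, width, height):
    """Sketch of the fig. 6 loop. `mapping` moves a pixel to a new, possibly
    fractional, position, producing stray pixels. Each grid position then
    absorbs the strays within 1 unit horizontally and vertically, weighted
    inversely with distance; emptied positions with no stray nearby are
    interpolated from filled neighbours (step 68)."""
    strays, vacated = [], set()
    for (x, y), rgb in pixels.items():
        nx, ny = mapping(x, y)
        if (nx, ny) != (x, y):
            strays.append((nx, ny, rgb))
            vacated.add((x, y))
    out = {pos: rgb for pos, rgb in pixels.items() if pos not in vacated}
    for gx in range(width):
        for gy in range(height):
            near = [(math.hypot(sx - gx, sy - gy), rgb)
                    for sx, sy, rgb in strays
                    if abs(sx - gx) < 1 and abs(sy - gy) < 1]
            if len(near) == 1:
                out[(gx, gy)] = near[0][1]  # step 62: substitute directly
            elif len(near) > 1:
                # step 64: inverse-distance weighted average, using the
                # reversed distances as numerators as in the GP5 example
                near.sort(key=lambda item: item[0])
                weights = [d for d, _ in near][::-1]
                total = sum(weights)
                out[(gx, gy)] = tuple(
                    sum(w / total * rgb[c] for w, (_, rgb) in zip(weights, near))
                    for c in range(3))
            elif (gx, gy) not in out:
                # step 68: interpolate an emptied position from filled neighbours
                nbrs = [out[p] for p in
                        [(gx - 1, gy), (gx + 1, gy), (gx, gy - 1), (gx, gy + 1)]
                        if p in out]
                if nbrs:
                    out[(gx, gy)] = tuple(sum(c[i] for c in nbrs) / len(nbrs)
                                          for i in range(3))
    return out
```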

An example of such processing of stray pixels and empty grid
positions is provided in the very simple views of fig. 7a (before processing)
and
fig. 7b (after processing). It should be noted, once again, that grid density
would
be very significantly larger than what has been illustrated in figs. 7a and
7b, but
spacing has been increased for ease of illustration and part labeling. Grid

positions are indicated with reference characters GP1 through GP9. In fig.
7a,
two stray pixels have been indicated with empty circles and labeled with
reference
characters SPa and SPb, and the letters "a" and "b" identify the distance of
stray
pixels SPa and SPb respectively from the central grid position GP5. The stray
pixel SPa is assumed for example to have RGB colour values of (110, 150, 220)

while SPb is assumed to have RGB colour values of (70, 100, 250). An empty
grid position GP8 has been indicated with an empty circle to indicate no
assigned
RGB values, and the remaining pixels at grid positions are shown solid to
indicate
an associated RGB colour value.

The central grid position GP5 will be used to exemplify the weighted

averaging that occurs when the two stray pixels SPa, SPb are combined to
replace
the colour value of the grid position. The new RGB colour values of the
central
pixel GP5 are calculated as weighted averages, as follows:

R = (b/(a+b)) × 110 + (a/(a+b)) × 70
G = (b/(a+b)) × 150 + (a/(a+b)) × 100
B = (b/(a+b)) × 220 + (a/(a+b)) × 250

It should be noted that distance b is considerably greater than distance a, and
that the
weighting or scaling factors vary inversely with the
distance of the relevant stray pixel from an associated grid position. Similar
but

simpler computations are done for the remaining grid positions. For example,
the
colour value of grid position GP1 is simply replaced by that of the single
nearby
stray pixel SPb. The colour value at grid position GP2 is replaced by the
weighted average of nearby stray pixels SPa and SPb, but distances between
grid
position GP2 and each of the stray pixels SPa and SPb must be calculated first.
The

colour value of grid position GP3 remains unchanged as its colour value
is
non-zero and there are no stray pixels nearby. The RGB colour values at grid
position GP4 are simply replaced by those of the single nearby stray pixel
SPb.
The colour value at grid position GP6 is left unchanged since neither stray
pixel
SPa nor SPb is less than 1 unit distance from grid position GP6. The colour
value

associated with the grid position GP7 remains unchanged, as does the colour
value
associated with the grid position GP9, neither of which has a zero colour
value
and neither of which is in the vicinity of a stray pixel. The grid position
GP8 has
no colour value associated with it, and it is consequently assigned an
interpolated
value derived from non-empty grid positions GP7 and GP9. The result of these


re-mappings of values is the grid arrangement shown in fig. 7b, where all
pixels
are positioned for proper display but with colour values as modified above.
What
should be noted about the weighting function used to incorporate multiple
stray
pixels into a grid position is the numerator distance used: the closest stray
pixel is

assigned the largest distance value in the set, the farthest
stray pixel
is assigned the smallest distance value, and any intervening stray pixels are
ordered accordingly. This has the effect of giving prominence to the colours
associated with closer pixels, a very natural approach. In more complex
examples, where there are multiple stray pixels that have been moved in
both

x and y directions, the weighting formula is simply adapted accordingly.
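The GP5 computation can be generalised to any number of stray pixels by reassigning the sorted distances in reverse order as numerators, which is a direct transcription of the scheme described; the list-of-pairs interface is an assumption made for illustration:

```python
def weighted_colour(strays):
    """Combine stray pixels into a grid-position colour using the
    inverse-distance scheme of the specification: the distances are
    reassigned in reverse order (the closest pixel gets the largest
    distance as its numerator), then normalised by the total distance.
    `strays` is a list of (distance, (r, g, b)) pairs."""
    strays = sorted(strays, key=lambda s: s[0])   # closest first
    numerators = [d for d, _ in strays][::-1]     # largest distance first
    total = sum(d for d, _ in strays)
    colour = [0.0, 0.0, 0.0]
    for (d, rgb), w in zip(strays, numerators):
        for c in range(3):
            colour[c] += (w / total) * rgb[c]
    return tuple(colour)
```

With the document's colour values and distances assumed to be a = 1 and b = 3 (the figure only indicates that b exceeds a), GP5 becomes (100.0, 137.5, 227.5): the closer pixel SPa contributes three quarters of the result.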
Various feature transformations are described below. Figs. 8a and
8b show a face before and after a virtual eyebrow lift. First, a region is selected
for
transformation as generally indicated by the dashed box 70 surrounding the
eyebrows and mid forehead of the digital face. A mapping function

diagrammatically illustrated in fig. 9 is used to calculate new pixel
positions for
pixels within the eyebrow area. What should be noted is that the mapping
function displaces the pixels associated with the eyebrows themselves,
effectively
stretching the eyebrows upward as shown in the final image 72 in fig. 8b.
However, central pixels are largely unaffected, leaving the region between the

eyebrows and immediately above the nose unaffected. Once again, the algorithm
shown in fig. 6 is followed not only to calculate new pixel positions but also
to
combine resulting stray pixels with fixed grid positions and to fill empty grid
positions if required. Weighting functions are simplified in this example as
all
displacement of eyebrow pixels is vertical, and only two stray vertically
aligned


pixels might typically be combined to achieve a weighted average at a nearby
grid
position. Forehead wrinkles (not shown) are preferably removed as an adjunct
to
the eyebrow lift. This can be done using the skin retexturing algorithm shown
in
fig. 4 with incidental smoothing of the results for a region selected to bound
the

forehead and eyebrow areas.

Reference is made to figs. 10a and 10b, which show before and
after performance of a virtual lip augmentation. The area bounding the lips is
shown in stippled outline in fig. 10a and labeled with reference number 74,
and
the pixel mapping function is graphically represented in fig. 11. The central
pixels

constituting the junction between the upper and lower lips are left
unaffected.
Above that central region, the existing pixels constituting the upper lip are
mapped
to higher positions, and below the central region, the existing pixels
constituting
the lower lip are mapped to lower positions. The net effect is a vertical
stretching
of the lips so as to achieve the desired augmentation, as apparent in final
face 76.

Once again, the algorithm illustrated in fig. 6 is followed to map the
existing
pixels to new locations, resulting in stray pixels and empty grid positions.
The
pixel weighting functions are once again simplified in this implementation of
the
invention as all displacement is vertical. As will be noted in the after
picture of
fig. 10b, the upper and lower lips have been stretched vertically, providing
the

appearance of fuller lips. If necessary, empty grid positions not proximate to
stray
pixels are assigned values interpolated from surrounding non-empty grid
positions.
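The lip augmentation mapping of fig. 11 is shown only graphically. A minimal vertical-stretch mapping consistent with the description might look like the following; the linear form and the `strength` parameter are assumptions, not taken from the specification:

```python
def lip_stretch_mapping(y_center, strength=0.3):
    """Illustrative vertical-stretch mapping for lip augmentation: pixels at
    the junction between the lips (y = y_center) stay put, upper-lip pixels
    move up, and lower-lip pixels move down, in proportion to their distance
    from the junction."""
    def mapping(x, y):
        # x is unchanged; y is displaced away from the central junction line.
        return x, y + strength * (y - y_center)
    return mapping
```

Such a mapping would be applied with the fig. 6 procedure, the fractional results becoming the stray pixels that are then absorbed into nearby grid positions.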

Fig. 12 shows the before face 10 in preparation for a cheek lift.
The region of interest has been selected in an entirely conventional manner
and

the region affected has been indicated with a stippled rectangular box labeled
78.
The associated mapping function used to calculate new pixel position for those
pixels constituting the cheeks has been graphically illustrated in fig. 13. It
should
be noted that both horizontal sides of the mapping function are constructed to

raise pixels in both the left and right cheeks. The central region of the
mapping
function has fewer variations from standard grid position in order to leave
the
nose itself substantially unaffected by the transformation. A cheek lift will
normally be expected to diminish the laugh lines extending downward from the
nose and outward relative to the corners of the mouth. This is another instance
in

which a preliminary step would be to lighten the pixels in the region
encompassing the cheeks, effectively diminishing the laugh lines (although
this
has not been shown). To implement the cheek lift, the procedure in fig. 6 is
once
again followed. The mapping function is used to calculate new pixel positions
for
the pixels defining the cheek areas, once again giving rise to stray pixels and

empty grid positions. As described in the algorithm of fig. 6, stray pixels
within
the vicinity of a grid position (within 1 horizontal unit and 1 vertical unit)
are
combined in a weighted average and applied to the nearby grid position. Any
empty grid position that is not within the vicinity of stray pixels is once
again
filled with colour values by interpolation from surrounding non-empty grid

positions.

Figs. 14a and 14b show before and after conditions associated with
a nose reduction. A region containing the nose is delimited in a standard fashion
and designated with a stippled box labeled 80 in fig. 14a. Unlike other
feature
transformations demonstrated herein, the pixel area defining the nose is
transformation demonstrated herein, the pixel area defining the nose is
effectively


compressed, and the mapping function shown in fig. 15 effectively leaves
central
vertical lines of pixels unaffected but displaces pixels on opposing sides of
that
central region inwardly to give effect to the desired contraction. The
procedure
outlined in fig. 6 is used once again to complete the desired

transformation. Pixels at opposing sides of the nose are effectively displaced
to
new positions proximate to the centerline of the nose. Stray pixels are
combined
using a weighted average dependent on distance from fixed grid positions to
ensure
that the colours of stray pixels have been absorbed at grid positions and the
stray
pixels can thereafter be ignored for purposes of illustrating the final
view.

Colour values of empty grid positions are similarly filled with weighted
averages
of nearby stray pixels, or, if no such stray pixels exist nearby, the colour
value of
an empty grid position is interpolated from surrounding filled grid positions.
The
final effect is apparent in the face labeled 82 in fig. 14b. An important
variant in
the nose reduction process should be noted but has not been illustrated.
Rather

than tapering the nose from top to bottom, the mapping function may leave
lower
corners of the nostrils stationary, as these are often left substantially
intact after
plastic surgery involving nose reduction. Thus the mapping function may be
adapted to taper upper sections of the nose but leave lower portions

substantially intact.

In another embodiment, the method is adapted to produce a digital
image having a region corresponding to a selected region in which locations of
the
wrinkles and other minor facial anomalies are identified. The method comprises,
for each pixel in the corresponding region of the digital image, setting the
colour value of each pixel to the difference between the colour value of the


corresponding pixel in the initial image and the colour value of the pixel in
the
final image. The net result is an image in which the majority of the skin
surface
appears darkened but wrinkles and other facial anomalies are brighter and
highlighted.
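This highlighting variant can be sketched as a per-channel difference between the initial and final images. Taking the absolute value is one interpretation; the specification speaks only of "the difference", and the function name is an assumption:

```python
import numpy as np

def wrinkle_map(initial, final):
    """Sketch of the highlighting variant: each output pixel is the
    difference between the initial and final images, so smoothed skin goes
    dark (difference near zero) and the removed wrinkles and anomalies
    stand out bright."""
    diff = initial.astype(np.int16) - final.astype(np.int16)
    return np.clip(np.abs(diff), 0, 255).astype(np.uint8)
```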

Reference is made to fig. 16, which diagrammatically illustrates a
digital camera 84 adapted to automatically implement the invention. More
specifically, fig. 16 shows successive processing of images by the camera 84.
The
initial digital image 86 as captured by the camera 84 provides a symbolic
representation of two individuals against a mountainous backdrop. In the next

image at 88, the faces and features of the two individuals are automatically
located and rectangular processing regions are formed about the faces. In the
image at 90, the picture is adjusted based on either default values or values
entered by the user before his initial snapshot. The adjustments possible
include a
facelift, weight reduction, nose reduction, lip augmentation, eyebrow lifts
etc., as

specified by the user. In typically less than a second, the user sees on the
viewing
screen associated with the camera 84 a final digitally enhanced image 92 with
each face modified aesthetically.

The camera 84 and its operation are further detailed in figs. 17-19.
In fig. 17, the lens 94 associated with the camera focuses an image 96 onto a

conventional converter 98 responsible for transforming light into electrical
signals. The signals so converted are then applied to internal circuitry
generally
designated by the number 100 in fig. 17. The internal circuitry 100 is shown
in
greater detail in the diagrammatic view of fig. 18 where it may be seen to

comprise an electronic memory 102 that receives the digital image from the

converter 98. The electronic memory 102 is scanned by what is referred to
herein
as an "enhancement chip" 104. This will typically be a digital signal
processor
implemented using VLSI (very large scale integration), general DSP (digital
signal
processor), standard processor, FPGA (field programmable gate array), or
ASIC

(application specific integrated circuit) technologies. Many of the functions
associated with the enhancement chip 104 are standard: its output image can be
displayed on a screen 106, can be stored in conventional long term memory 108,
or applied to electronic mail at 110. Functions of the circuitry 100 more
pertinent
to implementation of the present invention are shown in the flowchart of fig.
19.

First, at step 111, a user is allowed to select preferences for aesthetic
transformations desired: a facelift, weight reduction, nose reduction, lip
augmentation, eyebrow lifts, cheek lifts etc. The initial image 96 as captured
by
the lens 94 and processed by the light-to-signal converter 98 is received in
the
electronic memory 102 at step 112. Faces and facial features are detected at
step

114 in a manner that is known in the prior art. Default or user-specified
aesthetic
enhancement occurs at step 116. Although indicated as a single step 116, it
should
be noted that the enhancement chip 104 embodies and implements the algorithms
identified in figs. 4-6. At step 118, the final enhanced image is made
available for
standard screen display, electronic mailing, or long term storage.

It will be appreciated that particular embodiments of the invention
have been described and illustrated but these may be varied without
necessarily
departing from the scope of the appended claims.



Administrative Status

Title Date
Forecasted Issue Date Unavailable
(22) Filed 2008-06-13
(41) Open to Public Inspection 2009-03-18
Dead Application 2012-06-13

Abandonment History

Abandonment Date Reason Reinstatement Date
2010-06-14 FAILURE TO PAY APPLICATION MAINTENANCE FEE 2011-01-31
2011-06-13 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $200.00 2008-06-13
Reinstatement: Failure to Pay Application Maintenance Fees $200.00 2011-01-31
Maintenance Fee - Application - New Act 2 2010-06-14 $100.00 2011-01-31
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
AARABI, PARHAM
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Description 2008-06-13 19 754
Claims 2008-06-13 16 515
Drawings 2008-06-13 7 123
Representative Drawing 2008-12-03 1 4
Abstract 2008-06-13 1 22
Cover Page 2009-03-12 1 38
Assignment 2008-06-13 3 72
Fees 2011-01-31 1 30