Patent 2248721 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 2248721
(54) English Title: A COMPUTER SYSTEM AND PROCESS FOR DEFINING AND MANUFACTURING IMAGES USING STRUCTURED OBJECTS WITH VARIABLE EDGE CHARACTERISTICS
(54) French Title: SYSTEME ET PROCEDE INFORMATIQUES DE DEFINITION ET DE PRODUCTION D'IMAGES UTILISANT DES OBJETS STRUCTURES A CARACTERISTIQUES DE BORDS VARIABLES
Status: Expired and beyond the Period of Reversal
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 11/00 (2006.01)
(72) Inventors :
  • KIVOLOWITZ, PERRY S. (United States of America)
  • WONG, SZE-PING (United States of America)
(73) Owners :
  • AVID TECHNOLOGY, INC.
(71) Applicants :
  • AVID TECHNOLOGY, INC. (United States of America)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued: 2005-12-20
(86) PCT Filing Date: 1997-03-05
(87) Open to Public Inspection: 1997-09-18
Examination requested: 2002-03-05
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US1997/003436
(87) International Publication Number: WO 97/34261
(85) National Entry: 1998-09-04

(30) Application Priority Data:
Application No. Country/Territory Date
08/613,283 (United States of America) 1996-03-11

Abstracts

English Abstract


An image representation using a structured object may be readily manipulated by a computer. This image representation can represent more complex images where the structured object defines a boundary for which image values near the boundary in a region are defined by a function called an edge characteristic function. The edge characteristic function is applied by determining, for each point in the region, a point on the boundary to which the given point is closest, herein called an anchor, and a distance to that point. Different edge characteristic functions may be assigned to different points along the boundary. In one embodiment, edge characteristic functions are assigned to selected characteristic points on the boundary. These edge characteristic functions may be interpolated to provide edge characteristic functions for remaining points along the boundary. Each point not on the boundary is processed according to the edge characteristic function of the point on the boundary to which it is closest and according to the distance to that point. An edge characteristic function may also be a function of other factors. This image representation provides many benefits including the ability to generate more complex images which can then be manipulated more easily or be converted from different image formats. One form of manipulation is interpolation over time. In image compositing, this image representation also allows for easy attenuation of various effects over regions in a controlled way.


French Abstract

Une représentation d'image utilisant un objet structuré peut être facilement manipulée par un ordinateur. Cette représentation d'image peut représenter des images plus complexes dans lesquelles l'objet structuré définit une limite pour laquelle des valeurs d'image proche de la limite, dans une région, sont définies par une fonction dite fonction de caractéristiques de bords. La fonction de caractéristiques de bords est appliquée par détermination, pour chaque point se trouvant dans la région, d'un point sur la limite dont le point donné est le plus proche, appelé ancre, et d'une distance à ce point. On peut affecter différentes fonctions de caractéristiques de bords à différents points situés le long de la limite. Dans un mode de réalisation, les fonctions de caractéristiques de bords sont affectées aux points caractéristiques sélectionnés situés sur la limite. Ces fonctions de caractéristiques de bords peuvent être interpolées afin de produire des fonctions de caractéristiques de bords pour les points restants se trouvant le long de la limite. Chaque point ne se trouvant pas sur la limite est traité selon la fonction de charactéristiques de bords du point situé sur la limite dont il est le plus proche et selon la distance à ce point. Une fonction de caractéristiques de bords peut également être une fonction d'autres facteurs. Cette représentation d'image offre de nombreux avantages y compris la capacité de générer plus d'images complexes que l'on peut ensuite manipuler plus facilement ou convertir à partir de différents formats d'images. Une forme de manipulation est l'interpolation temporelle. Dans la composition d'images, cette représentation d'images permet également une atténuation simple de divers effets sur différentes régions de manière régulée.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS:
1. A method for generating an image, comprising the
steps of:
(a) defining a structured object having a boundary of a
plurality of points;
(b) assigning an edge characteristic function to each point
along the boundary, wherein the edge characteristic
functions of at least two points are different, and wherein
each of the edge characteristic functions defines an edge
characteristic which defines image values near the boundary;
(c) for each picture element in the image, determining an
anchor on the boundary and a signed distance to the anchor
wherein the anchor is a point on the boundary to which the
picture element is closest;
(d) for each picture element in the image, selecting the
edge characteristic function assigned to the anchor
determined for the picture element; and
(e) for each picture element in the image, applying the
selected edge characteristic function to the signed distance
from the picture element to the anchor to determine image
values for the picture element.
2. The method of claim 1, wherein the step (b) of
assigning comprises the steps of:
selecting points from the plurality of points on the
boundary;
assigning an edge characteristic function to each selected
point; and

interpolating the edge characteristic function of each of
the selected points to obtain edge characteristic functions
of other points on the boundary.
3. The method of claim 1, wherein the step of
applying the function includes calculating, for each point, a
factor α according to the anchor and the distance, and
wherein the image values for the point are αA + (1 - α)B, where A
is image data from a corresponding point in a first image and B is
image data from a corresponding point in a second image.
4. The method of claim 1, wherein the step of
assigning further comprises:
interpolating the edge characteristic functions to determine
an edge characteristic function for each point on the
boundary.
5. The method of claim 1, wherein the step of
determining the anchor further includes the steps of:
initializing coordinates for points on the boundary;
selecting a minimum distance value;
selecting the anchor on the boundary;
determining a signed distance between the picture element
and the selected anchor; and
comparing the signed distance to the minimum distance value.
6. The method of claim 5, wherein the step of
comparing further includes the steps of:
setting the minimum distance value to the signed distance;
and

selecting an anchor for the picture element corresponding to
the minimum distance value.
7. The method of claim 5, wherein the step of
selecting the anchor further includes the steps of:
determining a signed distance between a picture element and
each point on the boundary;
comparing the signed distance to the minimum distance value
of each point on the boundary; and
defining the minimum distance value for the picture element
and selecting a corresponding anchor.
8. The method of claim 1, wherein the step of
determining the anchor further includes the steps of:
for each point in each scanline,
generating a first array which is a distance table including
current minimum distances to the boundary for the point;
generating a second array which is an anchor table including
current closest points on the boundary for the point; and
selecting a color for each point along the scanline
according to the distance table and anchor table for the
point.
9. The method of claim 8, wherein the steps of
generating a first array and a second array include:
determining whether the current scanline is within a
bounding box defining a region of the image affected by the
boundary and by associated edge characteristics;
initializing a parameter value of the boundary;

obtaining pixel coordinates of a point corresponding to a
current parameter;
comparing a distance along a y-axis between the point and
the current scanline to a maximum distance, the maximum
distance representing the distance from the boundary in
which pixels are affected by the edge characteristics of the
boundary;
updating the distance table and the anchor table in the
first and second arrays using radial distances from the
point for only those points within the maximum distance
along an x-axis of the point; and
incrementing the parameter value until all points on the
boundary have been evaluated.
10. The method of claim 9, further including the steps
of:
reinitializing the parameter value;
obtaining coordinates of the point corresponding to the
current parameter and the coordinates of a point in the
image corresponding to the previous parameter;
determining an orthogonal projection of the scanline onto a
line segment defined by the point representing the current
parameter and the point in the image;
determining the orthogonal distance to the points from the
scanline;
comparing the orthogonal distance to the maximum distance;
and
updating the distance table and anchor table in the first
and second arrays between a minimum point on the scanline

and a maximum point on the scanline corresponding to the
orthogonal projections of the scanline on the line segment
defined by the points.
11. The method of claim 9, wherein the step of
determining whether the current scanline is within a
bounding box includes setting anchor and distance values to
default values if the scanline is outside the bounding box.
12. The method of claim 2, wherein the edge
characteristic function has two arguments, the anchor and
the distance.
13. The method of claim 2, wherein the edge
characteristic function is applied, by determining, for each
point in the region, an anchor point on the boundary to
which the given point is closest, and a distance to the
anchor point.
14. A computer readable medium having computer
readable instructions stored thereon for execution by one or
more computers, that when executed implement the method of
any one of claims 1 to 13.

Description

Note: Descriptions are shown in the official language in which they were submitted.


A COMPUTER SYSTEM AND PROCESS FOR DEFINING AND
MANUFACTURING IMAGES USING STRUCTURED OBJECTS
WITH VARIABLE EDGE CHARACTERISTICS
Field of the Invention
The invention is related to computer systems and computer-implemented processes for defining and manipulating a region of interest in a coordinate space, such as for defining structured objects in two- or three-dimensional space using a computer. In particular, this invention is useful for generating a control image for use in compositing images.
Background of the Invention
In a computer system, there are many ways to represent still image data, which generally fall into one of two categories. In a first category of image representation, called a bit map, the value of each individual picture element, or pixel, of the image is stored. The image representation may be compressed or uncompressed. Bit-mapped image representations generally do not require much computation by a computer to display, but may occupy a substantial amount of memory space. Also, such image representations are generally not flexible and are difficult to scale or otherwise modify without generating significant visual artifacts. A second category of image representation is a structured object. A structured object defines an image, typically in mathematical terms, according to the structure of objects contained within the image. For example, an image of a cube may be represented by the positions of the vertices of the cube with respect to some reference point. Such structured objects are commonly found in three-dimensional modeling systems, desktop publishing systems, etc.
One area where image representation is important is in image compositing. One type of image compositing, called alpha compositing, involves combining two images to generate a third image, where the extent of contribution of each of the combined images in the third image is defined by a control image. An example of an alpha compositing system is found in U.S. Patent No. 4,602,286 to Paul Kellar.

As described in the Kellar patent, the control image is defined as a bit-mapped picture in which each pixel has a value. The value of each pixel represents an alpha value (α) on the scale of zero (0) to one (1). The alpha value (α(x,y)) for each pixel (x,y), herein called a control value, in the control image is used in a standard mapping function to combine the input images. For example, for each pixel A(x,y) in the first input image (A) and pixel B(x,y) in the second input image (B), where A and B are images of x by y pixels, the pixel C(x,y) in the output image C is given by the formula:

    C(x,y) = α(x,y) * A(x,y) + (1 - α(x,y)) * B(x,y)    (1)
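As a concrete illustration of equation (1), a minimal C sketch of the per-pixel blend is given below; the single-channel, row-major 8-bit image layout and the function name are assumptions made for the example and do not come from the patent.

    /* Sketch of equation (1): blend images A and B under control image alpha.
       All buffers are assumed to be row-major, 8-bit, single-channel images of
       w*h pixels; these layout choices are illustrative only. */
    #include <stddef.h>

    void alpha_composite(const unsigned char *A, const unsigned char *B,
                         const unsigned char *alpha, unsigned char *C,
                         size_t w, size_t h)
    {
        for (size_t i = 0; i < w * h; i++) {
            double a = alpha[i] / 255.0;             /* control value in [0,1]    */
            double c = a * A[i] + (1.0 - a) * B[i];  /* C = alpha*A + (1-alpha)*B */
            C[i] = (unsigned char)(c + 0.5);         /* round to nearest level    */
        }
    }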
Such compositing appears in video programs as special effects and transitions such as wipes, fades, etc. One difficulty with using a bit-mapped image as a control image is that alpha compositing often involves an interpolation over time, such as in transitions between cuts in a motion video sequence. That is, a control image used to combine one pair of frames to produce one frame in the motion video sequence is often different from control images used to produce other frames in the sequence. Additionally, if a bit-mapped control image created at a low resolution is then scaled to a higher resolution, quantization found in the low resolution image produces artifacts in the higher resolution image.
Another way to define a control image is by using a structured object. A structured object defines a control image, typically in mathematical terms, by the structure of objects within the image. This definition may be a mathematical formula, such as an equation representing a curve, or geometric, such as coordinates of points and vectors representing directions and magnitudes. The structured object definition is interpreted to obtain control values for each pixel to be generated in a final image. When taken together, these control values are similar to the set of control values obtained using a bit-mapped image.
One benefit of structured definitions is that they can be scaled, animated or interpolated over time more easily than static definitions, such as a bit-mapped image. Another advantage of using structured objects is that the computational determination of whether one part of the image is either inside or outside a boundary of an object is relatively easy. Structured object definitions also are more amenable to scaling and simple manipulation like rotations, which simplifies the process of making some effects. However, structured objects to date have been used generally for simple operations on simple objects.
In image compositing, both bit-mapped images and structured object definitions generally are used for generating a control image that typically has control values of either 0 or 1, where the changes between pixels in the image define edges. Usually the edges or boundaries are processed to provide a blending characteristic such that changes between source content in the resultant image do not occur disjointly over only one pixel. Rather, the control image is processed such that all edges have a gradual change, typically a simple greyscale ramp. In the Kellar patent, this greyscale ramp is provided because of an artifact in processing input from a pen and tablet input device. Using structured objects, most prior art systems simply adjust values before and after an edge in a scanline.
A general aim of the invention is to provide an image representation, for a
computer,
which can represent more complex images and which is also readily manipulable.
Summary of the Invention
An image representation using a structured object may be readily manipulated by a computer. This image representation can represent more complex images where the structured object defines a boundary for which image values near the boundary in a region are defined by a function called an edge characteristic function. The edge characteristic function is applied by determining, for each point in the region, a point on the boundary to which the given point is closest, herein called an anchor, and a distance to that point.

Different edge characteristic functions may be assigned to different points along the boundary. In one embodiment, edge characteristic functions are assigned to selected characteristic points on the boundary. These edge characteristic functions may be interpolated to provide edge characteristic functions for remaining points along the boundary. Each point not on the boundary is processed according to the edge characteristic function of the point on the boundary to which it is closest and according to the distance to that point. An edge characteristic function may also be a function of other factors.

This image representation provides many benefits including the ability to generate more complex images which can then be manipulated more easily or be converted from different image formats. One form of manipulation is interpolation over time. In image compositing, this image representation also allows for easy attenuation of various effects over regions in a controlled way. This method is particularly useful in image compositing and for modeling data which may vary over a number of parameters.

Accordingly, in one aspect of the present
invention, there is provided a method for generating an
image, comprising the steps of: (a) defining a structured
object having a boundary of a plurality of points; (b)
assigning an edge characteristic function to each point
along the boundary, wherein the edge characteristic
functions of at least two points are different, and wherein
each of the edge characteristic functions defines an edge
characteristic which defines image values near the boundary;
(c) for each picture element in the image, determining an
anchor on the boundary and a signed distance to the anchor
wherein the anchor is a point on the boundary to which the
picture element is closest; (d) for each picture element in
the image, selecting the edge characteristic function
assigned to the anchor determined for the picture element;
and (e) for each picture element in the image, applying the
selected edge characteristic function to the signed distance
from the picture element to the anchor to determine image
values for the picture element.
In another aspect, there is provided a computer
readable medium having computer readable instructions stored
thereon for execution by one or more computers, that when
executed implement the method of the previous aspect.
One embodiment of the present invention is a method for generating an image. This method involves defining a curve in the image as a plurality of points. For at least two points in the curve, a function is assigned. The function of one of the points is different from the function of another of the points. For each picture element in the image, an anchor on the curve and a signed distance to the anchor are determined. For each picture element, the function assigned to the anchor of the point is selected. For each picture element in the image, the selected function is applied to the signed distance from the picture element to the anchor to determine image data for the picture element. A signed distance is a distance measure which is negative or positive, rather than an absolute value, and is useful for indicating a direction, for example.
In one embodiment of this method, the step of assigning a function to a point
involves
selecting a plurality of points on the boundary. An edge characteristic is
assigned to each
selected point. The edge characteristics of each of the selected points are
interpolated to obtain
edge characteristics of other points on the boundary.
In another embodiment of this method, the step of applying the function involves calculating, for each point, a factor α according to the anchor and the distance, and wherein the image data for the point is αA + (1 - α)B, where A is image data from a corresponding point in a first image and B is image data from a corresponding point in a second image.
Another aspect of the invention is a computer-readable medium having a representation of a region of interest within a coordinate space stored thereon in a computer-readable form. This representation includes an indication of a boundary defined by a plurality of points in the region, and, for at least two points of the boundary, an indication of an edge characteristic, wherein the edge characteristic of a first point and the edge characteristic of a second point are different. In one embodiment, the representation includes an indication, for each point in the coordinate space, of a point on the boundary to which the point is closest and a signed distance to the point on the boundary.
Another aspect of the invention is a computer-readable medium having a representation of a region of interest within a coordinate space stored thereon in a computer-readable form. The representation includes an indication of a boundary defined by a plurality of points in the region and an indication, for each point in the coordinate space, of a point on the boundary to which the point is closest and a signed distance to the point on the boundary.

Another aspect of the invention is a computer-implemented process for generating a representation of a region of interest in a coordinate space. The process involves defining a boundary in response to user input. User input is also received for at least two points on the boundary, and an edge characteristic is assigned according to the user input, wherein the edge characteristic of a first point and the edge characteristic of a second point are different.

Another aspect of the invention is a computer system for generating a representation of a region of interest in a finite coordinate space. This computer system has a mechanism for defining a boundary. Another mechanism is used for assigning an edge characteristic for at least two points on the boundary, wherein the edge characteristic of a first point and the edge characteristic of a second point are different.
Another aspect of the invention is a computer-implemented process for
manipulating a
definition of a region of interest in a coordinate space, wherein the
definition is stored using
computer-readable signals on a computer-readable medium, and wherein the
definition includes
an indication of a boundary defined by a plurality of points in the region and
an indication, for
each point in the coordinate space, of a point on the boundary to which the
point is closest and a
signed distance to the point on the boundary. This process involves applying,
for each point in
the coordinate space having an anchor and a signed distance to that anchor,
the edge
characteristic function assigned to the anchor to the signed distance stored
to obtain a value. The
point in the coordinate space is displayed according to the obtained value.
Brief Description of the Drawing
In the drawing,
FIGS. 1A and 1B illustrate a bit-mapped image definition of a control image for use in alpha compositing in a prior art system, and an associated blending characteristic;
FIGS. 2A and 2B illustrate a control image defined using a structured object in accordance with a prior art methodology, and an associated blending characteristic;
FIGS. 3A-3E illustrate a control image defined using a structured object representation in accordance with the present invention, and associated edge characteristic functions;
FIG. 4 is a schematic illustration of a representation of a structured object definition in accordance with the present invention;
FIG. 5 is a block diagram of a computer system with which the present invention may be used;
FIG. 6 is a schematic illustration of more details of a memory system shown in FIG. 5;
FIG. 7 is a block diagram of a system for processing structured object definitions in accordance with the present invention;
FIGS. 8A and 8B are pseudocode representing example edge characteristic definitions;
FIG. 9A is a flow chart describing a process for determining anchor and distance values performed by the anchor and distance calculator in FIG. 7;
FIG. 9B is a schematic illustration of a structured object delineating a region within a coordinate space;
FIGS. 9C and 9D are flowcharts describing another process for determining anchor and distance measures;
FIG. 9E is a diagram illustrating how radial distances and orthogonal distances are defined;
FIG. 10 is a flow chart describing image processing using structured object definitions, variable edge characteristics and anchor and distance values as shown in FIG. 7;
FIG. 11 is a diagram of an example output generated by the image processing module shown in FIG. 7; and
FIG. 12 is a block diagram of an alpha compositing system with which a control image generated using the present invention may be used.
Detailed Description
The present invention will be more completely understood through the following detailed description which should be read in conjunction with the attached drawing in which similar reference numbers indicate similar structures.

The invention relates to the delineation of a region of interest in a coordinate space which is a subset of a larger region in the space. One type of delineation of a region of interest is the selection of a subset of an image which is treated differently from the rest of the image. Treatment may mean any of a plurality of image processing functions including, but not limited to, painting, compositing, transitions and special effects. Another kind of delineation of a region of interest, without relation to imaging, is the delineation of a geographical region for which some demographic characteristic applies. An example in three-dimensional space involves description of the animation of a fireball in motion, such as a comet, which includes the body and tail of the comet.
Referring now to FIG. 1A, in a prior image processing system, a region is delineated using a bit-mapped image which is generated by a user using a pen and tablet input device (not shown). This input device allows a user to define a line such as shown at 20 in the image 22 of FIG. 1A. Referring now to FIG. 1B, by virtue of the sloped shape of the tip of the pen, an edge having a mathematical characteristic such as shown at 24 is provided. When viewed, the edge defined by line 20 appears to have a short grey scale ramp. The graph shown at 24 illustrates grey scale values with respect to a distance from the boundary.
Referring now to FIG. 2A, in prior systems using structured objects, an image 26 may include an object such as the circle defined at A. A circle may be defined by its radius, e.g., two, and its center, e.g., coordinates (0,0), within the coordinate space of the image. Referring now to FIG. 2B, a function, such as shown at 28, may define a greyscale ramp in the x direction across the boundary defined by the structured object A to provide a desired edge smoothing. When processed, a point within the image 26 may be assigned a greyscale value according to how far it is from the boundary defined by the structured object A. In practice, given a point on the boundary defined by the structured object, this function 28 is used to identify other points close to the point on the boundary and to assign greyscale values to these identified points.
An advantage of a structured object definition is that interpolation of the
structured
object over time, e.g., changing its shape, is easy to perform. For example,
an enlarged circle
having an enlarged radius, e.g., three, and the same center coordinates is
shown at B in the image
26. As can clearly be seen, the size of this circle can be modified easily by
incrementing or
decrementing the radius value. Its position can be changed easily by moving
the center
coordinate.
These solutions from image processing applications involve only indirectly the notion of "proximity" in the determination of the blending characteristic at the boundary defined by the structured object. It has been found that a direct measure and use of the proximity of a point to a boundary permits the representation of more complex objects.

Proximity is a signed measure of how far a point is from the edge of the region defined by the structured object. For example, the measure may be positive if the point is inside the region, or negative if the point is outside the region. If the color black (white) is assigned to all outside (inside) points which are not close to the edge, and a ramp of greyscales is assigned for those near the edge, the region is manifested as a white area on a black background with a ramped transition from white to black on the edge of the region. The shape of the greyscale ramp, which controls the shades near the edge, is called the edge characteristic of the region.
Different greyscale ramps give rise to different edge characteristics, and thus different delineations of the region. In the image representation shown in FIGS. 1A and 1B, an edge characteristic is not defined explicitly. The blending characteristic near the edge is really a desirable side-effect of the input device. In the image representation shown in FIGS. 2A and 2B, only one blending characteristic is assigned per region. Additionally, the blending characteristic is processed with respect to a point on the boundary of the region in order to identify points in the coordinate space proximate to the boundary point. In contrast, in the present invention, proximity is defined with respect to a point in the region, in order to identify a point on the boundary to which the given point is closest.
Referring now to FIGS. 3A-3E, the present invention permits different edge characteristic functions at different points along a curve defined by a structured object. For example, a circular region C may be delineated by a structured object definition. Variable edge characteristic functions assigned around the edge of the circle represent a circular disk with a greyscale ramp with a different appearance in every direction. For example, a point E on the edge of the circle in FIG. 3A may be assigned a first edge characteristic function f1(E) such as shown at 32 in FIG. 3B. A second point F at the opposite side of the edge of the circle in FIG. 3A may be assigned a different edge characteristic function f2(F) such as shown at 34 in FIG. 3C. This combination of edge characteristics results in a shape that looks like an egg with an egg yolk.

Such an image representation may be interpolated easily over time. For example, the radius may be changed to produce a curve as shown at D in FIG. 3A. Alternatively, edge characteristics assigned to points E and F on the curve may change, for example to be new functions f1'(E) and f2'(F) such as shown at 36 and 38.
Referring now to FIG. 4, a representation of a delineation of a region of interest within a coordinate space, such as a two-dimensional image, is illustrated. Such a representation generally includes an indication of a definition of a structured object which provides a boundary, such as indicated at 40. In the example given, the boundary of a circle is defined by a center point and a radius. The image representation also includes an indication of one, two or more edge characteristic functions, such as shown at 42, which are assigned, respectively, to characteristic points on the boundary.
This representation of a delineation of a region of interest is used by a computer system 50, for example, as shown in FIG. 5. The computer system 50 includes a processor 52 connected to a memory 54 via an interconnection mechanism 56. Input devices 58 and display devices 60 are also connected to the processor and memory via interconnection mechanism 56. The computer may be programmed using a high-level programming language, such as "C", or other programming language to create an application program, which is machine-executable code created by compiling a computer file containing a program in the high-level programming language. An operating system (not shown) controls the execution of the application program and provides scheduling, debugging, input/output, memory management, data management and related services on the computer.
The computer system 50 may be any of a number of commercially-available general purpose computing systems, for example, the Indigo computer from Silicon Graphics Inc. of Mountain View, California, using the IRIX operating system, similar computers available from Apple Computer, or other custom image processing workstations and other programming systems. Such computer systems employ commonly available processors, such as the Intel Pentium or x86 series processors or Motorola 680x0 series processors, for example. Other suitably programmed microprocessors may be used. The interconnection mechanism 56 may be a simple bus, or may involve a more complex switching arrangement to allow the various elements of the computer system to communicate in parallel. It should also be understood that the invention is not limited to a single processor system. A multi-processor system, including parallel processing computers, may be used in the present invention.
Exemplary input devices which may be used with the present invention include a mouse, a track ball, a pen and tablet, or other pointing devices, as well as keyboards, keypads, scanners and other commonly used input devices for general purpose computers. The display device 60 may be a cathode ray tube (CRT) display, a liquid crystal display (LCD), of which there are a number of types, and other display devices such as a printer. In cases where this invention is used to generate motion picture sequences, the display device 60 may be an optical printing device or a video tape recorder.
FIG. 6 shows more details about the memory system 54 of FIG. 5. The memory system includes a computer readable and writable, random access, nonvolatile recording medium 70, such as a magnetic or magneto-optic disk. The disk may be removable, known as a floppy disk or compact disk, or permanent, known as a hard drive. Such a disk has a number of tracks, such as indicated at 72, in which signals are stored, typically in binary form, i.e., a form interpreted as a sequence of ones and zeros as shown at 76. Such a sequence of ones and zeros provides an indication of the boundary definitions, characteristic points and associated edge characteristic functions such as shown in FIG. 4. This information is stored on the disk 70. Typically, in operation, the processor causes the data to be read from disk 70 into an integrated circuit memory element such as indicated at 78. The integrated circuit memory 78 allows faster access to the
information than does the disk 70. The processor generally manipulates the data within the integrated circuit memory and then copies the data to the disk 70 when processing is completed. A variety of mechanisms are known for managing data movement between the disk 70 and integrated circuit memory 78, and the invention is not limited thereto. The invention is also not limited to the particular processor, input device, output device or interconnection mechanism.
One embodiment of a system in accordance with the invention will now be described in connection with FIG. 7. Such a system may be implemented using a high-level computer programming language on a computer system, such as described above in connection with FIGS. 5 and 6. One module of this system is the model generator 100 which receives user input 101, typically via a graphical user interface, and in response generates the structured objects that are used to define a boundary within a coordinate space. The model generator 100 may be, for example, matte generation software, such as found in Matador from Avid Technology Europe Ltd., formerly Parallax, of London, England, Elastic Reality from Avid Technology, formerly from Elastic Reality in Madison, Wisconsin, and Photoshop from Adobe Systems, Inc., in Mountain View, California. Such systems allow a user to generate a matte using an input device. The matte may be output to a file as a representation of a boundary. The boundary so generated, as indicated at 102, may be stored in the memory 54 of the computer.
Another module is the edge characteristic definition module 104 which receives user input 103 and which enables a user to define an edge characteristic function as indicated at 106. An edge characteristic function may be associated with one or more characteristic points on a boundary defined by a structured object. An interpolation may be performed, or the edge characteristic may be defined, so as to provide a single definition which applies to the whole object as a function of a position on the boundary. The definition of the edge characteristic function also may be stored in the computer memory 54. The combination of edge characteristic function 106 and boundary 102 is sufficient to describe a delineation of a region of interest in the coordinate space in which the structured object is defined.
An anchor and distance calculation module 108 determines, in a manner described in more detail below in connection with FIGS. 9A-9D, for each point in a coordinate space, the point on the boundary to which it is closest (herein called an anchor), along with a distance measure to that boundary. The anchor and distance for each point, as indicated at 110, is stored in the memory 54 of the computer. The distance measure may be a signed distance, for example. A signed distance is a distance measure which is negative or positive, rather than an absolute value, and is useful for indicating a direction, for example.
It is possible to provide a collection of structured object definitions in this invention. In some applications, it may be preferable to pre-compute the anchor and distance values for a given structured object. In such a case, the combination of boundary definitions 102 and anchor and distance values 110 are stored together in the memory 54 for later use. Edge characteristic functions 106 can be defined at a later time. Additionally, a set of structured objects and corresponding edge characteristics 106 may be combined to provide, for example, a library of visual effects.
An image processor 112 receives the boundary definition or structured object definition 102, the anchor and distance values 110 and the edge characteristic functions 106 to produce an image 114. The process performed by the image processor is described in more detail below in connection with FIG. 10. It is possible to combine the functions of both the anchor and distance calculator 108 and the image processor 112 into one module that performs both operations on a per pixel, per scanline basis. The image 114 may be representative of data to be displayed, or may be used as a control image in an alpha compositing system 116. Other applications and uses of this image are described in more detail below.
FIGS. 8A and 8B, which illustrate example edge characteristic definitions in more detail, will now be described. These definitions can be made using a program editor such as EMACS to generate computer code defining the edge characteristic as a procedure receiving two arguments, the anchor and the distance.

In this implementation, shown in "C" pseudocode, the edge characteristic of the region is determined by the function Color PickColor(double dist, double anch), where dist is the signed distance of the current point P from the edge C of the region and anch is the anchor of P from C. Anch is a normalized value between 0 and 1, with respect to a predetermined point on the boundary C. Hence the values 0 and 1 of anch both represent the same point on C when C is closed. Similarly, dist is a value between -1 and 1, with the convention that dist is negative if P is outside the region and positive if P is inside the region defined by C. A maximal footprint (MAXDIST) is used for the edge characteristic so that dist*MAXDIST gives the actual distance of P from C.
A simple edge characteristic function is shown in FIG. 8A. This function is independent of anch. The color this edge characteristic function returns is radially symmetric (i.e., this function is the same function along any radius of constant anchor). On any path of constant anchor, the color returned is on a straight ramp between colorA (the color at dist=1) and colorB (the color at dist=-1). By carefully choosing the two end colors, and possibly the ramp between them, one can produce quality delineations of regions like soft edges, etc.
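A minimal C sketch of such a radially symmetric edge characteristic is shown below; the Color type, the two color constants and the linear-interpolation helper are assumptions for illustration and are not the listing of FIG. 8A.

    /* Sketch of a radially symmetric edge characteristic: the result ignores
       the anchor and ramps linearly from colorB (dist = -1, far outside) to
       colorA (dist = 1, deep inside).  Names and types are assumed. */
    typedef struct { double r, g, b; } Color;

    static const Color colorA = { 1.0, 1.0, 1.0 };   /* color inside the region  */
    static const Color colorB = { 0.0, 0.0, 0.0 };   /* color outside the region */

    static Color lerp_color(Color x, Color y, double t)   /* t in [0,1] */
    {
        Color c = { x.r + t * (y.r - x.r),
                    x.g + t * (y.g - x.g),
                    x.b + t * (y.b - x.b) };
        return c;
    }

    Color PickColor(double dist, double anch)
    {
        (void)anch;                        /* this characteristic ignores anch */
        double t = (dist + 1.0) * 0.5;     /* map dist in [-1,1] onto [0,1]    */
        return lerp_color(colorB, colorA, t);  /* straight ramp colorB->colorA */
    }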
The power and flexibility of this image representation is demonstrated by the definition of an edge characteristic function as shown in FIG. 8B. In this edge characteristic function, MixColor() is a simple blend function which mixes colorA with another color which is obtained by mixing colorB and colorC. FRamp() is a smooth function which maps [-1,1] to [0,1], the domain of the function MixColor(). FRamp() returns 0 at -1, 1 on the interval [WHITEOUT, 1], and it is a smooth cubic ramp on the interval [-1, WHITEOUT], where WHITEOUT is a constant, e.g., 0.5.

In the definition of PickColor(), the color is chosen by an application of the MixColor() function with two carefully chosen arguments. The first argument, FRamp(dist), ensures a radially smooth transition of colors in the region. With the flat portion on [WHITEOUT, 1], the center of the region is assigned a uniform color (colorA). Moving away from the center, the color changes to another color, which is controlled by the second argument:

    (anch < 0.5) ? anch*2 : (1-anch)*2

which means that if anch is less than 0.5, the value is anch*2, else the value is (1-anch)*2. This argument defines a hat-shaped function which is zero at anch=0 or 1 and rises to 1 at anch=0.5. This function causes the color at the outermost rim of the delineated region to change from colorB (anch=0) to colorC (anch=0.5) then back to colorB (anch=1.0). The fact that the anchor value anch is periodic means that PickColor() should also be periodic with respect to anch.
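The following C sketch reconstructs this anchor-dependent characteristic under stated assumptions: the smoothstep-style cubic used for FRamp() and the two-level blend inside MixColor() are plausible shapes consistent with the description, not the actual listing of FIG. 8B.

    /* Hedged sketch of a FIG. 8B-style characteristic.  FRamp() is 0 at
       dist = -1, 1 on [WHITEOUT, 1] and a smooth cubic ramp in between; the
       second argument of MixColor() picks a rim color between colorB and
       colorC.  All concrete bodies and colors are assumptions. */
    #define WHITEOUT 0.5

    typedef struct { double r, g, b; } Color;

    static const Color colorA = { 1.0, 1.0, 1.0 };   /* uniform center color   */
    static const Color colorB = { 0.0, 0.0, 1.0 };   /* rim color at anch=0, 1 */
    static const Color colorC = { 1.0, 0.0, 0.0 };   /* rim color at anch=0.5  */

    static Color lerp_color(Color x, Color y, double t)
    {
        Color c = { x.r + t * (y.r - x.r),
                    x.g + t * (y.g - x.g),
                    x.b + t * (y.b - x.b) };
        return c;
    }

    static double FRamp(double dist)
    {
        if (dist >= WHITEOUT) return 1.0;
        double t = (dist + 1.0) / (WHITEOUT + 1.0);  /* [-1,WHITEOUT] -> [0,1] */
        return t * t * (3.0 - 2.0 * t);              /* smooth cubic ramp      */
    }

    static Color MixColor(double towardA, double bcMix)
    {
        Color rim = lerp_color(colorB, colorC, bcMix);   /* mix colorB, colorC  */
        return lerp_color(rim, colorA, towardA);         /* then blend to colorA */
    }

    Color PickColor(double dist, double anch)
    {
        double hat = (anch < 0.5) ? anch * 2.0 : (1.0 - anch) * 2.0;  /* 0 at 0/1, 1 at 0.5 */
        return MixColor(FRamp(dist), hat);
    }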
Given the definition of a structured object, the process of determining the signed distance and anchor on the curve defined by the structured object for each picture element in the coordinate space will now be described in connection with FIGS. 9A through 9D. FIG. 9A is a flow chart describing how anchor and distance values may be calculated using the structured object definition 102. FIG. 9B is a schematic illustration of a coordinate space R with a region delineated using a structured object. Examples of graphical representations of edge characteristic functions for points within the region are depicted to the right of the figure.
In this process, the curve is parametrized such that each point defining the edge of the region is assigned a value i. There are many ways to parametrize a curve. In the embodiment described herein, an initial point on the curve is assigned parameter i=0, and the final point on the curve is assigned the parameter i=1. The purpose of the process defined in FIG. 9A is to determine, for each point P in the coordinate space R, 1) the point Q on the edge defining the region which is the nearest point on the edge to point P, and 2) the signed distance between point P and point Q.
The first step in this process is step 200, initializing x and y values, representing pixel coordinates for the points in the curve to be scanned, as well as the parameter value i of the curve. Typically, this initialization sets corresponding variables to zero. Of course other initial values may be used. Next, the minimum distance value and an initial anchor are set to initial values in step 202. For example, the minimum distance value may be set to the highest possible value which is expected using the given coordinate space, and the anchor is set to indicate the lowest parameter value, e.g., zero.

Given the initialized values, the signed distance δ between the current point (x,y) and the current point i on the curve is determined in step 204. If the distance δ so calculated is less than or equal to the minimum distance presently stored, as determined in step 206, the minimum distance for the current point is set to this calculated distance δ in step 208 and the anchor value for this point (x,y) is set to the current parameter value i. The parameter value i is then updated in step 210 by incrementing by a value N, which is 1/(M-1), where M is the number of points on the curve. If all of the points on the curve have not been analyzed, as determined in step 212, the loop of steps 204, 206, 208 and 210 is repeated.
When all of the points on the curve have been analyzed, the minimum distance for the present point (x,y) and its anchor have now been defined. The next point in the x direction is then selected by incrementing the x value in step 214 and resetting the parameter value i to its initial value, e.g., zero. All points in the scan line are then visited by continuing to perform steps 202 through 214 until the end of the scan line is reached as determined by step 216. When the end of the scan line is reached, the y value is incremented in step 218 and the x value is initialized, e.g., to zero. Each scan line is processed in accordance with steps 202 through 218 until all scan lines have been processed as determined by step 220. Upon the completion of step 220, the minimum distance for each point and its anchor is determined.

The process described in connection with FIG. 9A is illustrated for only two dimensions. An extension of this process to three dimensions is relatively simple by adding an additional loop having steps similar to steps 218, 220 and a link back to step 202 for the third dimension to be processed.
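A compact C sketch of this brute-force scan is given below; the curve is assumed to be supplied as an array of M sample points and signed_distance() is a placeholder for the chosen signed metric (discussed next), so only the loop structure of FIG. 9A is illustrated.

    /* Brute-force delineation map in the spirit of FIG. 9A: visit every curve
       sample for every pixel and keep the nearest one.  The Point type, the
       curve-as-array representation and signed_distance() are assumptions. */
    #include <math.h>

    typedef struct { double x, y; } Point;

    /* Placeholder for a signed distance from pixel p to curve point q. */
    extern double signed_distance(Point p, Point q);

    void delineation_map(const Point *curve, int M,      /* M curve samples        */
                         int width, int height,
                         double *anchor, double *dist)   /* width*height outputs   */
    {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                Point p = { (double)x, (double)y };
                double best = HUGE_VAL;                  /* current minimum        */
                double best_anchor = 0.0;
                for (int m = 0; m < M; m++) {            /* parameter i = m/(M-1)  */
                    double d = signed_distance(p, curve[m]);
                    if (fabs(d) <= fabs(best)) {         /* keep the nearest point */
                        best = d;
                        best_anchor = (double)m / (double)(M - 1);
                    }
                }
                anchor[y * width + x] = best_anchor;
                dist[y * width + x]   = best;
            }
        }
    }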

In step 204, a distance metric is used to measure the distance between a point
in the
coordinate space and points on the curve. A simple distance metric to use is
the Euclidean
distance, or the squared Euclidean distance which avoids performing a square
root operation.
The Euclidean distance δ of a point (x1, y1) to another point (x2, y2) in a coordinate space is represented by the following equation:

    ||δ|| = sqrt((x1 - x2)^2 + (y1 - y2)^2) in two-dimensional space; and

    ||δ|| = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2) in three-dimensional space.
Many other distance metrics are known for measuring a difference or a
similarity
between two points. The Euclidean distance is merely an example of distance
metrics to be used.
This distance may be normalized to a range of [-1, 1] by dividing the
Euclidean distance by a
factor MAXDIST, indicating a maximum possible distance in the region. The
distance
calculated may be a signed distance. One way to select a sign for the distance
is to consider the
curve as dividing the coordinate space into two regions. This is done by
viewing the curve as
moving forward in the coordinate space as the parameter value i increases. In
a plane, this curve
then divides the plane into two regions, one region to the right of the curve, another region to the left of the curve. In such a case, the signed distance between a point P in
the coordinate space
and the curve is defined to be positive if point P is on the right hand side
of curve C; and negative
if point P is on the left hand side of curve C.
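One common way to realize this left/right convention, sketched in C below, is the sign of the 2D cross product of the local curve direction with the offset from the curve point to P; this particular construction is an assumption, since the text only states the convention, not its implementation.

    /* Hedged sketch of a side test: the 2D cross product of the curve's
       forward direction (q_next - q_i) with the offset (p - q_i) tells which
       side of the advancing curve the pixel lies on.  In y-up coordinates a
       negative cross product means the right-hand side; with raster (y-down)
       coordinates the sense flips.  Names are illustrative only. */
    typedef struct { double x, y; } Point;

    double side_sign(Point p, Point q_i, Point q_next)
    {
        double dx = q_next.x - q_i.x, dy = q_next.y - q_i.y;  /* curve direction */
        double ox = p.x - q_i.x,      oy = p.y - q_i.y;       /* offset to pixel */
        double cross = dx * oy - dy * ox;
        return (cross <= 0.0) ? 1.0 : -1.0;   /* +1: right of the curve, -1: left */
    }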
Concerning the identification of the anchor, i.e., the point on the curve
which is closest
to a given point on the coordinate space, it is possible that two or more
points on the curve may
have the same distance to any given point in the coordinate space. In such a
situation, one point
should be selected. The process of selecting this point can be performed in
the comparison and
selection steps in steps 206 and 208 in Fig. 9A. In the embodiment shown in
Fig. 9A, the point
on the curve having the highest parameter value i is selected as the anchor.
By making the
comparison in step 206 only a "less than" comparison, the point on the curve
with the lowest
parameter value i could be selected. The same effect could be obtained by
changing the

orientation of the parametrization, or by changing the initialization and
update steps 200 and 210,
respectively.
The result of the process of FIG. 9A is information stored in memory for every point P in the coordinate space, representing two unique scalars:

    i, called the anchor of P; and
    δ, its signed distance.

This mapping of point P to the ordered pair (i, δ) is called a delineation map of the curve C. The delineation map of a curve C can be represented by a variety of media, such as colors. For example, a color map may be defined which assigns a color based on the anchor and signed distance with respect to the curve. Thus, a region is delineated via the delineation map.
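As a small illustration of how such a delineation map might be held in memory and rendered, a C sketch follows; the record layout, type names and function-pointer signature are assumptions, not the patent's data format.

    /* Assumed layout for a delineation map: one (anchor, signed distance)
       record per pixel, rendered by applying an edge characteristic function
       to each record.  Names and types are illustrative. */
    #include <stddef.h>

    typedef struct { double r, g, b; } Color;
    typedef struct { double anchor; double dist; } Delineation;   /* pair (i, delta) */
    typedef Color (*EdgeCharacteristicFn)(double dist, double anch);

    void render_delineation(const Delineation *map, size_t npixels,
                            EdgeCharacteristicFn f, Color *out)
    {
        for (size_t k = 0; k < npixels; k++)
            out[k] = f(map[k].dist, map[k].anchor);   /* color from (anchor, distance) */
    }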
The brute force approach to the calculation of the anchor and signed distance in FIG. 9A involves order O(M·x·y) computation time, where M is the number of parameters representing the curve, and x·y represents the number of pixels in the image. A more efficient calculation is desirable.
Such a process will now be described in connection with FIGS. 9C and 9D. Similar to the embodiment of FIG. 9A, in this embodiment, in two dimensions a curve C is parametrized by a parametric value i which represents a pair of coordinates (x,y). Since a computer processes discrete pixels, in essence the curve C is approximated by a piecewise linear curve where each piece is represented by parameter i and has two vertices - the point (x,y) corresponding to parameter i and the point (x,y) corresponding to parameter i-1. The curve C between successive vertices is assumed to be all the points lying on a straight line between the two vertices.
While computing the distance from a point P to a curve C may involve computing with infinitely many points on the curve, computing the distance from P to its piecewise linear approximant involves work only in the order of the number of vertices, as demonstrated by the process of FIG. 9A. The distance from P to C is defined to be the shortest distance from P to any point on C. There are two kinds of distances - radial distances and orthogonal distances. A radial distance is the distance, using some predefined metric such as the Euclidean metric, between P and any vertex of C. A radial distance can be visualized as defining a circular disk with a vertex as a center. Orthogonal distance is the distance, using the metric used to compute the radial distance, between P and each straight line segment between two adjacent vertices of the piecewise linear curve. An orthogonal distance can be visualized by an infinite line drawn through P and in the direction orthogonal to the line segment. If this infinite line intersects the line segment at some point Q between the two vertices defining the line segment, then the orthogonal distance is defined and is equal to the distance between P and Q. On the other hand, if the point Q is not between the two vertices defining the line segment, the orthogonal distance between P and the line segment is undefined.
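A C sketch of this vertex-pair test is given below; it reports whether the orthogonal distance is defined (the foot of the perpendicular falls between the two vertices) and, if so, returns its value. The helper names are assumptions.

    /* Sketch of the orthogonal-distance test: project P onto the infinite line
       through vertices V0 and V1; the distance is defined only when the foot
       of the perpendicular lies between the two vertices. */
    #include <math.h>

    typedef struct { double x, y; } Point;

    /* Returns 1 and stores the orthogonal distance in *d when it is defined,
       otherwise returns 0. */
    int orthogonal_distance(Point P, Point V0, Point V1, double *d)
    {
        double ux = V1.x - V0.x, uy = V1.y - V0.y;          /* segment direction  */
        double len2 = ux * ux + uy * uy;
        if (len2 == 0.0)
            return 0;                                        /* degenerate segment */
        double t = ((P.x - V0.x) * ux + (P.y - V0.y) * uy) / len2;
        if (t < 0.0 || t > 1.0)
            return 0;                                        /* foot outside V0-V1 */
        double qx = V0.x + t * ux, qy = V0.y + t * uy;       /* foot of the normal */
        *d = hypot(P.x - qx, P.y - qy);
        return 1;
    }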
As described below in connection with FIG. 9D, this process actually determines the reverse. That is, given a scanline (*,y), where * stands for any x value between the left and right limits of the image, this process determines the x values on the scanline which have a well-defined orthogonal distance with a line segment defined by two vertices of the curve C. Additionally, the smallest orthogonal distance produced by these valid x values is determined.
In the process of this embodiment, better performance is obtained by traversing the curve, vertex by vertex, and updating the distances and anchors in an array representing a scanline only as an update is needed between scanlines, instead of by computing a distance and anchor for each pixel in every scanline. This process takes advantage of the fact that the distance and anchor values depend on the maximum distance from the curve in which pixels are affected by the edge characteristics, defined by a value MAXDIST, and that only a few pixels in a scanline can be affected by a given point on the curve. Thus, the process of updating distance and anchor values for each scanline only as needed is accomplished by two steps. Referring now to FIG. 9E, first, for radial distances from the vertices, only those pixels within a maximum distance MAXDIST need to be updated. For example, only those points (between x0 and x1) on the scanline that are within a radial distance MAXDIST from a point Pi on the curve need to be updated. Second, the x-intercepts on the current scanline of two lines which pass through the endpoints of each line segment (e.g., defined by Pi and Pi+1) and which are normal to the line segment are computed easily. The orthogonal distance between points on the scanline and the line segment is also computed easily. Only those pixels between the two intercepts (e.g., x2 and x3) need to be updated when the orthogonal distance is smaller than a maximum distance MAXDIST.
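For the radial case, the footprint of a single vertex on a scanline reduces to a one-dimensional interval; a C sketch of that computation follows, where the MAXDIST value and the names are assumptions.

    /* Sketch: given a curve vertex (px, py) and a scanline y, compute the x
       interval [x0, x1] of pixels that can lie within radial distance MAXDIST
       of the vertex.  Returns 0 when the scanline is out of range entirely. */
    #include <math.h>

    #define MAXDIST 16.0   /* assumed edge-characteristic footprint, in pixels */

    int radial_footprint(double px, double py, double scanline_y,
                         double *x0, double *x1)
    {
        double dy = scanline_y - py;
        if (fabs(dy) > MAXDIST)
            return 0;                       /* vertex cannot affect this scanline */
        double half = sqrt(MAXDIST * MAXDIST - dy * dy);
        *x0 = px - half;                    /* leftmost affected x  */
        *x1 = px + half;                    /* rightmost affected x */
        return 1;
    }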
Referring now the Fig. 9C, in this embodiment, an image is processed one
scanline at a
time. Thus, the first step is setting a y value to zero, in step 230. Two
arrays are then generated
for the scan line, in step 232. A first array is called the distance table and
holds the current
3o minimum distance to the curve for each point x along the scanline. The
second array is called
the anchor table and holds the current closest point on the curve for each
point x along the
scanline. Thus, the distance table and the anchor table are x-entry arrays,
where x is the number

CA 02248721 1998-09-04
S~~mE~,.~~_
-17-
of pixels in a scanline. The process of generating these arrays is described
below in connection
with FIG. 9D. Then, for each pixel in the scan line, a color is selected by
steps 234 through 240.
In particular, x is initialized to zero in step 234. The color for the pixel
defined by the current x
value and the current scanline is determined according to the distance and
anchor values in the
distance table and the anchor table for the current x value (step 236). X is
incremented in step
238 and these steps are performed until the end of the scanline is reached as
determined in step
240. These steps 234 to 240 may be performed separately, as part of the image
processing
module described below in connection with Fig. 10 if the distance table and
anchor table are
stored for each scanline. The y value is then incremented in step 242 to
process subsequent
scanlines until the final scanline has been processed as determined in
step 244.
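
The per-scanline structure of FIG. 9C can be sketched as follows in Python. The helpers build_tables and color_for are assumed names standing in for the table generation of FIG. 9D and the edge characteristic evaluation of FIG. 10; they are not named in the patent.

def render(curve_points, width, height):
    image = [[None] * width for _ in range(height)]
    for y in range(height):                               # steps 230, 242, 244
        dist, anchor = build_tables(curve_points, y, width)    # step 232
        for x in range(width):                            # steps 234, 238, 240
            image[y][x] = color_for(x, y, dist[x], anchor[x])  # step 236
    return image
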
Referring now to FIG. 9D, the process of generating the distance and anchor
tables will
now be described. The first step of this process is determining whether the
current scanline is
within a bounding box defining the region of the image affected by the curve
and its associated
edge characteristics (step 246). In step 246, the scanline value y is compared
to the minimum
and maximum y values defining the bounding box. If the scanline is outside
of the bounding
box, the anchor and distance values are set to a default value, such as
infinity, in step 248.
If the scanline is within the bounding box, a parameter value, e.g., i, is
initialized in step
250 to an initial value, e.g. zero. The pixel coordinates (x,y) of a point P
corresponding to the
current parameter i are then obtained in step 252. The distance along the y
axis between this
point and the current scanline is then compared to the value MAXDIST in
step 254 to determine
if this point may affect the distance and anchor values of the current
scanline. Thus, if the
distance is less than MAXDIST, the distance table and anchor table arrays are
updated by the
radial distances from point P for only those points within MAXDIST along the x
axis of the
point P in step 256. Thus, a distance, such as the Euclidean distance, is
calculated between the
point P and each point in the scanline between x - MAXDIST and x + MAXDIST,
where x is the
x coordinate of point P. This distance is compared to the current distance in
the distance table
for the point, and the distance table and anchor table are updated, using steps
similar to steps 206
and 208 in FIG. 9A. Steps 252 through 256 are repeated for each parameter i,
or each point
along the curve, by incrementing i (step 258) until all points on the curve
have been evaluated as
determined in step 260.
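
A minimal sketch of this first pass in Python, assuming the curve is supplied as a list of (x, y) points, one per parameter value i, and that the distance and anchor tables were initialized to infinity and a default anchor:

import math

def radial_update(curve_points, y, dist, anchor, maxdist):
    """Steps 250-260 of FIG. 9D: for each curve point P whose y distance to
    the scanline is below MAXDIST, update the distance and anchor tables for
    the scanline pixels within MAXDIST of P along x."""
    for p in curve_points:                     # steps 250, 258, 260
        px, py = p
        if abs(py - y) >= maxdist:             # step 254
            continue
        lo = max(0, int(px - maxdist))
        hi = min(len(dist) - 1, int(px + maxdist))
        for x in range(lo, hi + 1):            # step 256
            d = math.hypot(x - px, y - py)     # Euclidean distance to P
            if d < dist[x]:                    # cf. steps 206 and 208, FIG. 9A
                dist[x] = d
                anchor[x] = p
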
The parameter value, e.g., i, is again initialized in step 262 to an initial
value, e.g. zero.
The pixel coordinates (x,y) of a point Q corresponding to the current
parameter i, and pixel
coordinates (x,y) of a point P corresponding to the previous parameter (i-1)
are then obtained in
step 264. The orthogonal projection of the scanline onto the line segment
defined by points P
and Q is then determined and the orthogonal distance to points P and Q from
the scanline is
computed in step 266. The orthogonal distances are compared to the value
MAXDIST in step
268. If both of these lengths are greater than MAXDIST, these points do not
affect the distance
and anchor values of the scanline. Otherwise, the distance table and anchor
table are updated in
step 270 between the minimum x point on the scanline and the maximum x point
on the scanline
corresponding to the orthogonal projections of the scanline on the line
segment defined by points
P and Q. That is, for each point along the scanline between the minimum and
maximum x
points, the orthogonal distance to the line segment defined by points P and
Q is calculated. This
distance is compared to the current distance in the distance table for the
point, and the distance
table and anchor table are updated, using steps similar to steps 206 and 208 in
FIG. 9A. Steps 264
through 270 are repeated for each parameter i, or each point along the curve,
by incrementing i
(step 272) until all points on the curve have been evaluated as determined in
step 274.
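
The second pass can be sketched in Python as below, reusing orthogonal_distance and normal_intercepts from the earlier sketches; the endpoint early-out of step 268 is folded into the per-pixel MAXDIST test here, which is a simplification rather than the patent's exact ordering.

import math

def segment_update(curve_points, y, dist, anchor, maxdist):
    """Steps 262-274 of FIG. 9D: for each segment (P, Q) of the curve, update
    the tables for pixels between the x-intercepts of the normals through P
    and Q whenever the orthogonal distance is below MAXDIST."""
    for i in range(1, len(curve_points)):      # steps 262, 272, 274
        p, q = curve_points[i - 1], curve_points[i]      # step 264
        rng = normal_intercepts(y, p, q)       # step 266 (vertical case omitted)
        if rng is None:
            continue
        lo = max(0, int(math.floor(rng[0])))
        hi = min(len(dist) - 1, int(math.ceil(rng[1])))
        for x in range(lo, hi + 1):            # step 270
            result = orthogonal_distance((x, y), p, q)
            if result is None:
                continue
            d, proj = result
            if d < maxdist and d < dist[x]:
                dist[x] = d
                anchor[x] = proj
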
Having now described how the anchor and distance values are determined for
points in
a region, the use of the boundary definitions, edge characteristics and anchor
and distance values
by image processor 112 to produce an image 114 will now be described in
connection with Fig.
10. Given predefined edge characteristic functions for selected points on the
boundary, the first
step of image processing is to interpolate these edge characteristic functions
(step 300) to
determine an edge characteristic function for each point on the boundary.
This step is not
necessary where the edge characteristic function is independent of the anchor,
such as in FIG.
8A, or is defined for the entire object as a function of the anchor. Given the
parametrization of
the curve, there should be an edge characteristic function for each value of
the parameter. Next,
x and y values are initialized, e.g., to zero and zero, respectively, to allow
for scan line
processing of the image. For each point (x,y), the edge characteristic
function assigned to the
anchor of that point is selected in step 304. This function is applied to the
signed distance for
that point in step 306 to obtain a value, herein called a. It should be
understood that a boundary
within the image may limit the area affected by the edge characteristics.
Accordingly, when the
distance is greater than some maximum distance, a default color, such as
black, should be
generated. The combination of all these values a for each point (x,y)
defines the image 114.
Next, the x value is incremented in step 308 to further process the scan line.
Steps 304 through 308 are repeated until the end of the scan line is reached
as determined in step
310. At the end of the scan line, the y value is incremented in step 312, and
the x value is reset
to its initial value, e.g., zero. Each scan line is processed by repeating
steps 304 through 312
until all the scan lines have been processed as determined in step 314. The
image 114 is then
complete upon the completion of step 314.
As discussed above, the image processing function may be combined with the
anchor
and distance calculation. This combination of processing functions can be
implemented by
inserting steps 304 and 306 between steps 212 and 214 in Fig. 9A for example.
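
For one scanline, steps 304 through 308 can be sketched in Python as follows. The helper edge_fn_for is an assumed name mapping an anchor to its (possibly interpolated) edge characteristic function from step 300; the table is assumed to hold signed distances, and beyond MAXDIST the default color is produced.

def apply_edge_characteristics(dist, anchor, maxdist, edge_fn_for, default=0.0):
    row = []
    for x in range(len(dist)):
        if anchor[x] is None or abs(dist[x]) > maxdist:
            row.append(default)                # default color, e.g., black
        else:
            f = edge_fn_for(anchor[x])         # step 304
            row.append(f(dist[x]))             # step 306: the value a for (x, y)
    return row
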
FIG. 11 shows an example image that may be generated. This image is defined
by an
object or curve 320. The edge characteristic defined in a direction indicated
by line 322 is a
longer greyscale ramp than that indicated, for example, at 324. This figure
illustrates that the use
of variable edge characteristics which are a function of a point on the curve
enables the
illustrated effect to be created which is not easily performed by prior art
methods.
Referring now to FIG. 12, an application of this invention will now be
described. FIG. 12 shows a block diagram of a system for alpha compositing of an image.
Such a system includes a blender 80 which receives pixel data such as Ax,y and Bx,y from
respective frame stores 82 and 84. A control image is received from frame store 86.
The control image provides alpha values αx,y to the blender 80 which then performs the
function of equation (1) provided above to output
pixels Cx,y for the resulting image which is placed in frame store 88.
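
Equation (1) is not reproduced here; assuming it is the conventional per-pixel alpha blend C = αA + (1 - α)B, the blender 80 can be sketched in Python as:

def blend(a_pixel, b_pixel, alpha):
    """Blend one pixel of A over B using the control value alpha for that pixel."""
    return tuple(alpha * a + (1.0 - alpha) * b for a, b in zip(a_pixel, b_pixel))

# e.g. blend((255, 0, 0), (0, 0, 255), 0.25) -> (63.75, 0.0, 191.25)
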
The compositing system of Fig. 12 may be implemented to process continuous
synchronized video signals such as in the Kellar patent, or to process streams of digital video
information which are intermittent and flow controlled such as in systems available from Avid
Technology, and as described in PCT Publication No. WO 94/24815.
There are other applications of the invention in the representation of images and data.
In general, such applications involve representing values within a coordinate space as a function
of the distance to a nearest point on a boundary of that space. As an example, the probability
that a point of land may be under water may be defined with reference to a distance to a point at
a current boundary of a body of water to which the given point is closest. For example, various
points along the boundary may be assigned different probability functions. As an additional
example, the movement of a fireball, such as a comet, may be represented as a circle with a long
edge characteristic function defining its tail and randomly modified edge
characteristics to
indicate flames shooting from its core. An image generated by such a
representation might be
blended into another image using alpha compositing.
More complex images may be defined by using such structured objects with an
edge
characteristic function that defines a characteristic of a pixel of an image
as a function of its
distance to a closest point on the structured object. Such an image
representation is also readily
manipulated to allow for a variety of image processing applications,
including image
compositing, special effects, animation and modeling. Additionally, the
structured objects in this
invention have all of the benefits of structured objects and many of the
benefits of a bit map,
particularly in the definition of edges.
Having now described a few embodiments of the invention, it should be apparent
to
those skilled in the art that the foregoing is merely illustrative and not
limiting, having been
presented by way of example only. Numerous modifications and other embodiments
are within
the scope of one of ordinary skill in the art and are contemplated as falling
within the scope of
the invention as defined by the appended claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Inactive: IPC removed 2015-04-15
Inactive: IPC expired 2011-01-01
Inactive: IPC removed 2010-12-31
Time Limit for Reversal Expired 2010-03-05
Letter Sent 2009-03-05
Inactive: IPC from MCD 2006-03-12
Grant by Issuance 2005-12-20
Inactive: Cover page published 2005-12-19
Pre-grant 2005-10-03
Inactive: Final fee received 2005-10-03
Revocation of Agent Requirements Determined Compliant 2005-09-21
Inactive: Office letter 2005-09-21
Inactive: Office letter 2005-09-21
Appointment of Agent Requirements Determined Compliant 2005-09-21
Revocation of Agent Request 2005-09-15
Appointment of Agent Request 2005-09-15
Notice of Allowance is Issued 2005-08-25
Letter Sent 2005-08-25
Notice of Allowance is Issued 2005-08-25
Inactive: IPC removed 2005-08-19
Inactive: IPC removed 2005-08-19
Inactive: First IPC assigned 2005-08-19
Inactive: Approved for allowance (AFA) 2005-06-30
Amendment Received - Voluntary Amendment 2005-03-21
Inactive: S.30(2) Rules - Examiner requisition 2004-09-21
Amendment Received - Voluntary Amendment 2002-09-12
Letter Sent 2002-04-23
Inactive: Delete abandonment 2002-04-18
Inactive: Office letter 2002-04-18
Amendment Received - Voluntary Amendment 2002-04-04
Request for Examination Received 2002-03-05
Request for Examination Requirements Determined Compliant 2002-03-05
All Requirements for Examination Determined Compliant 2002-03-05
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2002-03-05
Letter Sent 1999-03-17
Reinstatement Requirements Deemed Compliant for All Abandonment Reasons 1999-03-08
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 1999-03-05
Inactive: IPC assigned 1998-11-24
Inactive: IPC assigned 1998-11-24
Inactive: First IPC assigned 1998-11-24
Classification Modified 1998-11-24
Inactive: IPC assigned 1998-11-24
Inactive: Notice - National entry - No RFE 1998-11-12
Application Received - PCT 1998-11-09
Application Published (Open to Public Inspection) 1997-09-18

Abandonment History

Abandonment Date Reason Reinstatement Date
2002-03-05
1999-03-05

Maintenance Fee

The last payment was received on 2005-02-22

Note: If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
AVID TECHNOLOGY, INC.
Past Owners on Record
PERRY S. KIVOLOWITZ
SZE-PING WONG
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.

If you have any difficulty accessing content, you can call the Client Service Centre at 1-866-997-1936 or send them an e-mail at CIPO Client Service Centre.


Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative drawing 1998-11-25 1 6
Description 1998-09-03 21 1,211
Drawings 1998-09-03 13 215
Cover Page 1998-11-25 2 92
Abstract 1998-09-03 1 63
Claims 1998-09-03 6 252
Drawings 2005-03-20 13 224
Claims 2005-03-20 5 184
Description 2005-03-20 23 1,312
Representative drawing 2005-11-13 1 9
Cover Page 2005-11-22 1 58
Reminder of maintenance fee due 1998-11-09 1 110
Notice of National Entry 1998-11-11 1 192
Courtesy - Certificate of registration (related document(s)) 1998-11-11 1 114
Courtesy - Certificate of registration (related document(s)) 1998-11-11 1 114
Courtesy - Abandonment Letter (Maintenance Fee) 1999-03-16 1 187
Notice of Reinstatement 1999-03-16 1 172
Reminder - Request for Examination 2001-11-05 1 118
Acknowledgement of Request for Examination 2002-04-22 1 179
Commissioner's Notice - Application Found Allowable 2005-08-24 1 162
Maintenance Fee Notice 2009-04-15 1 171
PCT 1998-09-03 29 1,157
Correspondence 2002-04-17 2 20
Correspondence 2002-04-17 5 225
Fees 1999-03-07 2 74
Correspondence 2005-09-14 1 43
Correspondence 2005-09-20 1 17
Correspondence 2005-09-20 1 17
Correspondence 2005-10-02 1 34