CA 02307352 2000-05-01
SYSTEM AND METHOD FOR DISPLAYING A THREE-DIMENSIONAL OBJECT
USING MOTION VECTORS TO GENERATE OBJECT BLUR
FIELD OF THE INVENTION
The invention relates to the field of information handling
systems and, more particularly, to a system and method for
displaying a three-dimensional object. Still more particularly,
the invention relates to using motion vectors to generate a
blurring effect for an object.
BACKGROUND OF THE INVENTION
Three-dimensional (3D) graphics systems are used for a variety
of applications, including computer-assisted drafting,
architectural design, simulation trainers for aircraft and other
vehicles, molecular modeling, virtual reality applications, and
video games. Three-dimensional systems are often implemented on
workstations and personal computers, which may or may not include
3D graphics hardware. In systems which include 3D graphics
hardware, a graphics accelerator card typically facilitates the
creation and display of the graphics imagery.
A software application program generates a 3D graphics scene,
and provides the scene, along with lighting attributes, to an
application programming interface (API). Current APIs include
OpenGL, PHIGS, and Direct3D. A 3D graphics scene consists of a
number of polygons which are delimited by sets of vertices. The
vertices are combined to form larger primitives, such as triangles
or other polygons. The triangles (or polygons) are combined to
form surfaces, and the surfaces are combined to form an object.
Each vertex is associated with a set of attributes, typically
AUS9-1998-0492 1
including: 1) material color, which describes the color of the
object to which the vertex belongs; 2) a normal vector, which
describes the direction to which the surface is facing at the
vertex; and 3) a position, including three Cartesian coordinates x,
y, and z. Each vertex may optionally be associated with texture
coordinates and/or an alpha (i.e., transparency) value. In
addition, the scene typically has a set of attributes, including:
1) an ambient color, which typically describes the amount of
ambient light; and 2) one or more individual light sources. Each
light source has a number of properties associated with it,
including a direction, an ambient color, a diffuse color, and a
specular color.
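The per-vertex and per-scene attributes listed above can be modelled with a small data structure. The following is an illustrative sketch only; the field names are assumptions, not part of the described system:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    position: tuple                 # Cartesian coordinates (x, y, z)
    normal: tuple                   # direction the surface faces at this vertex
    material_color: tuple           # color of the object the vertex belongs to
    texture_coords: tuple = None    # optional texture coordinates
    alpha: float = 1.0              # optional transparency value

@dataclass
class LightSource:
    direction: tuple
    ambient_color: tuple
    diffuse_color: tuple
    specular_color: tuple

@dataclass
class Scene:
    ambient_color: tuple            # amount of ambient light
    lights: list = field(default_factory=list)  # individual light sources
```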
Rendering is employed within the graphics system to create
two-dimensional image projections of the 3D graphics scene for
display on a monitor or other display device. Typically, rendering
includes processing geometric primitives (e.g., points, lines, and
polygons) by performing one or more of the following operations as
needed: transformation, clipping, culling, lighting, fog
calculation, and texture coordinate generation. Rendering further
includes processing the primitives to determine component pixel
values for the display device, a process often referred to
specifically as rasterization.
In some 3D applications, for example, computer animation and
simulation programs, objects within the 3D graphics scene may be in
motion. In these cases, it is desirable to simulate motion blur
for the objects that are in motion. Without motion blur, objects
in motion may appear to move jerkily across the screen.
Similar techniques are also commonly used to blur objects when
simulating depth of field. Objects which are within the "field of
view" are left un-blurred, while objects which are closer or
farther away are blurred according to their distance from the
camera (i.e. viewer).
A prior art method for simulating object blur includes the use
of an accumulation buffer. The accumulation buffer is a
non-displayed buffer that is used to accumulate a series of images
as they are rendered. An entire scene (i.e. each object, or
primitive, in the scene) is repeatedly rendered into the
accumulation buffer over a series of time slices. The entire scene
is thus accumulated in the accumulation buffer, and then copied to
a frame buffer for viewing on a display device.
A prior art method for using an accumulation buffer to
simulate object blur is illustrated in Figure 1. As shown in
Figure 1, a time period is divided into "n" time slices (step 100).
The time period is the amount of time during which a scene is
visible on a display device, and is analogous to the exposure
interval, or shutter speed, of a video camera shutter. A longer
shutter speed corresponds to a greater amount of blurring, whereas
a shorter shutter speed corresponds to a lesser amount of blurring.
A time-slice count is set to one (step 102). Next, an object (i.e.
primitive) is selected for rendering (step 104). The location,
color, and all other per-vertex values are calculated for each
vertex in the object for this particular time slice (step 106).
The object is then rendered into a color buffer (step 108). A
check is made to determine if the object rendered is the last
object in the scene (step 110). If not, the process loops back to
step 104, and is repeated for each object in the scene.
If the last object in the scene has been rendered (i.e. the
answer to the question in step 110 is "yes"), the scene is
accumulated (step 112), meaning it is scaled (for example, by 1/n)
and copied into the accumulation buffer. The time-slice count is
checked to see if it is equal to n (step 114). If not, the time
slice count is incremented (step 116). The process then loops back
to step 104, and is repeated for each time slice. If the
time-slice count is equal to n (i.e. the answer to the question in
step 114 is "yes"), then the accumulation buffer is scaled and
copied to the frame buffer (step 120) and is displayed on a display
screen (step 122).
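The prior-art loop of Figure 1 can be sketched as follows. This is a hypothetical model, not an actual rendering pipeline: `render` stands in for steps 106-108 and simply returns an object's contribution to the color buffer for one time slice.

```python
def accumulate_scene(objects, n, render):
    """Prior-art motion blur: render EVERY object once per time slice,
    scale each full-scene image by 1/n, and sum the results into the
    accumulation buffer (steps 100-122 of Figure 1)."""
    accumulation = 0.0
    for time_slice in range(1, n + 1):       # steps 102, 114, 116
        color_buffer = 0.0
        for obj in objects:                  # steps 104-110
            color_buffer += render(obj, time_slice)  # steps 106-108
        accumulation += color_buffer / n     # step 112: scale and accumulate
    return accumulation                      # step 120: copy to frame buffer

# Every object is rendered n times, so the cost is n * len(objects) renders
# per time period -- the expense the invention seeks to avoid.
```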
The use of an accumulation buffer as described in Figure 1 is
a computationally expensive process, as the entire scene (i.e. each
object in the scene) is rendered "n" times for each time period.
Consequently, it would be desirable to have a system and method for
more efficiently simulating object blur in a three-dimensional
graphics environment.
SUMMARY OF THE INVENTION
Accordingly, the present invention is directed to a system,
method, and computer-usable medium for simulating object blur using
motion vectors. A motion vector, or array of motion vectors, may
be specified on either a per-vertex or per-primitive (i.e.
per-object) basis. A motion vector is opposite to the direction of
the motion, and thus points in the direction of the blur. The
magnitude of the motion vector represents the distance the vertex
or the primitive (i.e., each vertex in the primitive) travels in one
unit of time.
When a scene is rendered, only those objects which are in
motion, or which are subject to depth of field blurring, are
rendered over a series of time slices. All objects which are
static (i.e. non-blurred) are rendered directly into a color
buffer, rather than being repeatedly rendered over a series of time
slices. Thus, static (i.e. non-blurred) objects are rendered only
once, while objects which are to be blurred are rendered over a
series of time slices. This increases the efficiency of the
rendering process while simulating object blur of the objects which
are in motion and/or subject to depth of field blurring.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other features and advantages of the present
invention will become more apparent from the detailed description
of the best mode for carrying out the invention as rendered below.
In the description to follow, reference will be made to the
accompanying drawings, where like reference numerals are used to
identify like parts in the various views and in which:
Figure 1 is a flow chart illustrating a prior art method for
simulating object blur;
Figure 2 is a representative system in which the present
invention may be implemented;
Figure 3 depicts a moving object, including a motion vector,
within a static scene;
Figure 4 depicts a moving object, including an array of motion
vectors, within a static scene; and
Figures 5A and 5B are flow charts illustrating a method for using
motion vectors to simulate object blur in accordance with the
present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
A representative system in which the present invention may be
implemented is illustrated in Figure 2. Information handling
system 200 includes one or more processors 202 coupled to a
processor or host bus 204. Cache memory 206 may also be coupled to
host bus 204. A bridge/memory controller 208 provides a path
between host bus 204 and system memory 210, as well as a path
between host bus 204 and peripheral bus 212. Note that system
memory 210 may include both read only memory (ROM) and random
access memory (RAM). Accumulation buffer 214 is included within
system memory 210. Alternately, accumulation buffer 214 can be
included in graphics adapter 218. In one embodiment, peripheral
bus 212 is a PCI bus or other bus suitable for supporting high
performance graphics applications and hardware. Graphics adapter
218 is coupled to peripheral bus 212, and may include local memory
portion 220, and frame buffer 221. System 200 may or may not
include graphics adapter 218, and if graphics adapter 218 is not
present in system 200, then frame buffer 221 may be included in
system memory 210 or in video controller 216. Video controller 216
is coupled to display device 222, and is configured to refresh
display device 222 with a graphics image stored in frame buffer
221. Note that graphics adapter 218 may be suitably integrated in
a single device with video controller 216.
The present invention is a system, method, and computer-usable
medium for simulating object blur using motion vectors. A motion
vector, or array of motion vectors, may be specified on either a
per-vertex or per-primitive (i.e. per-object) basis. A motion
vector is opposite to the direction of the motion, and thus points
in the direction of the blur. The magnitude of the motion vector
represents the distance the vertex or the primitive (i.e. each
vertex in the primitive) travels in one unit of time.
When a scene is rendered, only those objects which are in
motion are rendered over a series of time slices. All objects
which are static (i.e. nonmoving) are rendered directly into a
color buffer, rather than being repeatedly rendered over a series
of time slices. In the prior art, every object in a scene (whether
static or in motion) is rendered over a series of time slices, and
then accumulated, as discussed above in the Background Of The
Invention section herein. The present invention renders static
objects only once, and only performs rendering over a series of
time slices for those objects which are in motion. This increases
the efficiency of the rendering process while simulating object
blur of the objects which are in motion. The present invention may
also be used to simulate object blur associated with depth of field
blurring.
Referring to Figure 3, an example of a linearly moving object
within a static scene will now be described. While many objects
within a typical scene may be in motion, for illustrative purposes
Figure 3 depicts a single object 300 in motion within a static
scene 302. Note that motion vector 304 is opposite to the
direction of motion 306. The magnitude of motion vector 304
represents the distance over which object 300 moves in a predefined
period of time. In the example shown in Figure 3, a single motion
vector 304 has been specified for the object. Thus, motion vector
304 applies to each vertex of object 300. During the predefined
time period, the vertices of object 300 have moved from points a1,
b1, and c1 to points a2, b2, and c2 respectively. The magnitude of
motion vector 304 is thus equal to a2-a1. The magnitude is also
equal to b2-b1, and equal to c2-c1.
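The relationship in Figure 3 can be checked numerically. The sketch below uses made-up coordinates and assumes the motion vector is simply the negated displacement per unit of time, consistent with the definition above (opposite to the motion, magnitude equal to the distance travelled):

```python
def motion_vector(p_start, p_end, time_units=1.0):
    """Motion vector for a vertex moving from p_start to p_end: it points
    opposite to the direction of motion, and its magnitude is the distance
    travelled in one unit of time."""
    return tuple((a - b) / time_units for a, b in zip(p_start, p_end))

def magnitude(v):
    return sum(c * c for c in v) ** 0.5

# Vertex a moves from a1 to a2 during the predefined time period.
a1, a2 = (0.0, 0.0, 0.0), (3.0, 4.0, 0.0)
mv = motion_vector(a1, a2)   # points opposite to the motion
```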
Of course, each vertex of object 300 does not have to be
moving at the same velocity. It is possible to assign a different
motion vector to each vertex of an object. Each vertex may be
moving in a different direction and/or at a different rate of
speed.
Referring to Figure 4, an example of a non-linearly moving
object within a static scene will now be described. As in Figure
3, for illustrative purposes only, a single moving object 400 is
depicted within a static scene 402. There are several motion
vectors 404, 406, 408, and 410 associated with object 400. Each
motion vector has a magnitude equal to a portion of the distance
traveled by object 400 during a predefined time period. In the
example shown, each motion vector 404, 406, 408, and 410 applies to
every vertex of object 400. During the predefined time period, the
vertices of object 400 move from points a1, b1, and c1 to points
a2, b2, and c2 respectively. Each vertex moves uniformly with the
other vertices; however, object 400 (and its vertices) does not move
linearly. Thus, each motion vector 404, 406, 408, and 410
includes a magnitude equal to the distance traveled during a
portion of the predefined time period. As in Figure 3, each motion
vector is opposite to the direction of motion 412. Motion vectors
404, 406, 408, and 410 are referred to as an array of motion
vectors. An array of motion vectors may be assigned to an object,
as shown in Figure 4, in which case the array of motion vectors
applies to every vertex in the object. Alternately, an array of
motion vectors may be assigned to a vertex.
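One way to interpret an array of motion vectors, as in Figure 4, is that each vector covers one segment of the non-linear path. The sketch below is an assumption about how such an array might be evaluated, walking backwards from the final position (the vectors are added directly because each points opposite to the motion over its segment):

```python
def path_from_vectors(end_point, vectors):
    """Recover the vertex positions at each intermediate time step by
    walking backwards from the final position using an array of motion
    vectors (each opposite to the motion over one path segment)."""
    points = [end_point]
    for v in vectors:
        last = points[-1]
        points.append(tuple(p + c for p, c in zip(last, v)))
    return points  # final position first, starting position last

# Hypothetical two-segment path: the object turns partway through the
# predefined time period, so the two vectors differ in direction.
segments = [(-1.0, 0.0, 0.0), (-1.0, -1.0, 0.0)]
pts = path_from_vectors((2.0, 1.0, 0.0), segments)
```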
An Application Programming Interface (API) is preferably
provided in order to allow an application program to specify motion
vectors for objects and vertices. An exemplary OpenGL API is
depicted below. Note that this API is shown for illustrative
purposes only, and is not meant to be limiting. Those skilled in
the art will appreciate that motion vectors may be specified using
a variety of programming techniques. Further, the use of OpenGL as
an API is not meant to be limiting. The present invention may be
implemented using various APIs, including, but not limited to PHIGS
and Direct3D.
An exemplary OpenGL API is as follows:
Overview
This extension allows object blur to occur via a point, line, or
edge along a specified motion vector. The motion vector is
opposite to the direction of motion, thus it points in the
direction of blur. The magnitude of the vector is the distance
each vertex has traveled in one unit of time.
The "glMotionVector*" routines allow the application to specify
motion vectors or arrays of motion vectors on a per-vertex basis or
a per-primitive basis. The "glMotionEnv*" routines allow the
application to specify the duration of motion, the degree of fade
over time, and whether motion blur (i.e. object blur) is enabled or
disabled.
Procedures And Functions
1. void glMotionVector[bsifd]IBM(T xcomponent, ycomponent,
zcomponent)
Purpose: Specify a motion vector for an object
Variables: Three [b]ytes, [s]hortwords, [i]ntegers,
[f]loating point numbers, or [d]ouble precision floats
specifying a 3D vector
2. void glMotionVectorv[bsifd]IBM(T components)
Purpose: Specify a motion vector for a vertex
Variables: An array specifying a 3D vector
3. void glMotionVectorPointerIBM(int size, enum type, sizei
stride, void *pointer)
Purpose: Specify an array of motion vectors for an object or
vertex
Variables: The size, type, and stride of a list of motion
vectors pointed to by the pointer variable
4. void glMotionEnv[if]IBM(GLenum pname, GLfloat param)
Purpose: Specifies the duration and degree of blur
Variables: If pname is equal to GL_MOTION_ENV_FADE_IBM, then
param specifies the degree to which the object is faded over
time. If pname is equal to GL_MOTION_ENV_DELTA_TIME_IBM, then
param specifies the number of units of time to blur.
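The `glMotionEnv*` semantics above suggest per-slice weights that fade over time. The following sketch is purely an assumption about how a renderer might combine a GL_MOTION_ENV_FADE_IBM-style fade parameter with the 1/n accumulation scaling; the exact interpretation of the parameter is not defined by the text:

```python
def slice_weights(n, fade=0.0):
    """Weight of each of the n time slices when accumulating a blurred
    object. fade=0.0 reproduces the uniform 1/n weighting; fade>0
    linearly attenuates older slices, and the weights are then
    renormalised so the accumulated image keeps its overall intensity."""
    raw = [1.0 - fade * (n - 1 - i) / max(n - 1, 1) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]
```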
Referring to Figures 5A and 5B, a flow chart illustrating a
method for using motion vectors to simulate object blur in
accordance with the present invention will now be described. Note
that the steps described in Figures 5A and 5B can be performed
either in software or in hardware (e.g., by a graphics
accelerator), or by a combination of software and hardware. An
object within a scene is defined for rendering (step 500), meaning
that the location, color, motion vector(s), and other attributes
are defined for the object. The environment of the scene, along
with any motion vectors associated with the object, are used to
determine whether the object is static or in motion (step 502).
Note that the determination in step 502 could further include a
determination as to whether or not the object needs to be blurred
due to its depth of field. If the object is not static (i.e. the
answer to the question in step 504 is "no"), then the object is
identified as an "in-motion" object (step 506). If the object is
static (i.e. the answer to the question in step 504 is "yes"), then
the object is rendered into a color buffer. The color buffer can
be any displayed or non-displayed buffer area. For example, the
color buffer may be a portion of system memory 210 or local memory
220 on graphics adapter 218, as described above with reference to
Figure 2.
Referring back to Figures 5A and 5B, a check is made to
determine if the object is the last object in the scene (step 510).
If not (i.e. the answer to the question in step 510 is "no"), then
another object is defined for rendering in step 500. If the object
is the last object in the scene (i.e. the answer to the question in
step 510 is "yes"), then the in-motion objects (i.e. the objects
that require blur) are processed. One skilled in the art will
realize that if there are no "in-motion" objects in the scene, the
color buffer may be copied to the frame buffer at this point, and
the scene may be displayed. For illustrative purposes, the process
depicted in Figures 5A and 5B assumes a combination of static and
"in-motion" objects in the scene.
A predetermined render time period is divided into "n" time
slices (step 512). As discussed above with reference to Figure 1,
the render time period is the amount of time during which a scene
is visible on a display device, and is analogous to the exposure
interval, or shutter speed, of a video camera shutter. A longer
shutter speed corresponds to a greater amount of blurring, whereas
a shorter shutter speed corresponds to a lesser amount of blurring.
A time-slice count is set to one (step 514). Next, an "in-motion"
object (i.e. an object identified as an "in-motion" object in step
506) is selected for rendering (step 516). The motion vector or
vectors associated with the object are used, along with other
motion variables to calculate and/or modify the location, color,
and all other attributes for each vertex in the object (step 518).
The object is then rendered into a color buffer (step 520). A
check is made to determine if the object rendered is the last
"in-motion" object in the scene (step 522). If not, the process
loops back to step 516, and is repeated for each "in-motion" object
in the scene.
If the last "in-motion" object in the scene has been rendered
(i.e. the answer to the question in step 522 is "yes"), the scene
is accumulated (step 524), meaning it is scaled (for example, by
1/n) and copied from the color buffer into the accumulation buffer.
The time-slice count is checked to see if it is equal to n (step
526). If not, the time slice count is incremented (step 528), and
the process then loops back to step 516, and is repeated for each
time slice. If the time-slice count is equal to n (i.e. the answer
to the question in step 526 is "yes"), then the accumulation buffer
is scaled and copied to the frame buffer (step 532) and is
displayed on a display screen (step 534).
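The saving claimed by the method of Figures 5A and 5B can be sketched as follows. As with the earlier sketch of the prior art, this is a hypothetical model in which `render` stands in for the per-object rendering steps:

```python
def render_scene(static_objects, moving_objects, n, render):
    """Sketch of Figures 5A and 5B: static objects are rendered once into
    the color buffer (steps 500-510); only "in-motion" objects are
    re-rendered over the n time slices and accumulated (steps 512-534)."""
    frame = 0.0
    for obj in static_objects:              # steps 500-510: rendered once
        frame += render(obj, None)
    for time_slice in range(1, n + 1):      # steps 514, 526, 528
        for obj in moving_objects:          # steps 516-522
            frame += render(obj, time_slice) / n  # step 524: scale by 1/n
    return frame                            # steps 532-534: copy and display

# Cost: len(static) + n * len(moving) renders per time period, versus the
# prior art's n * (len(static) + len(moving)).
```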
Note that it is possible to define two entry points into the
method described in Figures 5A and 5B, or alternately, it is
possible to have two separate routines to execute the method
described in Figures 5A and 5B. For example, the determination as
to whether an object is in motion (i.e. needs to be blurred) or not
can be made by an application program. If the application program
determines that an object is static, the application program can
call a routine which executes only steps 500 through 510 to render
the static object. If the application program determines that an
object needs to be blurred, the application program can call a
routine which executes only steps 512 through 534 to render the
in-motion object.
Although the invention has been described with a certain
degree of particularity, it should be recognized that elements
thereof may be altered by persons skilled in the art without
departing from the spirit and scope of the invention. One of the
implementations of the invention is as sets of instructions
resident in the random access memory of one or more computer
systems configured generally as described in Figure 2. Until
required by the computer system, the set of instructions may be
stored in another computer readable memory, for example in a hard
disk drive, or in a removable memory such as an optical disk for
eventual use in a CD-ROM drive, or a floppy disk for eventual use
in a floppy disk drive. Further, the set of instructions can be
stored in the memory of another computer and transmitted over a
local area network or a wide area network, such as the Internet,
when desired by the user. One skilled in the art will appreciate
that the physical storage of the sets of instructions physically
changes the medium upon which it is stored electrically,
magnetically, or chemically so that the medium carries computer
usable information. The invention is limited only by the following
claims and their equivalents.