Patent 2256970 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2256970
(54) English Title: METHOD FOR ACCESSING AND RENDERING AN IMAGE
(54) French Title: METHODE POUR ACCEDER A UNE IMAGE ET POUR LA GENERER
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 11/00 (2006.01)
  • G06T 11/60 (2006.01)
(72) Inventors :
  • GIGNAC, JOHN-PAUL J. (Canada)
  • COULOMBE, SAM D. (Canada)
  • WICK, DALE M. (Canada)
  • SUTHERLAND, STEPHEN B. (Canada)
(73) Owners :
  • TRUESPECTRA CANADA INC. (Canada)
(71) Applicants :
  • TRUESPECTRA INC. (Canada)
(74) Agent: DENNISON ASSOCIATES
(74) Associate agent:
(45) Issued:
(22) Filed Date: 1998-12-23
(41) Open to Public Inspection: 2000-06-23
Examination requested: 2003-12-23
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract




The invention provides a method of defining and
rendering an image comprising a plurality of components
(bitmaps, vector-based elements, text and effects or other
effects) and an alpha channel. The components are grouped
into a ranked hierarchy based on their position relative to
each other. There can be groups of groups. With this
grouping, each component can be defined using a common
protocol and rendering and processing of the components can
be dealt with in the same manner. The image can be
processed on a scanline-by-scanline basis. For each
scanline analysis, information regarding neighbouring
scanlines is acquired and processed, as needed.


Claims

Note: Claims are shown in the official language in which they were submitted.




THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method of rendering an image on a scanline by
scanline basis where the image is composed of a plurality
of distinct segments, said method comprising
defining each distinct segment of the image as
a) a region, tool and alpha channel,
b) a bit map and alpha channel, or
c) a group of objects where the objects are defined
according to a) and b) and where each definition includes
information of the scanlines of the image affected by the
particular segment;
defining an order of the distinct segments from
lower to higher in the image, successively returning
scanlines of the image where each scanline is returned by
i) examining the segments to determine which
segments and the order of the segments which affect the
scanline to be returned,
ii) examining the determined ordered segments of
step i) and determining the particular scanlines to be
outputted by each ordered segment for returning the
particular scanline of the image, and
iii) using the ordered segments from lower to
higher and returning the determined scanlines of the
segment used by the next higher segment as an input until a
scanline of the image is returned.
2. A method as claimed in claim 1 wherein said
image includes a background segment defined as a region,
tool and alpha channel and said background segment is
applied as a last segment prior to returning a scanline.
3. A method as claimed in claim 2 wherein said
method includes using an initial input for the lower most
object equivalent to transparent scanlines.


4. A method as claimed in claim 3 wherein a group
of objects is a simple object which requires only one
scanline of input for returning a scanline of output.
5. A method as claimed in claim 1 wherein said step
of examining the determined ordered segments of step i) and
determining the particular scanlines to be outputted by
each ordered segment for returning the particular scanline
of the image is carried out by examining the segments from
the highest segment to the lowest segment.
6. A method as claimed in claim 1 wherein at least
some of said segments include a look around object which
requires at least several scanlines from a lower object to
return a scanline.
7. A method as claimed in claim 6 wherein said
group of objects contains at least 3 objects.
8. A method as claimed in claim 1 wherein a group
of objects includes as part thereof, a further group of
objects.
9. A method of grouping dependent elements of an
image for processing on a scanline by scanline basis, said
method comprising: grouping each element of the image as
a) a region, tool and alpha channel;
b) a bit map and alpha channel; or
c) a group of objects where the objects are defined
according to a) and b) and where each group includes
information of the scanlines of the image affected by the
particular element; creating single depending associations
between each subordinate element and its parent; and
defining an order of the distinct elements from
lower to higher in the image.

Description

Note: Descriptions are shown in the official language in which they were submitted.



TITLE: METHOD FOR ACCESSING AND RENDERING AN IMAGE
FIELD OF THE INVENTION
The present invention relates to a method for
defining various objects which make up an image and a
method of rendering the image on a scanline by scanline
basis.
BACKGROUND OF THE INVENTION
There are a number of computer graphics programs
which store various objects and use these objects to render
the final image. Generally these programs are divided into
vector based programs or bitmap based programs. COREL
DRAW™ is primarily vector based whereas PHOTOSHOP™ is
essentially bitmap based. These known graphics packages
allocate enough temporary storage for the entire rendered
image and then render each object, one by one, into that
temporary storage. This approach fully renders lower
objects prior to rendering upper objects. The programs
require substantial memory in rendering the final image.
Some programs allow an object to be defined as a group of
objects and this provides some flexibility. In the case of
an object being a group of objects, this group is
effectively a duplicate of the base bitmap. Groupings of
objects add flexibility in changing the design or returning
to an earlier design, but substantial additional memory is
required.
The final image of graphics packages is
typically sent to the raster device for output, which
renders the image on a scanline by scanline basis. The
final image is defined by a host of scanlines, each
representing one row of the final bitmap image. Raster
devices include printers, computer screens, television
screens, etc.
Vector based programs such as COREL DRAW™,
produce a bitmap of the final image for the raster device.
Similarly, the graphic program PHOTOSHOP™ produces a
bitmap of the final image.
Vector based drawings tend to use little storage
before rendering, as simple descriptions often produce
largely significant results. Vector drawings are usually
resolution independent and they are made up of a list of
objects, described by a programming language or other
symbolic representation. Bitmap images, in contrast, are a
rectangular array of pixels wherein each pixel has an
associated color or grey level. This image has a clearly
defined resolution (the size of the array). Each
horizontal row of pixels of the bitmap is called a
scanline. Bitmaps tend to use a great deal of storage, but
they are easy to work with because they have few
properties.
Recently the ability to combine layers of
content has been standardized by using a so called "alpha
channel" which represents the transparency of an object or
pixel. There are levels of transparency between solid and
transparent, which could be represented as a percentage.
Although some standard file formats such as CompuServe's
GIF are limited to 2 levels (solid and transparent), newer
formats such as Aldus' TIFF, PNG (Portable Network
Graphics) and the Digital Imaging Group's ".fpx" format
allow 256 or more levels of transparency, which allows for
smooth blending of layers of content. Normally
manipulation of alpha channel information is limited to
bitmap based programs.
There remains a need for a method which allows
the compact descriptions of vector programs, with a
retargetable output resolution, which additionally allows
for full use of all of the powerful image processing
effects of a bitmap based program including the alpha
channel capabilities. Our earlier U.S. Patent application
SN 08/629,543 entitled Method Rendering an Image allows for
scanline based rendering and divides all objects into a
tool and region where the region acts as a local alpha
channel for the tool. This doesn't allow for a more
general use of alpha channels to create holes in images
when used, for example, on web pages -- showing through the
background. Also some types of objects such as formatted
text with color highlighting cannot be represented easily
with a separate region (as the shape of the text), and tool
(with the coloring for the text) since particular words
need to have different colors, and these need to follow the
words when the text is reformatted. Additionally the
interface is inconvenient to use as it returns a variable
number of scanlines, including none, when the output device
works best with exactly one scanline for each
call.
SUMMARY OF THE INVENTION
The present invention allows the user to define an
image using different definitions for the individual
objects of the image. The objects can be defined as a
region, tool and an alpha channel, as a bitmap and alpha
channel or as a group of objects where each object within
the group is defined as a region, tool and alpha channel or
a bitmap and an alpha channel. A group of objects is
defined to act as any other object in the string of ranked
objects defining the image. Grouped objects have the same
defining characteristics as an object defined by a region,
tool and an alpha channel or an object defined by a bitmap
and an alpha channel. A group object can contain within
its grouping, a further grouped object which can also
contain grouped objects. With this definition, each
grouped object can be defined using the common protocol and
the rendering of objects and the processing of objects can
all be dealt with in the same manner. This arrangement
allows for all of the advantages of being able to group
objects while having a common and consistent manner for
dealing with the storage and rendering of an image defined
by the different types of objects.
A method for rendering an image on a scanline by
scanline basis where the image is composed of a plurality
of distinct segments, according to the present invention,
comprises defining each distinct segment of the image as a)
a region, tool and alpha channel, b) a bitmap and alpha
channel, or c) a group of objects where the objects are
defined according to a) and b), where each definition
includes information of the scanlines of the image affected
by the particular segment.
The method further includes defining an order of
the distinct segments from lower to higher in the image and
successively returning scanlines of the image, where each
scanline is returned by 1) examining the segments to
determine which segments, and the order of the segments,
affect the scanline to be returned, 2) examining the
determined ordered segments of step 1) and determining the
particular scanlines to be outputted by each ordered
segment for returning the particular scanline of the image,
and 3) using the ordered segments from lower to higher and
returning the determined scanlines of the segment used by
the next higher segment as an input until a scanline of the
image is returned.
According to an aspect of the invention, the image
includes a background segment which is defined as a region,
tool and alpha channel and the background segment is
applied as a last segment prior to returning a scanline.
According to yet a further aspect of the invention,
the method includes using an initial input for the lower
most object equivalent to transparent scanlines.
According to yet a further aspect of the invention,
the step of examining the determined ordered segments of
step 1) and determining the particular scanlines to be
outputted by each ordered segment for returning the
particular scanline of the image is carried out by
examining the segments from the highest segment to the
lowest segment.
According to yet a further aspect of the invention,
some of the segments include a lookaround object
which requires at least several scanlines from a lower
object to return a scanline.
According to yet a further aspect of the invention,
any defined group of objects can include as part thereof, a
further group of objects.
An image made up of a number of render objects
will have a "RenderLayer" as the base render object, which
has a list of render objects contained in it. To create a
RenderLayer job (which implements a render job interface),
the RenderLayer first surveys the ordered list of objects
from top to bottom, to determine the dependencies of the
contained objects, and determines the amount of look around
for each object. The RenderLayer job renders a scanline on
request by creating a buffer for all active contained
objects, that is, all objects that affect the current
scanline. Each active object is partially rendered, in
order, from bottom to top. The minimum portion of each
object is rendered, that portion being the minimum number
of scanlines required to create an output scanline for the
parent object. Buffers that will be needed on subsequent
calls to the render engine are retained for efficiency in
calculations. Information buffered for objects can be
reused or deleted when they are no longer needed. This
provides for both computational and memory efficiency and
allows for scanlines to be output sooner. A render object
can combine its output using a variety of operators that
allow for special effects such as punching a hole through a
background.
Each object is separately maintained in storage
and has its own resolution or an unlimited resolution where
possible. The rendering effect of each object is at the
best resolution for imparting the rendering effect of the
object to each segment of the scanline which the object
affects. For example an object may only affect a middle
segment of the scanline and the best resolution of the
object for this segment of the scanline is used.
A method for grouping dependent elements of an
image is provided where the method groups each element of
the image into either (i) a region, tool and alpha channel;
(ii) a bit-map and alpha channel, or (iii) a group of
objects defined according to (i) or (ii), where each group
includes information of the scanlines of the image affected
by a particular element, and creates a plurality of single
dependencies between each subordinate element and its
parent.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are shown in
the drawings, wherein:
Figure 1 displays the abstract interfaces for
the render object and render job;
Figure 2 defines the support classes which store
information used to communicate between various parts of
the method;
Figures 3, 3a, 3b, 3c, 3d, 3e, 3f, 3g, 3h, 3i,
3j, 3k, and 3l contain pseudo code illustrating the steps
involved in rendering an image to an output device;
Figure 4 displays a rendering pipeline with a
RenderLayer which contains two render objects;
Figure 5 displays a rendering pipeline with an
effect which contains a region and a tool;
Figure 6 displays a depiction of a rendered
image containing the objects defined in figure 4 and 5;
Figure 7 displays the inter-relationship of the
various major classes used to define the method;
Figure 8 follows the partial rendering process
of objects contained in a RenderLayer;
Figure 9 is a visual representation of the
objects referred to in Figure 8;
Figure 10 is an example of how RenderLayers work
to create a result shown as a list within a list;
Figure 11 is a hierarchical representation of
the objects in Figure 10; and
Figure 12 shows
(a) the rendered final composition of the objects,
with the Layerl object rendered into the scene,
from the objects described in Figure 10;
(b) the Text1 object;
(c) the Text1 object with the Shadowl object
rendered over top; and
(d) the Text1 object with the Shadowl object
rendered over top, and then the Wave1 object
rendered on top of that.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An image is defined using a common standard and
according to at least three classifications. The common
standard is that each classification includes a bound box
definition which defines the area which that particular
object affects, a lookaround distance which is any
additional information that might be required for the
object to render itself, and the alpha channel.
The first classification is a simple object
defined by a region, tool and alpha channel similar to a
vector program. The second classification is a bitmap
object as commonly defined by a draw program. The third
classification is a grouped object which is defined by a
plurality of objects which can include further grouped
objects. The common definition or standard allows grouped
objects to be rendered in a similar manner to the series of
objects defining the image. It also simplifies the
calculations necessary for rendering a scanline, as a group
of objects has a common definition which allows the
requirements of that particular group to be known to the
other objects within a series of objects. As far as the
adjacent higher and lower object is concerned, a grouped
object is merely a different type of object having common
characteristics, and as such, the higher and lower objects
continue to interact with the objects in a common set
manner.
This standard for defining an image makes it
convenient during rendering of the object to look from the
top down through the primary objects to determine the
number of lines required of the lower objects for passing
onto an upper object. This process is essentially repeated
for any grouped object. In this way, the steps and
interfaces for rendering of the image are consistent and
straightforward.
Figure 1 shows the following structures used by
the invention.
1. Abstract RenderObject Class
This class defines the minimum interfaces that
an object needs to be used by the method. They are
getBoundBox which returns an upright rectangle which
defines the limits of the area which the object affects,
getLookAround which defines the amount of extra information
required around any given pixel, in order for that pixel to
be rendered correctly, and initRender which returns a
RenderJob object.
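By way of illustration only (the actual definitions appear in Figure 1, which is not reproduced here), the RenderObject interface could be sketched in Python roughly as follows; the parameter names are assumptions rather than part of the original disclosure.

```python
from abc import ABC, abstractmethod

class RenderObject(ABC):
    """Minimum interface an object must expose to take part in rendering (sketch)."""

    @abstractmethod
    def getBoundBox(self):
        """Return the upright rectangle limiting the area this object affects."""

    @abstractmethod
    def getLookAround(self, resolution, transform):
        """Return the extra information needed around any pixel to render it correctly."""

    @abstractmethod
    def initRender(self, resolution, transform):
        """Return a RenderJob that renders this object scanline by scanline."""
```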
2. Concrete RenderLayer Subclass
The pseudo code required to implement a render
object, which contains an ordered list of objects, is shown
in Figure 3. A sample instance is shown in Figure 6 with a
data flow diagram shown in Figure 4.
3. Other Concrete Subclasses of RenderObject
Other subclasses include Effect which contains a
Region and Tool as shown in Figure 7. A corresponding data
flow diagram is shown in Figure 5. Other subclasses
include other effects such as Rich Text with Color
Highlighting or Alpha Channel Bitmap.
4. Abstract RenderJob Class
The class defining the interfaces for a
RenderJob is shown in Figure 1. They are
prefersToOverwriteInputBuffer which facilitates
negotiation of whether the output buffer is different from
the input buffer for computational efficiency. The
getLookAroundDistances is the same as in the RenderObject.
Finally renderNext takes an input buffer, and an output
buffer, and renders one scanline.
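A comparable sketch of the RenderJob interface, again in Python and again with assumed parameter names:

```python
from abc import ABC, abstractmethod

class RenderJob(ABC):
    """Interface implemented by the job returned from RenderObject.initRender() (sketch)."""

    @abstractmethod
    def prefersToOverwriteInputBuffer(self):
        """Negotiate whether the output buffer may be the same as the input buffer."""

    @abstractmethod
    def getLookAroundDistances(self, resolution, transform):
        """Extra pixels needed above, below, left and right of any given pixel."""

    @abstractmethod
    def renderNext(self, input_buffer, output_buffer):
        """Render one scanline from the input buffer into the output buffer."""
```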
5. A Concrete Subclass of RenderJob for Each Concrete
Subclass of RenderObject, Including RenderLayer
The pseudo code required to implement a render
job for a RenderLayer is shown in Figure 3. The
definitions used in figure 3 are simplified for
readability. A sample walk through of how this method
renders 3 scanlines is included in Figures 8 and 9.
The defining implementations shown in Figure 2
of the invention include RGBAScanline, Rectangle2D,
Affine2D, LookAroundDistances. Although the implementation
in Figure 3 uses an RGB color space, this method applies
equally well to other color spaces such as CIE-LAB, CMYK,
XYZ, etc.
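The support classes themselves are defined only in Figure 2; a minimal sketch of what they might hold, with every field name an assumption, could be:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Rectangle2D:
    left: float
    top: float
    right: float
    bottom: float

@dataclass
class Affine2D:
    # 2x3 affine matrix mapping (x, y) -> (a*x + c*y + e, b*x + d*y + f)
    a: float = 1.0
    b: float = 0.0
    c: float = 0.0
    d: float = 1.0
    e: float = 0.0
    f: float = 0.0

@dataclass
class LookAroundDistances:
    above: int = 0
    below: int = 0
    left: int = 0
    right: int = 0

@dataclass
class RGBAScanline:
    # one row of (red, green, blue, alpha) samples
    pixels: List[Tuple[int, int, int, int]] = field(default_factory=list)
```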
Figure 9 shows an example of the operation of
the invention, wherein:
a composition of 3 render objects contained in a
RenderLayer is shown. The objects are "Heartl" which
displays a red heart, "Hellol" which displays the black
text "Hello World", and "Blur1" which blurs or makes fuzzy
the content underneath it. Heartl and Hellol are known as
"simple render objects" because they require only the
background immediately behind the output pixel. This
information can be ascertained by using the
getLookAroundDistances method on the given object. This
call passes in the output resolution and a transformation
to the output space (which can involve rotation, scaling,
translation and skewing, i.e. all affine transformations).
The result is the number of extra pixels required as input
which are above, below, to the left and to the right of any
given pixel, in order for the object to be rendered. When
the number of extra pixels is 0 in every direction, the
object is considered to be a simple object. If the number
is greater than zero in any direction then the object is a
"look around" object. An example of a look around object
is Blurl. Blur1 requires an extra pixel in each direction
to render its effect. The extra area required by the blur
is shown by the dashed line around the blur's bound box in
Figure 9. Note that the blur requires information below the
third scanline, which means that an additional scanline
which isn't output needs to be at least partially rendered.
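As a hedged illustration of this classification, a hypothetical helper (using the assumed field names from the sketch above) could test the look-around distances directly:

```python
def is_simple_object(obj, resolution, transform):
    """Classify an object from its look-around distances: a simple object such as
    Heartl or Hellol needs no extra pixels in any direction, while a look around
    object such as Blurl needs at least one extra pixel in some direction."""
    d = obj.getLookAroundDistances(resolution, transform)
    return d.above == 0 and d.below == 0 and d.left == 0 and d.right == 0
```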
Using a technique known as the Painter's
Algorithm, a buffer large enough to buffer the entire image
is allocated. First, the background is filled in (Figure
8i steps 1-4), then the bottom most object, Heartl, is
rendered completely (Figure 8i steps 5-7), next the object
in front of Heartl (Hellol) is rendered completely (Figure
8i steps 8-9) and finally, the front most object, Blurl, is
rendered completely (Figure 8i steps 10-11) using the
results of steps 6, 7, 8, 9. Once this
process is complete the 3 requested scanlines can be output
(Figure 8i steps 12-14).
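A minimal sketch of that baseline, with `affected_rows` and `render_row` as hypothetical stand-ins for the per-object rendering calls:

```python
def painters_algorithm(background_rows, objects):
    """Painter's Algorithm baseline: allocate a buffer large enough for the whole
    image, fill in the background, then render each object completely, from the
    bottommost to the topmost, before any scanline can be output."""
    image = [row.copy() for row in background_rows]   # one buffer per output scanline
    for obj in objects:                               # ordered from bottom to top
        for y in obj.affected_rows:                   # every scanline this object touches
            image[y] = obj.render_row(y, image[y])    # fully render before moving on
    return image                                      # only now are scanlines available
```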
To start using the reordered rendering method,
the render engine is invoked on the containing RenderLayer,
called "RenderLayerl." RenderLayerl returns a render job
object identified here as "RenderLayerJobl." To get a
scanline, the renderScanline method is called on
RenderLayerJobl, passing in a background. RenderLayerJobl
determines which objects affect Scanline 1 and renders
them completely (Figure 8ii steps 1 and 2). The result of
Figure 8ii step 2 is needed by the blur, which is buffered
for later use. The resulting Scanline 1 is then returned
in Figure 8ii step 3. The next time renderScanline is
called, the blur becomes active. Since the blur needs a
pixel above and a pixel below it in order to render
correctly, the RenderLayerJobl must buffer up more
information. The result of Figure 8ii steps 4-6 is buffered
as well as the result of steps 7-8. These three results
(from steps 2, 6 and 8) are then passed into the BlurJobl
which results in step 9. The buffer from step 2 can now be
discarded or marked for reuse. The resulting scanline 2 is
returned in step 10. To render scanline 3, the blur
requires more than the already buffered result of step 6
and step 8, and so RenderLayerJobl renders step 11 and step
12. These three buffers (from steps 6, 8 and 12) are then
passed into the BlurJobl which results in step 13. Finally
the scanline 3 is returned in step 14, and all of the
temporary buffers can be discarded.
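The Figure 3 pseudo code is not reproduced here, but the reordered, buffered behaviour can be sketched as a pipeline of generators in which each stage keeps only the sliding window of lower rows that its forward look-around requires; the `apply_row` callbacks and the edge clamping are assumptions, not part of the original disclosure.

```python
from collections import deque

def pipeline_stage(rows_below, apply_row, forward_lookaround, height):
    """One object in the pipeline: to emit its own row y it needs rows
    y .. y + forward_lookaround from the object below it, so only that sliding
    window of lower rows is buffered rather than the whole lower image."""
    source = iter(rows_below)
    window = deque()
    for y in range(height):
        # pull rows from the lower stage until row y + forward_lookaround is buffered
        while len(window) < forward_lookaround + 1:
            try:
                window.append(next(source))
            except StopIteration:
                window.append(window[-1])     # clamp at the bottom edge of the image
        yield apply_row(y, list(window))      # window[0] is the lower object's row y
        window.popleft()                      # the lower object's row y is now unneeded

def render_image(objects_bottom_to_top, transparent_row, height):
    """Chain the stages bottom to top.  `objects_bottom_to_top` is a hypothetical
    list of (apply_row, forward_lookaround) pairs, and transparent_row() builds
    the empty input fed to the bottommost object."""
    rows = (transparent_row() for _ in range(height))
    for apply_row, lookaround in objects_bottom_to_top:
        rows = pipeline_stage(rows, apply_row, lookaround, height)
    return rows                               # a generator of finished scanlines
```

In this model a simple object would have a forward look-around of 0 and Blurl a forward look-around of 1, mirroring the buffering of the results of steps 2, 6 and 8 described above.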
In this example, 3 scanline buffers were
required versus 4 scanline buffers with the Painter's
Algorithm. With a larger render, the resource savings are
often significant. Also the result of the top of the image
became available much earlier.
Figures 4, 5 and 6 show an example of the
processing of images by the modules in the invention.
Figure 6 shows an image of a heart ("Heartl") in its bound
box beneath the text Hello World ("Hellol"), in its bound
box. Both the Heartl and the Hellol have colour and alpha-
channel attributes "(c,a)". The composite image is
referred to as "RenderLayerl".
Figure 4 illustrates the processing of the
entire image. First, the Background color and alpha
channel information (c, a) is fed to the RenderLayerl
module, which initiates RenderJob. Starting from the
bottom element, Heartl, a transparent background is fed to
the subordinate call of RenderJob for Heartl. After the
subordinate call of RenderJob for Heartl has completed its
processing, it returns colour and alpha-channel attributes
to the calling RenderJob for RenderLayerl. These returned
attributes are forwarded to the next subordinate call of
RenderJob, i.e. the call relating to Hellol. Once its
processing is completed, its results are returned to
RenderJob for RenderLayerl. At that point, RenderJob takes
the final color and attribute information from RenderJob
for Hellol and combines it with the background colour input
to produce the final output color and alpha information.
Figure 5 illustrates subroutine calls within
RenderJob for Heartl. Here the background color and alpha-
channel information is fed to the RenderJob for the Shape
of Heartl. The RenderJob for Shape returns alpha
information to RenderJob for Heartl. This information
along with the initial color information is fed to the
ToolJob module for the Solid Color of Heartl. This module
returns colour and alpha-channel attributes to the calling
RenderJob for Heartl. These returned attributes are
forwarded to the next subordinate call of RenderJob, i.e.
the call relating to Hellol.
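A hedged sketch of that subroutine structure for a single scanline, with `render_alpha` and `render_colour` as hypothetical call forms rather than the actual RenderJob calls:

```python
def effect_scanline(shape_job, tool_job, background_row):
    """One scanline of an Effect such as Heartl: the Region (Shape) job produces
    an alpha row that acts as a local mask, and the Tool job combines that mask
    with the incoming colours to produce the effect's colour-and-alpha output
    (for example, a solid red fill wherever the heart shape is opaque)."""
    alpha_row = shape_job.render_alpha(background_row)         # where the effect applies
    return tool_job.render_colour(background_row, alpha_row)   # paint the tool there
```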
In another example, Figure 12 shows a composite
image comprising a heart, stylized text "Exploring the
Wilderness" and a bitmap image of an outdoor scene
underneath the heart and the stylized text. The stylized
text is shown with its normal attributes at 12b, with a
shadow at 12c and with a wave at 12d.
As shown in figure 10, the invention processes
each element of the image according to a hierarchical
stack, having the heart ("Heartl") at the top of the stack,
the stylized text ("Layerl") in the next layer down and
finally with the bitmap ("Bitmapl") at the bottom. Layerl
is exploded to show its constituent effects, comprising a
wave effect ("Wave1"), a shadow effect ("Shadowl") and the
text ("Text1").
Figure 11 shows the hierarchy structure of the
image, where the RootLayer is the fundamental node,
representing the image. Elements of the image, i.e.
Heartl, Layerl and Bitmapl are shown as immediate
dependents of the RootLayer. Further sub-dependencies of
Layerl, i.e. Wavel, Shadowl and Text1 stem from Layerl.
Other information, such as the bound box region may also be
associated with each element. It can be appreciated that
this structure of the invention isolates the dependencies
between parent and child elements to one level of
abstraction. As such, the invention provides abstraction
between and amongst elements in an image. This abstraction
provides implementation efficiencies in code re-use and
maintenance. It can be appreciated that for more complex
images having many more elements, bitmaps and effects, the
flexibility and efficiencies of using the same code
components to process the components of the image become
more apparent.
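As an illustration of this single-dependency hierarchy, a hypothetical node type could be populated with the elements of Figures 10 and 11:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ElementNode:
    """Hypothetical element node: each subordinate element keeps a single
    dependency on its parent, isolating changes to one level of the tree."""
    name: str
    children: List["ElementNode"] = field(default_factory=list)

    def add(self, child):
        self.children.append(child)    # the child's only dependency is this parent
        return child

# the structure described in Figures 10 and 11
root = ElementNode("RootLayer")
root.add(ElementNode("Heartl"))
layer1 = root.add(ElementNode("Layerl"))
root.add(ElementNode("Bitmapl"))
for name in ("Wave1", "Shadowl", "Text1"):
    layer1.add(ElementNode(name))
```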
In the preferred embodiment, exactly one
scanline is rendered during each call to the render method
on any render object. This even holds for render groups,
since RenderLayer constitutes a valid implementation of
the RenderObject class. In the example implementation, a
render group always passes a completely transparent
background as input to its bottommost object. Then the
scanlines produced by applying the bottom most object to
the transparent background scanlines are passed as input to
the next higher object. Similarly, the output of the
second object is passed as input to the third object from
the bottom. This passing repeats until the cumulative
effect of all of the render group's objects is produced.
These final results are then composited onto the background
scanlines (passed by the caller) using the render group's
compositing operator.
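A minimal sketch of this bottom-to-top pass for a single scanline, restricted to simple objects (no look-around) and with an assumed per-pixel `composite` operator standing in for the group's compositing operator:

```python
def render_group_scanline(child_jobs, background_row, composite):
    """One scanline of a render group: the bottommost child starts from a fully
    transparent row, each child's output becomes the input of the next child up,
    and only the cumulative result is composited onto the caller's background."""
    row = [(0, 0, 0, 0)] * len(background_row)     # transparent input (r, g, b, alpha)
    for job in child_jobs:                         # ordered from bottommost to topmost
        out = [(0, 0, 0, 0)] * len(row)
        job.renderNext(row, out)                   # one scanline in, one scanline out
        row = out                                  # cumulative effect of the children
    return [composite(under, over) for under, over in zip(background_row, row)]
```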
Because some render objects have forward look-
around, it is often necessary for lower objects to render a
few scanlines ahead of objects above them. For example,
for an object with one scanline of forward look-around to
render a single scanline within its active range, the
object immediately below it must already have rendered its
result both on that scanline and on the following scanline.
Since rendering must be performed from the bottommost
object to the topmost object, in order to
guarantee that a single scanline will be completely
rendered by all objects by the end of a call to the
rendering method, it is useful to begin the process by
determining exactly how many scanlines must be rendered by
each object in the render group.
The computation is most easily done in terms of
the total number of scanlines rendered by each object so
far during the entire rendering process, as opposed to the
number of scanlines rendered by each object just during
this pass. The total number of scanlines required of an
object is referred to, relative to that object, as downTo,
whereas the total number of scanlines required by an
object is referred to, relative to that object, as
downToNeeded. Note that the downToNeeded of a given object
is always equal to the downTo of the object immediately
below it, if applicable. In the case of the bottommost
object, its downToNeeded is the number of empty input
scanlines that must be passed to it in order for it to
satisfy the object above it, if any, or the caller
otherwise.
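That relationship can be sketched as a short top-to-bottom walk, assuming (as the one-scanline-of-forward-look-around example above suggests) that an object's downToNeeded equals its downTo plus its forward look-around; the list-of-integers input is an assumption:

```python
def compute_down_to(forward_lookarounds_top_to_bottom, target_scanline):
    """Before rendering, determine how far down each object must have rendered by
    the end of this call.  Walking from the topmost object downwards, an object's
    downTo is the downToNeeded of the object above it (or the caller's target),
    and its own downToNeeded adds its forward look-around."""
    required = target_scanline                 # what the caller requires of the top object
    down_to = []                               # downTo for each object, top to bottom
    for forward_lookaround in forward_lookarounds_top_to_bottom:
        down_to.append(required)               # this object's downTo
        required += forward_lookaround         # this object's downToNeeded, which is the
                                               # downTo of the object immediately below it
    # `required` ends as the bottommost object's downToNeeded: the number of empty
    # input scanlines that must be passed to it
    return down_to, required
```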
Although various preferred embodiments of the
present invention have been described herein in detail, it
will be appreciated by those skilled in the art, that
variations may be made thereto without departing from the
spirit of the invention or the scope of the appended
claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title Date
Forecasted Issue Date Unavailable
(22) Filed 1998-12-23
(41) Open to Public Inspection 2000-06-23
Examination Requested 2003-12-23
Dead Application 2005-12-23

Abandonment History

Abandonment Date Reason Reinstatement Date
2004-12-23 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $300.00 1998-12-23
Extension of Time $200.00 2000-03-29
Maintenance Fee - Application - New Act 2 2000-12-25 $100.00 2000-12-05
Extension of Time $200.00 2001-03-29
Maintenance Fee - Application - New Act 3 2001-12-24 $100.00 2001-12-10
Maintenance Fee - Application - New Act 4 2002-12-23 $100.00 2002-12-12
Registration of a document - section 124 $100.00 2003-03-26
Registration of a document - section 124 $100.00 2003-03-26
Registration of a document - section 124 $100.00 2003-03-26
Registration of a document - section 124 $50.00 2003-06-05
Request for Examination $400.00 2003-12-23
Maintenance Fee - Application - New Act 5 2003-12-23 $150.00 2003-12-23
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
TRUESPECTRA CANADA INC.
Past Owners on Record
COULOMBE, SAM D.
GIGNAC, JOHN-PAUL J.
SUTHERLAND, STEPHEN B.
TRUESPECTRA INC.
WICK, DALE M.
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative Drawing 2000-06-19 1 12
Abstract 1998-12-23 1 21
Description 1998-12-23 15 669
Claims 1998-12-23 2 84
Drawings 1998-12-23 18 467
Cover Page 2000-06-19 1 39
Correspondence 1999-02-02 1 31
Assignment 1998-12-23 3 103
Correspondence 2000-03-29 1 31
Correspondence 2000-04-17 1 37
Correspondence 2000-04-25 1 1
Correspondence 2001-03-29 1 41
Correspondence 2001-04-23 1 13
Assignment 2001-05-11 32 2,003
Assignment 2001-06-01 19 958
Correspondence 2001-07-10 1 14
Correspondence 2001-07-10 1 16
Assignment 2003-03-26 36 2,577
Assignment 2003-06-05 3 96
Prosecution-Amendment 2003-12-23 2 67