Patent 3089113 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 3089113
(54) English Title: IMPROVED SYSTEM AND METHOD FOR TRANSFORMING GRAPHICAL IMAGES
(54) French Title: SYSTEME ET PROCEDE AMELIORE POUR TRANSFORMER DES IMAGES GRAPHIQUES
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 3/04 (2024.01)
  • G06T 15/04 (2011.01)
  • G06T 17/00 (2006.01)
(72) Inventors :
  • DAVIDSON, JOHN (Canada)
  • COMPSON, NEIL (Canada)
(73) Owners :
  • DISTORTION ARTS LLC
(71) Applicants :
  • DISTORTION ARTS LLC (United States of America)
(74) Agent: BORGES, ELIAS C.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2020-07-31
(41) Open to Public Inspection: 2021-02-02
Examination requested: 2023-06-01
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
62/882,000 (United States of America) 2019-08-02

Abstracts

English Abstract


A method for transforming a two-dimensional graphic image into a two-dimensional distorted image which can then be used to recreate an accurate representation of the two-dimensional graphic image when applied onto a 3D surface. The method includes the steps of producing and identifying a target 3D grid from a plurality of flat polymer webs each having a grid pattern printed thereon, the target 3D grid bearing an accurate representation of the 3D surface. The three-dimensional shape of the target grid is then digitized to form a geometric model of the target grid as a series of data points. A uniform smooth 3D model is created from the series of data points, and the two-dimensional graphic image is then texture mapped onto the uniform smooth 3D model to create a texture map. The two-dimensional distorted image is then created from the texture map and the two-dimensional graphic image.


Claims

Note: Claims are shown in the official language in which they were submitted.


Therefore, what is claimed is:
1. A method for transforming a two-dimensional graphic image into a two-dimensional distorted image which is to be applied onto a three-dimensional surface of a part, the method comprising the steps of:
providing a plurality of flat polymer webs each having a grid pattern printed thereon, the grid pattern comprising a uniform array of grid markers (grid dots) separated from each other by a space, the polymer webs each being formed of a thermally transformable polymer;
selecting a representative sample of the grid markers common to each polymer web;
thermally transforming each of the polymer webs into the three-dimensional surface;
measuring and recording a position for each grid marker in the representative sample of grid markers for each thermally transformed polymer web;
identifying a target grid by selecting the transformed polymer web whose grid markers in the representative sample reside closest to a center of variance of the positions of the grid markers in the representative sample of all of the transformed polymer webs;
digitizing the three-dimensional shape of the target grid to form a geometric model of the target grid as a series of data points;
creating a uniform smooth 3D model from the series of data points by smoothing the series of data points;
texture mapping the two-dimensional graphic image onto the uniform smooth 3D model to create a mapped texture graphic; and
creating the two-dimensional distorted image by rendering the mapped texture graphic.

2. A method for transforming a two-dimensional graphic image into a two-dimensional distorted graphical image which is to be pre-applied onto a three-dimensional surface of a formed part, the method comprising the steps of:
providing a plurality of flat webs each having a grid pattern printed thereon, the grid pattern comprising a uniform array of grid markers (grid dots) separated from each other by a space, the webs each being formed of a thermally transformable material;
thermally transforming each of the webs into the three-dimensional surface;
selecting a representative sample of the grid markers common to each web;
measuring and recording positions for each grid marker in the representative sample of grid markers for each thermally transformed web;
identifying a target grid by selecting the transformed web whose grid markers in the representative sample reside closest to a center of variance of the positions of the grid markers in the representative sample of all of the transformed webs;
digitizing the three-dimensional shape of the target grid to form a digital representation of the target grid as a series of data points;
creating a smooth, distinct 3D model containing a precise delineation of the series of data points, with the ability to have texture data attached to it, and able to manipulate its data to output new texture graphics and data;
texture mapping the two-dimensional graphic bitmap image onto the distinct 3D model to create a texture map; and
creating a two-dimensional distorted image by rendering the processed (distorted) texture map.

3. The method of claim 2 wherein the two-dimensional graphic image comprises a vector image or a combination of vector and bitmap images, wherein a distorted version of the vector/bitmap image is achieved (in a vector image editing program, e.g. Adobe Illustrator) by warping, scaling or adjusting 2D vector/bitmap image elements and/or applying an envelope mesh to the image, wherein the shape of the envelope mesh is achieved through manual movement/manipulation or by processing and converting distortion data that comes from the digitized part's forming data and texture application data to build the envelope shape which is then used to distort the image.
4. The method of claim 2 wherein the grid pattern further comprises a uniform array of unique identifiers overlapping the uniform array of grid markers, each unique identifier separated by four to five grid markers.
5. The method of claim 2 wherein each of the webs has a plurality of index placeholders for precisely positioning in a transforming machine having corresponding index placeholders such that the webs are positioned identically in the transforming machine.
6. The method of claim 2 wherein the step of creating a distinct 3D model containing a precise delineation of the series of data points is done by the use of a polygonal object in a 3D graphical imaging application such as Blender.
7. A method of creating a 3D target grid to be used as a physical 3D model for transforming a two-dimensional graphic image into a two-dimensional distorted graphical image which is to be pre-applied onto a three-dimensional surface of a formed part, the method comprising the steps of:
providing a plurality of flat webs each having a grid pattern printed thereon, the grid pattern comprising a uniform array of grid markers (grid dots) separated from each other by a space, the webs each being formed of a thermally transformable material;
selecting a representative sample of the grid markers common to each flat web;
forming each of the webs into the three-dimensional surface;
measuring and recording positions for each grid marker in the representative sample of grid markers for each formed web; and
identifying a target grid by selecting the formed web whose grid markers in the representative sample reside closest to a center of variance of the positions of the grid markers in the representative sample of all of the formed webs.
8. The method of claim 7 further comprising the step of identifying in the target grid areas of greater and lesser variance corresponding to areas of lesser and greater variance in the positions of the grid markers.
9. The method of claim 6 wherein smoothing is applied by using the 3D graphical application's subdivision tools and wherein a control object in the 3D application is utilized to fix the position of the digitized data points when the object is smoothed.
10. The method of claim 2 wherein the two-dimensional graphic bitmap image is generated from a vector image, a second two-dimensional distorted image being created directly from the vector image by applying an envelope distortion to the vector image, the envelope distortion distorting the vector image to match the two-dimensional distorted image.

Description

Note: Descriptions are shown in the official language in which they were submitted.


TITLE: IMPROVED SYSTEM AND METHOD FOR TRANSFORMING GRAPHICAL
IMAGES
FIELD OF THE INVENTION
The invention relates generally to methods and systems for producing a distorted image to be applied to a two-dimensional web such that, when the web is formed into a three-dimensional part, the part displays a substantially non-distorted image.
BACKGROUND OF THE INVENTION
Many production processes involve forming or molding a three-dimensional part
from a
two-dimensional web or sheet. For example, plastic thermoforming, metal
stamping, and metal
cold forming involve forming a three-dimensional part from a sheet of a
substrate material
through the use of vacuum and/or pressure that conforms the web to a mold or
die. Blow
molding involves the use of air pressure to shape a parison comprising a
substrate material inside
a mold. Other production processes in which a three-dimensional part is formed
from a two-
dimensional web include pressing, stretch forming, shrink forming, and shrink
wrapping. In
addition, in-mold decoration and insert-mold decoration are processes related
to the molding of a
three-dimensional part wherein the part is molded and decorated
simultaneously.
Those of skill in the art will appreciate that the two-dimensional web used in
these
processes may undergo complex changes during production. For example, consider
a
thermoforming process using a thermoplastic web. Prior to thermoforming, the
plastic web is flat
and has a substantially uniform thickness. During thermoforming, the heated
plastic web
stretches as it is formed. In most cases, the topographic die used in
thermoforming is colder than
the heated plastic sheet substrate (web). As a result, when the plastic
substrate makes contact
with the mold, it "freezes off" at that point and ceases stretching. Other
areas of the plastic
substrate not yet in contact with the mold continue to stretch. The effect is
a potentially large
variation in thickness and relative stretch of the substrate as it comes into
contact with the mold.
The initial steps in pre-decorating a substrate in web form often are easy and
inexpensive.
In the case of a thermoplastic web, prior to thermoforming the plastic webs
are in the form of flat
sheets or rolls and can easily be fed through a printer to apply the
decoration. A web made of
metal usually takes the form of sheets and can also be easily and
inexpensively decorated.
However, during the production process, a flat substrate deforms and stretches
to conform to the
mold or die. This stretching and deformation of the substrate misaligns and
deforms the
decoration depending on the relief of the mold. The greater the relief of the
mold, the greater the
stretching and deformation. If the relief is significant, then the
misalignment and deformation in
the decoration may be intolerable.
One approach to solving the problem of misaligned and deformed images is to
first
transform the original image before applying it to the web. The distortion
applied to transform
the image is intended to correct for the stretching and deformation of the web
as it is processed.
If the transformation is done correctly, the final part will display a non-
distorted image closely
resembling the original image. Creating the correct transformed image from the
original image is
not a trivial matter. Prior art methods for performing the correct distortion
to the original image
have met with some success but are often difficult and time consuming to use.
United States
patent no. 7,555,157 to Davidson et al., the entirety of which is incorporated
herein by reference,
discloses a system and method for transforming an image into a pre-distorted
image which can
then be applied to webs such as thermoplastic sheets. The method disclosed in
patent 7,555,157
includes the steps of optically scanning the molded part to create a digital
model of the 3D part
and then applying the 3D digital model to the original image to create the
transformed image. A
key step in the method disclosed by the '157 patent is the formation of a 3D
substrate upon
which the image is to be applied and then carefully measuring the topography
of the 3D substrate
to create the digital 3D model. The resulting 3D model created by the method
disclosed in the
'157 patent was often inaccurate, resulting in a transformed image which was
also inaccurate.
Furthermore, so much time and effort was required to create an accurate 3D
model using the
method of the '157 patent that the method was impractical to use. Therefore, while the
method disclosed in
the '157 patent can theoretically produce useful transformed images, more
practical approaches
were desired.
SUMMARY OF THE INVENTION
In accordance with one aspect of the present invention, there is provided a
method for
transforming a two-dimensional graphic image into a two-dimensional distorted
graphical image
which is to be pre-applied onto a three-dimensional surface of a formed part.
The first step in the
method includes providing a plurality of flat webs each having a grid pattern
printed thereon, the
grid pattern consisting of a uniform array of grid markers (grid dots)
separated from each other
by a space. A representative sample of the grid markers common to each web is
selected and
each of the webs is transformed into the part having the three-dimensional
surface. The position
of each grid marker in the representative sample of grid markers for each
transformed web is
then recorded. A target grid is then identified by selecting the transformed
web whose grid
markers in the representative sample reside closest to a center of variance of
the positions of the
grid markers in the representative sample of all of the transformed webs. The
three-dimensional
shape of the target web is then digitized to form a digital representation of
the target grid as a
series of data points. A distinct 3D model is then created from the digitized
three-dimensional
representation of the target grid, the distinct 3D model containing a precise
delineation of the
series of data points. The distinct 3D model is configured to have texture data attached to it and to manipulate its data to output new texture graphics and data. The two-dimensional graphic image is then texture mapped onto the distinct 3D model to create a texture map. Finally, a two-dimensional distorted image is created by rendering the processed (distorted) texture map.
In accordance with another aspect of the present invention, there is provided
a method of
creating a 3D target grid to be used as a physical 3D model for transforming a
two-dimensional
graphic image into a two-dimensional distorted graphical image which is to be
pre-applied onto a
three-dimensional surface of a formed part. The method includes the steps of
providing a
plurality of flat webs each having a grid pattern printed thereon, the grid
pattern consisting of a
uniform array of grid markers (grid dots) separated from each other by a
space, the webs each
being formed of a thermally transformable material. The next step consists of
selecting a
representative sample of the grid markers common to each flat web and then
forming each of the
webs into the three-dimensional surface. The next step involves measuring and
recording
positions for each grid marker in the representative sample of grid markers
for each formed web.
Finally, the target grid is identified by selecting the formed web whose grid
markers in the
representative sample reside closest to a center of variance of the positions
of the grid markers in
the representative sample of all of the formed webs.
With the foregoing in view, and other advantages as will become apparent to
those
skilled in the art to which this invention relates as this specification
proceeds, the invention is
herein described by reference to the accompanying drawings forming a part
hereof, which
includes a description of the preferred typical embodiment of the principles
of the present
invention.
DESCRIPTION OF THE DRAWINGS
Figure 1 is a top view of the grid pattern portion of the present invention.
Figure 2 is a photographic top view of the grid pattern shown in figure 1
applied to different
formed sheet substrates.
Figure 3 is a photographic top view of an example production variance.
Figure 4 is a graphic illustration of the variance of a plurality of formed
sheet substrates.
Figure 5 is a photographic view of example VRsims produced by the method of
the present
invention.
Figure 6 is a photographic view of a target grid made in accordance with the
present invention
compared to the 3D data points generated by the photogrammetry of the target
grid.
Figure 7 is a photographic visualization of the 3D projection of a source
image.
Figure 8 is a photographic image illustrating art projection onto a 3D model
and the resulting
distortion image.
Figure 9 is a photographic visualization of artwork before and after
application of an envelope
distortion.
In the drawings like characters of reference indicate corresponding parts in
the different
figures.
DETAILED DESCRIPTION OF THE INVENTION
In order to facilitate the formation of an image on a 3D part from a pre-
decorated 2D
substrate sheet (web), the present invention provides a method of producing a
transformed image
from an original image. The transformed image is applied to the 2D substrate
by means of
printing or the like, and upon processing the 2D substrate into the 3D part,
the transformed image
will be stretched and deformed into a final image which closely resembles the
original image.
The method of forming a 2D sheet with the transformed image consists of six
principal steps:
1. Printing, forming and selecting a target grid;
2. Digitizing the target grid to collect point data from the target grid.
3. Formatting the point data into a digital 3D model of the target grid.
4. Applying an original image file to the 3D model.
5. Distorting the image file using the 3D model to create the transformed
image.
6. Applying the transformed image onto the web (substrate sheet).
Applying the Grid Pattern
The critical first step in the method begins by selecting a plurality of two-
dimensional substrate
sheets each of which will be transformed into the finished three-dimensional
part by molding,
blow molding, heat shrinking or whatever method is to be applied to
manufacture the finished
part. Preferably, at least ten substrate sheets are used, but often 20
substrate sheets or more are
used depending on the part being formed and the nature of the image being
applied. On a
surface of each substrate sheet there is printed a grid. The grid consists of
a uniform pattern of
small circular marks (grid dots) and unique identifiers which are arranged as
a two-dimensional
grid. This unique grid pattern is important as it makes the creation of the
three-dimensional
model, the distortion data and the resulting distorted artwork possible.
Referring to figure 1, a
preferred embodiment of the grid pattern is shown generally as item 10 and
consists of a plurality
of regularly spaced circular marks 12 (grid dots) which are preferably between 1.25mm and 5.08mm in diameter. The physical size of the final product determines the size of the grid dots used. The spacing of the grid dots preferably varies between 5mm and 25.4mm
depending on the
physical size of the final product, although the size of the grid and the
spacing are not restricted
to these preferred values.
The grid also includes a uniform matrix of spaced apart unique identifiers
which are
interposed between every fourth or fifth grid dot depending on the
dimensions of the grid file.
Preferably, the unique identifiers (items 14 in figure 1) consist of an
arrangement of letters (grid
letters) which are arranged in a repeating pattern such as AA, AB, AC..., BA, BB, BC..., CA, CB, CC... and so forth as shown in figure 1. The grid letters (unique
identifiers) serve as a visual
cue which allows for the identification of a regional location across the
product. Preferably the
grid letters are placed as a pair, where the first letter represents a count
of columns across the
grid while the second letter represents the count of rows across the grid
print. For example, with the first letter in the pair representing the column and the second letter in the pair representing the row, AA, AB, AC, AD, AE (and so on) would be formed down the first
column, while
BA, BB, BC (and so on) would be formed down the second column. This grid is
displayed
across the surface of the substrate sheet by means known generally in the art.
For example, if the
substrate sheet is to take the form of a thermoformable plastic sheet (say for
use in rigid
applications), then the grid can simply be printed across one surface of the
plastic sheet.
Alternatively, the grid could also be printed onto a shrink film if the
desired end product is a
shrink sleeve and wrapping application.
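To make the layout concrete, the following short Python sketch generates dot coordinates and letter-pair identifiers for a grid of the kind described above. The spacing, identifier interval and grid dimensions are illustrative assumptions chosen from the preferred ranges given in the text, not values taken from an actual production grid file.

# Illustrative sketch only: dot positions and AA/AB/... letter-pair identifiers
# for a printed grid of the kind described above. All values are assumptions.
import string

DOT_SPACING_MM = 10.0      # assumed spacing, within the 5 mm to 25.4 mm range above
IDENTIFIER_EVERY = 5       # assumed: one letter pair for every fifth grid dot

def build_grid(columns, rows):
    """Return dot centre coordinates (mm) and a dict of letter-pair labels."""
    dots = [(c * DOT_SPACING_MM, r * DOT_SPACING_MM)
            for c in range(columns) for r in range(rows)]
    labels = {}
    letters = string.ascii_uppercase
    for c in range(0, columns, IDENTIFIER_EVERY):
        for r in range(0, rows, IDENTIFIER_EVERY):
            # First letter counts columns, second letter counts rows, so the first
            # column reads AA, AB, AC... and the second column reads BA, BB, BC...
            # (the labels sit on dot positions here for simplicity; in the described
            # grid they are interposed between the dots).
            labels[(c, r)] = letters[c // IDENTIFIER_EVERY] + letters[r // IDENTIFIER_EVERY]
    return dots, labels

dots, labels = build_grid(columns=40, rows=60)
print(len(dots), "grid dots,", len(labels), "unique identifiers")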
Forming the Transformed Grids
Each of the substrate sheets that have been marked with the grid pattern is
provided with
strategically positioned indexing points which are configured to mate with
corresponding index
features in the forming machine the substrate sheet is destined to be formed
on. This ensures that
each marked substrate sheet can be placed in the forming apparatus (vacuum
molding machine,
shrink wrapping device, etc.) in exactly the same position. The indexing points preferably take the form of register holes. Positioning each of the marked substrate sheets in exactly the same way is
necessary to ensure that the pattern of physical deformation of the plurality
of sheets is as
consistent as possible. The marked grids can then be correctly positioned in
the forming
machine and formed into the desired three-dimensional shape as shown in figure
2. Figure 2
illustrates three different sheets formed into three different shapes by three
different methods.
Item 16 was formed by shrink sleeve technology, item 18 by thermoforming, and item 20 by shrink wrap. Each of the marked sheets is then
transformed into the
same desired three-dimensional shape (grid) in the same forming machine. The
end result is a
plurality of 3D parts (grids) each of which displays the grid pattern thereon.
The step of forming
the grids is preferably performed on the machinery which is intended to create
the finished three-
dimensional parts. This may necessitate sending the marked substrate sheet to
a client location
to have the client perform the forming step on their transforming machines.
Variance Analysis
Due to a variety of factors, each substrate is formed slightly differently
even if formed in the
same machine. As a result, two different sheets can be stretched and shrunk
and deformed in
slightly different ways depending on the forming conditions. Therefore, the
sheets will vary
from one sheet to the next and this variance can have a significant impact on
the accuracy of any
image deformation which may occur. Figure 3 illustrates how the same forming
operation can
form parts which vary slightly. Parts 22 and 24 were formed from the same type
of substrate
sheet molded on the same molding machine, yet the same grid dot 26 is in a
slightly different
position in part 22 than it is in part 24. To minimize the part to part
variance, a variance analysis
is performed to evaluate the performance of the grid forming operations.
Repeatability is key to
success in distortion production and is of the utmost importance. Failure in
achieving a low level
of variance can compromise the success of a project when precise registration
of graphics and
tight tolerances are required. To perform the variance analysis, each of the
grids are
photographed. The series of photographs allows for the part to part variance
to be observed and
measured between the grids. Variance measurement data is accumulated by
recording the two-
dimensional (x,y) location of a representative sample (number) of grid dots.
Preferably at least
eight representative grid dots per region of the grid sample are used; however,
depending on the
nature of the 3D part and the image being applied, a greater number may be
required. This
measurement data is preferably based on the pixel dimensions of the grid
sample photographs,
although actual physical measurements can be taken of the grid dots. The array
of unique
identifiers makes it possible to ensure that the same representative sample of
grid dots is being
measured for each, which in turn makes it possible to compare how each grid
varies when
compared to the other grids. The variance measurements are input into a
software calculator
which converts the pixel based measurement data into the respective real world
size in
millimeters. The calculator is a spreadsheet which matches pixel based
measurements to
millimeters from a variety of different parameters and performs calculations
to determine the
level of variance of the representative grid dots (left to right and top to
bottom) of the grid. The
results of the analysis produced by the calculator include:
  • The bidirectional level of variance (left to right, top to bottom).
  • The levels of variance per side of a shrink package.
  • The number of grid samples that are within 1 standard deviation.
  • Identification of the most outlying samples in the grid series.
  • Identification of the target grid to be used for digitizing and distortion.
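The calculator is described as a spreadsheet; a minimal Python equivalent of its core arithmetic (converting pixel measurements to millimetres from the known printed dot spacing, and flagging which grid samples fall within one standard deviation) could look like the sketch below. The data layout and function names are assumptions for illustration only.

# Minimal sketch of the variance-calculator arithmetic. Assumed data layout:
# positions has shape (n_grids, n_sample_dots, 2), one (x, y) per sampled dot per grid.
import numpy as np

def pixels_to_mm(positions_px, printed_spacing_mm, measured_spacing_px):
    """Convert pixel measurements to millimetres using the known printed dot spacing."""
    return positions_px * (printed_spacing_mm / measured_spacing_px)

def within_one_std(positions_mm):
    """Return a boolean flag per grid (is its average deviation within 1 std dev?)
    along with the average deviation values themselves."""
    center = positions_mm.mean(axis=0)                            # per-dot mean position
    per_grid = np.linalg.norm(positions_mm - center, axis=2).mean(axis=1)
    return np.abs(per_grid - per_grid.mean()) <= per_grid.std(), per_grid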
The Target Grid is the grid sample that resides closest to the center of
variance and is best
representative of the average or typical grid sample from the series.
Essentially, the relative
positions of the sample dots are compared between the grids to find the grid
which is closest to
the center of variance. Figure 4 summarizes the variance measurement for 22
different grids
with each data point representing an average variance of each grid. The data
points in figure 4
represent the average distance from the center of variance for the combined
representative data
points. If there are 8 data points (regions) per sheet, each plotted point represents all 8 points combined and shows their distance from the combined center of variance. A single data point can be examined on its own, but usually the target grid is identified based on all of the averages. As
can be seen, measurement 28 taken from one grid is closest to the center of
variance while
measurements 30 and 32 representing the variance measurement from two other
grids are farther
away from the center of variance. Since the grid represented by point 28 is
the grid closest to the
center of variance, that grid is selected as the target grid which will be
used for further steps in
the method.
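Reading the description numerically, the center of variance can be taken as the mean position of each sampled dot across all grids, and the target grid as the one whose sampled dots lie closest to those means on average (the role played by point 28 in figure 4). The sketch below is one hedged interpretation of that selection, not the exact spreadsheet logic.

# Sketch: pick the target grid as the one closest, on average, to the center of variance.
import numpy as np

def select_target_grid(positions_mm):
    """positions_mm: (n_grids, n_sample_dots, 2) measured dot positions per grid.

    Returns the index of the target grid and one average distance per grid
    (the values plotted in figure 4)."""
    center = positions_mm.mean(axis=0)                       # (n_sample_dots, 2)
    dists = np.linalg.norm(positions_mm - center, axis=2)    # (n_grids, n_sample_dots)
    avg_dist = dists.mean(axis=1)                            # combined distance per grid
    return int(np.argmin(avg_dist)), avg_dist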
At the conclusion of variance analysis, the method can also be used to provide
a brief
variance report to a customer that outlines any observed traits in the results of forming,
and states the overall level of variance in both millimetres and inches. The
method can also
provide a variance movie (or movie per side of a shrink package) which
visually demonstrates
the sample-to-sample variance by presenting the sequential series of grid
sample photographs.
The variance analysis is also useful in revealing which portions of the
substrate sheet are
most likely to vary significantly from part to part as the substrate sheets
are formed. The
variance analysis can be used to identify areas of higher and lower variance.
Preferably, critical
portions of the image requiring accurate positioning would be restricted to
the areas of the
substrate sheets which have been identified as likely to experience the lowest
part to part
variance.
For shrink packaging applications, the variance analysis can provide a more
comprehensive analysis which includes:
1. Identification of constant state repeatability.
2. Variance in size and shape of the shrink-wrapped part.
3. Film placement variance.
4. The average percentage of shrinkage for a given side of a package, compared
against its
native preshrunk size.
5. Identification of high shrink and high variance regions across the package.
6. Development of an Art Template that identifies the optimal size of
artwork per side of the
package.
7. 'Variance Movies' which show photos of each side of the package in
numbered
succession to show the part-to-part variance that is being produced.
The variance analysis also allows for the creation of virtual simulations
(VRsim) illustrating how
the part variances can affect image distortion. A variance VRsim is a
simulation render which
depicts the effect of the overall level of variance on a simulated final
production part (the
target grid with distorted art). Figure 5 illustrates how a VRsim render can
depict image
distortion depending on part variation, with render 34 representing the image
applied to the
target grid, render 36 the image applied to a grid with a positive variance
and render 38 the
image applied to a grid with a negative variance. The variance VRsim is an
accurate simulation
of how the distorted art would perform if used with a series of final parts
which formed
identically to the grid series used in variance analysis. The variance VRsim
can be presented as
either a series of multi-view still images, an animated video, interactive 3D
PDF, or interactive
WebGL object on a webpage. The use of a 3D PDF allows for interactive toggling
between the
target grid, variance (+), and variance (-) simulations.
Digitizing - Data Collection
When the target grid is selected from the variance analysis, the next step in
the process can
commence, namely the digitizing of the three-dimensional shape of the target
grid. Digitizing is
the technical process by which the shape and size of the real world target
grid will be digitally
represented as a geometric model in three dimensional space by a series of
data points. Figure 6
illustrates how the three-dimensional shape of the target grid (item 40) is
scanned into a 3D
datapoint cloud (item 42) by means of photogrammetry. The series of digitized
3D data points
are a virtual reality equivalent to the actual printed grid dots across the
surface of the physical
target grid. The preferred method of digitizing is photogrammetry.
Photogrammetry involves
measuring common points between multiple photographs taken from different
angles around a
real world object and calculating the location of the points in three-
dimensional space through
triangulation. The photogrammetry results in a collection of data points that
have 3 dimensional
coordinates. The resulting 3D data points from the photogrammetry digitizing
process serve as
the basis for the creation of a uniform quadrangulated 3D model (with
corresponding UV texture
mapping coordinates) to be used for the generation of a distortion.
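The photogrammetry packages listed below handle this internally, but the triangulation step itself is standard: given two calibrated views and the pixel coordinates of the same grid dot in each photograph, a small linear (DLT) solve recovers the 3D point. The sketch is a generic textbook illustration, not the algorithm of any particular product named here.

# Generic two-view triangulation by direct linear transform (DLT).
import numpy as np

def triangulate_point(P1, P2, xy1, xy2):
    """Recover a 3D point from its pixel coordinates in two calibrated photographs.

    P1, P2  : 3x4 camera projection matrices.
    xy1, xy2: (x, y) pixel coordinates of the same grid dot in each photo."""
    x1, y1 = xy1
    x2, y2 = xy2
    A = np.vstack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]                  # back to Euclidean coordinates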
A variety of photogrammetry hardware and software combinations are available
on the
market which are suitable for use in the digitizing/data-collection step. These
photogrammetry rigs
and software are used to create 3D models of objects for a variety of purposes
including 3D
printing and the like. US patent no. 7,555,157 describes one method of
photogrammetry;
however, the method of photogrammetry disclosed therein is outdated and
cumbersome. In
recent years reliable and relatively inexpensive photogrammetry rigs and
software with improved
performance have become commercially available. The photogrammetry application
PhotoModeler™ has been shown to be convenient for use with the present invention for collecting the digitized 3D data points, although other suitable applications are available such as Agisoft Metashape™, Autodesk ReCap™, AliceVision Meshroom™, Bentley ContextCapture™, VisualSFM™ and various other applications. The photogrammetry step results in
a data file
containing the 3D data points extracted from the target grid. The 3D data
points represent a sort
of "low resolution 3D model" of the target grid and, in itself, is not
sufficient to generate a
finished transformed image which will yield a finished image of suitable
quality. The 3D data
points must be converted to a smooth high resolution model, preferably by
transforming the 3D
data points into a uniform quadrangulated 3D model ('Digitized Model'). The
Digitized Model
consists of a uniform quadrangulated model with corresponding UV texture
mapping
coordinates. Numerous tools and techniques are well known in the art for
creating a smooth
higher resolution 2D image from a lower resolution 2D model. It has been
discovered that
producing a smoother high resolution 3D model for use in the present method
from a lower
resolution 3D model can be achieved in much the same way using a 3D animation
application
like Autodesk Maya™, 3DS Max™, Rhinoceros 3D™, Cinema 4D™, Modo™, or Blender™.
One such technique is discussed below.
Creation of Final Digitized Model
The triangulated point cloud object from the digitizing step contains all of
the positional
data points collected by digitizing the grid dots. The next step is to format
the data points into a
'Distortion Object' that will allow for graphic texture application and
correct, precise distorted
graphic image output. For example, using a 3D computer graphics application
such as
Blender™ (a free and open source 3D computer graphics application under the GNU™ General Public License), the data points from the triangulated point cloud are re-
organized in an order and
format that resembles the order and format of the digitized grid, such that
when the points are
quadrangulated, the result is a grid object whose data points are ordered
correctly, with each
data point assigned a UV coordinate that is also ordered correctly, thus
allowing distortion
calculations and distortion image rendering to accurately take place.
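The reorganization and quadrangulation described above can be scripted. The sketch below, written against Blender's Python API, assumes the digitized points have already been sorted into row-major grid order (rows x cols); it then builds one quad per grid cell and assigns each vertex a UV coordinate laid out like the printed grid. The function and object names are illustrative, not part of the invention.

# Hedged sketch (Blender Python API): build a quadrangulated grid object with
# ordered UVs from data points already sorted into row-major (rows x cols) order.
import bpy

def build_distortion_object(points, rows, cols, name="DistortionObject"):
    """points: list of (x, y, z) tuples with len(points) == rows * cols."""
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            faces.append((i, i + 1, i + cols + 1, i + cols))   # one quad per grid cell

    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(points, [], faces)
    mesh.update()

    uv_layer = mesh.uv_layers.new(name="GridUV")
    for poly in mesh.polygons:
        for loop_index in poly.loop_indices:
            v = mesh.loops[loop_index].vertex_index
            r, c = divmod(v, cols)
            # UVs follow the flat printed grid: evenly spaced across [0, 1] x [0, 1].
            uv_layer.data[loop_index].uv = (c / (cols - 1), r / (rows - 1))

    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj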
To better simulate the physical shape of the digitized grid, and to more
accurately apply
and distort an image, smoothing is applied to the distortion object.
Traditionally, smoothing a
polygonal object (in Blender or other 3D programs) is achieved by subdividing
the object so that
the vertex positions are relaxed in relation to each other, and a number of
intermediate vertexes
are interpolated between the original vertexes to achieve a smooth surface.
The invention's
'Distortion Object' achieves smoothing without relaxing the vertex positions
while interpolating
a number of intermediate vertexes. The result is an object that is smooth,
represents the physical
shape of the formed grid and keeps the coordinates of the grid data points
unchanged from their
original position when digitized.
To accomplish smoothing without changing the coordinates of the data points, a
control
object is utilized. A control object is an object (such as a bone system,
skeleton, control null,
lattice, constraint object, rig) that is connected to another object in order
to manipulate it by
adjusting the shape of the control object. An example of a control object in a
3D animation
program would be rotating a 'bone' control object to manipulate a highly
detailed polygonal
model of an arm or hand. For the purposes of the distortion process, a control
object is used to
keep the data points in position while allowing for interpolated smoothed,
intermediate points to
be added in between the data points. To use a control object for these
purposes, a custom build
of Blender has been developed to allow for an increase in vertex density of
control objects.
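The patent describes a control object and a custom build of Blender for this step. Without that build, the same intent (adding interpolated vertices while leaving the measured data points untouched) can be roughly approximated with a stock Subdivision Surface modifier followed by snapping the digitized points back to their recorded coordinates, as sketched below. This is only an approximation of the behaviour described, not the inventors' implementation.

# Rough approximation only: subdivide for smoothness, then snap the digitized data
# points back to their measured coordinates so they remain unchanged.
import bpy
from mathutils import Vector

def smooth_keep_data_points(obj, original_points, levels=2):
    """original_points: iterable of the digitized (x, y, z) coordinates."""
    mod = obj.modifiers.new(name="Smooth", type='SUBSURF')
    mod.levels = levels
    mod.render_levels = levels

    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)     # bake the subdivision into the mesh

    # Catmull-Clark subdivision relaxes vertex positions, so restore each measured
    # point by moving its nearest vertex back onto the digitized coordinate.
    for co in original_points:
        target = Vector(co)
        nearest = min(obj.data.vertices, key=lambda v: (v.co - target).length)
        nearest.co = target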
Transformation (Distortion) of Image
The distortion process involves producing distorted artwork from the
combination of
two-dimensional artwork (an original image) and the final digitized target
grid as illustrated in
figure 7. The artwork is designed according to a 2D CAD drawing of the final
product so that it
can be applied to the corresponding 3D model with a planar texture projection
in the modified
Blender 3D software. A texture projection applies a texture map (explained
below) to a 3D
model by projecting it like a film projector applies an image to a screen in
the real world. The
projection of artwork upon the 3D model creates a connection between the 2D
art file being
projected, and the 3D data that the artwork is being projected upon, whereby
each pixel of the
2D artwork has a position in relation to the 3D data and UV texture mapping
coordinates of the
digitized 3D model, which in turn is representative of the formed / shrunk
target grid and the
printed grid pattern. The texture map is a bitmap image that is applied or
displayed across the
surface geometry of a 3D model. UV texture mapping coordinates serve as a set
of instructions
or locations to assign the 2D texture map to the surface geometry of the 3D
model. Texture
mapping is a common component of many 3D and CAD programs and is a built in
feature in
Blender.
The transformed image (distorted art) is produced in a bitmap file format by
calculating
the position of each pixel of the projected art and converting the pixel
positions to have 2D
coordinates in relation to the order and dimensions of the original grid data.
Render engines in
3D applications can utilize a process called render mapping (also known across
various 3D
software packages as render 'baking'). The render mapping tools are used to
generate a UV
texture map from the surface color of the digitized 3D model by using the
model's corresponding
set of UV texture mapping coordinates. The surface color of the 3D model
consists of the
colored pixels from the 2D artwork being projected upon the surface of the
model. The UV
texture mapping coordinates of the digitized 3D model are laid out in the same
arrangement as
the grid pattern that was printed, formed / shrink wrapped, and digitized.
This produces a
relationship between the projected artwork and the printed grid pattern. The
creation of a render
is a virtual equivalent to the real world target grid being unformed /
unshrunk back into the flat
print after having "painted" graphics onto the formed / shrunk product. The
distortion render
demonstrates how the artwork must look prior to being formed / shrunk in order
to look correct
when formed / shrunk. Figure 8 illustrates a sample art projected onto a 3D
model (left) and the
resulting rendered distortion (right). The rendered distortion can then be
used for printing and
forming the final distorted part.
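Outside of a 3D package, the heart of the render-mapping step is a resampling: each position in the flat (UV, printed-grid) layout looks up the colour that the planar projection places at the corresponding point on the formed surface. The NumPy sketch below captures that idea; the nearest-neighbour sampling, the projection straight down the Z axis and the normalized coordinate convention are simplifying assumptions.

# Minimal sketch of the distortion render: sample the planarly projected artwork at
# each formed-surface position and write the colour into the flat printed-grid layout.
import numpy as np

def distortion_render(artwork, surface_xy, out_shape):
    """artwork   : (H, W, 3) bitmap projected straight down onto the part (planar projection).
    surface_xy: (rows, cols, 2) x/y of each grid point on the formed surface,
                normalized to [0, 1] across the artwork.
    out_shape : (out_h, out_w) pixel size of the distorted output image."""
    h, w = artwork.shape[:2]
    rows, cols = surface_xy.shape[:2]
    out_h, out_w = out_shape

    # Map each output pixel to its nearest grid cell in the flat layout...
    grid_r = np.clip(np.round(np.arange(out_h) * (rows - 1) / (out_h - 1)).astype(int), 0, rows - 1)
    grid_c = np.clip(np.round(np.arange(out_w) * (cols - 1) / (out_w - 1)).astype(int), 0, cols - 1)

    # ...then look up where that grid point sits underneath the projected artwork.
    xy = surface_xy[grid_r[:, None], grid_c[None, :]]            # (out_h, out_w, 2)
    px = np.clip(np.round(xy[..., 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(np.round(xy[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return artwork[py, px]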
When the original image for a distortion project consists of vector graphics,
or a
combination of vector and bitmap graphics, the product of the distortion
render is used as a guide
for the manipulation of the shape of the vector graphics in order to match the
distorted artwork
presented in the distortion render.
Most art files are supplied in vector format, and vector art files cannot be
used in the
distortion render process without first converting the file to a bitmap
format. Therefore, the
distortion of the original vector art file must be performed with the use of
the distortion render
(created from a bitmap image) as a guide and quality control check and a
"manual" distortion of
the vector artwork must be performed using one of the warping tools provided
by software
applications like Adobe Illustrator™. Adobe Illustrator has warping tools
(Warp Brush and
Envelope Distort) which can be used to "manually" distort vector graphics and
bitmap images.
The method of the present invention preferably includes the use of an envelope
mesh tool such as
those included in Adobe Illustrator. The envelope mesh is used by Adobe
Illustrator's Envelope
Distort function in order to distort the vector artwork. The 2D envelope mesh
functions as a
simplified deformation or manipulation tool for graphics contained within the
envelope. The
envelope mesh distortion produces the same result as the distortion render,
and the distorted
vector artwork produced by the envelope mesh will match the distorted bitmap
artwork produced
by the distortion render. To accomplish this, the bitmap rendered distortion
is used as a guide
to which the vector shapes are matched by manipulating the vector shapes in Illustrator or a similar vector graphics editing program. Vector shapes contained in an
envelope can be
manipulated by adjusting the envelope shape. The manipulation of the shape of
vector objects or
envelope shapes in Illustrator to produce the desired vector shapes involves using tools and techniques familiar to those of skill in the art, who will appreciate that the accuracy varies with operator interpretation, technique and ability.
While this manual approach is often useful when working with vector artwork,
it is
possible to use tools in Blender to directly output an envelope mesh for use
in Adobe Illustrator.
In this approach, a bitmap image is generated from the vector artwork which is
then used to
create the transformed image using the rendering step discussed above. Tools
in Blender are
then used to directly output the envelope mesh for the distorted image. The
outputted envelope
mesh can then be used directly in Adobe Illustrator to transform the original
vector image such
that it matches the distortion render without the need for manual adjustment
and reworking. Figure
9 illustrates how an undistorted image (on the left) can be distorted (on the
right) when the
envelope is applied.
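The text does not specify how the envelope mesh is handed to Illustrator, so the sketch below simply samples a coarse control-point grid from the same flat-versus-formed correspondence used for the render and writes it to a JSON file that a downstream import script could consume. The file layout, grid size and function name are assumptions for illustration only.

# Hedged sketch: export a coarse envelope-mesh control grid as JSON.
import json
import numpy as np

def export_envelope_mesh(surface_xy, control_rows, control_cols, path):
    """surface_xy: (rows, cols, 2) distorted x/y of each grid point (as in the render step).
    Writes a control_rows x control_cols subset of points for use as envelope handles."""
    rows, cols = surface_xy.shape[:2]
    r_idx = np.linspace(0, rows - 1, control_rows).round().astype(int)
    c_idx = np.linspace(0, cols - 1, control_cols).round().astype(int)
    control = surface_xy[r_idx[:, None], c_idx[None, :]]         # (control_rows, control_cols, 2)

    with open(path, "w") as f:
        json.dump({"rows": control_rows, "cols": control_cols,
                   "points": control.tolist()}, f, indent=2)

# Example call with dummy data standing in for a real digitized grid.
export_envelope_mesh(np.random.rand(60, 40, 2), control_rows=5, control_cols=4, path="envelope_mesh.json")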
The present method has several advantages over previous methods of creating
transformed images. Firstly, the method produces transformed images which
yield final 3D
images which are highly accurate and distortion free. These transformed images
can be
generated quickly from images supplied as either bitmap images, vector images
or combined
vector/bitmap images. The method provides a great deal of control over
placement of the image
on the substrate to take full advantage of where the substrate will be
distorted. The method also
allows for quick turnaround times for producing transformed images. The method
also allows
users to pre-determine how images will look when displayed on three-
dimensional parts.
A specific embodiment of the present invention has been disclosed; however,
several
variations of the disclosed embodiment could be envisioned as within the scope
of this invention.
It is to be understood that the present invention is not limited to the
embodiments described
above, but encompasses any and all embodiments within the scope of the
following claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Inactive: Office letter 2024-03-28
Inactive: First IPC assigned 2024-02-02
Inactive: IPC assigned 2024-02-02
Inactive: IPC expired 2024-01-01
Inactive: IPC removed 2023-12-31
Letter Sent 2023-06-21
All Requirements for Examination Determined Compliant 2023-06-01
Request for Examination Requirements Determined Compliant 2023-06-01
Request for Examination Received 2023-06-01
Maintenance Request Received 2022-07-20
Priority Document Response/Outstanding Document Received 2021-03-04
Application Published (Open to Public Inspection) 2021-02-02
Inactive: Cover page published 2021-02-01
Letter Sent 2021-01-07
Inactive: IPC assigned 2020-12-08
Inactive: First IPC assigned 2020-12-08
Inactive: IPC assigned 2020-12-08
Inactive: IPC assigned 2020-12-08
Common Representative Appointed 2020-11-07
Letter sent 2020-08-17
Filing Requirements Determined Compliant 2020-08-17
Priority Claim Requirements Determined Compliant 2020-08-14
Request for Priority Received 2020-08-14
Common Representative Appointed 2020-07-31
Inactive: Pre-classification 2020-07-31
Small Entity Declaration Determined Compliant 2020-07-31
Application Received - Regular National 2020-07-31
Inactive: QC images - Scanning 2020-07-31

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2023-06-01

Note: If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • the additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
Application fee - small 2020-07-31 2020-07-31
MF (application, 2nd anniv.) - small 02 2022-08-02 2022-07-20
MF (application, 3rd anniv.) - small 03 2023-07-31 2023-06-01
Request for examination - small 2024-07-31 2023-06-01
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
DISTORTION ARTS LLC
Past Owners on Record
JOHN DAVIDSON
NEIL COMPSON
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.

If you have any difficulty accessing content, you can call the Client Service Centre at 1-866-997-1936 or send them an e-mail at CIPO Client Service Centre.


Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Description 2020-07-30 19 779
Claims 2020-07-30 5 150
Abstract 2020-07-30 1 22
Drawings 2020-07-30 5 228
Representative drawing 2021-01-07 1 5
Cover Page 2021-01-07 2 42
Courtesy - Office Letter 2024-03-27 2 190
Courtesy - Filing certificate 2020-08-16 1 576
Priority documents requested 2021-01-06 1 533
Courtesy - Acknowledgement of Request for Examination 2023-06-20 1 422
Request for examination 2023-05-31 7 175
New application 2020-07-30 3 78
Priority document 2021-03-03 1 27
Maintenance fee payment 2022-07-19 1 25