Patent 3056834 Summary


(12) Patent: (11) CA 3056834
(54) English Title: SYSTEM AND METHOD FOR COLLECTING GEOSPATIAL OBJECT DATA WITH MEDIATED REALITY
(54) French Title: SYSTEME ET METHODE DE COLLECTE DE DONNEES GEOSPATIALES AU MOYEN DE REALITE ELECTRONIQUE
Status: Granted
Bibliographic Data
Abstracts

English Abstract

There is provided a system and method of collecting geospatial object data with mediated reality. The method including: receiving a determined physical position; receiving a live view of a physical scene; receiving a geospatial object to be collected; presenting a visual representation of the geospatial object to a user with the physical scene; receiving a placement of the visual representation relative to the physical scene; and recording the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.


French Abstract

Il est décrit un système et une méthode servant à recueillir des données sur des objets géospatiaux au moyen de la réalité modérée. Cette méthode comprend : la réception d'une position physique déterminée; la réception d'une image en direct d'une scène physique; la réception d'un objet géospatial devant être recueilli; la présentation d'une représentation visuelle de l'objet géospatial à un utilisateur au moyen de la scène physique; la réception d'un emplacement de la représentation visuelle par rapport à la scène physique; l'enregistrement de la position de représentation visuelle ancrée à une position physique dans la scène physique au moyen de la position physique déterminée.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS

1. A computer-implemented method of collecting geospatial object data with mediated reality, the method comprising:
   receiving a determined physical position, the determined physical position comprising one or more of latitude, longitude, elevation, and bearing, the physical position determined from at least one member of a group consisting of global navigation satellite systems (GNSS), real-time kinematic (RTK) positioning, manual calibration, vGIS calibration, and markers;
   receiving a geospatial object to be collected;
   presenting a visual representation of the geospatial object to a user relative to a physical scene;
   receiving a placement of the visual representation relative to the physical scene; and
   recording the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.

2. The method of claim 1, wherein the physical scene comprises a counterpart geospatial object to the geospatial object, and wherein receiving placement of the visual representation comprises placing the visual representation at a location of the counterpart geospatial object in the physical scene.

3. The method of claim 2, wherein receiving a placement of the visual representation comprises receiving one or more inputs from a user each representing at least one of moving the visual representation, rotating the visual representation, tilting the visual representation, and sizing the visual representation.

4. The method of claim 2, wherein placing the visual representation at the location of the counterpart geospatial object in the physical scene comprises using machine vision and artificial intelligence techniques to locate the counterpart geospatial object in the physical scene and place the visual representation at such location.

5. The method of claim 1, wherein recording the position of the visual representation anchored into the physical position in the physical scene comprises using the latitude, the longitude, and the elevation of the determined physical position, a determined azimuth, and a distance to a geospatial object in the physical scene.

6. The method of claim 5, wherein the distance to the geospatial object in the physical scene comprises receiving a distance measurement from a distance finder device.

7. The method of claim 5, wherein using the latitude, the longitude, and the elevation of the determined physical position comprises capturing metadata from at least one of global navigation satellite systems (GNSS) and real-time kinematic (RTK), and correcting the difference in distance between the metadata and the position of the visual representation.

8. The method of claim 1, wherein recording the position of the visual representation anchored into the physical position in the physical scene further comprises recording at least one of the size, height, and orientation of the visual representation of the geospatial object.
9. A system of collecting geospatial object data with mediated reality, the system comprising one or more processors and data storage memory in communication with the one or more processors, the one or more processors configured to execute:
   a position module to receive a determined physical position from a spatial sensor, the determined physical position comprising one or more of latitude, longitude, elevation, and bearing, the physical position determined from at least one member of a group consisting of global navigation satellite systems (GNSS), real-time kinematic (RTK) positioning, manual calibration, vGIS calibration, and markers;
   an object module to receive a geospatial object to be collected;
   a display module to present, to a display device, a visual representation of the geospatial object relative to a physical scene;
   a placement module to receive a placement of the visual representation relative to the physical scene; and
   a recordation module to record the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.
10. The system of claim 9, wherein the physical scene comprises a counterpart geospatial object to the geospatial object, and wherein receiving placement of the visual representation comprises placing the visual representation at a location of the counterpart geospatial object in the physical scene.

11. The system of claim 10, wherein receiving a placement of the visual representation comprises receiving one or more inputs from a user from an input device, where each input represents at least one of moving the visual representation, rotating the visual representation, tilting the visual representation, and sizing the visual representation.

12. The system of claim 10, wherein placing the visual representation at the location of the counterpart geospatial object in the physical scene comprises using machine vision and artificial intelligence techniques to locate the counterpart geospatial object in the physical scene and place the visual representation at such location.

13. The system of claim 10, wherein recording the position of the visual representation anchored into the physical position in the physical scene comprises using the latitude, the longitude, and the elevation of the determined physical position, a determined azimuth, and a distance to a geospatial object in the physical scene.
14. The system of claim 13, wherein the distance to the geospatial object in the physical scene comprises receiving a distance measurement from a distance finder device.
15. The system of claim 13, wherein using the latitude, the longitude, and the elevation of the determined physical position comprises capturing metadata from at least one of global navigation satellite systems (GNSS) and real-time kinematic (RTK), and correcting the difference in distance between the metadata and the position of the visual representation.

Description

Note: Descriptions are shown in the official language in which they were submitted.


SYSTEM AND METHOD FOR COLLECTING GEOSPATIAL OBJECT DATA WITH MEDIATED REALITY

TECHNICAL FIELD

[0001] The following relates generally to geospatial data management; and more particularly, to systems and methods for collecting geospatial object data with mediated reality.

BACKGROUND

[0002] Surveying firms, mapping firms, municipalities, public utilities, and many other entities, collect, store, use, and disseminate vast amounts of geospatial data. This geospatial data can be used to manage daily operations and conduct mission-critical tasks; for example, asset maintenance, construction plan design, zoning proposals, among many others. Traditionally, geospatial data is collected using manual measurements (offsets) from detectable local landscape features; for example, a curb line. Then the collected measurements would be plotted on a map to indicate object/asset locations. The maps could then be reprinted for use in the field. While much of this geospatial data can be digitized, the accuracy and quality of such digital representations may affect the tasks and applications that rely on such data. In other approaches, location tools, such as global navigation satellite systems (GNSS) and/or real-time kinematic (RTK), can be used to collect digital geospatial data. These approaches generally require cumbersome, unsophisticated, and time-consuming validation techniques.

SUMMARY

[0003] In an aspect, there is provided a computer-implemented method of collecting geospatial object data with mediated reality, the method comprising: receiving a determined physical position; receiving a geospatial object to be collected; presenting a visual representation of the geospatial object to a user relative to a physical scene; receiving a placement of the visual representation relative to the physical scene; and recording the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.
[0004] In a particular case of the method, the determined physical position comprises latitude, longitude, elevation, and bearing.
[0005] In another case of the method, the physical position is received from at least one of global navigation satellite systems (GNSS) and real-time kinematic (RTK) positioning.
[0006] In yet another case of the method, the physical position is determined using at least one of manual calibration, vGIS calibration, and markers.
[0007] In yet another case of the method, the physical scene comprises a counterpart geospatial object to the geospatial object, and wherein receiving placement of the visual representation comprises placing the visual representation at a location of the counterpart geospatial object in the physical scene.
[0008] In yet another case of the method, receiving a placement of the visual representation comprises receiving one or more inputs from a user each representing at least one of moving the visual representation, rotating the visual representation, tilting the visual representation, and sizing the visual representation.
[0009] In yet another case of the method, placing the visual representation at the location of the counterpart geospatial object in the physical scene comprises using machine vision and artificial intelligence techniques to locate the counterpart geospatial object in the physical scene and place the visual representation at such location.
[0010] In yet another case of the method, recording the position of the visual representation anchored into the physical position in the physical scene comprises using the latitude, the longitude, and the elevation of the determined physical position, a determined azimuth, and a distance to a geospatial object in the physical scene.
[0011] In yet another case of the method, the distance to the geospatial object in the physical scene comprises receiving a distance measurement from a distance finder device.
[0012] In yet another case of the method, using the latitude, the longitude, and the elevation of the determined physical position comprises capturing metadata from global navigation satellite systems (GNSS) and correcting the difference in distance between the GNSS data and the position of the visual representation.
[0013] In yet another case of the method, recording the position of the visual representation anchored into the physical position in the physical scene further comprises recording at least one of the size, height, and orientation of the visual representation of the geospatial object.
[0014] In another aspect, there is provided a system of collecting geospatial object data with mediated reality, the system comprising one or more processors and data storage memory in communication with the one or more processors, the one or more processors configured to execute: a position module to receive a determined physical position; an object module to receive a geospatial object to be collected; a display module to present, to a display device, a visual representation of the geospatial object relative to a physical scene; a placement module to receive a placement of the visual representation relative to the physical scene; and a recordation module to record the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.
[0015] In a particular case of the system, the determined physical position comprises latitude, longitude, elevation, and bearing.
[0016] In another case of the system, the physical position is received from at least one of global navigation satellite systems (GNSS) and real-time kinematic (RTK) positioning.
[0017] In yet another case of the system, the physical scene comprises a counterpart geospatial object to the geospatial object, and wherein receiving placement of the visual representation comprises placing the visual representation at a location of the counterpart geospatial object in the physical scene.
[0018] In yet another case of the system, receiving a placement of the visual representation comprises receiving one or more inputs from a user from an input device, where each input represents at least one of moving the visual representation, rotating the visual representation, tilting the visual representation, and sizing the visual representation.
[0019] In yet another case of the system, placing the visual representation at the location of the counterpart geospatial object in the physical scene comprises using machine vision and artificial intelligence techniques to locate the counterpart geospatial object in the physical scene and place the visual representation at such location.
[0020] In yet another case of the system, recording the position of the visual representation anchored into the physical position in the physical scene comprises using the latitude, the longitude, and the elevation of the determined physical position, a determined azimuth, and a distance to a geospatial object in the physical scene.
[0021] In yet another case of the system, the distance to the geospatial object in the physical scene comprises receiving a distance measurement from a distance finder device.
[0022] In yet another case of the system, using the latitude, the longitude, and the elevation of the determined physical position comprises capturing metadata from global navigation satellite systems (GNSS) and correcting the difference in distance between the GNSS data and the position of the visual representation.
[0023] These and other aspects are contemplated and described herein. It will be appreciated that the foregoing summary sets out representative aspects of the system and method to assist skilled readers in understanding the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] A greater understanding of the embodiments will be had with reference to the figures, in which:
[0025] FIG. 1 illustrates a block diagram of a system of collecting geospatial object data with mediated reality, according to an embodiment;
[0026] FIG. 2 illustrates a flow diagram of a method of collecting geospatial object data with mediated reality, according to an embodiment;
[0027] FIG. 3A illustrates an exemplary image of collecting geospatial data by placing an antenna;
[0028] FIG. 3B illustrates an exemplary diagram of collecting geospatial data by placing an antenna;
[0029] FIG. 4 illustrates an example screenshot of selection of an object, in accordance with the system of FIG. 1;
[0030] FIG. 5 illustrates an example screenshot of displaying a visual representation of an object over a captured scene, in accordance with the system of FIG. 1;
[0031] FIG. 6 illustrates an example screenshot of placing the visual representation of the object, in accordance with the system of FIG. 1;
[0032] FIG. 7 illustrates an example screenshot after recordation of placement of the visual representation of the object, in accordance with the system of FIG. 1.

DETAILED DESCRIPTION
[0033] Embodiments will now be described with reference to the figures. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0034] Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: "or" as used throughout is inclusive, as though written "and/or"; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; "exemplary" should be understood as "illustrative" or "exemplifying" and not necessarily as "preferred" over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description.
[0035] Any module, unit, component, server, computer, terminal, engine, or device exemplified herein that executes instructions may include or otherwise have access to computer-readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application, or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer-readable media and executed by the one or more processors.
[0036] The following relates generally to geospatial data management; and more particularly, to systems and methods for collecting geospatial object data with mediated reality.
[0037] While the following disclosure refers to mediated reality, it is contemplated that this includes any suitable mixture of virtual aspects and real aspects; for example, augmented reality (AR), mixed reality, modulated reality, holograms, and the like. The mediated reality techniques described herein can utilize any suitable hardware; for example, smartphones, tablets, mixed reality devices (for example, Microsoft™ HoloLens™), true holographic systems, purpose-built hardware, and the like.
[0038] With the rise of mobile computing, geographic information systems (GIS) using high-precision global navigation satellite systems (GNSS) and/or real-time kinematic (RTK) positioning can be used to digitize data collection of geospatial data. As illustrated in the example of FIGS. 3A and 3B, certain computer-implemented approaches allow the collection of geospatial data by placing a GNSS antenna near or on top of a placemark and then recording, for example, the latitude, longitude, and elevation of the antenna; thus, the 'x, y, z' geospatial coordinates of the placemark in two-dimensional (2D) or three-dimensional (3D) space. In some cases, complementary tools, for example laser mapping, can further enhance collection by allowing collection of data of hard-to-reach objects.
[0039] Captured geospatial data can be displayed on a 2D map to help a technician, or other user, validate accuracy of the data. In some cases, the elevation data and other attributes of the object can be stored as part of the captured geospatial data metadata in text format.
[0040] Although mobile computing with GNSS and/or RTK can provide high-accuracy (for example, up to one centimeter) digital geospatial data collection, a substantial drawback of some approaches is the lack of sophisticated visual feedback. This lack of visual feedback means that ensuring the accuracy of collected data relies on multiple hardware components and on a technician's skill in interpreting readings from multiple gauges. Thus, these approaches can be heavily reliant on non-visual representations that require dubious models of the collected information to validate data quality.
[0041] Advantageously, the present embodiments can use advanced visualization technologies to implement data collection based on high-accuracy visual placement of geospatial objects using augmented or mixed realities. The present embodiments can apply the described approaches to speed up and enhance the data gathering processes with real-time visual representations using mediated reality; for example, augmented reality, mixed reality, or holograms.
[0042] Embodiments of the present disclosure can generate a geospatial image that is embedded in an image of a scene captured by a camera (for example, as in augmented reality) or displayed as a hologram (for example, as in mixed reality or holographic systems). This can be performed in a manner that anchors to reality through geographical positioning, thereby generating a geographically relevant composite image or a hologram that can be presented to a user.
[0043] Embodiments of the present disclosure can advantageously provide a three-dimensional model or a raster symbol of real-time data viewable, for example, on mobile devices, wearable devices, or other viewing platforms. The geospatial images can be used to provide real-time visual representations to perform geospatial data collection and/or validate accuracy of geospatial data through visual feedback.
[0044] Turning to FIG. 1, a system of collecting geospatial object data with mediated reality 150 is shown, according to an embodiment. In this embodiment, the system 150 is run on a local computing device (for example, a mobile device). In further embodiments, the system 150 can be run on any other computing device; for example, a server, a dedicated piece of hardware, a laptop computer, a smartphone, a tablet, mixed reality devices such as Microsoft™ HoloLens™, true holographic systems, purpose-built hardware, or the like. In some embodiments, the components of the system 150 are stored by and executed on a single computing device. In other embodiments, the components of the system 150 are distributed among two or more computer systems that may be locally or remotely distributed; for example, using cloud-computing resources.
[0045] FIG. 1 shows various physical and logical components of an embodiment of the system 150. As shown, the system 150 has a number of physical and logical components, including a central processing unit ("CPU") 152 (comprising one or more processors), random access memory ("RAM") 154, a user interface 156, a device interface 158, a network interface 160, non-volatile storage 162, and a local bus 164 enabling CPU 152 to communicate with the other components. CPU 152 executes an operating system, and various modules, as described below in greater detail. RAM 154 provides relatively responsive volatile storage to CPU 152. The user interface 156 enables an administrator or user to provide input via an input device, for example a mouse or a touchscreen. The user interface 156 also outputs information to output devices; for example, a mediated reality device 192, a display or multiple displays, a holographic visualization unit, and the like. The mediated reality device 192 can include any device suitable for displaying augmented or mixed reality visuals; for example, smartphones, tablets, holographic goggles, purpose-built hardware, or other devices. The mediated reality device 192 may include other output sources, such as speakers. In some cases, the system 150 can be collocated or part of the mediated reality device 192. In some cases, the user interface 156 can have the input device and the output device be the same device (for example, via a touchscreen). The device interface 158 can communicate with one or more other computing devices 190 that are either internal or external to the system 150; for example, a GNSS device to capture a position and/or elevation, a camera or camera array to capture image(s) of a scene, sensors for determining position and/or orientation (for example, time-of-flight sensors, compass, depth sensors, spatial sensors, inertial measurement unit (IMU), and the like). In some cases, at least some of the computing devices 190 can be collocated or part of the mediated reality device 192. In further embodiments, the device interface 158 can retrieve data from other devices, such as positions, elevations, and images, which have been previously captured, from the local database 166 or a remote database via the network interface 160.
[0046] The network interface 160 permits communication with other systems, such as other computing devices and servers remotely located from the system 150. Non-volatile storage 162 stores the operating system and programs, including computer-executable instructions for implementing the operating system and modules, as well as any data used by these services. Additional stored data can be stored in a database 166. During operation of the system 150, the operating system, the modules, and the related data may be retrieved from the non-volatile storage 162 and placed in RAM 154 to facilitate execution.
[0047] In an embodiment, the system 150 further includes a number of modules to be executed on the one or more processors 152, including an object module 170, a position module 172, a display module 174, a placement module 176, and a recordation module 178.
[0048] In an embodiment, the object module 170 can have access to a library of objects designated for data collection. The library of objects can be stored locally on the database 166 or as part of a remote GIS system via the network interface 160.
[0049] Turning to FIG. 2, a flowchart for a method of collecting geospatial object data with mediated reality 200 is shown, according to an embodiment.
[0050] At block 202, the position module 172 receives or determines a physical position of the system 150 from a spatial sensor type computing device 190, where the physical position includes geographical coordinates. In most cases, the geographical coordinates are relative to the surface of the earth, for example latitude and longitude. In other cases, the geographical coordinates can be relative to another object; for example, relative to a building or landmark. In some cases, the physical position includes an elevation. In some cases, the position module 172 also receives or determines an orientation or bearing of the system 150; for example, comprising the physical orientation of the direction of the camera. In an example, the position module 172 can determine the position and orientation in 2D or 3D space (latitude, longitude, and, in some cases, elevation) using internal or external spatial sensors and positioning frameworks; for example, GNSS and/or RTK, Wi-Fi positioning system (WPS), manual calibration, vGIS calibration, markers, and/or other approaches. The position module 172 can then track the position and/or the orientation during operation of the system 150. In some cases, machine vision combined with distance finders can be used to determine position, for example, using triangulation. In some cases, distance can be determined, for example, by using a built-in or external range finder spatial sensor directed to the object. In some cases, distance can be determined by a spatial sensor by capturing several images of the scene and comparing pixel shift. In some cases, distance can be determined using time-of-flight (TOF) spatial sensors. In some cases, beacon-based positioning can be used; for example, using iBeacon™.
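As a rough illustration of the kind of pose data block 202 describes, the sketch below shows one way the determined physical position and bearing might be represented in code. It is a minimal example only; the DevicePose class and its field names are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DevicePose:
    """Hypothetical container for the determined physical position (block 202)."""
    latitude_deg: float            # geographic latitude, WGS84
    longitude_deg: float           # geographic longitude, WGS84
    elevation_m: Optional[float]   # elevation above the reference surface, if available
    bearing_deg: Optional[float]   # camera/compass bearing, clockwise from true north
    source: str                    # e.g. "GNSS", "RTK", "manual calibration", "markers"

# Example: a pose as it might be reported by a GNSS/RTK receiver.
pose = DevicePose(latitude_deg=43.6532, longitude_deg=-79.3832,
                  elevation_m=91.2, bearing_deg=145.0, source="RTK")
print(pose)
```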
[0051] At block 204, the display module 174 displays a mediated reality 'live' view (such as a video stream or a sequential stream of captured images) received from a camera. This live view is oriented in the direction of the system 150 as received by the position module 172 in block 202. In embodiments using holographic devices, in some cases, receiving the 'live view' can be omitted because the visual representation itself is displayed in the physical space.
[0052] At block 206, the object module 170 presents at least a subset of the library of objects to a user. The object module 170 can utilize an internal library of geospatial object definitions, an external library of geospatial object definitions, or a library of geospatial object definitions defined by the user. In some cases, the user can provide input to create or edit object definitions. An example screenshot of such a presentation is shown in FIG. 4. In some cases, object definitions can be generated automatically based on one or many dynamic factors. The object definition can have associated therewith attributes of the object; for example, geometry type, 2D or 3D model parameters, the object type (for example, hydrant or manhole), object condition, colour, shape, and other parameters. In other cases, the object can be defined as a simple point, line, or area, without any additional attributes. In some cases, for example where the system 150 is configured to collect data of objects of a single type, the object selection at block 206 can be bypassed as the object type can be automatically selected.
[0053] At block 208, the object module 170 receives an object definition. In a particular case, receiving the object definition can include receiving a selection from the user with respect to an object the user would like to collect. In this way, as described, the system 150 can produce and render visual representations (for example, three-dimensional geospatial models) directly to the user. In further cases, the object and the object definition (metadata) can be passed to the object module 170 from an external application to enable data collection for objects that are not stored in the library (for example, using dynamic data definition). An example screenshot of a selection is also shown in FIG. 4, whereby the user has selected the "ssManhole" object. In further cases, the selected object can be received from an external software source or via a hyperlink.
[0054] In further cases, the object module 170 can determine the object definition automatically via machine vision analysis of one or more physical objects in the captured scene. As an example, a machine vision algorithm can determine that the camera is capturing a hydrant that is red and 1.1 m tall, which forms the object definition. As described herein, this automatically-generated object definition can be used by the display module 174 to generate a visual representation of the example hydrant. In another example, the object module 170 can determine a point (or a cross or a bulls-eye) at a location of a utility pole base. In this example, the object definition can be a point (or cross or bulls-eye) of that location.
[0055] In an example embodiment, a machine vision model can be used to identify the object in the captured scene, and then identify an aspect of the object; for example, a closest point of the object or a center of the object. The position module 172 can determine a distance to that aspect; for example, using a distance finder. In some cases, machine vision can also be used to identify properties of the object, as described herein.
[0056] In some cases, the object library can be integrated with external systems, such that the object module 170 can receive one or more object definitions from these external systems. The format of the objects in the library can include GeoJSON or other protocols. In an example external integration arrangement, the external system crafts a token that contains information necessary for the system 150 to understand what object is being collected and what are the properties of such object. This token can then be passed to the system 150 via the network interface 160 using "pull" or "push" request approaches.
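To make the external-integration idea more concrete, the sketch below shows what a GeoJSON-style object-definition token could look like. The property names and values are invented for illustration; the patent does not prescribe a schema, and the only confirmed detail from the text is that GeoJSON is one possible format.

```python
import json

# Hypothetical object-definition token that an external GIS system might push
# to the object module; all property names here are illustrative only.
object_definition_token = {
    "type": "Feature",
    "geometry": None,  # geometry is filled in once the user places the object
    "properties": {
        "objectType": "ssManhole",      # e.g. a sanitary-sewer manhole
        "geometryType": "Point",
        "diameter_m": 1.2,
        "depth_m": 3.2,
        "condition": "good",
        "installed": "1987",
    },
}

# The token could be exchanged over the network interface as JSON.
payload = json.dumps(object_definition_token)
print(payload)
```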
[0057] The object definition includes, at the very least, the type of object; for example, a pipe or a point. In further cases, the object definition can also include attributes or characteristics of the object. As an example, the object definition can include: a manhole 1.2m wide and 3.2m deep with grey cover installed in 1987, oriented 14d North. Generally, the definition can be as extensive or as simple as required to generate a visual representation.
[0058] At block 210, the display module 174 presents a visual representation to the user via the user interface 156, where the visual representation is associated with the selected object. The visual representation can be, for example, a three-dimensional (3D) digital-twin model resembling the collected object. In further cases, the visual representation can be, for example, a symbol representing the object, such as a point, a flag, a tag, or the like. In further cases, the visual representation can be, for example, a schematic representation, a raster image, or the like. In some cases, the type of visual representation can be associated with the object in the library; and in other cases, the type of visual representation can be selected by the user.
[0059] In some cases, along with the visual representation, other information can be displayed; for example, distance, elevation, size, shape, colours, and the like, can be displayed to assist with visualization and/or precise placement. In some cases, such as with GIS, the visual representation location can be represented by a single point, line, or outline, and to help the user understand where the object is placed, a point, a cross, a line, or other means, can be used within the visual representation.
[0060] In other cases, the display module 174 can stream the visual representation (for example, a 3D model or model rendering) from a server, cloud-based infrastructure, or other external processing device. Instructions for such streaming can be provided in any suitable format (for example, KML or GeoJSON) or any other proprietary format. FIG. 5 illustrates an example screenshot of a visual representation 502 presented on the screen. In this example, the selected object 502 is a 3D model of a pipe. As illustrated, the background of the screen can be the mediated reality 'live' view received from the camera that is oriented in the direction of the system 150.
[0061] At block 212, the placement module 176 receives input from the user via the user interface 156 with respect to placement and associated aspects of the presented visual representation. Using the visual representation, the user can position the visual representation in space to align it with physical objects captured by the camera displayed in the background of the screen; for example, a physical counterpart of the object. The visual representation can be moved along the x or y or z axis, or along two (planar) or three (3D) axes at once. In some cases, the placement module 176 can receive elevation of the visual cue from the user moving the object along a vertical axis or moving the object in x, y, z space. In other cases, the placement module 176 can receive elevation defined in advance of placement, during placement, or after placement by the user. In some cases, objects that are defined as precise shapes, for example those used in engineering designs rather than simple points or lines, can rely on an outline, a bounding box, or 3D visuals to help the user align the visual representation with a physical object captured by the camera to capture not just the object position, but also its rotation.
[0062] In an example, the user input could include receiving two-finger input gestures on a touchscreen input device for tilting, rotating, moving, and/or sizing the visual representation relative to objects on the screen. In this example, the two-finger gestures can include moving the fingers closer or farther away from each other for sizing the visual representation, rotating the fingers relative to each other for rotation of the visual representation, moving one finger relative to a stationary finger for tilting of the visual representation, and moving the fingers together for moving of the visual representation. In further cases, variations or other suitable gestures can be used. In other examples, the user input can be received from any suitable input device, for example, hardware buttons, a mouse, a keyboard, or the like. In other examples, the user input can include moving of the system 150 itself captured by movement sensors such as accelerometers and gyroscopes. In other examples, the user input can include hands-free gestures. In other examples, the user input can include audible or spoken commands.
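The paragraph above maps two-finger gestures to sizing, rotation, and movement of the visual representation. The hedged sketch below shows one common way such a mapping can be derived from two touch points sampled at the start and end of a gesture; the function name and decomposition are illustrative, not the patent's implementation.

```python
import math

def pinch_transform(p1_start, p2_start, p1_end, p2_end):
    """Derive scale, rotation (degrees), and translation from a two-finger gesture.

    Each argument is an (x, y) screen coordinate. This is a generic two-touch
    decomposition, offered only to illustrate the gestures described in the text.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    scale = dist(p1_end, p2_end) / dist(p1_start, p2_start)        # pinch in/out -> sizing
    rotation = angle(p1_end, p2_end) - angle(p1_start, p2_start)   # twist -> rotation
    m_start, m_end = midpoint(p1_start, p2_start), midpoint(p1_end, p2_end)
    translation = (m_end[0] - m_start[0], m_end[1] - m_start[1])   # drag together -> move
    return scale, rotation, translation

# Example: fingers move apart and twist slightly while drifting right and down.
print(pinch_transform((100, 100), (200, 100), (90, 110), (230, 130)))
```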
[0063] FIG. 6 illustrates an example screenshot of the visual representation 502 presented on the screen in FIG. 5 after having been aligned with a manhole cover 504 in view of movement inputs received from the user. In this example, the visual representation 502 represents a manhole pipe aligned with, and located subsurface to, the manhole cover 504 captured by the camera.
[0064] In some cases, for example where the visual representation is anything other than a 3D model of the object (for example, a manhole symbolized by a point or a flag), a key location (for example, the point or the base of the symbol) can be placed at a respective key point of the physical object captured by the camera (for example, at the center of the manhole). In some cases, the symbology for each object, as well as the key locations and points, can be defined by each user.
[0065] In further cases, the placement module 176 can determine the actual object placement based on the object shape without providing any visual representations about the actual point location to the user. For example, a manhole can have an inferred location at the geometric center of the manhole cover. By aligning the virtual representation of the manhole cover with the physical object, the placement module 176 can determine that its center is at the object location.
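A loose sketch of how the inferred-location idea above, and the "snap" behaviour described in the paragraph that follows, might be expressed in code is given below. The detection result and model classes are hypothetical placeholders under the assumption that machine vision supplies a world-space center and diameter; this is not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Hypothetical machine-vision result for a counterpart object (e.g. a manhole cover)."""
    center_xyz: tuple   # world-space center of the detected object, metres
    diameter_m: float   # estimated real-world diameter

@dataclass
class ModelInstance:
    """Hypothetical placed visual representation."""
    position_xyz: tuple
    nominal_diameter_m: float
    scale: float = 1.0

def snap_to_detection(model: ModelInstance, detection: DetectedObject) -> ModelInstance:
    # Match the geometric center and auto-size the model to the detected diameter.
    model.position_xyz = detection.center_xyz
    model.scale = detection.diameter_m / model.nominal_diameter_m
    return model

pipe = ModelInstance(position_xyz=(0.0, 0.0, 0.0), nominal_diameter_m=1.0)
cover = DetectedObject(center_xyz=(2.4, -0.1, 6.8), diameter_m=1.2)
print(snap_to_detection(pipe, cover))   # position snapped to the cover, scale = 1.2
```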
[0066] In some cases, the placement module 176 can "snap" the visual representation to a physical object captured by the camera. In this way, the placement module 176 can set the correct location, elevation, and size of the visual cue to the real counterpart object (for example, in the example of FIG. 6, snapping the diameter of the pipe to the diameter of the manhole). In a particular case, the snapping can be accomplished via machine vision techniques. In an example, the machine vision can recognize an object in the scene captured by the camera and recognize necessary environmental variables; for example, grade, distance, object size, and the like. These environmental variables can be coupled with information coming from topographical maps or built-in surface scanning capabilities; for instance, to increase accuracy. The machine vision technique can position a visual representation of the object relative to the object captured by the camera. In an example, the relative positioning can be based on matching a geometric center and/or edges of the visual representation with the object captured by the camera. In some cases, the visual representation can be auto-sized to match the object captured by the camera. In some cases, display of the visual representation can be maintained in the same geospatial location as the corresponding physical object to create an illusion that it is "glued" or "snapped" to the physical object.
[0067] In some cases, once the placement module 176 receives input from the user with respect to placement of the visual representation, the visual representation can be temporarily fixed in place so that the user can look at it from different angles to confirm the location.
[0068] In some cases, since topography and elevation of the area captured by the camera can distort accuracy (for example, objects that are deeper may appear closer instead), the display module 174 can display projected distance to an object captured by the camera to assist the user in placing the visual representation on the screen with higher accuracy. In some cases, as described herein, distance finders can be used to determine the distance to the object; for example, to ensure survey-grade data collection.
[0069] At block 214, the recordation module 178 records the position of the presented visual representation anchored into the physical space; for example, the visual representation's latitude, longitude, and elevation. In some cases, the recordation module 178 can use machine vision and/or machine learning techniques to determine the correlation between the position of the visual representation presented on the screen and a corresponding actual position in physical space. In some cases, the recordation module 178 can also record other properties of the presented visual representation; for example, size and height. The recordation can include storing the type of object, its position, and/or its properties in a geospatial data storage, such as on the database 166 or sent to an external storage via the network interface 160. In a particular case, the position and the orientation of the system 150 determined by the position module 172 is used to coordinate an on-screen location with a geospatial position.
[0070] In some cases, the position of the object represented by the visual representation can be accurately determined, by the position module 172, by using the position of the system 150 (for example, its latitude, longitude, and elevation), an azimuth of the system 150, and the distance to one or more objects captured by the camera. In some cases, during recordation, the position module 172 can capture metadata from the GNSS and/or RTK device, and then correct it for the elevation and distance difference between the GNSS antenna and the presented visual representation to achieve survey-grade data accuracy. In some cases, the user can update the position and/or the properties of the visual representation manually. FIG. 7 illustrates an example screenshot of the visual representation 502 of FIG. 5 after recordation.
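The computation described above, recovering an object's coordinates from the device position, an azimuth, and a measured distance, can be sketched as follows. This uses a short-range flat-earth approximation and an invented helper name; it is offered only as a worked illustration, not as the patent's positioning algorithm.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def object_position(dev_lat, dev_lon, dev_elev_m, azimuth_deg, distance_m, elev_delta_m=0.0):
    """Estimate an object's latitude, longitude, and elevation from the device pose.

    distance_m is the horizontal distance to the object along the given azimuth
    (clockwise from north). Short-range planar approximation; survey-grade work
    would use a proper geodesic model and antenna-offset corrections.
    """
    az = math.radians(azimuth_deg)
    d_north = distance_m * math.cos(az)
    d_east = distance_m * math.sin(az)

    obj_lat = dev_lat + math.degrees(d_north / EARTH_RADIUS_M)
    obj_lon = dev_lon + math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(dev_lat))))
    obj_elev = dev_elev_m + elev_delta_m
    return obj_lat, obj_lon, obj_elev

# Example: an object 15 m away at azimuth 45 degrees, 1.3 m below the device.
print(object_position(43.65320, -79.38320, 91.2, 45.0, 15.0, elev_delta_m=-1.3))
```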
[0071] In an example, when the position module 172 determines the system's 150 location and orientation in x,y,z space, the position module 172 also determines the position and orientation of the physical camera (x,y,z plus bearing). The display module 174 can use the positioning information to access spatial data (the data with x,y,z coordinates) and create (or use an existing) visual representation (for example, a 3D model). The display module 174 can place the virtual camera in the location of the physical camera (x,y,z plus bearing) relative to the visual representation. The visual representation can be overlaid on top of the physical representation such that it can appear in the correct location, matching physical objects around it. In this way, by understanding x,y,z and orientation of the physical camera, the display module 174 can display visual representations of objects that are in the scene (or field of view), and size and orient the visual representation to allow for visualization that matches the physical world accurately.
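One way to reason about the virtual-camera placement described above is to convert an object's geospatial coordinate into a local east/north offset from the device and then rotate that offset by the device bearing, as in the hedged sketch below. It uses a small-area equirectangular approximation and invented names; it is not the patent's rendering pipeline.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def geo_to_camera_frame(dev_lat, dev_lon, bearing_deg, obj_lat, obj_lon):
    """Approximate (right, forward) offset of an object relative to the camera.

    Small-area equirectangular approximation, adequate over the tens of metres
    typical of a mediated-reality scene; purely illustrative.
    """
    lat0 = math.radians(dev_lat)
    d_north = math.radians(obj_lat - dev_lat) * EARTH_RADIUS_M
    d_east = math.radians(obj_lon - dev_lon) * EARTH_RADIUS_M * math.cos(lat0)

    # Rotate the east/north offset into the camera frame (bearing clockwise from north).
    b = math.radians(bearing_deg)
    forward = d_north * math.cos(b) + d_east * math.sin(b)
    right = d_east * math.cos(b) - d_north * math.sin(b)
    return right, forward

# Example: device facing east (bearing 90); an object about 20 m to its east
# should land roughly straight ahead (right ~ 0, forward ~ 20).
print(geo_to_camera_frame(43.65320, -79.38320, 90.0, 43.65320, -79.38295))
```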
[0072] In some cases, latitude and longitude of the physical object can be determined based on the distance difference between a projection of the visual representation in the physical space and the corresponding physical object. The recordation module 178 can make this determination because the system 150 is capable of identifying and tracking its location (for example, latitude and longitude) and orientation in space. In some cases, the system 150 can track its elevation based on absolute or relative elevation models. By factoring in the distance between the physical object and the location and orientation of the visual representation, the latitude and longitude of the physical object can be determined. In an example, the recordation module 178 can use a Euclidean distance determination to determine the physical position of the visual representation of the object in the physical space, knowing the physical position of the system 150 and the direction and distance of the physical object.
[0073] In some cases, the recordation module 178 can record position and/or elevation in conjunction with a distance finder device as a computing device 190. In this way, the recordation module 178 can measure the exact distance to objects captured by the camera. The distance finder device can include, for example, a laser distance finder, sonic distance finder, optical distance finder, depth cameras, 3D cameras, spatial sensors, gyroscopes, accelerometers, time of flight sensors, or optical distance recognition using triangulation from multiple cameras to determine distance. In some cases, the distance finder can be used in combination with machine vision and/or artificial intelligence techniques to determine physical landscape properties and/or aspects of physical objects in the captured scene (for example, object type, elevation, size, shape, color and/or condition). In this way, the visual representation can be automatically sized and/or positioned relative to the image captured by the camera based on the landscape properties and aspects of physical objects in the captured scene. In some cases, the recordation module 178 can access stored information about landscape properties and/or aspects of physical objects in the captured scene.
[0074] In some cases, to ensure accurate object placement, several approaches can be used. A distance to a physical object can be measured using distance finders. The distance finder may also detect the object's elevation (either relative or absolute). Examples of other distance determination approaches can include optical tools, such as depth cameras or time-of-flight sensors, or image processing that compares images taken from multiple locations or angles to determine distances. These other approaches can be used separately, or they can be connected or associated with the system 150 to provide information automatically. For example, the display module 174 may display cross-hairs and, upon aligning the cross-hairs with the physical object, the system 150 can send a request to a distance finder to determine the distance to that point. In another example, the user interface 156 can receive an indication from the user of a point they want to measure the distance to. In another example, an external distance finder can be used to determine the distance to the physical object separately, and that distance can be used to ensure accurate object placement by displaying the distance to the collected object.
[0075] In some cases, machine vision (MV) and/or artificial intelligence (AI) can be used. MV and AI techniques can be used to automatically detect physical objects captured by the camera, then determine a distance to the object. Additionally, the MV and AI techniques can be used to identify elevation, size, rotation, object conditions (e.g., rust or damages), and other parameters of the object. In some cases, using MV and AI techniques, some aspects of method 200 can be combined and/or automated (for example, automated object detection can select the correct object definition from the library or generate the object definition in real-time, then generate the visual representation, and auto-align it with the physical object). In these cases, the user can be shown the visual representation aligned with the physical object.
[0076] In some cases, uneven terrains, such as sharply sloping driveways or highway ramps or hills, can introduce visual distortions that may lead to misleading results. To account for the terrain irregularities, various approaches can be used; for example, incorporating local topographical maps, using surface scanning, receiving distances from a distance finder, or other approaches to construct terrain outlines or determine the object elevation. As described herein, to assist the user with the accurate placement of the visual representation, instead of manipulating the object using a flat plane, in some cases the object can be moved in a 3D space (x, y and z coordinates) either in a single step or as separate distinct steps (for example, horizontal placement being separate from the vertical elevation adjustments).
[0077] In some cases, to accommodate survey-grade data collection requirements, the position module 172 can capture GNSS and/or RTK information when the user has verified placement of the visual representation of the object. The GNSS and/or RTK information can be used to determine the distance between the system 150 and the physical object. Generally, the information provided by GNSS and/or RTK contains latitude and longitude information. It can also contain elevation data and supplementary metadata that can include satellite constellation used, corrections applied, and other components required for survey-grade data collection. Combined with the distance to the object and the object elevation, GNSS and/or RTK information can be stored as part of the collected object metadata, ensuring that the object was collected within the accuracy parameters.
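To illustrate the kind of record the text describes, here is a hedged sketch of a collected object stored together with its GNSS/RTK metadata. All class and field names are invented for illustration; the patent does not prescribe a storage schema, only that such metadata can accompany the collected object.

```python
from dataclasses import dataclass, field

@dataclass
class GnssMetadata:
    """Hypothetical GNSS/RTK metadata kept with a collected object."""
    fix_type: str                  # e.g. "RTK fixed", "RTK float", "GNSS"
    horizontal_accuracy_m: float
    vertical_accuracy_m: float
    constellations: list = field(default_factory=list)
    corrections: str = ""          # e.g. name of the correction service applied

@dataclass
class CollectedObject:
    """Hypothetical record written to geospatial data storage by the recordation module."""
    object_type: str
    latitude_deg: float
    longitude_deg: float
    elevation_m: float
    orientation_deg: float
    size_m: float
    gnss: GnssMetadata

record = CollectedObject(
    object_type="ssManhole", latitude_deg=43.653253, longitude_deg=-79.382975,
    elevation_m=89.9, orientation_deg=14.0, size_m=1.2,
    gnss=GnssMetadata(fix_type="RTK fixed", horizontal_accuracy_m=0.01,
                      vertical_accuracy_m=0.02, constellations=["GPS", "GLONASS"],
                      corrections="network RTK"),
)
print(record)
```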
[0078] In some cases, the recordation module 178 can also perform complementary attribute collection via the user interface 156 and store it as metadata associated with the collected object. The user may enter additional information about the collected object by, for example, manually entering information, selecting values from drop-down menus, validating prepopulated fields, and the like. This additional information can include, for example, elements such as colour, material type, shape, installation date, and the like. In some cases, the system can also automatically capture complementary attributes, for example, date of the data collection, person who performed the collection, equipment used, and the like. In some cases, these complementary attributes can be determined using MV and/or AI.
[0079] In some cases, the system 150 can be initiated from an external system that provides instructions to begin. Examples of such external systems can include ticket management or spatial tracking systems. In an example, a technician may be reviewing a work ticket using a third-party ticket management system, and as part of the ticket workflow, the system may launch the system 150 for the technician to complete the assignment. In another example, a technician may be passing through an area for which they will need to collect spatial information. Upon detecting the technician's location, a third-party process may notify the worker about the assignment and automatically launch the system 150.
[0080] In an example use case of the present embodiments, a technician at a construction site may use the system 150 to visualize surrounding infrastructure. Upon discovering a missing object from the geospatial dataset, the technician can use the system 150 to add this object to the geospatial dataset.
[0081] Advantageously, the present embodiments can provide real-time visual validation of object placement in 3D space while capturing critical spatial data (latitude, longitude and elevation). In this way, the recorded geospatial data of the object can be used later for construction, analytical determinations, and other purposes. Instead of performing the exhaustive, expensive, and time-consuming task of taking measurements with multiple devices and then validating the data collection accuracy using 2D maps, additional measurements, and models, the user can easily monitor and verify where an object is placed in real-time to maintain and improve geospatial database accuracy.
[0082] Advantageously, the present embodiments can speed up data collection and increase safety of field-service workers since they no longer need to "tap" individual objects. Also advantageously, the present embodiments can substantially reduce the cost of data collection by reducing the time needed for data capture, quality control, and additional equipment requirements.
[0083] While the foregoing refers to a camera to capture a physical scene and a screen to display the mixture of physical and visual representations, it is contemplated that any apparatus for blending virtual and real objects can be used; for example, a holographic system that displays holographic augmentation or projects holograms.
[0084] Although the foregoing has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims.

Administrative Status


Forecasted Issue Date: 2022-01-18
(22) Filed: 2019-09-26
(41) Open to Public Inspection: 2021-03-26
Examination Requested: 2021-10-04
(45) Issued: 2022-01-18

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $100.00 was received on 2023-06-30


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2024-09-26 $100.00
Next Payment if standard fee 2024-09-26 $277.00


Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2019-09-26
Maintenance Fee - Application - New Act 2 2021-09-27 $100.00 2021-09-24
Request for Examination 2024-09-26 $816.00 2021-10-04
Final Fee 2022-03-07 $306.00 2021-11-30
Maintenance Fee - Patent - New Act 3 2022-09-26 $100.00 2022-07-25
Maintenance Fee - Patent - New Act 4 2023-09-26 $100.00 2023-06-30
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
VGIS INC.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description | Date (yyyy-mm-dd) | Number of pages | Size of Image (KB)
Representative Drawing | 2021-02-19 | 1 | 3
Cover Page | 2021-02-19 | 2 | 33
Claims | 2021-10-04 | 3 | 129
Request for Examination / Amendment / PPH Request | 2021-10-04 | 14 | 527
Change to the Method of Correspondence | 2021-10-04 | 3 | 68
Final Fee | 2021-11-30 | 5 | 146
Representative Drawing | 2021-12-17 | 1 | 3
Cover Page | 2021-12-17 | 1 | 32
Electronic Grant Certificate | 2022-01-18 | 1 | 2,527
Abstract | 2019-09-26 | 1 | 13
Description | 2019-09-26 | 18 | 1,008
Claims | 2019-09-26 | 3 | 126
Drawings | 2019-09-26 | 7 | 693