Title
Method and system for dynamically positioning, viewing and
sharing location based mixed reality content
Field
The field of the invention is augmented or mixed reality methods
and systems.
Figures
Fig. 1 is a plan view map with markers to show GPS search
results for geographically tagged ("geotagged") videos in the
proximity of the mobile computing device.
Fig. 2 shows thumbnails of geotagged videos shown in the mobile
computing device's viewer and overlaid on the real world
environment as viewed through the device's video sensor.
Fig. 3 shows a list view of one or more available videos to
watch.
Fig. 4 shows a video projected onto a virtual screen that stays
in a stationary position relative to the real environment as the
mobile computing device is moved or rotated.
Fig. 5 shows that an individual can record a video of themselves
or their surroundings and post it using the application. When
posted, it will have the geotag of the location as well as the
relative position and orientation information that maps the
video for future users of the application.
Detailed Description
In an aspect, a location-based mobile computing system is
disclosed that can self-register local position and orientation
using on-board sensors and use that positional information to
locate mixed reality content (e.g. 2D or 3D virtual objects, or
streaming videos) in the real world video sensor view.
The geographic position may be determined using Global
Positioning Systems (GPS) or other global locating features like
position determination from cellular tower trilateration. Once
the user and their mobile computing system are located
geographically, other sensing is completed to model portions of
the local 3D environment.
This local 3D environment model is used to self-register the
mobile computing system's local position and orientation
relative to the surrounding environment. This allows for a mixed
reality content viewer to be virtually positioned in the real
world. This mixed reality content viewer can be shown within the
video screen of the mobile computing device, projected into the
real world to interact with the real world, or projected back
onto the user's eye in certain heads-up displays.
If the user creates mixed reality content while at a geographic
location, they can share it back to YouTube or other social
media sites with geotag and local 3D environment information
associated with the object, so that future viewers will
experience it in a similar way. The user may have input to
modify the local 3D environment information to enhance the
future viewer experience. Other users who are travelling near
the location of this geotagged mixed reality content will be
notified by the application that one or more geotagged objects
are in the vicinity. The user will be able to view the
thumbnails or markers for these objects through their mobile
computing device viewer and see the markers geographically
positioned. Once they select a marker, they will be able to
view, experience, or interact with the geotagged object and also
have the correct local 3D environment data associated with it
for orientation.
In another aspect, a method is disclosed comprising the
following steps:
- Detection of the geographic location of a mobile computing
system;
- Automatic on-line search for geotagged mixed reality
content within a search radius of the mobile computing
system;
- Recording of video that is geotagged as well as having
local position and orientation information;
- Posting of geotagged mixed reality content to a third-party
on-line hosting site and/or social media;
- Association of local position and orientation information,
captured at time of creation or upload, with the mixed reality
content; and
- The ability for an individual viewing the mixed reality
content to also leave geotagged comments that are visible to
future users (a sketch of this flow follows below).
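By way of illustration, the following is a minimal sketch of this flow in Python. Every function here is a stand-in stub; the names (read_gps, search_geotagged_content, post_content) are invented for the example and do not correspond to any particular platform API.

    # Illustrative end-to-end flow. All functions are stand-in stubs with
    # invented names; a real implementation would call platform services.

    def read_gps():
        # Stub: a real device would query its GPS sensor.
        return 51.0447, -114.0719  # example coordinates

    def search_geotagged_content(lat, lon, radius_km):
        # Stub: a real implementation would query a hosting site's search API.
        return [{"title": "demo video", "lat": lat + 0.001, "lon": lon}]

    def post_content(item):
        # Stub: a real implementation would upload to a third-party host.
        print("posted:", item["title"])

    def run_session():
        lat, lon = read_gps()                     # detect geographic location
        nearby = search_geotagged_content(lat, lon, radius_km=5)  # on-line search
        for item in nearby:                       # present markers to the individual
            print("marker:", item["title"])
        clip = {"title": "my clip", "lat": lat, "lon": lon,
                "pose": {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}}  # record with pose
        post_content(clip)                        # share to hosting/social media

    if __name__ == "__main__":
        run_session()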
In an embodiment, the detection system and method uses the
Global Positioning System (GPS) sensors of the mobile computing
system to locate the user's terrestrial position with latitude
and longitude coordinates. A further embodiment could utilize
cellular tower trilateration to locate the user's terrestrial
position with latitude and longitude coordinates.
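As a concrete illustration of the trilateration alternative, the sketch below solves for a 2D position from three towers with known local coordinates and estimated ranges, using a standard linearized least-squares formulation; the tower positions and range values are invented for the example.

    import numpy as np

    def trilaterate(towers, ranges):
        # Least-squares 2D position from tower (x, y) coordinates (metres,
        # local frame) and estimated ranges (metres). Linearize by
        # subtracting the first circle equation from the others.
        x1, y1 = towers[0]
        d1 = ranges[0]
        A = 2.0 * (towers[1:] - towers[0])
        b = (np.sum(towers[1:] ** 2, axis=1) - (x1 ** 2 + y1 ** 2)
             - ranges[1:] ** 2 + d1 ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Invented example: three towers, device actually near (500, 500).
    towers = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
    ranges = np.array([707.0, 707.0, 707.0])
    print(trilaterate(towers, ranges))  # approximately [500, 500]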
The detection system may also be used to pass latitude and
longitude coordinates to a software application that will search
the world wide web for geotagged mixed reality content that is
posted on popular video hosting and social network websites. An
embodiment of this would be a search of YouTube for geotagged
video as illustrated in Fig. 1.
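For example, the YouTube Data API v3 exposes location and locationRadius parameters on its search endpoint, which can be used to retrieve geotagged videos around the passed coordinates. The sketch below uses the public google-api-python-client library; the API key is a placeholder.

    from googleapiclient.discovery import build  # pip install google-api-python-client

    def search_geotagged_videos(lat, lon, radius="5km", query=None):
        youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")  # placeholder key
        params = dict(
            part="snippet",
            type="video",                 # location filtering requires type=video
            location=f"{lat},{lon}",      # latitude,longitude of the individual
            locationRadius=radius,        # geographic search buffer
            maxResults=25,
        )
        if query:                         # optional refinement terms
            params["q"] = query
        return youtube.search().list(**params).execute().get("items", [])

    for item in search_geotagged_videos(51.0447, -114.0719, query="street art"):
        print(item["snippet"]["title"])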
The search can be further refined by other search terms that
could be manually input or taken from predefined user
preferences. The result of the search will be geotagged mixed
reality content with a refined topic.
The results of the world wide web search will be shown
graphically within the software application to visually depict
virtual objects/videos that are located within a specified
vicinity around the individual. If multiple geotagged objects
are found within the search region, a visual list or
representation of all of them will be presented to the
individual. Referring now to Fig. 2, the mixed reality content
thumbnails or representations may be visually placed using a
position and orientation calculated between the individual's
latitude and longitude and the mixed reality content's latitude
and longitude.
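The placement calculation reduces to the great-circle distance and initial bearing between the two coordinate pairs, for example:

    import math

    EARTH_RADIUS_M = 6371000.0

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        # Haversine distance (metres) and initial bearing (degrees) from
        # point 1 (the individual) to point 2 (the mixed reality content).
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
        y = math.sin(dlmb) * math.cos(p2)
        x = (math.cos(p1) * math.sin(p2)
             - math.sin(p1) * math.cos(p2) * math.cos(dlmb))
        bearing = math.degrees(math.atan2(y, x)) % 360.0
        return distance, bearing

    d, b = distance_and_bearing(51.0447, -114.0719, 51.0500, -114.0600)
    print(f"{d:.0f} m away at bearing {b:.1f} deg")

A marker is then drawn at that bearing relative to the device's compass heading and scaled by distance; restricting the list of Fig. 3 to the geographic buffer is a matter of keeping only results whose computed distance falls within the search radius.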
Referring now to Fig. 3, the individual will be able to select a
specific mixed reality content to experience from the ones
presented in the search results. Only mixed reality content that
is within the geographic buffer or region of interest
surrounding the location of the individual will be shown.
While at the geographic location, the individual may view and
interact with the mixed reality content. The mixed reality
content will have a position and orientation within the local
environment around the individual. The position and orientation
will be determined and continually updated in real time using
available sensors on the mobile computing device. One embodiment
of this system and method will use the GPS, accelerometer,
gyroscope, compass and video sensor of the mobile computing
device to calculate this real time position and orientation. Key
frames taken from the video sensor will be used to calculate, in
real time, the 3D position of the surrounding environment as it
relates to the mobile computing device. This form of monocular
position and depth sensing will be performed using common
features automatically detected in the video key frames. This
position information will determine the location and rotation of
the mobile computing device as well as how to present the mixed
reality content within the local environment in a perspective-
correct and stationary virtual position. This position
information may also allow the mixed reality content to interact
with one or more users at the same location.
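One way to realize this monocular step with common image features, sketched here with the OpenCV library (the intrinsic matrix K is an invented example; a real device would use its calibrated camera parameters, and a full system would fuse the GPS, accelerometer, gyroscope and compass readings to fix the scale):

    import cv2
    import numpy as np

    def relative_pose(frame1, frame2, K):
        # Estimate relative camera rotation R and unit-scale translation t
        # between two grayscale key frames using ORB features. Monocular
        # vision recovers translation only up to scale.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(frame1, None)
        kp2, des2 = orb.detectAndCompute(frame2, None)

        # Match the common features automatically detected in both key frames.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # The essential matrix with RANSAC rejects mismatched features.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    # Invented intrinsics for a 1280x720 sensor; usage with two key frames:
    # K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
    # R, t = relative_pose(cv2.imread("kf1.png", 0), cv2.imread("kf2.png", 0), K)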
The position/orientation results will be visually presented to
the individual to create a virtual or mixed reality experience
in the surrounding environment. One embodiment will have the
mixed reality content shown on the mobile computing device's
screen as a camera/viewer within a viewer. Another embodiment
will have the positioned and oriented mixed reality content
projected into the real world using a projection device. Another
embodiment of this will have the positioned and oriented mixed
reality content projected onto a pair of glasses or a heads-up
display in front of the individual. Another embodiment of
this will have the positioned and oriented mixed reality content
projected back onto the individual's eye.
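Whichever display is used, keeping the content perspective-correct and stationary amounts to re-projecting the anchored world point through the current device pose each frame. A minimal pinhole-projection sketch (the pose and intrinsics are invented values):

    import numpy as np

    def project_point(X_world, R, t, K):
        # Project a 3D world point into pixel coordinates for the current
        # device pose (rotation R, translation t) and camera intrinsics K.
        X_cam = R @ X_world + t       # world frame -> camera frame
        if X_cam[2] <= 0:
            return None               # behind the camera: do not draw
        uvw = K @ X_cam               # camera frame -> homogeneous pixels
        return uvw[:2] / uvw[2]

    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)             # invented pose
    anchor = np.array([0.5, 0.0, 3.0])        # virtual screen corner, 3 m ahead
    print(project_point(anchor, R, t, K))     # pixel position on the display

Because R and t are refreshed from the sensor pipeline each frame, the projected content stays pinned to its real-world position as the device moves or rotates, as in Fig. 4.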
In an embodiment, there may be functionality to allow the
individual to leave geotagged mixed reality content or
information at the geographic location that will be viewable by
other individuals. One embodiment of this will allow the
individual to record a video or leave a written comment that is
geotagged to the location. This will be posted to popular video
hosting or social networking web sites.
The posted mixed reality content may be further enhanced with
local position and orientation information that may be recorded
with the geotagged information. The position and orientation may
be determined and continually updated in real time using
available sensors on the mobile computing device. For example,
as shown in Fig. 5, an individual can record a video of
themselves or their surroundings and post it using the
application. When posted, it will have the geotag of the
location as well as the relative position and orientation
information that maps the video for future users of the
application.
The position and orientation results will be included, when
available, with geographic location information that may be
associated with the mixed reality content. An embodiment may
have the orientation and rotation information embedded in the
subtitles or description portion of the posting and hosted
directly on the video hosting web site, such as YouTube.
Another embodiment may have the position and orientation
information hosted on a different server than the video, but the
two will be dynamically linked and accessible for real time
consumption.
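One way to realize the first embodiment is to serialize the pose as a small machine-readable blob inside the posting's description text and parse it back on retrieval. The [MRPOSE] tag convention below is invented for illustration; it is not a hosting-site API.

    import json
    import re

    POSE_TAG = re.compile(r"\[MRPOSE\](.*?)\[/MRPOSE\]", re.DOTALL)

    def embed_pose(description, lat, lon, position, orientation):
        # Append a machine-readable pose blob to a human-readable description.
        pose = {"lat": lat, "lon": lon,
                "position": position,        # local x, y, z in metres
                "orientation": orientation}  # yaw, pitch, roll in degrees
        return f"{description}\n[MRPOSE]{json.dumps(pose)}[/MRPOSE]"

    def extract_pose(description):
        # Recover the pose blob from a posting's description, if present.
        m = POSE_TAG.search(description)
        return json.loads(m.group(1)) if m else None

    text = embed_pose("Street mural walkthrough", 51.0447, -114.0719,
                      position=[1.2, 0.0, 3.5], orientation=[270.0, 0.0, 0.0])
    print(extract_pose(text))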
Other curation tools will be available to the individual to
enhance what they are posting. An embodiment may be a mobile
application based dashboard that allows the individual to edit
the length or order of a video sequence, change sound, add
further positional information or rules for future users, or
allow successful viewing of the virtual object/video to trigger
a secondary event or object as a reward for completion. Another
embodiment may be a web based version of this dashboard.
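As one sketch of such a rule, a posting could carry a small declarative trigger description that the viewer application evaluates after playback. The schema below is invented for illustration:

    # Invented rule schema: fire a secondary event once a viewer finishes
    # the primary content. A curation dashboard would let the poster edit this.
    reward_rule = {
        "trigger": "viewing_complete",   # condition evaluated by the viewer app
        "min_watch_fraction": 0.9,       # how much must be watched to count
        "action": "unlock_object",       # secondary event or object to release
        "payload": {"object_id": "bonus-scene-01"},
    }

    def on_playback_finished(watched_fraction, rule=reward_rule):
        if (rule["trigger"] == "viewing_complete"
                and watched_fraction >= rule["min_watch_fraction"]):
            print("unlocking:", rule["payload"]["object_id"])  # stand-in for the event

    on_playback_finished(0.95)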