IMPROVED MANUFACTURING FOR VIRTUAL AND AUGMENTED REALITY
SYSTEMS AND COMPONENTS
FIELD OF THE INVENTION
[0001] The present disclosure relates to virtual reality and augmented
reality imaging and
visualization systems.
BACKGROUND
[0002] Modern computing and display technologies have facilitated the
development of
systems for so called "virtual reality" or "augmented reality" experiences,
wherein digitally
reproduced images or portions thereof are presented to a user in a manner
wherein they seem to
be, or may be perceived as, real. A virtual reality, or "VR", scenario
typically involves
presentation of digital or virtual image information without transparency to
actual real-world
visual input. An augmented reality, or "AR", scenario typically involves
presentation of digital
or virtual image information as an augmentation to visualization of the actual
world around the
user. For example, referring to Fig. 1, an augmented reality scene (4) is
depicted wherein a user
of an AR technology sees a real-world park-like setting (6) featuring people,
trees, buildings in
the background, and a concrete platform (1120). In addition to these items,
the user of the AR
technology also perceives that he "sees" a robot statue (1110) standing upon
the real-world
platform (1120), and a cartoon-like avatar character (2) flying by which seems
to be a
personification of a bumble bee, even though these elements (2, 1110) do not
exist in the real
world. As it turns out, the human visual perception system is very complex,
and producing a VR
or AR technology that facilitates a comfortable, natural-feeling, rich
presentation of virtual
image elements amongst other virtual or real-world imagery elements is
challenging.
[0003] There are numerous challenges when it comes to presenting 3D virtual
content to a
user of an AR system. A central premise of presenting 3D content to a user
involves creating a
perception of multiple depths. In other words, it may be desirable that some
virtual content
appear closer to the user, while other virtual content appear to be coming
from farther away.
Thus, to achieve 3D perception, the AR system should be configured to deliver
virtual content at
different focal planes relative to the user.
[0004] In order for a 3D display to produce a true sensation of depth, and
more specifically, a
simulated sensation of surface depth, it is desirable for each point in the
display's visual field to
generate the accommodative response corresponding to its virtual depth. If the
accommodative
response to a display point does not correspond to the virtual depth of that
point, as determined
by the binocular depth cues of convergence and stereopsis, the human visual
system may
experience an accommodation conflict, resulting in unstable imaging, harmful
eye strain,
headaches, and, in the absence of accommodation information, almost a complete
lack of surface
depth.
[0005] Therefore, there is a need for improved technologies to implement 3D
displays that
resolve these and other problems of the conventional approaches. The systems
and techniques
described herein are configured to work with the visual configuration of the
typical human to
address these challenges.
SUMMARY
[0006] Embodiments of the present invention are directed to devices,
systems and methods
for facilitating virtual reality and/or augmented reality interaction for one
or more users.
[0007] An augmented reality (AR) display system for delivering augmented
reality content
to a user, according to some embodiments, comprises an image-generating source
to provide one
or more frames of image data, a light modulator to transmit light associated
with the one or more
frames of image data, a diffractive optical element (DOE) to receive the light
associated with the
one or more frames of image data and direct the light to the user's eyes, the
DOE comprising a
diffraction structure having a waveguide substrate corresponding to a
waveguide refractive
index, a surface grating, and an intermediate layer (referred to also herein
as an "underlayer")
disposed between the waveguide substrate and the surface grating, wherein the
underlayer
corresponds to an underlayer refractive index that is different from the waveguide refractive
waveguide refractive
index.
[0008] According to some embodiments of the invention, a diffraction
structure is employed
for a DOE that includes an underlayer that resides between a waveguide
substrate and a top
grating surface. The top grating surface comprises a first material that
corresponds to a first
refractive index value, the underlayer comprises a second material that
corresponds to a second
refractive index value, and the substrate comprises a third material that
corresponds to a third
refractive index value.
[0009] Any combination of same or different materials may be employed to
implement each
of these portions of structure, e.g., where all three materials are different
(and all three
correspond to different refractive index values), or where two of the layers
share the same
material (e.g., where two of the three materials are the same and therefore
share a common
refractive index value that differs from the refractive index value of the
third material). Any
suitable set of materials may be used to implement any layer of the improved
diffraction
structure.
[0010] Thus a variety of combinations is available wherein an underlayer of
one index is
combined with a top grating of another index, along with a substrate of a
third index, and
wherein adjusting these relative values provides considerable variation in the dependence of diffraction
efficiency upon incidence angle. A layered waveguide with different layers of
refractive indices
is presented. Various combinations and permutations are presented along with
related
performance data to illustrate functionality. The benefits include an increased angular range, which provides an increased output angle from the grating and therefore an increased field of view for the eyepiece. Further, the ability to counteract the normal reduction in diffraction efficiency with angle is functionally beneficial.
[0011] According to additional embodiments, improved approaches are
provided to
implement deposition of imprint materials onto a substrate, along with
imprinting of the imprint
materials to form patterns for implementing diffraction. These approaches allow
for very precise
distribution, deposition, and/or formation of different imprint
materials/patterns onto any number
of substrate surfaces. According to some embodiments, patterned distribution
(e.g., patterned
inkjet distribution) of imprint materials is performed to implement the
deposition of imprint
materials onto a substrate. This approach of using patterned ink-jet
distribution allows for very
precise volume control over the materials to be deposited. In addition, this
approach can serve to
provide a smaller, more uniform base layer beneath a grating surface.
[0012] In some embodiments, a template is provided having a first set of
deeper depth
structures along with a second set of shallower depth structures. When
depositing imprint
materials onto an imprint receiver, a relatively higher volume of imprint
materials is deposited in
conjunction with the deeper depth structures of the template. In addition, a
relatively lower
volume of imprint materials is deposited in conjunction with the shallower
depth structures of the
template. This approach permits simultaneous deposition of different
thicknesses of materials
for the different features to be formed onto the imprint receiver. This
approach can be taken to
create distributions that are purposefully non-uniform for structures with
different depths and/or
feature parameters, e.g., where the feature structures are on the same
substrate and have different
thicknesses. This can be used, for example, to create spatially distributed
volumes of imprint
material that enable simultaneous imprint of structures of variable depth with
the same
underlayer thickness.
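The following sketch illustrates, for explanatory purposes only, how per-region dispense volumes might be estimated for a template having deeper and shallower structures; the region names, dimensions, fill factors, and the simple volume model are assumptions introduced here and are not part of the embodiments described above.

```python
# Illustrative sketch only: estimating inkjet dispense volume per template region so
# that deeper and shallower features can be imprinted simultaneously over the same
# residual underlayer thickness. Region names, dimensions, fill factors, and the
# simple volume model are assumptions for this example.

def dispense_volume_nl(area_mm2, feature_depth_um, fill_factor, underlayer_um):
    # 1 mm^2 of area times 1 um of equivalent thickness equals 1 nL of material.
    equivalent_thickness_um = feature_depth_um * fill_factor + underlayer_um
    return area_mm2 * equivalent_thickness_um

regions = {
    # region name: (area in mm^2, feature depth in um, grating fill factor)
    "deep_structures": (100.0, 0.50, 0.5),
    "shallow_structures": (100.0, 0.10, 0.5),
}

TARGET_UNDERLAYER_UM = 0.05  # same residual underlayer targeted everywhere

for name, (area, depth, fill) in regions.items():
    volume = dispense_volume_nl(area, depth, fill, TARGET_UNDERLAYER_UM)
    print(f"{name}: dispense ~{volume:.1f} nL")
# The deeper region calls for a relatively higher volume and the shallower region a
# relatively lower volume, mirroring the deposition strategy described above.
```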
[0013] Some embodiments pertain to an approach to implement simultaneous
deposition of
multiple types of imprint materials onto a substrate. This permits materials
having optical
properties to be simultaneously deposited across multiple portions of the
substrate at the same
time. This approach also provides the ability to tune local areas associated
with specific
functions, e.g., to act as in-coupling grating, orthogonal pupil expander
(OPE) gratings, or exit
pupil expander (EPE) gratings. The different types of materials may comprise
the same material
having different optical properties (e.g., two variants of the same material
having differing
indices of refraction) or two entirely different materials. Any optical
property of the materials
can be considered and selected when employing this technique, e.g., index of
refraction, opacity,
and/or absorption.
[0014] According to another embodiment, multi-sided imprinting may be
employed to
imprint multiple sides of an optical structure. This permits imprinting to
occur on different sides
of an optical element, to implement multiplexing of functions through a base
layer volume. In
this way, different eyepiece functions can be implemented without adversely
affecting grating
structure function. A first template may be used to produce one imprint on
side "A" of the
substrate/imprint receiver, forming a first pattern having a first material
onto side A of the
structure. Another template may be used to produce a second imprint on side
"B" of the same
substrate, which forms a second pattern having a second material onto side B
of the substrate.
Sides A and B may have the same or different patterns, and/or may have the
same or different
types of materials.
[0015] Additional embodiments pertain to multi-layer over-imprinting,
and/or multi-layer
separated/offset substrate integration. In either/both of these approaches, a
previously imprinted
pattern can be jetted upon and printed again. An adhesive can be jetted onto a
first layer, with a
second substrate bonded to it (possibly with an air-gap), and a subsequent jetting process can deposit material onto the second substrate, which is then imprinted. Series-imprinted patterns can
be bonded to
each other in sequence in a roll-to-roll process. It is noted that the
approach of implementing
multi-layer over-imprinting may be used in conjunction with, or instead of,
the multi-layer
separated/offset substrate integration approach. For multi-layer over-
imprinting, a first imprint
material can be deposited and imprinted onto a substrate followed by
deposition of a second
imprint material deposition, resulting in a composite, multi-layer structure
having both a first
imprint material and a second imprint material. For multi-layer
separated/offset substrate
integration, both a first substrate 1 and a second substrate 2 may be
imprinted with the imprinting
material, and afterwards, substrate 1 and substrate 2 may be sandwiched and
bonded, possibly
with offset features (also imprinted) that provide for, in one embodiment, an
air-gap between the
active structures of substrate 2 and the back side of substrate 1. An
imprinted spacer may be
used to create the air-gap.
[0016] According to yet another embodiment, disclosed is an approach to
implement variable
volume deposition of materials distributed across the substrate, which may be
dependent upon an
a priori knowledge of surface non-uniformity. This corrects for surface non-uniformity of the substrate, which may result in undesirable deviations from parallelism, causing poor optical
performance. Variable volume
deposition of imprint material may be employed to provide a level distribution
of imprint
material to be deposited independently of the underlying topography or
physical feature set. For
example, the substrate can be pulled flat by vacuum chuck, and in situ
metrology performed to
assess surface height, e.g., with low coherence or with laser based on-contact
measurement
probes. The dispense volume of the imprint material can be varied depending
upon the
measurement data to yield a more uniform layer upon replication. Any types of
non-uniformity
may also be addressed by this embodiment of the invention, such as thickness
variability and/or
the existence of pits, peaks or other anomalies or features associated with
local positions on the
substrate.
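As a non-limiting illustration of the variable volume deposition just described, the sketch below converts a measured surface-height map into a per-cell drop count intended to level the replicated layer; the grid, cell area, drop volume, and target thickness are hypothetical values chosen only for the example.

```python
# Illustrative sketch only: turning in situ surface-height metrology into spatially
# varying dispense volumes so the replicated layer comes out more level. The grid,
# target thickness, cell area, and drop volume are hypothetical example values.
import numpy as np

def drops_per_cell(height_map_um, cell_area_mm2, target_top_um, drop_volume_pl):
    # Local resist thickness needed to bring each cell up to the target surface.
    local_thickness_um = np.clip(target_top_um - height_map_um, 0.0, None)
    volume_nl = cell_area_mm2 * local_thickness_um      # mm^2 * um = nL
    volume_pl = volume_nl * 1000.0                      # nL -> pL
    return np.rint(volume_pl / drop_volume_pl).astype(int)

# Example: a gently bowed substrate held flat on a vacuum chuck; high spots get less.
heights_um = np.array([[0.00, 0.05, 0.10],
                       [0.05, 0.12, 0.08],
                       [0.02, 0.06, 0.04]])
print(drops_per_cell(heights_um, cell_area_mm2=1.0, target_top_um=0.30, drop_volume_pl=2.0))
```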
[0017] It is noted that any of the preceding embodiments may be combined
together.
Furthermore, additional and other objects, features, and advantages of the
invention are
described in the detailed description, figures, and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] Fig. 1 illustrates a user's view of augmented reality (AR) through a
wearable AR user
device, in one illustrated embodiment.
[0019] Fig. 2 illustrates a conventional stereoscopic 3-D simulation
display system.
[0020] Fig. 3 illustrates an improved approach to implement a stereoscopic
3-D simulation
display system according to some embodiments of the invention.
[0021] Figs. 4A-4D illustrate various systems, subsystems, and components
for addressing
the objectives of providing a high-quality, comfortably-perceived display
system for human VR
and/or AR.
[0022] Fig. 5 illustrates a plan view of an example configuration of a
system utilizing the
improved diffraction structure.
[0023] Fig. 6 illustrates a stacked waveguide assembly.
[0024] Fig. 7 illustrates a DOE.
[0025] Figs. 8 and 9 illustrate example diffraction patterns.
[0026] Figs. 10 and 11 illustrate two waveguides into which a beam is
injected.
[0027] Fig. 12 illustrates a stack of waveguides.
[0028] Fig. 13A illustrates an example approach to implement a diffraction
structure having
a waveguide substrate and a top grating surface, but without an underlayer.
[0029] Fig. 13B shows a chart of example simulation results.
[0030] Fig. 13C shows an annotated version of Fig. 13A.
[0031] Fig. 14A illustrates an example approach to implement a diffraction
structure having
a waveguide substrate, an underlayer, and a top grating surface.
[0032] Fig. 14B illustrates an example approach to implement a diffraction
structure having
a waveguide substrate, an underlayer, a grating surface, and a top surface.
[0033] Fig. 14C illustrates an example approach to implement stacking of
diffraction
structures having a waveguide substrate, an underlayer, a grating surface, and
a top surface.
[0034] Fig. 15A illustrates an example approach to implement a diffraction
structure having
a high index waveguide substrate, a low index underlayer, and a low index top
grating surface.
[0035] Fig. 15B shows charts of example simulation results.
[0036] Fig. 16A illustrates an example approach to implement a diffraction
structure having
a low index waveguide substrate, a high index underlayer, and a low index top
grating surface.
[0037] Fig. 16B shows charts of example simulation results.
[0038] Fig. 17A illustrates an example approach to implement a diffraction
structure having
a low index waveguide substrate, a medium index underlayer, and a high index
top grating
surface.
[0039] Fig. 17B shows a chart of example simulation results.
[0040] Figs. 18A-D illustrate modification of underlayer characteristics.
[0041] Fig. 19 illustrates an approach to implement precise, variable
volume deposition of
imprint material on a single substrate.
[0042] Fig. 20 illustrates an approach to implement directed,
simultaneous deposition of
multiple different imprint materials in the same layer and imprint step
according to some
embodiments.
[0043] Figs. 21A-B illustrate an example approach to implement two-sided imprinting in the context of total internal reflection diffractive optical elements.
[0044] Fig. 22 illustrates a structure formed using the approach shown in
Figs. 21A-B.
[0045] Fig. 23 illustrates an approach to implement multi-layer over-
imprinting.
[0046] Fig. 24 illustrates an approach to implement multi-layer
separated/offset substrate
integration.
[0047] Fig. 25 illustrates an approach to implement variable volume
deposition of materials
distributed across the substrate to address surface non-uniformity.
DETAILED DESCRIPTION
[0048] According to some embodiments of the invention, a diffraction
structure is employed
that includes an underlayer/intermediate layer that resides between a
waveguide substrate and a
top grating surface. The top grating surface comprises a first material that
corresponds to a first
refractive index value, the underlayer comprises a second material that
corresponds to a second
refractive index value, and the substrate comprises a third material that
corresponds to a third
refractive index value.
[0049] One advantage of this approach is that appropriate selection of the
relative indices of
refraction for the three layers allows the structure to obtain a larger field
of view for a greater
range of incident light, by virtue of the fact that the lowest total internal
reflection angle is
reduced as the index of refraction is increased. Diffraction efficiencies can
be increased,
allowing for "brighter" light outputs to the display(s) of image viewing
devices.
[0050] A variety of combinations is available wherein an underlayer of one
index is
combined with a top grating of another index, along with a substrate of a
third index, and
wherein adjusting these relative values provides considerable variation in the dependence of diffraction
efficiency upon incidence angle. A layered waveguide with different layers of
refractive indices
is presented. Various combinations and permutations are presented along with
related
performance data to illustrate functionality. The benefits include an increased angular range, which provides an increased output angle from the grating and therefore an increased field of view for
the eyepiece. Further, the ability to counteract the normal reduction in
diffraction efficiency with
angle is functionally beneficial.
Display Systems According to Some Embodiments
[0051] This portion of the disclosure describes example display systems
that may be used in
conjunction with the improved diffraction structure of the invention.
[0052] Fig. 2 illustrates a conventional stereoscopic 3-D simulation
display system that
typically has a separate display 74 and 76 for each eye 4 and 6, respectively,
at a fixed radial
focal distance 10 from the eye. This conventional approach fails to take into
account many of
the valuable cues utilized by the human eye and brain to detect and interpret
depth in three
dimensions, including the accommodation cue.
[0053] In fact, the typical human eye is able to interpret numerous layers
of depth based
upon radial distance, e.g., able to interpret approximately 12 layers of
depth. A near field limit
of about 0.25 meters is about the closest depth of focus; a far-field limit of
about 3 meters means
that any item farther than about 3 meters from the human eye receives infinite
focus. The layers
of focus get thinner and thinner as one gets closer to the eye; in other words,
the eye is able to
perceive differences in focal distance that are quite small relatively close
to the eye, and this
effect dissipates as objects fall farther away from the eye. At an infinite
object location, a depth of focus / dioptric spacing value is about 1/3 diopters.
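The figures quoted above follow from the reciprocal relationship between distance and optical power (diopters = 1/distance in meters); the short calculation below, provided only as a worked check, reproduces the approximately twelve focal layers between the 0.25 meter near-field limit and infinity at roughly 1/3 diopter spacing.

```python
# Worked check of the reciprocal relationship between distance and optical power:
# diopters = 1 / distance (meters). With focal layers spaced roughly 1/3 diopter
# apart, the span from the ~0.25 m near-field limit to infinity yields ~12 layers.

def diopters(distance_m):
    return float("inf") if distance_m == 0 else 1.0 / distance_m

near_limit_d = diopters(0.25)        # 4.0 D at the near-field limit
far_limit_d = diopters(3.0)          # ~0.33 D; beyond ~3 m is effectively infinity
dioptric_spacing = 1.0 / 3.0         # approximate spacing at far distances

approx_layers = near_limit_d / dioptric_spacing
print(near_limit_d, round(far_limit_d, 2), round(approx_layers))  # 4.0 0.33 12
```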
[0054] Fig. 3 illustrates an improved approach to implement a stereoscopic
3-D simulation
display system according to some embodiments of the invention, where two complex images are displayed, one for each eye 4 and 6, and various radial focal depths (12) for various aspects (14) of each image may be utilized to provide each eye with the perception of three-dimensional depth
layering within the perceived image. Since there are multiple focal planes
(e.g., 12 focal planes)
between the eye of the user and infinity, these focal planes, and the data
within the depicted
relationships, may be utilized to position virtual elements within an
augmented reality scenario
for a user's viewing, because the human eye is constantly sweeping around to
utilize the focal
planes to perceive depth. While this figure shows a specific number of focal
planes at various
depths, it is noted that an implementation of the invention may use any number
of focal planes as
necessary for the specific application desired, and the invention is therefore
not limited to
devices having only the specific number of focal planes shown in any of the
figures in the
present disclosure.
[0055] Referring to Figs. 4A-4D, some general componentry options are
illustrated
according to some embodiments of the invention. In the portions of the
detailed description
which follow the discussion of Figs. 4A-4D, various systems, subsystems, and
components are
presented for addressing the objectives of providing a high-quality,
comfortably-perceived
display system for human VR and/or AR.
[0056] As shown in Fig. 4A, an AR system user (60) is depicted wearing a
frame (64)
structure coupled to a display system (62) positioned in front of the eyes of
the user. A speaker
(66) is coupled to the frame (64) in the depicted configuration and positioned
adjacent the ear
canal of the user (in one embodiment, another speaker, not shown, is
positioned adjacent the
other ear canal of the user to provide for stereo / shapeable sound control).
The display (62) is
operatively coupled (68), such as by a wired lead or wireless connectivity, to
a local processing
and data module (70) which may be mounted in a variety of configurations, such
as fixedly
attached to the frame (64), fixedly attached to a helmet or hat (80) as shown
in the embodiment
of Fig. 4B, embedded in headphones, removably attached to the torso (82) of
the user (60) in a
backpack-style configuration as shown in the embodiment of Fig. 4C, or
removably attached to
the hip (84) of the user (60) in a belt-coupling style configuration as shown
in the embodiment of
Fig. 4D.
[0057] The local processing and data module (70) may comprise a power-
efficient processor
or controller, as well as digital memory, such as flash memory, both of which
may be utilized to
assist in the processing, caching, and storage of data a) captured from
sensors which may be
operatively coupled to the frame (64), such as image capture devices (such as
cameras),
microphones, inertial measurement units, accelerometers, compasses, GPS units,
radio devices,
and/or gyros; and/or b) acquired and/or processed using the remote processing
module (72)
and/or remote data repository (74), possibly for passage to the display (62)
after such processing
or retrieval. The local processing and data module (70) may be operatively
coupled (76, 78),
such as via wired or wireless communication links, to the remote processing
module (72) and
remote data repository (74) such that these remote modules (72, 74) are
operatively coupled to
each other and available as resources to the local processing and data module
(70).
[0058] In one embodiment, the remote processing module (72) may comprise
one or more
relatively powerful processors or controllers configured to analyze and
process data and/or image
information. In one embodiment, the remote data repository (74) may comprise a
relatively
large-scale digital data storage facility, which may be available through the
internet or other
networking configuration in a "cloud" resource configuration. In one
embodiment, all data is
stored and all computation is performed in the local processing and data
module, allowing fully
autonomous use from any remote modules.
[0059] Perceptions of Z-axis difference (i.e., distance straight out from
the eye along the
optical axis) may be facilitated by using a waveguide in conjunction with a
variable focus optical
element configuration. Image information from a display may be collimated and
injected into a
waveguide and distributed in a large exit pupil manner using any suitable
substrate-guided optics
methods known to those skilled in the art, and then variable focus optical
element capability
may be utilized to change the focus of the wavefront of light emerging from
the waveguide and
provide the eye with the perception that the light coming from the waveguide
is from a particular
focal distance. In other words, since the incoming light has been collimated
to avoid challenges
in total internal reflection waveguide configurations, it will exit in
collimated fashion, requiring a
viewer's eye to accommodate to the far point to bring it into focus on the
retina, and naturally be
interpreted as being from optical infinity, unless some other intervention
causes the light to be
refocused and perceived as from a different viewing distance; one suitable
such intervention is a
variable focus lens.
[0060] In some embodiments, collimated image information is injected into a
piece of glass
or other material at an angle such that it totally internally reflects and is
passed into the adjacent
waveguide. The waveguide may be configured so that the collimated light from
the display is
distributed to exit somewhat uniformly across the distribution of reflectors
or diffractive features
along the length of the waveguide. Upon exit toward the eye, the exiting light
is passed through
a variable focus lens element wherein, depending upon the controlled focus of
the variable focus
lens element, the light exiting the variable focus lens element and entering
the eye will have
various levels of focus (a collimated flat wavefront to represent optical
infinity, more and more
beam divergence / wavefront curvature to represent closer viewing distance
relative to the eye
58).
[0061] In a "frame sequential" configuration, a stack of sequential two-
dimensional images
may be fed to the display sequentially to produce three-dimensional perception
over time, in a
manner akin to the manner in which a computed tomography system uses stacked
image slices to
represent a three-dimensional structure. A series of two-dimensional image
slices may be
presented to the eye, each at a different focal distance to the eye, and the
eye/brain would
integrate such a stack into a perception of a coherent three-dimensional
volume. Depending
upon the display type, line-by-line, or even pixel-by-pixel sequencing may be
conducted to
produce the perception of three-dimensional viewing. For example, with a
scanned light display
(such as a scanning fiber display or scanning mirror display), then the
display is presenting the
waveguide with one line or one pixel at a time in a sequential fashion.
[0062] Referring to Fig. 6, a stacked waveguide assembly (178) may be
utilized to provide
three-dimensional perception to the eye/brain by having a plurality of
waveguides (182, 184,
186, 188, 190) and a plurality of weak lenses (198, 196, 194, 192) configured
together to send
image information to the eye with various levels of wavefront curvature for
each waveguide
level indicative of focal distance to be perceived for that waveguide level. A
plurality of
displays (200, 202, 204, 206, 208), or in another embodiment a single
multiplexed display, may
be utilized to inject collimated image information into the waveguides (182,
184, 186, 188, 190),
each of which may be configured, as described above, to distribute incoming
light substantially
equally across the length of each waveguide, for exit down toward the eye.
[0063] The waveguide (182) nearest the eye is configured to deliver
collimated light, as
injected into such waveguide (182), to the eye, which may be representative of
the optical
infinity focal plane. The next waveguide up (184) is configured to send out
collimated light
which passes through the first weak lens (192; e.g., a weak negative lens)
before it can reach the
eye (58). The first weak lens (192) may be configured to create a slight
convex wavefront
curvature so that the eye/brain interprets light coming from that next
waveguide up (184) as
coming from a first focal plane closer inward toward the person from optical
infinity. Similarly,
the third up waveguide (186) passes its output light through both the first
(192) and second (194)
lenses before reaching the eye (58). The combined optical power of the first
(192) and second
(194) lenses may be configured to create another incremental amount of
wavefront divergence so
that the eye/brain interprets light coming from that third waveguide up (186)
as coming from a
second focal plane even closer inward toward the person from optical infinity
than was light
from the next waveguide up (184).
[0064] The other waveguide layers (188, 190) and weak lenses (196, 198) are
similarly
configured, with the highest waveguide (190) in the stack sending its output
through all of the
weak lenses between it and the eye for an aggregate focal power representative
of the closest
focal plane to the person. To compensate for the stack of lenses (198, 196,
194, 192) when
viewing/interpreting light coming from the world (144) on the other side of
the stacked
waveguide assembly (178), a compensating lens layer (180) is disposed at the
top of the stack to
compensate for the aggregate power of the lens stack (198, 196, 194, 192)
below. Such a
configuration provides as many perceived focal planes as there are available
waveguide/lens
pairings, again with a relatively large exit pupil configuration as described
above. Both the
reflective aspects of the waveguides and the focusing aspects of the lenses
may be static (i.e., not
dynamic or electro-active). In an alternative embodiment they may be dynamic
using electro-
active features as described above, enabling a small number of waveguides to
be multiplexed in a
time sequential fashion to produce a larger number of effective focal planes.
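For illustration, the sketch below tallies the aggregate optical power seen by light from each waveguide level of the stacked assembly and the corresponding perceived focal distance, along with the compensating lens power; the -0.5 diopter per-lens value is an assumed example, not a value taken from the embodiment above.

```python
# Illustrative sketch only: bookkeeping for the stacked waveguide assembly described
# above. Each waveguide's output passes through the weak lenses between it and the
# eye; the aggregate negative power sets the perceived focal distance, and the
# compensating lens (180) cancels the stack for real-world light. The -0.5 diopter
# per-lens value is an assumed example.

weak_lens_powers_d = [-0.5, -0.5, -0.5, -0.5]   # e.g., weak lenses 192, 194, 196, 198

for level in range(len(weak_lens_powers_d) + 1):
    aggregate_d = sum(weak_lens_powers_d[:level])   # lenses between this waveguide and the eye
    # Collimated light passed through net power -P appears to come from 1/P meters.
    if aggregate_d == 0:
        perceived = "optical infinity"
    else:
        perceived = f"~{1.0 / abs(aggregate_d):.2f} m"
    print(f"waveguide level {level}: perceived focal distance {perceived}")

compensating_lens_d = -sum(weak_lens_powers_d)      # compensating layer 180
print(f"compensating lens power: {compensating_lens_d:+.1f} D")
```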
[0065] Various diffraction configurations can be employed for focusing
and/or redirecting
collimated beams. For example, passing a collimated beam through a linear
diffraction pattern,
such as a Bragg grating, will deflect, or "steer", the beam. Passing a
collimated beam through a
radially symmetric diffraction pattern, or "Fresnel zone plate", will change
the focal point of the
beam. A combination diffraction pattern can be employed that has both linear and radial elements, producing both deflection and focusing of a collimated input beam.
These deflection
and focusing effects can be produced in a reflective as well as transmissive
mode.
[0066] These principles may be applied with waveguide configurations to
allow for
additional optical system control. As shown in Fig. 7, a diffraction pattern
(220), or "diffractive
optical element" (or "DOE") has been embedded within a planar waveguide (216)
such that as a
collimated beam is totally internally reflected along the planar waveguide
(216), it intersects the
diffraction pattern (220) at a multiplicity of locations. The structure may
also include another
waveguide (218) into which the beam may be injected (by a projector or
display, for example),
with a DOE (221) embedded in this other waveguide (218).
[0067] Preferably, the DOE (220) has a relatively low diffraction
efficiency so that only a
portion of the light of the beam is deflected toward the eye (58) with each
intersection of the
DOE (220) while the rest continues to move through the planar waveguide (216)
via total
internal reflection; the light carrying the image information is thus divided
into a number of
related light beams that exit the waveguide at a multiplicity of locations and
the result is a fairly
uniform pattern of exit emission toward the eye (58) for this particular
collimated beam bouncing
around within the planar waveguide (216), as shown in Fig. 8. The exit beams
toward the eye
(58) are shown in Fig. 8 as substantially parallel, because, in this case, the
DOE (220) has only a
linear diffraction pattern. However, changes to this linear diffraction
pattern pitch may be
utilized to controllably deflect the exiting parallel beams, thereby producing
a scanning or tiling
functionality.
[0068]
Referring to Fig. 9, with changes in the radially symmetric diffraction
pattern
component of the embedded DOE (220), the exit beam pattern is more divergent,
which would
require the eye to accommodate to a closer distance to bring it into focus
on the retina and
would be interpreted by the brain as light from a viewing distance closer to
the eye than optical
infinity.
[0069]
Referring to Fig. 10, with the addition of the other waveguide (218) into
which the
beam may be injected (by a projector or display, for example), a DOE (221)
embedded in this
other waveguide (218), such as a linear diffraction pattern, may function to
spread the light
across the entire larger planar waveguide (216), which functions to provide
the eye (58) with a
very large incoming field of incoming light that exits from the larger planar
waveguide (216),
e.g., a large eye box, in accordance with the particular DOE configurations at
work.
[0070] The
DOEs (220, 221) are depicted bisecting the associated waveguides (216, 218)
but
this need not be the case; they could be placed closer to, or upon, either
side of either of the
waveguides (216, 218) to have the same functionality. Thus, as shown in Fig.
11, with the
injection of a single collimated beam, an entire field of cloned collimated
beams may be directed
toward the eye (58). In addition, with a combined linear diffraction pattern /
radially symmetric
diffraction pattern scenario such as that discussed above, a beam distribution
waveguide optic
(for functionality such as exit pupil functional expansion; with a
configuration such as that of
Fig. 11, the exit pupil can be as large as the optical element itself, which
can be a very significant
advantage for user comfort and ergonomics) with Z-axis focusing capability is
presented, in
which both the divergence angle of the cloned beams and the wavefront
curvature of each beam
represent light coming from a point closer than optical infinity.
[0071] In
one embodiment, one or more DOEs are switchable between "on" states in which
they actively diffract, and "off" states in which they do not significantly
diffract. For instance, a
switchable DOE may comprise a layer of polymer dispersed liquid crystal, in
which
microdroplets comprise a diffraction pattern in a host medium, and the
refractive index of the
microdroplets can be switched to substantially match the refractive index of
the host material (in
which case the pattern does not appreciably diffract incident light) or the
microdroplet can be
switched to an index that does not match that of the host medium (in which
case the pattern
actively diffracts incident light). Further, with dynamic changes to the
diffraction terms, a beam
scanning or tiling functionality may be achieved. As noted above, it is
desirable to have a
relatively low diffraction grating efficiency in each of the DOEs (220, 221)
because it facilitates
distribution of the light, and also because light coming through the
waveguides that is desirably
transmitted (for example, light coming from the world 144 toward the eye 58 in
an augmented
reality configuration) is less affected when the diffraction efficiency of the
DOE that it crosses
(220) is lower, so a better view of the real world through such a
configuration is achieved.
[0072] Configurations such as those illustrated herein preferably are
driven with injection of
image information in a time sequential approach, with frame sequential driving
being the most
straightforward to implement. For example, an image of the sky at optical
infinity may be
injected at time1 and the diffraction grating retaining collimation of light
may be utilized.
Thereafter, an image of a closer tree branch may be injected at time2 while a
DOE controllably
imparts a focal change, say one diopter or 1 meter away, to provide the
eye/brain with the
perception that the branch light information is coming from the closer focal
range. This kind of
paradigm can be repeated in rapid time sequential fashion such that the
eye/brain perceives the
input to be all part of the same image. This is just a two focal plane example
-- preferably the
system will include more focal planes to provide a smoother transition between
objects and their
focal distances. This kind of configuration generally assumes that the DOE is
switched at a
relatively low speed (i.e., in sync with the frame-rate of the display that is
injecting the images,
in the range of tens to hundreds of cycles/second).
[0073] The opposite extreme may be a configuration wherein DOE elements can
shift focus
at tens to hundreds of MHz or greater, which facilitates switching of the
focus state of the DOE
elements on a pixel-by-pixel basis as the pixels are scanned into the eye (58)
using a scanned
light display type of approach. This is desirable because it means that the
overall display frame-
rate can be kept quite low; just low enough to make sure that "flicker" is not
a problem (in the
range of about 60-120 frames/sec).
[0074] In between these ranges, if the DOEs can be switched at KHz rates,
then on a line-by-
line basis the focus on each scan line may be adjusted, which may afford the
user with a visible
benefit in terms of temporal artifacts during an eye motion relative to the
display, for example.
For instance, the different focal planes in a scene may, in this manner, be
interleaved, to
minimize visible artifacts in response to a head motion (as is discussed in
greater detail later in
this disclosure). A line-by-line focus modulator may be operatively coupled to
a line scan
display, such as a grating light valve display, in which a linear array of
pixels is swept to form an
image; and may be operatively coupled to scanned light displays, such as fiber-
scanned displays
and mirror-scanned light displays.
[0075] A stacked configuration, similar to those of Fig. 6, may use dynamic
DOEs to provide
multi-planar focusing simultaneously. For example, with three simultaneous
focal planes, a
primary focus plane (based upon measured eye accommodation, for example) could
be presented
to the user, and a + margin and a - margin (i.e., one focal plane closer, one
farther out) could be
utilized to provide a large focal range in which the user can accommodate
before the planes need
be updated. This increased focal range can provide a temporal advantage if the
user switches to a
closer or farther focus (i.e., as determined by accommodation measurement);
then the new plane
of focus could be made to be the middle depth of focus, with the + and -
margins again ready for
a fast switchover to either one while the system catches up.
[0076] Referring to Fig. 12, a stack (222) of planar waveguides (244, 246,
248, 250, 252) is
shown, each having a reflector (254, 256, 258, 260, 262) at the end and being
configured such
that collimated image information injected in one end by a display (224, 226,
228, 230, 232)
bounces by total internal reflection down to the reflector, at which point
some or all of the light
is reflected out toward an eye or other target. Each of the reflectors may
have slightly different
angles so that they all reflect exiting light toward a common destination such
as a pupil. Lenses
(234, 236, 238, 240, 242) may be interposed between the displays and
waveguides for beam
steering and/or focusing.
[0077] As discussed above, an object at optical infinity creates a
substantially planar
wavefront, while an object closer, such as 1 m away from the eye, creates a curved wavefront (with about 1 m convex radius of curvature). The eye's optical system needs to
have enough
optical power to bend the incoming rays of light so that they end up focused
on the retina
(convex wavefront gets turned into concave, and then down to a focal point on
the retina). These
are basic functions of the eye.
[0078] In many of the embodiments described above, light directed to the
eye has been
treated as being part of one continuous wavefront, some subset of which would
hit the pupil of
the particular eye. In another approach, light directed to the eye may be
effectively discretized or
broken down into a plurality of beamlets or individual rays, each of which has
a diameter less
than about 0.5mm and a unique propagation pathway as part of a greater
aggregated wavefront
that may be functionally created with an aggregation of the beamlets or
rays. For example, a
curved wavefront may be approximated by aggregating a plurality of discrete
neighboring
collimated beams, each of which is approaching the eye from an appropriate
angle to represent a
point of origin that matches the center of the radius of curvature of the
desired aggregate
wavefront.
[0079] When the beamlets have a diameter of about 0.5mm or less, it is as
though it is
coming through a pinhole lens configuration, which means that each individual
beamlet is
always in relative focus on the retina, independent of the accommodation state
of the eye; however, the trajectory of each beamlet will be affected by the accommodation
state. For
instance, if the beamlets approach the eye in parallel, representing a
discretized collimated
aggregate wavefront, then an eye that is correctly accommodated to infinity
will deflect the
beamlets to all converge upon the same shared spot on the retina, and will
appear in focus. If the
eye accommodates to, say, 1 m, the beams will be converged to a spot in front
of the retina, cross
paths, and fall on multiple neighboring or partially overlapping spots on the
retina, appearing
blurred.
[0080] If the beamlets approach the eye in a diverging configuration, with
a shared point of
origin 1 meter from the viewer, then an accommodation of 1 m will steer the
beams to a single
spot on the retina, and will appear in focus; if the viewer accommodates to
infinity, the beamlets
will converge to a spot behind the retina, and produce multiple neighboring or
partially
overlapping spots on the retina, producing a blurred image. Stated more
generally, the
accommodation of the eye determines the degree of overlap of the spots on the
retina, and a
given pixel is "in focus" when all of the spots are directed to the same spot
on the retina and
"defocused" when the spots are offset from one another. This notion that all
of the 0.5mm
diameter or less beamlets are always in focus, and that they may be aggregated
to be perceived
by the eyes/brain as though they are substantially the same as coherent
wavefronts, may be
utilized in producing configurations for comfortable three-dimensional virtual
or augmented
reality perception.
[0081] In other words, a set of multiple narrow beams may be used to
emulate what is going
on with a larger diameter variable focus beam, and if the beamlet diameters
are kept to a
maximum of about 0.5mm, then they maintain a relatively static focus level,
and to produce the
perception of out-of-focus when desired, the beamlet angular trajectories may
be selected to
create an effect much like a larger out-of-focus beam (such a defocussing
treatment may not be
the same as a Gaussian blur treatment as for the larger beam, but will create
a multimodal point
spread function that may be interpreted in a similar fashion to a Gaussian
blur).
[0082] In some embodiments, the beamlets are not mechanically deflected to
form this
aggregate focus effect, but rather the eye receives a superset of many
beamlets that includes both
a multiplicity of incident angles and a multiplicity of locations at which the
beamlets intersect
the pupil; to represent a given pixel from a particular viewing distance, a
subset of beamlets from
the superset that comprise the appropriate angles of incidence and points of
intersection with the
pupil (as if they were being emitted from the same shared point of origin in
space) are turned on
with matching color and intensity, to represent that aggregate wavefront,
while beamlets in the
superset that are inconsistent with the shared point of origin are not turned
on with that color and
intensity (but some of them may be turned on with some other color and
intensity level to
represent, e.g., a different pixel).
[0083] Referring now to Fig. 5, an example embodiment 800 of the AR system
that uses an
improved diffraction structure will now be described. The AR system generally
includes an
image generating processor 812, at least one FSD 808 (fiber scanning device),
FSD circuitry
810, a coupling optic 832, and at least one optics assembly (DOE assembly 802)
having stacked
waveguides with the improved diffraction structure described below. The system
may also
include an eye-tracking subsystem 806. As shown in Fig. 5, the FSD circuitry
may comprise
circuitry 810 that is in communication with the image generation processor 812
having a maxim
chip CPU 818, a temperature sensor 820, a piezo-electrical drive/transducer
822, a red laser 826,
a blue laser 828, and a green laser 830 and a fiber combiner that combines all
three lasers 826,
828 and 830. It is noted that other types of imaging technologies are also
usable instead of FSD
devices. For example, high-resolution liquid crystal display ("LCD") systems,
a backlighted
ferroelectric panel display, and/or a higher-frequency DLP system may all be
used in some
embodiments of the invention.
[0084] The image generating processor is responsible for generating virtual
content to be
ultimately displayed to the user. The image generating processor may convert
an image or video
associated with the virtual content to a format that can be projected to the
user in 3D. For
example, in generating 3D content, the virtual content may need to be
formatted such that
portions of a particular image are displayed on a particular depth plane while
others are displayed
at other depth planes. Or, all of the image may be generated at a particular
depth plane. Or, the
image generating processor may be programmed to feed slightly different images
to right and left
eye such that when viewed together, the virtual content appears coherent and
comfortable to the
user's eyes. In one or more embodiments, the image generating processor 812
delivers virtual
content to the optics assembly in a time-sequential manner. A first portion of
a virtual scene may
be delivered first, such that the optics assembly projects the first portion
at a first depth plane.
Then, the image generating processor 812 may deliver another portion of the
same virtual scene
such that the optics assembly projects the second portion at a second depth
plane and so on.
Here, the Alvarez lens assembly may be laterally translated quickly enough to
produce multiple
lateral translations (corresponding to multiple depth planes) on a frame-to-frame basis.
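A minimal sketch of this time-sequential delivery is shown below; the data structure and callable names are hypothetical stand-ins and do not represent an actual interface of the image generating processor 812 or the optics assembly 802.

```python
# Illustrative sketch only: delivering portions of a virtual scene to the optics
# assembly one depth plane at a time within a frame. The class and callable names
# are hypothetical stand-ins, not an actual interface of processor 812 or assembly 802.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScenePortion:
    depth_plane: int     # index of the depth plane this content is rendered for
    pixels: bytes        # image data for that portion of the scene

def deliver_frame(portions: List[ScenePortion],
                  set_depth_plane: Callable[[int], None],
                  project: Callable[[bytes], None]) -> None:
    # Step through the planes in order so each portion is projected at its plane.
    for portion in sorted(portions, key=lambda p: p.depth_plane):
        set_depth_plane(portion.depth_plane)   # e.g., switch DOE / lens state
        project(portion.pixels)                # inject this portion's image data

# Usage with stand-in callables:
frame = [ScenePortion(1, b"near content"), ScenePortion(0, b"far content")]
deliver_frame(frame,
              set_depth_plane=lambda i: print("depth plane", i),
              project=lambda px: print("project", px))
```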
[0085] The image generating processor 812 may further include a memory 814,
a CPU 818,
a GPU 816, and other circuitry for image generation and processing. The image
generating
processor may be programmed with the desired virtual content to be presented
to the user of the
AR system. It should be appreciated that in some embodiments, the image
generating processor
may be housed in the wearable AR system. In other embodiments, the image
generating
processor and other circuitry may be housed in a belt pack that is coupled to
the wearable optics.
[0086] The AR system also includes coupling optics 832 to direct the light
from the FSD to
the optics assembly 802. The coupling optics 832 may refer to one or more
conventional lenses
that are used to direct the light into the DOE assembly. The AR system also
includes the eye-
tracking subsystem 806 that is configured to track the user's eyes and
determine the user's focus.
[0087] In one or more embodiments, software blurring may be used to induce
blurring as
part of a virtual scene. A blurring module may be part of the processing
circuitry in one or more
embodiments. The blurring module may blur portions of one or more frames of
image data
being fed into the DOE. In such an embodiment, the blurring module may blur
out parts of the
frame that are not meant to be rendered at a particular depth plane.
Example approaches that can be used to implement the above image display
systems, and
components therein, are described in U.S. Utility Patent Application Serial
No. 14/555,585.
Improved Diffraction Structure
[0088] As stated above, a diffraction pattern can be formed onto a planar
waveguide, such
that as a collimated beam is totally internally reflected along the planar
waveguide, the beam
intersects the diffraction pattern at a multiplicity of locations. This
arrangement can be stacked
to provide image objects at multiple focal planes within a stereoscopic 3-D
simulation display
system according to some embodiments of the invention.
[0089] Fig. 13A illustrates one possible approach that can be taken to
implement a structure
1300 of a waveguide 1302 (also referred to herein as a "light guide",
"substrate", or "waveguide
substrate"), where outcoupling gratings 1304 are directly formed onto the top
surface of the
waveguide 1302, e.g., as a combined monolithic structure and/or both formed of
the same
materials (even if not constructed out of the same monolithic structure). In
this approach, the
index of refraction of the gratings material is the same as the index of
refraction of the
waveguide 1302. The index of refraction n (or "refractive index") of a
material describes how
light propagates through that medium, and is defined as n = c/v, where c is
the speed of light in
vacuum and v is the phase velocity of light in the medium. The refractive
index determines how
much light is bent, or refracted, when entering a material.
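As a worked check of the definition n = c/v, the following snippet computes the phase velocity for a representative index and recovers an index from a representative phase velocity; the specific index values are illustrative only.

```python
# Worked check of the definition n = c/v given above.

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def phase_velocity(n):
    return C_VACUUM / n          # v = c / n

def refractive_index(v):
    return C_VACUUM / v          # n = c / v

print(phase_velocity(1.5))       # ~2.0e8 m/s in a representative n = 1.5 material
print(refractive_index(1.763e8)) # ~1.7, a representative higher-index substrate
```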
[0090] Fig. 13B shows a chart 1320 of example simulation results for a
single polarization of
the efficiency of the light coming out of the structure 1300 as a function of
the angle at which the
light is propagating within the waveguide. This chart shows that the
diffraction efficiency of the
outcoupled light for structure 1300 decreases at higher angles of incidence.
As can be seen, at an
angle of about 43 degrees, the efficiency drops relatively quickly on the
depicted plot due to total
internal reflectance variation based on incident angle in a medium with
uniform index of
refraction.
[0091] Therefore, it is possible that the usable range of configuration
1300 is somewhat
limited and therefore undesirable, as the spacing of bounces may decrease at
higher angles of
incidence, which may further reduce the brightness seen by an observer at
those angles. The
diffraction efficiency is lower at the most shallow angles of incidence, which
is not entirely
desirable, because the bounce spacing (see Fig. 13C) between interactions with
the top surface is
fairly far apart, and light has fairly few opportunities to couple out. Thus,
a dimmer signal with
fewer outcoupled samples will result from this arrangement, with this problem
being
compounded by the grating having lower diffraction efficiencies at these high
angles with this
polarization orientation. It is noted that as used herein and in the figures,
"IT" refers to the 1st
transmitted diffracted order.
[0092] In some embodiments of waveguide-based optical systems or substrate
guided optical
systems, such as those described above, different pixels in a substrate-guided
image are
represented by beams propagating at different angles within the waveguide,
where light
propagates along the waveguide by total internal reflection (TIR). The range of beam angles that remain trapped in a waveguide by TIR is a function of the difference in
refractive index between
the waveguide and the medium (e.g., air) outside the waveguide; the higher the
difference in
refractive index, the larger the number of beam angles. In certain
embodiments, the range of
beam angles propagating along the waveguide correlates with the field of view
of the image
coupled out of the face of the waveguide by a diffractive element, and with
the image resolution
supported by the optical system. Additionally, the angle range in which total
internal reflection
occurs is dictated by the index of refraction of the waveguide; in some
embodiments a
minimum of about 43 degrees and a practical maximum of approximately 83
degrees, thus a 40
degree range.
[0093] Fig. 14A illustrates an approach to address this issue according to
some embodiments
of the invention, where structure 1400 includes an intermediate layer 1406
(referred to herein as
"underlayer 1406") that resides between the substrate 1302 and the top grating
surface 1304.
The top surface 1304 comprises a first material that corresponds to a first
refractive index value,
the underlayer 1406 comprises a second material that corresponds to a second
refractive index
value, and the substrate 1302 comprises a third material that corresponds to a
third refractive
index value. It is noted that any combination of same or different materials
may be employed to
implement each of these portions of structure 1400, e.g., where all three
materials are different
(and all three correspond to different refractive index values), or where two
of the layers share
the same material (e.g., where two of the three materials are the same and
therefore share a
common reflective index value that differs from the refractive index value of
the third material).
Any combination of refractive index values may be employed. For example, one
embodiment
comprises a low refractive index for the underlayer, with higher index values
for the surface
grating and the substrate. Other example configurations are described below
having other
illustrative combinations of refractive index values. Any suitable set of
materials may be used to
implement structure 1400. For example, polymers, glass, and sapphire are all
examples of
materials that can be selected to implement any of the layers of structure
1400.
[0094] As shown in Fig. 15A, in some embodiments it may be desirable to
implement a
structure 1500 that uses a relatively higher refractive index substrate as
waveguide substrate
1302, with a relatively lower refractive index underlayer 1406 and relatively
lower refractive
index top grating surface 1304. This is because one may be able to obtain a
larger field of view
by virtue of the fact that the lowest total internal reflection angle is
reduced as the index of
refraction is increased through the relationship n1*sin(theta1) = n2*sin(90).
For a substrate of
index 1.5, the critical angle is 41.8 degrees; however, for a substrate index
of 1.7, the critical
angle is 36 degrees.
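The critical angles quoted above can be verified directly from the stated relationship by solving for theta1 with n2 = 1 (air); the brief check below reproduces the 41.8 degree and approximately 36 degree values.

```python
# Check of the critical angles quoted above: with n2 = 1 (air) the relationship
# n1*sin(theta1) = n2*sin(90) gives theta_c = arcsin(1 / n1).
import math

def critical_angle_deg(n_substrate, n_outside=1.0):
    return math.degrees(math.asin(n_outside / n_substrate))

print(round(critical_angle_deg(1.5), 1))  # 41.8 degrees for an index-1.5 substrate
print(round(critical_angle_deg(1.7), 1))  # ~36.0 degrees for an index-1.7 substrate
# A lower critical angle widens the range of propagation angles trapped by TIR
# (roughly 43 to 83 degrees in some embodiments noted earlier), which correlates
# with a larger supportable field of view.
```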
[0095] Gratings formed on higher index substrates may be utilized to couple
light out even if
they themselves have a lower index of refraction, so long as the layer of
material comprising the
grating is not too thick between the grating and the substrate. This is
related to the fact that one
can have a broader range of angles for total internal reflection ("TIR")
with such a
configuration. In other words, the TIR angle drops to lower values with such a
configuration. In
addition, it is noted that many of the current etching processes may not be
well suited for
extending to high-index glasses. It is desirable in some embodiments to
replicate an outcoupling
layer reliably and inexpensively.
[0096] The configuration of the underlayer 1406 may be adjusted to alter
the performance
characteristics of structure 1500, e.g., by changing the thickness of the
underlayer 1406. The
configuration of Fig. 15A (a construct including a grating structure 1304 on
top comprising a
relatively low index material, with an associated lower index underlayer 1406,
and which also
includes an associated high-index light guiding substrate 1302) may be
modelled to result in data
such as that depicted in Fig. 15B. Referring to this figure, the plot 1502a on
the left is related to
a configuration with a zero-thickness underlayer 1406. The middle plot 1502b shows data for a 0.05 micron thick underlayer 1406. The right plot 1502c shows data for a 0.1 micron thick underlayer 1406.
[0097] As shown by the data in these plots, as the underlayer thickness is
increased, the
diffraction efficiency as a function of incident angle becomes much more
nonlinear and
suppressed at high angles, which may not be desirable. Thus in this case,
control of the
underlayer is an important functional input. However, it should be noted that
with a zero-
thickness underlayer and only grating features themselves possessing the lower
index, the range
of angles supported by the structure is governed by the TIR condition in the
higher index base
material, rather than the lower index grating feature material.
[0098] Referring to Fig. 16A, an embodiment of a structure 1600 is
illustrated featuring a
relatively high index underlayer 1406 on a lower index substrate 1302, with a
top surface
diffraction grating 1304 having a refractive index lower than the underlayer
1406 and
comparable to, but not necessarily equal to, the refractive index of the
substrate 1302. For
example, the top surface grating may correspond to a refractive index of 1.5,
the underlayer may
correspond to a refractive index of 1.84, and the substrate may correspond to
a refractive index
of 1.5. Assume for this example that the period is 0.43 um and lambda
corresponds to 0.532 um.
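As an illustrative check of these example parameters (and not a limitation of the embodiments), the standard transmission grating equation can be used to confirm that the first diffracted order is trapped by TIR in the index-1.5 substrate. The sketch below assumes normal incidence from air and first-order coupling, which are assumptions made here purely for illustration:

```python
import math

# Example parameters quoted above; normal incidence from air and first-order coupling are assumed.
wavelength_um = 0.532
period_um = 0.43
n_substrate = 1.5
order = 1

# Transmission grating equation into the substrate: n_sub*sin(theta_m) = n_in*sin(theta_in) + m*lambda/period
sin_theta_m = (order * wavelength_um / period_um) / n_substrate
theta_m = math.degrees(math.asin(sin_theta_m))        # ~55.6 degrees inside the substrate
theta_c = math.degrees(math.asin(1.0 / n_substrate))  # ~41.8 degrees critical angle

print(f"diffracted angle: {theta_m:.1f} deg, critical angle: {theta_c:.1f} deg, guided by TIR: {theta_m > theta_c}")
```

Because the diffracted angle exceeds the critical angle under these assumptions, the first order propagates within the substrate by total internal reflection.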
[0099] Simulations related to such a configuration are presented in Fig.
16B. As shown in
this figure in chart 1602a, with a 0.3 micron thick underlayer 1406,
diffraction efficiency falls off
like the previously described configuration, but then starts to rise up at the
higher end of the
angular range. This is also true for the 0.5 micron thick underlayer 1406
configuration, as shown
in chart 1602b. It is beneficial in each of these (0.3 micron, 0.5 micron)
configurations that the
efficiency is relatively high at the higher extremes of the angular range;
such functionality may
tend to counteract the more sparse bounce spacing concern discussed above.
Also shown in this
figure is chart 1602c for an embodiment featuring a 90 degree rotated
polarization case, where
the diffraction efficiency is lower as might be expected, but shows desirable
behavior in that it
provides greater efficiency at steeper angles as compared to shallower angles.
[00100] Indeed, in some embodiments, diffraction efficiency may increase at high angles. This may be a desirable feature for some embodiments since it helps to compensate for the lower bounce spacing that may occur at higher propagation angles. Therefore, the structural configuration of Fig. 16A may be preferable in embodiments where it is desirable to compensate for the lower bounce spacing (which occurs with higher propagation angles), since it promotes diffraction efficiency that increases with angle, which is desirable relative to the aforementioned monolithic configurations.
[00101] Referring to Fig. 17A, another structure 1700 is depicted wherein an
underlayer 1406
has a refractive index substantially higher than the refractive index of the
substrate 1302. A
grating structure 1304 is on top, and has a refractive index that is also
higher than the refractive
index of the underlayer 1406. For example, the top surface grating may
correspond to a
refractive index of 1.86, the underlayer may correspond to a refractive index
of 1.79, and the
substrate may correspond to a refractive index of 1.5. As before, assume for
this example that
the period is 0.43 um and lambda corresponds to 0.532 um.
[00102] Referring to Fig. 17B, chart 1702 illustrates simulation data for the structure 1700 of Fig. 17A. As shown in chart 1702, the plot of the resulting diffraction efficiency versus incident angle demonstrates desirable general behavior: it assists in compensating for the aforementioned lower bounce spacing at relatively high incident angles while possessing reasonable diffraction efficiency across a greater range of angles in general.
[00103] It is noted that the underlayer 1406 does not need to be uniform
across the entire
substrate. Any characteristic of the underlayer 1406 may be varied at
different locations of the
substrate, such as variances in the thickness, composition, and/or index of
refraction of the
underlayer 1406. One possible reason for varying the characteristics of the
underlayer 1406 is to
promote uniform display characteristics in the presence of known variations in
either the display
image and/or non-uniform transmission of light within the display system.
[00104] For example, as shown in Fig. 18A, consider if the waveguide structure
receives
incoming light at a single incoupling location 1802 on the waveguide. As the
incoming light is
injected into the waveguide 1302, less and less of that light will remain
as it progresses along
the length of the waveguide 1302. This means that the output light near the
incoupling location
1802 may end up appearing "brighter" than output light farther along the
length of the
waveguide 1302. If the underlayer 1406 is uniform along the entire length of
the waveguide
1302, then the optical effects of the underlayer 1406 may reinforce this
uneven brightness level
across the substrate.
[00105] The characteristics of the underlayer 1406 can be adjusted across the
substrate 1302
to make the output light more uniform. Fig. 18B illustrates an approach
whereby the thickness
of the underlayer 1406 is varied across the length of the waveguide substrate
1302, where the
underlayer 1406 is thinner near the incoupling location 1802 and thicker at
farther distances
away from location 1802. In this manner, the effect of the underlayer 1406 to
promote greater
diffraction efficiency can at least partially ameliorate the effects of light
losses along the length
of the waveguide substrate 1302, thereby promoting more uniform light output
across the
entirety of the structure.
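The degree to which the outcoupling efficiency must rise along the propagation direction can be appreciated from a simple bookkeeping model. The sketch below is only a lossless toy model (equal power emitted at each grating interaction, an assumption made here for illustration rather than drawn from the embodiments) showing that the required per-bounce efficiency grows toward the far end of the waveguide:

```python
def efficiency_profile(num_bounces: int) -> list[float]:
    """Per-bounce outcoupling efficiency required so that every bounce emits equal power.

    Toy model: no absorption between bounces. Power remaining before bounce k is
    P0*(N - k)/N, so emitting P0/N at each bounce requires eta_k = 1/(N - k),
    i.e. efficiency must increase with distance from the incoupling location.
    """
    return [1.0 / (num_bounces - k) for k in range(num_bounces)]

print([round(e, 3) for e in efficiency_profile(10)])
# [0.1, 0.111, 0.125, 0.143, 0.167, 0.2, 0.25, 0.333, 0.5, 1.0]
```

Thinning the underlayer 1406 near the incoupling location 1802 and thickening it farther away, as in Fig. 18B, is one way the embodiments can approximate such a rising efficiency profile.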
[00106] Fig. 18C illustrates an alternate approach where the thickness of the
underlayer 1406
is not varied, but the refractive index of the underlayer 1406 varies across
the substrate 1302.
For example, to address the issue that output light near location 1802 tends
to be brighter than
locations farther away from location 1802, the index of refraction for the
underlayer 1406 can be
configured to be the same or similar to the substrate 1302 close to location
1802, but to have an
increasing difference in those index values at locations farther away from
location 1802. The
composition of the underlayer 1406 material can be varied at different
locations to effect the
different refractive index values. Fig. 18D illustrates a hybrid approach,
whereby both the
thickness and the refractive index of the underlayer 1406 are varied across the
substrate 1302. It is
noted that this same approach can be taken to vary the thickness and/or
refractive index of the
top grating surface 1304 and/or the substrate 1302 in conjunction with, or
instead of, varying the
underlayer 1406.
[00107] Thus, a variety of combinations is available wherein an underlayer 1406 of one index is combined with a top grating 1304 of another index and a substrate 1302 of a third index, and wherein adjusting these relative values provides substantial control over how diffraction efficiency depends upon incidence angle. A layered waveguide with layers of different refractive indices is thereby presented, and various combinations and permutations are presented along with related performance data to illustrate functionality. The benefits include an increased range of supported angles, which provides an increased output angle at the grating 1304 and therefore an increased field of view
with the eyepiece. Further, the ability to counteract the normal reduction in
diffraction efficiency
with angle is functionally beneficial.
[00108] Fig. 14B illustrates an embodiment where another layer of material
1409 (top surface)
is placed above the grating layer 1304. Layer 1409 can be configurably
implemented to address
different design goals. For example, layer 1409 can form an interstitial layer
between multiple
stacked diffraction structures 1401a and 1401b, e.g., as shown in Fig. 14C. As
shown in Fig.
14C, this interstitial layer 1409 can be employed to remove any air space/gap
and provide a
support structure for the stacked diffraction components. In this use case,
the layer 1409 can be
formed from a material having a relatively low index of refraction, e.g., at
around 1.1 or 1.2.
Although not shown in this figure, other layers (such as weak lenses) may also
be placed
between the diffraction structures 1401a and 1401b.
[00109] In addition, layer 1409 can be formed from a material having a
relatively high index
of refraction. In this situation, it is the gratings on the layer 1409 that
would provide the
diffraction effects for all or a substantial amount of the incident light,
rather than the grating
surface 1304.
[00110] As is clear, different relative combinations of refractive index
values can be selected
for the different layers, including layer 1409, to achieve desired optical
effects and results.
[00111] Such structures may be manufactured using any suitable manufacturing
techniques.
Certain high-refractive index polymers such as one known as "MR 174" may be
directly
embossed, printed, or etched to produce desired patterned structures, although
there may be
challenges related to cure shrinkage and the like of such layers. Thus, in
another embodiment,
another material may be imprinted, embossed, or etched upon a high-refractive
index polymer
layer (i.e., such as a layer of MR 174) to produce a functionally similar
result. Current state of
the art printing, etching (i.e., which may include resist removal and
patterning steps similar to
those utilized in conventional semiconductor processes), and embossing
techniques may be
utilized and/or combined to accomplish such printing, embossing, and/or
etching steps. Molding
techniques, similar to those utilized, for example, in the production of DVDs,
may also be
utilized for certain replication steps. Further, certain jetting or deposition
techniques utilized in
printing and other deposition processes may also be utilized for depositing
certain layers with
precision.
[00112] The following portion of the disclosure will now describe improved
approaches to
implement the formation of diffraction patterns onto substrates, wherein
imprinting of deposited
imprint materials is performed according to some embodiments of the invention.
These
approaches allow for very precise distribution of imprint materials, as well
as very precise
formation of different imprint patterns onto any number of substrate surfaces.
It is noted that the
following description can be used in conjunction with, and to implement, the
grating
configurations described above. However, it is expressly noted that the
inventive deposition
approach may also be used in conjunction with other configurations as well.
[00113] According to some embodiments, patterned distribution (e.g., patterned
inkjet
distribution) of imprint materials is performed to implement the deposition
of imprint materials
onto a substrate. This approach of using patterned ink-jet distribution allows
for very precise
volume control over the materials to be deposited. In addition, this approach
can serve to
provide a smaller, more uniform base layer beneath a grating surface, and, as
discussed above,
the base thickness of a layer can have a significant effect on the performance
of an
eyepiece/optical device.
[00114] Fig. 19 illustrates an approach to implement precise, variable volume
deposition of
imprint material on a single substrate. As shown in the figure, a template
1902 is provided
having a first set of deeper depth structures 1904 and a second set of
shallower (e.g., standard)
depth structures 1906. When depositing imprint materials onto an imprint
receiver 1908, a
relatively higher volume of imprint materials 1910 is deposited in
correspondence to the portion
of the template with the deeper depth structures 1904 of the template 1902. In
contrast, a
relatively lower volume of imprint materials 1912 is deposited in conjunction
with the shallower
depth structures 1906 of the template 1902. The template then is used to
imprint the first and
second set of depth structures into the imprint materials, forming respective
structures having
different depths and/or patterns within the imprint materials. This approach
therefore permits
simultaneous formation of different features onto the imprint receiver 1908.
[00115] This approach can be taken to create distributions that are
purposefully non-uniform
for structures with different depths and/or feature parameters, e.g., where
the feature structures
are on the same substrate and have different thicknesses. This can be used,
for example, to
create spatially distributed volumes of imprint material that enable
simultaneous imprint of
structures of variable depth with the same underlayer thickness.
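One way to reason about such spatially varied dispensing is a simple volume budget per template region: a uniform residual underlayer plus the fraction of the region filled by grating features of a given depth. The sketch below is a rough, illustrative calculation only; the areas, depths, and fill fractions are assumed values and are not taken from the embodiments:

```python
def dispense_volume_nl(area_mm2: float, underlayer_um: float,
                       feature_depth_um: float, fill_fraction: float) -> float:
    """Approximate imprint-material volume (nanoliters) for one template region.

    Budget = uniform residual underlayer + (filled fraction of the area) x feature depth.
    Unit note: 1 mm^2 x 1 um = 1e-3 mm^3 = 1 nL, so area[mm^2] * thickness[um] is already in nL.
    """
    return area_mm2 * (underlayer_um + fill_fraction * feature_depth_um)

# Deeper-featured region (cf. structures 1904) vs. shallower region (cf. structures 1906); values illustrative.
deep = dispense_volume_nl(area_mm2=25.0, underlayer_um=0.05, feature_depth_um=0.4, fill_fraction=0.5)
shallow = dispense_volume_nl(area_mm2=25.0, underlayer_um=0.05, feature_depth_um=0.15, fill_fraction=0.5)
print(f"deep region: {deep:.2f} nL, shallow region: {shallow:.2f} nL")  # 6.25 nL vs 3.12 nL
```

Under this kind of budget, the region aligned with the deeper template structures receives the larger dispensed volume while the residual underlayer thickness stays the same in both regions.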
[00116] The bottom of Fig. 19 illustrates a structure 1920 that is formed
with the deposition
technique/apparatus described above, where the underlayer 1922 has a uniform
thickness despite
pattern depth and volume differentials. It can be seen that imprint materials
have been deposited
with non-uniform thicknesses in structure 1920. Here, the top layer 1924
includes a first portion
1926 having a first set of layer thicknesses, while a second portion 1928 has
a second set of layer
thicknesses. In this example, portion 1926 corresponds to a thicker layer as
compared to the
standard/shallower thickness of portion 1928. It is noted, however, that any
combination of
thicknesses may be constructed using the inventive concept, where thicknesses
that are
thicker and/or thinner than standard thicknesses are formed onto
an underlayer.
[00117] This capability can also be used to deposit larger volumes of
material to serve as, for
example, spacer elements to aid in the construction of a multi-layer
diffractive optical element.
[00118] Some embodiments pertain to an approach to implement simultaneous
deposition of
multiple types of imprint materials onto a substrate. This permits materials having different optical properties to be simultaneously deposited across multiple portions of the substrate. This approach also provides the ability to tune local areas associated
with specific
functions, e.g., to act as in-coupling gratings, orthogonal pupil expander
(OPE) gratings, or exit
pupil expander (EPE) gratings.
[00119] Figure 20 illustrates an approach to implement directed, simultaneous
deposition of
multiple different imprint materials in the same layer and imprint step
according to some
embodiments. As shown in the figure, a template 2002 is provided to imprint
patterns into the
different types of imprint materials 2010 and 2012 on the imprint receiver
2008. Materials 2010
and 2012 may comprise the same material having different optical properties
(e.g., two variants
of the same material having differing indices of refraction) or two entirely
different materials.
[00120] Any optical property of the materials can be considered and selected
when employing
this technique. For example, as shown in the embodiment of Fig. 20, material
2010 corresponds
to a high index of refraction material that is deposited in one section of the
imprint receiver 2008,
while at the same time, material 2012, corresponding to a lower index of refraction material, is deposited in a second section.
[00121] As shown in the resulting structure 2020, this forms a multi-function
diffractive
optical element having a high index of refraction portion 2026 and a lower
index of refraction
portion 2028. In this case, high index portion 2026 pertaining to a first
function and portion
2028 pertaining to a second function were imprinted simultaneously.
[00122] While this example illustratively identifies the refractive index
of the materials as the
optical property to "tune" when simultaneously depositing the materials, it is
noted that other
optical properties may also be considered when identifying the type of
materials to deposit in
different portions of the structure. For example, opacity and absorption are
other properties that
can be used to identify materials for deposition in different portions of the
structure to tune the
local properties of the final product.
[00123] In addition, one type of material may be deposited above/below another
material
before imprinting. For example, one index of refraction material may be
deposited directly
below a second index of refraction material just prior to imprinting,
producing a gradient index
to form a diffractive optical element. This can be used, for example, to
implement the structure
shown in Fig. 17A (or any of the other pertinent structures described above or
in the figures).
[00124] According to another embodiment, multi-sided imprinting may be
employed to
imprint multiple sides of an optical structure. This permits imprinting to
occur on different sides
of an optical element, to implement multiplexing of functions through a base
layer volume. In
this way, different eyepiece functions can be implemented without adversely
affecting grating
structure function.
[00125] Figs. 21A-B illustrate an example approach to implement two-sided imprinting in the context of total-internal-reflection diffractive optical elements. As
illustrated in Fig. 21A, a first
template 2102a may be used to produce one imprint on side "A" of the
substrate/imprint receiver
2108. This forms a first pattern 2112 having a first material onto side A of
the structure.
[00126] As illustrated in Fig. 21B, template 2102b may be used to produce a
second imprint
on side "B" of the same substrate. This forms a second pattern 2114 having a
second material
onto side B of the substrate.
[00127] It is noted that sides A and B may have the same or different
patterns, and/or may
have the same or different types of materials. In addition, the pattern on
each side may comprise
varying layer thicknesses (e.g., using the approach of Fig. 19) and/or have
different material
types on the same side (e.g., using the approach of Fig. 20).
[00128] As shown in Fig. 22, a first pattern 2112 has been imprinted onto side
A and a second
pattern 2114 onto the opposite side B of the substrate 2108. The compound
function of the
resulting two-sided imprinted element 2200 can now be realized. In particular,
when input light
is applied to the two-sided imprinted element 2200, some of the light exits from
the element 2200
to implement a first function 1 while other light exits to implement a second
function 2.
[00129] Additional embodiments pertain to multi-layer over-imprinting, and/or
multi-layer
separated/offset substrate integration. In either/both of these approaches, a
previously imprinted
pattern can be jetted upon and printed again. An adhesive can be jetted onto a
first layer, with a
second substrate bonded to it (possibly with an airgap), and a subsequent
jetting process can
deposit onto the second substrate and imprinted. Series-imprinted patterns can
be bonded to
each other in sequence in a roll-to-roll process. It is noted that the
approach of implementing
multi-layer over-imprinting may be used in conjunction with, or instead of,
the multi-layer
separated/offset substrate integration approach.
[00130] Fig. 23 illustrates an approach to implement multi-layer over-
imprinting. Here, a first
imprint material 2301 can be deposited onto a substrate 2308 and imprinted.
This is followed by
deposition (and possible imprinting) of a second imprint material 2302.
This results in a
composite, multi-layer structure having both a first imprint material 2301 and
a second imprint
material 2302. In one embodiment, subsequent imprinting may be implemented for
the second
imprint material 2302. In an alternate embodiment, subsequent imprinting is
not implemented
for the second imprint material 2302.
[00131] Fig. 24 illustrates an approach to implement multi-layer
separated/offset substrate
integration. Here, both a first substrate 1 and a second substrate 2 may be
deposited with the
imprinting material and then imprinted. Afterwards, substrate 1 and substrate
2 may be
sandwiched and bonded, possibly with offset features (also imprinted) that
provide for, in one
embodiment, an air-gap 2402 between the active structures of Substrate 2 and
the back side of
substrate 1. An imprinted spacer 2404 may be used to create the airgap 2402
[00132] According to yet another embodiment, disclosed is an approach to implement variable volume deposition of materials distributed across the substrate, which may be dependent upon a priori knowledge of surface non-uniformity. To explain, consider the
substrate 2502 shown in
Fig. 25. As shown, the surface non-uniformity of the substrate 2502 may result in undesirable deviations from parallelism, causing poor optical performance. In this case, the substrate
2502 (or a previously
imprinted layer) may be measured for variability.
[00133] Variable volume deposition of imprint material may be employed to
provide a level
distribution of imprint material to be deposited independently of the
underlying topography or
physical feature set. For example, the substrate can be pulled flat by vacuum
chuck, and in situ
metrology performed to assess surface height, e.g., with low-coherence or laser-based non-contact measurement probes. The dispense volume of the imprint material can be
varied
depending upon the measurement data to yield a more uniform layer upon
replication. In this
example, portion 2504a of the substrate has the greatest level of variability,
portion 2504b has a
medium level of variability, and portion 2504c has the lowest level of
variability. Therefore,
high volume imprint material may be deposited in portion 2504a, medium volume
imprint
material is deposited into portion 2504b, and low/standard volume imprint
material is deposited
into portion 2504c. As shown by the resulting product 2506, this results in a
more uniform total
substrate/imprint material/imprint pattern thickness, which may in turn tune
or benefit
performance of the imprinted device.
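As a rough illustration of how such metrology data might drive dispensing, the sketch below converts per-zone height measurements into dispense volumes that level the zones to a common plane. The zone labels correspond to portions 2504a-c above, but the numerical heights, area, and base layer thickness are assumed values used only for illustration:

```python
def leveling_volumes_nl(zone_heights_um: dict[str, float], zone_area_mm2: float,
                        base_layer_um: float) -> dict[str, float]:
    """Per-zone dispense volumes (nL) that level a measured surface to a common plane.

    Each zone receives the base imprint layer plus enough extra material to make up
    the difference between its measured height and that of the tallest zone.
    (1 mm^2 x 1 um = 1 nL, so area[mm^2] * thickness[um] is already in nanoliters.)
    """
    target_um = max(zone_heights_um.values())
    return {zone: zone_area_mm2 * (base_layer_um + (target_um - height))
            for zone, height in zone_heights_um.items()}

# Illustrative metrology: zone 2504a deviates most from the target plane, 2504c the least.
measured_um = {"2504a": 0.3, "2504b": 0.6, "2504c": 0.8}
volumes = leveling_volumes_nl(measured_um, zone_area_mm2=25.0, base_layer_um=0.1)
print({zone: round(v, 2) for zone, v in volumes.items()})
# {'2504a': 15.0, '2504b': 7.5, '2504c': 2.5} -- most material where the deviation is largest
```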
[00134] It is noted that while the example shows the variability due to non-
uniformity in
thickness, other types of non-uniformity may also be addressed by this
embodiment of the
invention. In another embodiment, the variability may be due to the existence of
pits, peaks or other
anomalies or features associated with local positions on the substrate.
[00135] In the foregoing specification, the invention has been described with
reference to
specific embodiments thereof. It will, however, be evident that various
modifications and
changes may be made thereto without departing from the broader spirit and
scope of the
invention. For example, the above-described process flows are described with
reference to a
particular ordering of process actions. However, the ordering of many of the
described process
actions may be changed without affecting the scope or operation of the
invention. The
specification and drawings are, accordingly, to be regarded in an illustrative
rather than
restrictive sense.
[00136] Various example embodiments of the invention are described herein.
Reference is
made to these examples in a non-limiting sense. They are provided to
illustrate more broadly
applicable aspects of the invention. Various changes may be made to the
invention described and
equivalents may be substituted without departing from the true spirit and
scope of the invention.
In addition, many modifications may be made to adapt a particular situation,
material,
composition of matter, process, process act(s) or step(s) to the objective(s),
spirit or scope of the
present invention. Further, as will be appreciated by those with skill in the
art, each of the
individual variations described and illustrated herein has discrete components
and features which
may be readily separated from or combined with the features of any of the
other several
embodiments without departing from the scope or spirit of the present
inventions. All such
modifications are intended to be within the scope of claims associated with
this disclosure.
[00137] The invention includes methods that may be performed using the subject
devices. The
methods may comprise the act of providing such a suitable device. Such
provision may be
performed by the end user. In other words, the "providing" act merely requires
that the end user
obtain, access, approach, position, set-up, activate, power-up or otherwise
act to provide the
requisite device in the subject method. Methods recited herein may be carried
out in any order of
the recited events which is logically possible, as well as in the recited
order of events.
[00138] Example aspects of the invention, together with details regarding
material selection
and manufacture have been set forth above. As for other details of the present
invention, these
may be appreciated in connection with the above-referenced patents and
publications as well as
generally known or appreciated by those with skill in the art. The same may
hold true with
respect to method-based aspects of the invention in terms of additional acts
as commonly or
logically employed.
[00139] In addition, though the invention has been described in reference to
several examples
optionally incorporating various features, the invention is not to be limited
to that which is
described or indicated as contemplated with respect to each variation of the
invention. Various
changes may be made to the invention described and equivalents (whether
recited herein or not
included for the sake of some brevity) may be substituted without departing
from the true spirit
and scope of the invention. In addition, where a range of values is provided,
it is understood that
every intervening value, between the upper and lower limit of that range and
any other stated or
intervening value in that stated range, is encompassed within the invention.
[00140] Also, it is contemplated that any optional feature of the inventive
variations described
may be set forth and claimed independently, or in combination with any one or
more of the
features described herein. Reference to a singular item includes the
possibility that there are
plural of the same items present. More specifically, as used herein and in
claims associated
hereto, the singular forms "a," "an," "said," and "the" include plural
referents unless specifically stated otherwise. In other words, use of these articles allows for
"at least one" of the
subject item in the description above as well as claims associated with this
disclosure. It is
further noted that such claims may be drafted to exclude any optional element.
As such, this
statement is intended to serve as antecedent basis for use of such exclusive
terminology as
"solely," "only" and the like in connection with the recitation of claim
elements, or use of a
"negative" limitation.
[00141] Without the use of such exclusive terminology, the term "comprising"
in claims
associated with this disclosure shall allow for the inclusion of any
additional element--
irrespective of whether a given number of elements are enumerated in such
claims, or the
addition of a feature could be regarded as transforming the nature of an
element set forth in such
claims. Except as specifically defined herein, all technical and scientific
terms used herein are to
be given as broad a commonly understood meaning as possible while maintaining
claim validity.
[00142] The breadth of the present invention is not to be limited to the
examples provided
and/or the subject specification, but rather only by the scope of claim
language associated with
this disclosure.
[00143] The above description of illustrated embodiments is not intended to be
exhaustive or
to limit the embodiments to the precise forms disclosed. Although specific
embodiments and
examples are described herein for illustrative purposes, various equivalent
modifications can be
made without departing from the spirit and scope of the disclosure, as will be
recognized by
those skilled in the relevant art. The teachings provided herein of the
various embodiments can
be applied to other devices that implement virtual reality, augmented reality, or hybrid systems
and/or which employ
user interfaces, not necessarily the example AR systems generally described
above.
[00144] For instance, the foregoing detailed description has set forth various
embodiments of
the devices and/or processes via the use of block diagrams, schematics, and
examples. Insofar as
such block diagrams, schematics, and examples contain one or more functions
and/or operations,
it will be understood by those skilled in the art that each function and/or
operation within such
block diagrams, flowcharts, or examples can be implemented, individually
and/or collectively,
by a wide range of hardware, software, firmware, or virtually any combination
thereof.
[00145] In one embodiment, the present subject matter may be implemented
via Application
Specific Integrated Circuits (ASICs). However, those skilled in the art will
recognize that the
embodiments disclosed herein, in whole or in part, can be equivalently
implemented in standard
integrated circuits, as one or more computer programs executed by one or more
computers (e.g.,
as one or more programs running on one or more computer systems), as one or
more programs
executed on one or more controllers (e.g., microcontrollers), as one or more
programs
executed by one or more processors (e.g., microprocessors), as firmware, or as
virtually any
combination thereof, and that designing the circuitry and/or writing the code
for the software and/or firmware would be well within the skill of one of ordinary skill in the art
in light of the
teachings of this disclosure.
[00146] When logic is implemented as software and stored in memory, logic or
information
can be stored on any computer-readable medium for use by or in connection with
any processor-
related system or method. In the context of this disclosure, a memory is a
computer-readable
medium that is an electronic, magnetic, optical, or other physical device or
means that contains
or stores a computer and/or processor program. Logic and/or the information
can be embodied
in any computer-readable medium for use by or in connection with an
instruction execution
system, apparatus, or device, such as a computer-based system, processor-
containing system, or
other system that can fetch the instructions from the instruction execution
system, apparatus, or
device and execute the instructions associated with logic and/or information.
[00147] In the context of this specification, a "computer-readable medium" can
be any
element that can store the program associated with logic and/or information
for use by or in
connection with the instruction execution system, apparatus, and/or device.
The computer-
readable medium can be, for example, but is not limited to, an electronic,
magnetic, optical,
electromagnetic, infrared, or semiconductor system, apparatus or device. More
specific
examples (a non-exhaustive list) of the computer readable medium would include
the following:
a portable computer diskette (magnetic, compact flash card, secure digital, or
the like), a random
access memory (RAM), a read-only memory (ROM), an erasable programmable read-
only
memory (EPROM, EEPROM, or Flash memory), a portable compact disc read-only
memory
(CDROM), digital tape, and other nontransitory media.
[00148] Many of the methods described herein can be performed with variations.
For
example, many of the methods may include additional acts, omit some acts,
and/or perform acts
in a different order than as illustrated or described.
[00149] The various embodiments described above can be combined to provide
further
embodiments. To the extent that they are not inconsistent with the specific
teachings and
definitions herein, all of the U.S. patents, U.S. patent application
publications, U.S. patent
applications, foreign patents, foreign patent applications and non-patent
publications referred to
in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of
the embodiments can
be modified, if necessary, to employ systems, circuits and concepts of the
various patents,
applications and publications to provide yet further embodiments.
[00150] These and other changes can be made to the embodiments in light of the
above-
detailed description. In general, in the following claims, the terms used
should not be construed
to limit the claims to the specific embodiments disclosed in the specification
and the claims, but
should be construed to include all possible embodiments along with the full
scope of equivalents
to which such claims are entitled. Accordingly, the claims are not limited by
the disclosure.
[00151] Moreover, the various embodiments described above can be combined to
provide
further embodiments. Aspects of the embodiments can be modified, if necessary,
to employ
concepts of the various patents, applications and publications to provide yet
further
embodiments.