Patent 2952470 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2952470
(54) English Title: PARALLAX FREE THIN MULTI-CAMERA SYSTEM CAPABLE OF CAPTURING FULL WIDE FIELD OF VIEW IMAGES
(54) French Title: SYSTEME A APPAREILS PHOTOS MULTIPLES MINCES SANS PARALLAXE PERMETTANT DE CAPTURER DES IMAGES A GRAND CHAMP DE VISION COMPLET
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04N 5/335 (2011.01)
  • G03B 37/00 (2006.01)
(72) Inventors:
  • OSBORNE, THOMAS WESLEY (United States of America)
(73) Owners:
  • QUALCOMM INCORPORATED (United States of America)
(71) Applicants :
  • QUALCOMM INCORPORATED (United States of America)
(74) Agent: SMART & BIGGAR LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2015-06-19
(87) Open to Public Inspection: 2015-12-23
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2015/036648
(87) International Publication Number: WO2015/196050
(85) National Entry: 2016-12-14

(30) Application Priority Data:
Application No. Country/Territory Date
62/015,329 United States of America 2014-06-20
62/057,938 United States of America 2014-09-30
62/073,856 United States of America 2014-10-31
14/743,818 United States of America 2015-06-18

Abstracts

English Abstract

Methods and systems for producing wide field-of-view images are disclosed. In some embodiments, an imaging system includes a front camera having a first field-of-view (FOV) in a first direction and an optical axis that extends through the first FOV, a back camera having an optical axis that extends through the first FOV, a plurality of side cameras disposed between the front camera and the back camera, a back light re-directing reflective mirror component disposed between the back camera and plurality of side cameras, the back light re-directing reflective mirror component further disposed perpendicular to the optical axis of the back camera, and a plurality of side light re-directing reflective mirror components, each of the plurality of side cameras positioned to receive light redirected from one of the plurality of light redirecting reflective mirror components.


French Abstract

La présente invention concerne des procédés et des systèmes permettant de produire des images à grand champ de vision. Dans certains modes de réalisation, un système d'imagerie comprend un appareil photo avant présentant un premier champ de vision (FOV) dans une première direction et un axe optique qui s'étend à travers le premier FOV, un appareil photo arrière présentant un axe optique qui s'étend à travers le premier FOV, une pluralité d'appareils photos latéraux disposés entre l'appareil photo avant et l'appareil photo arrière, un composant de miroir réfléchissant qui redirige la lumière arrière, disposé entre l'appareil photo arrière et la pluralité d'appareils photos latéraux, le composant de miroir réfléchissant qui redirige la lumière arrière étant en outre disposé perpendiculairement à l'axe optique de l'appareil photo arrière, et une pluralité de composants de miroirs réfléchissants qui redirigent la lumière latérale, chaque appareil photo parmi la pluralité d'appareils photos latéraux étant positionné de façon à recevoir la lumière redirigée réfléchie à partir de l'un des multiples composants de miroirs réfléchissants qui redirigent la lumière.

Claims

Note: Claims are shown in the official language in which they were submitted.


What is Claimed is:

1. An imaging system, comprising:
an optical component comprising at least four light redirecting surfaces;
at least four cameras each configured to capture one of a plurality of partial images of a target scene, each of the at least four cameras having:
an optical axis aligned with a corresponding one of the at least four light redirecting surfaces of the optical component,
a lens assembly positioned to receive light representing one of the plurality of partial images of the target scene redirected from the corresponding one of the at least four light redirecting surfaces, and
an image sensor that receives the light after passing of the light through the lens assembly; and
a virtual optical axis passing through the optical component, a point of intersection of the optical axis of at least two of the at least four cameras located on the virtual optical axis.

2. The imaging system of claim 1, wherein cooperation of the at least four cameras forms a virtual camera having the virtual optical axis.

3. The imaging system of claim 1, further comprising a processing module configured to assemble the plurality of partial images into a final image of the target scene.

4. The imaging system of claim 1, wherein the optical component and each of the at least four cameras are arranged within a camera housing having a height of less than or equal to approximately 4.5 mm.

5. The imaging system of claim 1, wherein a first set of the at least four cameras cooperate to form a central virtual camera having a first field of view and a second set of the at least four cameras are arranged to each capture a portion of a second field of view, the second field of view including portions of the target scene that are outside of the first field of view.

6. The imaging system of claim 5, comprising a processing module configured to combine images captured of the second field of view by the second set of the at least four cameras with images captured of the first field of view by the first set of the at least four cameras to form a final image of the target scene.

7. The imaging system of claim 5, wherein the first set includes four cameras and the second set includes four additional cameras, and wherein the optical component comprises eight light redirecting surfaces.

8. The imaging system of claim 1, further comprising a substantially flat substrate, wherein each of the image sensors are positioned on the substrate or inset into a portion of the substrate.

9. The imaging system of claim 1, further comprising, for each of the at least four cameras, a secondary light redirecting surface configured to receive light from the lens assembly and redirect the light toward the image sensor.

10. The imaging system of claim 9, wherein the secondary light redirecting surface comprises a reflective or refractive surface.

11. The imaging system of claim 1, wherein a size or position of one of the at least four light redirecting surfaces is configured as a stop limiting the amount of light provided to a corresponding one of the at least four cameras.

12. The imaging system of claim 1, further comprising an aperture, wherein light from the target scene passes through the aperture onto the at least four light redirecting surfaces.

13. A method of capturing an image substantially free of parallax, comprising:
receiving light representing a target image scene through an aperture;
splitting the light into at least four portions via at least four light redirecting surfaces;
redirecting each portion of the light toward a corresponding camera of at least four cameras each positioned to capture image data from a location of a virtual camera having a virtual optical axis, an optical axis of each of the at least four cameras intersecting with the virtual optical axis; and
for each of the at least four cameras, capturing an image of a corresponding one of the at least four portions of the light at an image sensor.

14. The method of claim 13, wherein cooperation of the plurality of image sensors forms a virtual camera having the virtual optical axis.

15. The method of claim 13, further comprising assembling the images of each portion of the light into a final image.

16. The method of claim 13, wherein splitting the light into at least four portions comprises splitting the light into eight portions via four primary light redirecting surfaces corresponding to four primary cameras and via four additional light redirecting surfaces corresponding to four additional cameras, wherein the four primary cameras and four additional cameras cooperate to form the virtual camera.

17. The method of claim 13, wherein capturing the image of each portion of the light comprises capturing a first field of view of the target image scene using a first set of the at least four cameras and capturing a second field of view of the target image scene using a second set of the at least four cameras, wherein the second field of view includes portions of a target scene that are outside of the first field of view.

18. The method of claim 17, further comprising combining images captured of the second field of view by the second set of the at least four cameras with images captured of the first field of view by the first set of the at least four cameras to form a final image.

19. The method of claim 17, wherein the first set includes four cameras and the second set includes four cameras.

20. An imaging system, comprising:
means for redirecting light representing a target image scene in at least four directions;
a plurality of capturing means each having:
an optical axis aligned with a virtual optical axis of the imaging system and intersecting with a point common to at least one other optical axis of another of the capturing means,
focusing means positioned to receive, from the means for redirecting light, a portion of the light redirected in one of the at least four directions, and
image sensing means that receives the portion of the light from the focusing means;
means for receiving image data comprising, from each of the plurality of capturing means, an image captured of the portion of the light; and
means for assembling the image data into a final image of the target image scene.

21. The imaging system of claim 20, wherein cooperation of the plurality of capturing means forms a virtual camera having the virtual optical axis.

22. The imaging system of claim 20, wherein a first set of the capturing means are arranged to capture a first field of view and a second set of the capturing means are arranged to capture a second field of view, the second field of view including portions of the target scene that are outside of the first field of view.

23. The imaging system of claim 22, wherein the means for assembling the image data combines images of the second field of view with images of the first field of view to form the final image.

24. A method of manufacturing an imaging system, the method comprising:
providing an optical component comprising at least four light redirecting surfaces;
positioning at least four cameras around the optical component, each camera of the at least four cameras configured to capture one of a plurality of partial images of a target scene, wherein positioning the at least four cameras comprises, for each camera:
aligning an optical axis of the camera with a corresponding one of the at least four light redirecting surfaces of the optical component,
further positioning the camera such that the optical axis intersects at least one other optical axis of another of the at least four cameras at a point located along a virtual optical axis of the imaging system, and
providing an image sensor that captures one of the plurality of partial images of the target scene; and
positioning the optical component such that the virtual optical axis passes through the optical component.

25. The method of claim 24, wherein cooperation of the plurality of image cameras forms a virtual camera having the virtual optical axis.

26. The method of claim 24, further comprising positioning a first set of the at least four cameras and corresponding light redirecting surfaces to capture a first field of view and positioning a second set of the plurality of cameras and corresponding light redirecting surfaces to capture a second field of view, wherein the second field of view includes portions of the target scene that are outside of the first field of view.

27. The method of claim 24, further comprising providing a substantially flat substrate and, for each of the at least four cameras, positioning the image sensor on or inset into the substantially flat substrate.

28. The method of claim 24, further comprising, for each of the at least four cameras, providing a lens assembly between the image sensor and the optical component.

29. The method of claim 24, further comprising, for each of the at least four cameras, providing a reflective or refractive surface between the image sensor and the optical component.

30. The system of claim 24, further comprising configuring at least one of the at least four light redirecting surfaces as a stop limiting the amount of light provided to a corresponding image sensor.

Description

Note: Descriptions are shown in the official language in which they were submitted.


PARALLAX FREE THIN MULTI-CAMERA SYSTEM CAPABLE OF
CAPTURING FULL WIDE FIELD OF VIEW IMAGES
TECHNICAL FIELD
[0001] The present disclosure relates to imaging systems and methods that
include a multi-camera system. In particular, the disclosure relates to
systems and
methods for capturing wide field of view images in a thin form factor.
BACKGROUND
[0002] Many mobile devices, such as mobile phones and tablet computing
devices, include cameras that may be operated by a user to capture still
and/or video
images. Because the imaging systems are typically designed to capture high-
quality
images, it can be important to design the cameras or imaging systems to be
free or
substantially free of parallax. Moreover, it may be desired for the imaging
system to
capture an image of a wide field of view scene where the captured image is
parallax free
or substantially parallax free. Imaging systems may be used to capture various
fields of
view of a scene from a plurality of locations near a central point. However,
many of
these designs involve images with a large amount of parallax because the
fields of view
originate from various locations and not from a central point.
SUMMARY
[0003] An example of one innovation includes an imaging system that
includes an optical component with four, eight or more cameras. The optical
component
can include at least four, eight or more light redirecting reflective mirror
surfaces. The at
least four cameras are each configured to capture one of a plurality of
partial images of a
target scene. Each of the at least four cameras has an optical axis, a lens assembly, and an image capture device such as an image sensor, array of sensors, photographic film, etc. (hereafter collectively referred to as an image sensor or sensor). The
optical axis is
aligned with a corresponding one of the at least four light redirecting
reflective mirror
surfaces of the optical component. The lens assembly is positioned to receive
light
representing one of the plurality of partial images of the target scene
redirected from the
corresponding one of the at least four light redirecting reflective mirror
surfaces. The
image sensor receives the light after passing of the light through the lens
assembly.
[0004] An example of another innovation includes a method of capturing an image substantially free of parallax that includes receiving light, splitting light, redirecting
each portion of the light, and capturing an image for each of at least four
cameras. In
some embodiments of this innovation, light that represents a target image
scene is
essentially received through a virtual entrance pupil made up of a plurality
of virtual
entrance pupils associated with each camera and mirror surface pairs within
the camera
system. Received light is split into four or eight portions via at least four
or eight light
redirecting reflective mirror surfaces. Each portion of the light is
redirected towards a
corresponding camera, where each camera-mirror pair are positioned to capture
image
data through a virtual camera entrance pupil.
[0005] An example of another innovation includes an imaging system, the
imaging system including means for redirecting light, a plurality of capturing
means
having an optical axis, focusing means, and image sensing means, means for
receiving
image data, and means for assembling the image data. In some embodiments of
this
innovation, the means for redirecting light directs light from a target image
scene in at
least four directions. A plurality of capturing means each have an optical
axis aligned
with a virtual optical axis of the imaging system and intersecting with a
point common to
at least one other optical axis of another of the capturing means, focusing
means
positioned to receive, from the means for redirecting light, a portion of the
light redirected
in one of the at least four directions, and image sensing means that receives
the portion of
the light from the focusing means. The means for receiving image data may
include a
processor coupled to memory. The means for assembling the image data into a
final
image of the target image scene includes a processor configured with
instructions to
assemble multiple images into a single (typically larger) image.
[0006] An example of another innovation includes a method of manufacturing an imaging system that includes providing an optical component, positioning at
least four
cameras, aligning an optical axis of the camera, further positioning the
camera, providing
an image sensor, and positioning the optical component. In some embodiments of
this
innovation, an optical component is provided that includes at least four light
redirecting
surfaces. At least four cameras are positioned around the optical component.
Each
camera of the at least four cameras is configured to capture one of a
plurality of partial
images of a target scene. The at least four cameras that are positioned
include, for each
camera, aligning an optical axis of the camera with a corresponding one of the
at least
four light redirecting surfaces of the optical component, further positioning
the camera
such that the optical axis intersects at least one other optical axis of
another of the at least
four cameras at a point located along a virtual optical axis of the imaging
system, and
providing an image sensor that captures one of the plurality of partial images
of the target
scene.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The disclosed aspects will hereinafter be described in conjunction with
the appended drawings and appendices, provided to illustrate examples and not
to limit
the disclosed aspects. The reference numbers in each figure apply only to that
figure.
[0008] Figure 1A illustrates an example of a top view of an embodiment of an eight camera imaging system.
[0009] Figure 1B illustrates an example of a top view of an embodiment of an eight camera imaging system.
[0010] Figure 1C illustrates an example of a top view of an embodiment of a four camera imaging system.
[0011] Figure 2A illustrates an example of a side view of an embodiment of a portion of a wide field of view multi-camera configuration including a central camera and a first camera.
[0012] Figure 2B illustrates an example of a side view of an embodiment of a portion of a wide field of view multi-camera configuration that replaces the single central camera of Figure 1B.
[0013] Figure 3A illustrates a schematic of two cameras of an embodiment of a multiple camera configuration.
[0014] Figure 3B illustrates a schematic of two cameras of an embodiment of a multiple camera configuration.
[0015] Figure 4 illustrates an embodiment of a camera shown in Figures 1A-3B and Figures 5-6 and illustrates positive and negative indications of the angles and distances for Figures 1A-3B and Figures 5-6.
[0016] Figure 5 illustrates an embodiment of a side view cross-section of the eight camera system.
[0017] Figure 6 illustrates an embodiment of a side view cross-section of a four camera imaging system.
[0018] Figure 7A shows the top view of a reflective element that can be used as the multi mirror system 700a of Figure 1A.
[0019] Figure 7B illustrates a side view of an embodiment of a portion of an eight camera configuration.
[0020] Figure 8 illustrates a cross-sectional view of cameras 114a and 116b of Figure 5 with a folded optics camera structure for each camera.
[0021] Figure 9 illustrates a cross-sectional side view of an embodiment of a folded optic multi-sensor assembly.
[0022] Figure 10 illustrates an example of a block diagram of an embodiment of an imaging device.
[0023] Figure 11 illustrates blocks of an example of a method of capturing a target image.
DETAILED DESCRIPTION
A. Introduction
[0024] Implementations disclosed herein provide examples of systems,
methods and apparatus for capturing wide field of view images with an imaging
system
that may fit in a thin form factor and that is parallax free or substantially
parallax free.
Aspects of various embodiments relate to an arrangement of a plurality of
cameras (also
referred to herein as a multi-camera system) exhibiting little or no parallax
artifacts in the
captured images. The arrangement of the plurality of cameras captures wide
field of
images, whereby a target scene being captured is partitioned into multiple
images. The
images are captured parallax free or substantially parallax free by designing
the
arrangement of the plurality of cameras such that they appear to have the same
common
real or virtual entrance pupil. The problem with some designs is they do not
have the
same real or virtual common entrance pupil and thus may not be parallax free or,
stated
another way, free of parallax artifacts.
[0025] Each sensor in the arrangement of the plurality of cameras receives
light from a portion of the image scene using a corresponding light
redirecting light
reflective mirror component (which is sometimes referred to herein as "mirror"
or "mirror
component"), or a surface equivalent to a mirror reflective surface.
Accordingly, each
individual mirror component and sensor pair represents only a portion of the
total multi-
camera system. The complete multi-camera system has a synthetic aperture
generated
based on the sum of all individual aperture rays. In any of the
implementations, all of the
cameras may be configured to automatically focus, and the automatic focus may
be
controlled by a processor executing instructions for automatic focus
functionality.
[0026] In various embodiments, the multi-camera system includes four or
eight or more cameras, each camera arranged for capturing a portion of a
target scene
such that eight or four or more or less portions of an image may be captured.
The system
includes a processor configured to generate an image of the scene by combining
all or a
portion of the eight or four or more or less portions of the image. In some
embodiments,
eight cameras (or a plurality of cameras) can be configured as two rings or
radial
arrangements of four cameras each, a virtual center camera formed by
cooperation of the
four cameras in the first ring, wherein the four cameras of the second ring also
capture images from the point of view of the virtual center camera. A
plurality of light
redirecting reflective mirror components are configured to redirect a portion
of incoming
light to each of the eight cameras for the eight camera configuration or each
of the four
cameras for the four camera configuration. The portion of incoming
light from
a target scene can be received from areas surrounding the multi-camera system
by the
plurality of light redirecting reflective mirror components. In some
embodiments, the
light redirecting reflective mirror components may comprise a plurality of
individual
components, each having at least one light redirecting reflective mirror
component. The
multiple components of the light redirecting reflective mirror component may
be coupled
together, coupled to another structure to set their position relative to each
other, or both.
[0027] As used herein, the phrase "parallax free images" (or the like) refers
also to effectively or substantially parallax free images, and "parallax
artifact free
images" (or the like) refers also to effectively or substantially parallax
artifact free
images, wherein minimally acceptable or no visible parallax artifacts are
present in final
images captured by the system.
[0028] As an example, camera systems designed to capture stereographic images using two side-by-side cameras are examples of camera systems that are
not
parallax free. One way to make a stereographic image is to capture images from
two
different vantage points. Those skilled in the art may be aware it may be
difficult or
impossible, depending on the scene, to stitch both stereographic images
together to get
one image without having some scene content duplicated or missing in the final
stitched
image. Such artifacts are provided as examples of parallax artifacts. Further,
those
skilled in the art may be aware that if the vantage points of the two
stereographic cameras
are moved together so that both look at the scene from one vantage point it
should then be
possible to stitch the images together in such a way that parallax artifacts are
not observable.
[0029] For parallax free images, when two or more images are stitched together, image processing is not used to alter the images by adding content or
removing
content from the images or the final stitched together image.
[0030] To produce parallax free images, a single lens camera can be rotated
about a stationary point located at the center point of its entrance pupil
while capturing
images in some or all directions. These images can be used to create a wide
field of view
image showing wide field of view scene content surrounding the center point of
the
entrance pupil of a virtual center camera lens of the system. The virtual
center camera of
the multi-camera system will be further described below with respect to Figure
2A.
These images may have the added property of being parallax free and/or
parallax artifact
free. Meaning, for example, the images can be stitched together in a way where
the scene
content is not duplicated in the final wide field of view image and/or the scene content may not be missing from the final stitched wide field of view image and/or
have other
artifacts that may be considered to be parallax artifacts.
[0031] A single camera can be arranged with other components, such as light
redirecting (for example, reflective or refractive) mirror components, to
appear as if its
entrance pupil center most point is at another location (that is, a virtual location) than the center most point of the actual real camera's entrance pupil that is being
used. In this
way, two or more cameras with other optical components, such as light
redirecting
reflective mirror components for each camera, can be used together to create
virtual
cameras that capture images that appear to be at a different vantage point;
that is, to have
a different entrance pupil center most point located at a virtual location. In
some
embodiments it may be possible to arrange the light redirecting reflective mirror components associated with each respective camera so that two or more cameras may be able to share the same center most point of each camera's virtual camera entrance pupil.
[0032] It can be very challenging to build systems with sufficient tolerance for two or more virtual cameras to share the exact same center most point of each camera's
respective virtual camera entrance pupil. It may be possible given the pixel
resolutions of
a camera system and/or the resolution of the lenses to have the virtual
optical axis of two
or more virtual cameras either intersect or come sufficiently close to
intersecting each
other near or around the center most point of a shared entrance pupil so that
there are few
or no parallax artifacts in the stitched together images or, as the case may
be, the stitched
together images will meet requirements of having less than a minimal amount of
parallax
artifacts in the final stitched together images. That is, without using
special software to
add content or remove content or other image processing to remove parallax
artifacts, one
would be able to take images captured by such cameras and stitch these images together so they produce a parallax free wide field of view image or meet requirements
of a
minimal level of parallax artifacts. In this context one may use the terms
parallax free or
effectively parallax free based on the system design having sufficient
tolerances.
[0033] Herein, when the terms parallax free, free of parallax artifacts, effectively parallax free or effectively free of parallax artifacts are used,
it is to be
understood that the physical realities may make it difficult or nearly
impossible to keep
physical items in the same location over time or even have the property of
being exactly
the same as designed without using tolerances. The realities are things may
change in
shape, size, position, relative position to possible other objects across time
and/or
environmental conditions. As such, it is difficult to talk about an item or
thing as being
ideal or non-changing without assuming or providing tolerance requirements.
Herein the
terms such as effectively parallax free shall mean and be taken to mean the
realities are
most physical items will require having tolerances to where the intended
purpose of the
assembly or item is being fulfilled even though things are not ideal and may
change over
time. The terms of parallax free, free of parallax artifacts, effectively
parallax free or
effectively free of parallax artifacts with or without related wording should
be taken to
mean that it is possible to show tolerance requirements can be determined
such that the
intended requirements or purpose for the system, systems or item are being
fulfilled.
[0034] In the following description, specific details are given to provide a thorough understanding of the examples. However, the examples may be practiced without these specific details.
B. Overview of Example Four and Eight Camera Systems
[0035] Figure 1A illustrates an example of a top view of an embodiment of an eight camera imaging system 100a including a first ring of cameras 114a-d and a second ring of cameras 116a-d that will be further described herein. The wide field of view
camera
configuration 100a also comprises at least several light redirecting
reflective mirror
components 124a-d that correspond to each of the cameras 114a-d in the first
ring of
cameras. Further, the wide field of view camera configuration 100a also
comprises at
least several light redirecting reflective mirror components 126a-d that
correspond to each
of the cameras 116a-d in the first ring of cameras. For instance, the light
redirecting
reflective mirror component ("mirror") 124a corresponds to the camera 114a,
and mirror
126a corresponds to the camera 116a. The mirrors 124a-d and 126a-d reflect
incoming
light towards the entrance pupils of each of the corresponding cameras 114a-d
and 116a-d. In this
embodiment, there is a mirror corresponding to each camera. The light
received by the first ring of four cameras 114a-d and the second ring of four
cameras
116a-d from a mosaic of images covering a wide field of view scene is used to
capture an
image as described more fully below with respect to Figures 1-3, 5 and 6.
Although
described in terms of mirrors, the light redirecting reflective mirror
components may
reflect, refract, or redirect light in any manner that causes the cameras to
receive the
incoming light.
[0036] The component 160, the dashed square line 150 and the elliptic and circular lines will be further described using Figures 2-8 herein.
[0037] The full field of view of the final image after cropping is denoted by
dashed line 170 over component 160. The shape of the cropped edge 170
represents a
square image with an aspect ratio of 1:1. The cropped image 170 can be further
cropped
to form other aspect ratios.
[0038] Figure 1B illustrates a top view of an embodiment of an eight camera
configuration 510. A central reflective element 532 can have a plurality of
reflective
surfaces which can be a variety of optical elements, including but not limited
to one or
more mirrors or as illustrated here, a prism. In some embodiments, a camera
system has
eight (8) cameras 512a-h, each camera capturing a portion of a target image
such that
eight image portions may be captured. The system includes a processor
configured to
generate a target image by combining all or a portion of the eight image
portions,
described further in reference to Figure 7A. As illustrated in Figure 1B, the
eight cameras
512a-h can be configured as two sets of four (4) cameras, four of the cameras
512a, 512c,
512e, 512g collectively forming a virtual central camera, and the other four
cameras
512b, 512d, 512f, 512h are used to create a wider field of view camera. The
central
reflective element 532 is disposed at or near the center of the eight camera
arrangement,
and is configured to reflect a portion of incoming light to each of the eight
cameras 512a-
h. In some embodiments the central reflective element 532 may comprise one
component
having at least eight reflective surfaces. In some other embodiments, the
central
reflective element 532 may be comprised of a plurality of individual
components, each
having at least one reflective surface. The multiple components of the central
reflective
element 532 may be coupled together, coupled to another structure to set their
position
relative to each other, or both.
[0039] In some embodiments, an optical axis (e.g., 530) of each camera of the
eight cameras 512a-h can intersect any location on its associated central
object side
reflective surface. With this freedom of positioning and orienting the
cameras, each of
the cameras can be arranged such that its optical axis is pointed to a certain
location on a
corresponding associated reflective surface (that reflects light to the
camera) that may
yield a wider aperture than other intersection points on its associated
reflective surface.
Generally, the wider the aperture, the lower the f-number of a camera can be,
provided
the effective focal length of the camera remains substantially the same. Those
skilled in
the art may be aware that the lower the f-number the higher the diffraction
limit of the
optical system may be. The shape of the aperture may affect the shape of the
Point Spread
Function (PSF) and/or Line Spread Function (LSF) of the lens system and can be
spatially
different across the image plane surface. The aperture of the system can be
affected by
the reflective surface if not all the rays arriving from a point in the object
space are
reflected to the camera lens assembly, with respect to the rays that would
have entered the
camera if the center object side reflective surface associated with the camera
were not
present, where it is to be understood that in this case the camera's actual
physical location
would be at its vertical location with the same common entrance pupil with all
the other
cameras in the system.
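For reference, the aperture/f-number relationship invoked in this paragraph is the standard one from elementary optics; the formula below is a reminder rather than notation defined by the patent:

```latex
% f-number of a camera (standard optics; symbols are not the patent's own):
% N = f/D, with f the effective focal length and D the entrance pupil
% (effective aperture) diameter. At fixed f, a wider effective D lowers N.
N = \frac{f}{D}, \qquad \text{e.g. } f = 4\ \text{mm},\ D = 2\ \text{mm} \Rightarrow N = 2
```

A reflective surface that clips part of the incoming ray bundle reduces the effective D for its camera and therefore raises N, which is the stop behavior described in the next paragraph.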
[0040] As an example, the object side reflective surface associated with a
camera can act as an aperture stop if it does not reflect rays that would
normally enter the
camera lens system that would normally enter if the reflective surface were
not present.
As another example, the optical axis of the camera can intersect near an edge
of the
associated reflective surface and thereby reduce the visible area of the
reflective surface
associated with that camera. The rays outside of this area may not be
reflected so that
they enter the lens assembly of the camera as it would if the associated
reflective surface
were not present, whereby in this way the reflective surface can be considered
a stop and
as a result the effective aperture will be reduced relative to pointing at a
location that
would reflect more of the rays. Another advantage of being able to choose any
location
on the reflective surface as an intersect point of an associated camera is the
image area on
the image plane can be increased or maximized. For example, some embodiments
may
point at a location closer to an edge of the reflective surface and thereby
reduce the image
area as compared to another intersection point on the associated reflective
surface which
may produce a wider image area. Another advantage of choosing any intersection
point
on the reflective surface is an intersection location can be found that will
produce a
desired Point Spread Function (PSF) or Line Spread Function (LSF) across the
image
plane, for example a particular PSF or LSF shape at a subset of areas in the
image area or
across the image area. Another advantage of being able to change the
intersection point
of a camera's optical axis on the reflective surface is the ability during
calibration to find
an alignment between all the cameras that will yield a desired orientation of
the reflective
surfaces in order to optimize all factors such as the image areas of the
cameras and the
shape of the PSF and LSF as seen across the image areas of the other cameras.
Another
advantage of being able to select the intersection point of the center
reflective surface
associated with a camera is added degrees of freedom when designing or
developing the
shape of the reflective surface in order to yield a desired orientation of the
reflective
surfaces in order to optimize all factors such as the image areas of the
cameras and the
shape of the PSF and LSF as seen across the image areas of the other cameras.
It should
be understood the reflective surfaces of the center object side reflector or
refractive
reflector element are part of the entire optical system so the shape of these
surfaces can be
other than planar and considered part of the optical system for each and every
camera.
For example the shape of each surface can be spherical, aspherical, or complex
in other
ways.
[0041] Figure 1C illustrates a top view of an example of an embodiment of a
four camera configuration 110. In some embodiments, a camera system has four
(4)
cameras 112a-d, each camera capturing a portion of a scene such that four
images may be
captured. The system includes a processor configured to generate an image of
the scene
by combining all or a portion of the four images. As illustrated in Figure 1C,
the four
cameras 112a-d can be configured as a set of four (4) cameras, the four
cameras 112a-d
collectively forming a virtual central camera. A reflective element 138 is
disposed at or
near the center of the four camera arrangement, and is configured to reflect a
portion of
incoming light to each of the four cameras 112a-d. In some embodiments the
reflective
element 138 may comprise one component having at least four reflective
surfaces. In
some other embodiments, the reflective element 138 may comprise a plurality of individual components, each having at least one reflective surface. Because
Figure 1C
illustrates a top view, the fields of view 120, 122, 124, 126 are illustrated
as circles. The
reflective surfaces 140, 142, 144, 146 can be a variety of optical elements,
including but
not limited to one or more mirrors or as illustrated here, a prism. The
multiple
components of the reflective element 138 may be coupled together, coupled to
another
structure to set their position relative to each other, or both.
[0042] In some embodiments, the optical axes 128, 130, 132, 134 of each
camera of the four cameras 112a-d can intersect any location on its associated
central
object side reflective surface 140, 142, 144, 146, so long as the cameras
cooperate to form
a single virtual camera. Further details of positioning the cameras and
aligning their
respective optical axes is described in reference to Figures 4A and 4B. With
this freedom
of positioning and orienting the cameras, each of the cameras can be arranged
such that
its optical axis is pointed to a certain region on a corresponding associated
reflective
surface 140, 142, 144, 146 (that reflects light to the camera) that may yield
a wider
aperture than other intersection points on its associated reflective surface
140, 142, 144,
146. Generally, the wider the aperture, the lower the f-number of a camera can
be,
provided the effective focal length of the camera remains substantially the
same. Those
skilled in the art may be aware that the lower the f-number the higher the
diffraction limit
of the optical system may be. The shape of the aperture may affect the shape
of the Point
Spread Function (PSF) and/or Line Spread Function (LSF) of the lens system and
can be
spatially different across the image plane surface.
[0043] Reflective surfaces 140, 142, 144, 146 can reflect light along the
optical axes 128, 130, 132, 134 such that each of the corresponding cameras
112a-d can
capture a partial image comprising a portion of the target image according to
each
camera's field of view 120, 122, 124, 126. The fields of view 120, 122, 124,
126 may
share overlapping regions 148, 150, 152, 154. The captured portions of the
target image
for each of cameras 112a-d may share the same or similar content (e.g.,
reflected light)
with respect to the overlapping regions 148, 150, 152, 154. Because the
overlapping
regions 148, 150, 152, 154 share the same or similar content, this content can
be used by
an image stitching module to output a target image. Overlaying image portion
136
includes portions of the reflected portions of the target image. Using a
stitching
technique, the stitching module can output a target image to an image
processor. For
example, overlapping regions 148, 150, 152, 154 of the fields of view 120,
122, 124, 126
may be used by an image stitching module to perform a stitching technique on
the partial
images captured by cameras 112a-d and output a stitched and cropped target
image to an
image processor.
[0044] In order to output a single target image, the image stitching module
may configure the image processor to combine the multiple partial images to
produce a
high-resolution target image. Target image generation may occur through known
image
stitching techniques. Examples of image stitching can be found in U.S. Patent
Application No. 11/623,050 which is hereby incorporated by reference.
[0045] For example, the image stitching module may include instructions to
compare the areas of overlap along the edges of the partial images for
matching features
in order to determine rotation and alignment of the partial images relative to
one another.
Due to rotation of partial images and/or the shape of the field of view of
each sensor, the
combined image may form an irregular shape. Therefore, after aligning and
combining
the partial images, the image stitching module may call subroutines which
configure the
image processor to crop the combined image to a desired shape and aspect
ratio, for
example a 4:3 rectangle or 1:1 square. The cropped image may be sent to the
device
processor for display on the display or for saving in the storage.
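Paragraphs [0044] and [0045] describe a conventional stitch-then-crop pipeline: match features in the overlap regions, align and composite the partial images, then crop to a target aspect ratio. A minimal sketch of that flow appears below; it is an illustration only, not the patent's stitching module — it leans on OpenCV's generic high-level stitcher, and the input file names are hypothetical placeholders:

```python
# Minimal stitch-then-crop sketch of the flow in [0044]-[0045].
# Illustration only: uses OpenCV's generic stitcher rather than the
# stitching module described here; file names are hypothetical.
import cv2

# One partial image per camera (e.g., cameras 112a-d of Figure 1C).
paths = ["cam_112a.png", "cam_112b.png", "cam_112c.png", "cam_112d.png"]
partials = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(partials)
if status != cv2.Stitcher_OK:
    raise RuntimeError(f"stitching failed with status code {status}")

# Crop the irregular composite to a 1:1 square, as with edge 170 of Figure 1A.
h, w = pano.shape[:2]
side = min(h, w)
y0, x0 = (h - side) // 2, (w - side) // 2
cv2.imwrite("wide_fov_square.png", pano[y0:y0 + side, x0:x0 + side])
```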
C. Overview of Parallax Free Camera Positioning
[0046] The imaging system of Figure 2A includes a plurality of cameras.
Central camera 112 is located in a position having a first field of view a
directed towards
a first direction. The first field of view a, as shown in Figure 2A, faces a
first direction
which can be any direction the central camera 112 is facing. The central
camera 112 has
an optical axis 113 that extends through the first field of view a. The image
being
captured by central camera 112 in the first field of view a is around a
projected optical
axis 113 of the central camera 112, where the projected optical axis 113 of
central camera
112 is in the first direction.
[0047] Figure 2B illustrates a side cross-section view of the central camera
112, camera 116a and its associated mirror component 126a. Each of the side cameras 116a-d is positioned around the illustrated optical axis
113 of
camera 112. Each of the plurality of side cameras 116a-d may be referred to as
a
"concentric ring" of cameras, in reference to each of the pluralities of side
cameras 116a-
d forming a ring which is concentric to the illustrated line 113 which is the
optical axis of
the actual camera 112. For clarity, only one camera from each of the rings
116a-d and the
central camera 112 are shown in Figures 2A and 2B. Side camera 116a is part of
a
second concentric ring of 4 cameras, each of the 4 cameras being positioned 90
degrees
from its neighboring camera to form a 360 degree concentric ring of cameras.
Side
cameras 114a-d are not shown in Figure 2A. Similarly, cameras 114a-d are part of a first concentric ring of cameras positioned similarly to the cameras of the second concentric ring of cameras, which will be further described when Figure 3 is explained.
The term
"ring" is used to indicate a general arrangement of the cameras around, for
example, line
113, the term ring does not limit the arrangement to be circular-shaped. The
term
"concentric" refers to two or more rings that share the same center or axis.
[0048] As shown in Figure 2A, the radius 1542b of each second concentric ring about the optical axis 113 is the distance from the optical axis line
113 to the center
most point of the entrance pupil of camera 116a. Similarly as shown in Figure
2B, the
radius 1541a of the first concentric ring about the optical axis 113 is the
distance from the
optical axis line 113 and the center most point of the entrance pupil for
camera 114a. In
some embodiments the radius distances 1542d and 1541a may be equal for all
cameras
116a-d and cameras 114a-d, respectively. It is not necessary that the radius
distance
1542d is equal for all cameras in the second concentric rings. Similarly, it
is not
necessary the radius 1541a is equal for all cameras in the first concentric
ring. The
embodiment shown in Figure 2A has the same radius 1542b for all cameras 116a-d
and
similarly the embodiment shown in Figure 2B has the same radius 1541a for all
cameras
114a-d.
[0049] The first concentric ring of cameras 114a-d are arranged and
configured to capture images in a third field of view c in a direction along
an optical axis
115. The second concentric ring of cameras 116a-d are arranged and configured
to
capture images in a second field of view b in a direction along an optical
axis 117.
[0050] In another embodiment, the side cameras 114a-d, 116a-d, are each
respectively part of a first, and second set of array cameras, where each of
the first, and
second set of array cameras collectively have a field of view that includes at
least a
portion of the target scene. Each array camera includes an image sensor. The
image
sensor may be perpendicular and centered about the optical axis 186a-d of each
respective
camera 116a-d as shown schematically in Figure 2A for the second concentric
ring.
Similarly, the image sensor may be perpendicular and centered about to the
optical axis
184a-d of each respective camera 114a-d as shown schematically in Figure 2B
for the
first concentric ring.
[0051] As will be shown herein it may be possible to replace camera 112
shown in Figure 2A with a field of view "a" with the first concentric ring of
cameras
114a-d as shown in Figure 2B if the field of view "c" is approximately greater
than or equal to
one-half the field of view "a". In such a case cameras 116a-d in the second
concentric
ring and cameras 114a-d in the first concentric ring can be configured and
arranged such
that images captured by all cameras 114a-d and 116a-d may collectively
represent a wide
field of view image as seen from a common perspective vantage point located
substantially or effectively at the centermost point of the virtual entrance pupil of all the cameras 114a-d and 116a-d of the imaging system, where the center most point
of the
virtual entrance pupil of all the cameras 114a-d and 116a-d have been
configured and
arranged such that the centermost point of all virtual entrance pupils are
substantially or
effectively at one common point in space.
[0052] The imaging concentric ring systems shown in Figures 2A and 2B include light redirecting reflective mirror surfaces 134a-d for the first concentric ring shown in Figure 2B and light redirecting reflective mirror surfaces 136a-d for the second concentric ring shown in Figure 2A.
[0053] In each of the above light redirecting reflective mirror components
134a-d, 136a-d, the light redirecting reflective mirror components 134a-d,
136a-d, include
a plurality of reflectors.
[0054] As will now be described, the wide field of view camera configuration
100a comprises various angles and distances that enable the wide field of view
camera
configuration 100a to be parallax free or effectively parallax free and to
have a single
virtual field of view from a common perspective. Because the wide field of
view camera
configuration 100a has a single virtual field of view, the configuration 100a
is parallax
free or effectively parallax free.
[0055] In some embodiments, such as that shown in Figures 1A-2B, the
single virtual field of view comprises a plurality of fields of view that
collectively form a
wide field of view scene as if the virtual field of view reference point of
each of cameras
114a-d, and 116a-d have a single virtual point of origin 145, which is the
effective center
most point of the entrance pupil of the camera system 100a located at point
145. The first
concentric ring of cameras 114a-d captures a portion of a scene according to
angle c, its
virtual field of view from the single point of origin 145, in a direction
along the optical
axis 115. Second concentric ring camera 116a-d captures a portion of a scene
according
to angle b, its virtual field of view from the single point of origin 145.
The first concentric ring of cameras 114a-d and the second concentric ring of cameras 116a-d have collective virtual fields of view that will capture a wide field of view scene that includes at least the various angles b and c of the virtual fields of view. In order to
capture a wide
field of view, all of the cameras 114a-d, 116a-d individually need to have
sufficiently
wide enough fields of view to assure all the actual and/or virtual fields of
view fully
overlap with the actual and/or virtual neighboring fields of view to be sure
all image
content in the wide field of view may be captured.
[0056] The single virtual field of view appears as if each of the cameras is
capturing a scene from a single point of origin 145 despite the actual
physical locations of
the cameras being located at various points away from the single point of
origin 145. As
shown in Figure 2B the virtual field of view of the first camera 114a would be
as if the
first camera 114a were capturing a scene of field of view c from the center
most point of
the virtual entrance pupil located at 145. And similarly, the virtual field of
view of the
second camera 116a as shown in Figure 2A would be as if the second camera 116a
were
capturing a scene of field of view b from the center most point of the virtual
entrance
pupil located at 145. Accordingly, the first camera 114a and second camera 116a
have a
single virtual field of view reference point at the center most point of the
virtual entrance
pupil located at 145.
[0057] In other embodiments, various fields of view may be used for the
cameras. For example, the first camera 114a may have a narrow field of view,
the second
camera 116a may have a wide field of view, the third camera 114b may have a
narrower
field of view and so on. As such, the fields of view of each of the cameras
need not be
the same to capture a parallax free or effectively parallax free image.
However, as
described below in an example of one embodiment and with reference to the
figures and
tables, the cameras have actual fields of view of approximately 60 degrees so
that it may
be possible to essentially overlap the neighboring fields of view of each
camera in areas
where the associated mirrors and component are not blocking or interfering
with the light
traveling from points in space towards associated mirrors and then on to each
respective
cameras actual entrance pupil. In the embodiment described below, the fields
of view
essentially overlap. However, overlapping fields of view are not necessary for
the
imaging system to capture a parallax free or effectively parallax free image.
[0058] The above described embodiment of a parallax free or effectively
parallax free imaging system and virtual field of view is made possible by
various inputs
and outputs as listed in the following tables of angles, distances and
equations.
[0059] One concept of taking multiple images that are free of parallax artifacts or effectively free of parallax artifacts is to capture images of a scene in the object space by pivoting the optical axis of a camera where the center most point of the camera's entrance pupil remains in the same location each time an image is captured. Those skilled in the art of capturing panoramic pictures with no or effectively minimal parallax artifacts may be aware of such a method. To carry out this process one may align the optical axis of camera 112, as shown in Figure 2A, along the optical axis 115, as shown in Figure 2B, and place the center most point of the camera 112 entrance pupil to contain point 145, where in this position the optical axis of camera 112 should be at an angle h1 from the camera system optical axis 113, where optical axes 113 and 115 effectively intersect each other at or near the point 145. At this position an image can be captured. As the next step one may rotate the optical axis of camera 112 clockwise to the optical axis 117 as shown in Figure 2A, where in this position the optical axis of camera 112 should be at an angle (2*h1+h2) from the camera system optical axis 113, where optical axes 113, 115 and 117 effectively intersect each other at or near the point 145. In both angular directions 115 and 117 the point 145 is kept at the center most point of the camera 112 entrance pupil while keeping the optical axis of camera 112 in the plane of the page shown respectively in Figures 2A and 2B, and then a second image is captured. Let's further assume the field of view of camera 112 is actually greater than the larger of angles 2*f2, 2*h1 and 2*h2. Both these images should show similar object space image content of the scene where the fields of view of the two images overlap. When the images are captured in this way it should be possible to merge these two images together to form an image that has no parallax artifacts or effectively no parallax artifacts. Those skilled in the art of merging two or more images together may understand what parallax artifacts may look like and appreciate the objective to capture images that are free of parallax or effectively free of parallax artifacts.
[0060] It may not be desirable to capture parallax free or effectively parallax
free images by pivoting the optical axis of a camera about its entrance pupil
location. It
may be preferable to use two cameras fixed in position with respect to each
other. In this
situation it may not be possible to make two cameras with their entrance
pupils occupying
the same physical location. As an alternative one may use a light redirecting
reflective
mirror surface to create a virtual camera that has its entrance pupil center
point containing
or effectively containing the entrance pupil center point of another camera
such as 112,
such as that shown in Figure 2A. This is done by appropriately positioning a
light
redirecting reflective mirror surface, such as surface 136a, and the second
camera, such as
116a. Figure 2A provides a drawing of such a system where a light redirecting
reflective
mirror surface 136a is used to create a virtual camera of camera of 116a,
where the center
of the virtual camera entrance pupil contains point 145. The idea is to
position the light
redirecting reflective mirror surface 136a and place camera 116a entrance
pupil and
-16-

CA 02952470 2016-12-14
WO 2015/196050
PCT/US2015/036648
optical axis in such a way camera 116a will observe off the light redirecting
reflective
mirror 136a reflective surface the same scene its virtual camera would observe
if the light
redirecting reflective mirror surface was not present. It is important to
point out the
camera 116a may observe only a portion of the scene the virtual camera would
observe
depending on the size and shape of the light redirecting reflective mirror
surface. If the
light redirecting reflective mirror surface 136a only occupies part of the
field of view of
camera 116a then camera 116a would see only part of the scene its virtual
camera would
see.
[0061] Once one selects values for the length 1522a and the angles f2, h2 and k2, as shown in Figure 2A, one can use the equations of Table 1 to calculate the location of the camera 116a entrance pupil center point and the angle of its optical axis with respect to line 111. The entrance pupil center point of camera 116a is located a distance 1542a from the multi-camera system's optical axis 113 and a distance 1562a from the line 111, which is perpendicular to line 113. Figure 4, described below, provides the legend showing the angular rotation direction depending on the sign of an angle, and the direction of lengths from the intersection point of lines 111 and 113 depending on the sign of a length.
TABLE 1
Inputs
(Distance 1522a)   2 mm
f2                 21 deg
h2                 15 deg
k2                 27 deg
Outputs
u1                 27 = k2 deg
u2                 -63 = -90 + u1 deg
j2                 39 = 90 - (f2 + 2 * h2) deg
(Distance 158a)    2.142289987 = (Distance 1522a) / cos(f2) mm
(Distance 150a)    0.76772807 = (Distance 158a) * sin(f2) mm
(Distance 160a)    1.592031719 = (Distance 158a) * cos(2 * h2 - u1 + j2) mm
(Distance 1562a)   1.445534551 = 2 * (Distance 160a) * sin(u1) mm
(Distance 1542a)   2.837021296 = 2 * (Distance 160a) * cos(u1) mm
m2                 63 = 90 - (h2 + j2 - u1) deg
n2                 63 = m2 deg
p2                 63 = n2 deg
q2                 180 = 180 - (180 - (h2 + j2 + p2 + m2)) deg
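For readers who wish to check the Table 1 values numerically, the equations can be expressed as a short Python sketch. The function name ring_camera_geometry and the dictionary keys are invented for this illustration and are not part of the original disclosure; angles are in degrees and lengths in mm:

    import math

    def ring_camera_geometry(dist, f, h, k):
        # dist: length 1522a; f: half angle FOV of the central camera (f2);
        # h: half angle FOV of the ring camera (h2); k: mirror angle (k2).
        cosd = lambda a: math.cos(math.radians(a))
        sind = lambda a: math.sin(math.radians(a))
        u1 = k
        u2 = -90 + u1
        j = 90 - (f + 2 * h)
        d158 = dist / cosd(f)                # (Distance 158a)
        d150 = d158 * sind(f)                # (Distance 150a)
        d160 = d158 * cosd(2 * h - u1 + j)   # (Distance 160a)
        d1562 = 2 * d160 * sind(u1)          # pupil offset from plane 111
        d1542 = 2 * d160 * cosd(u1)          # pupil offset from optical axis 113
        m = 90 - (h + j - u1)                # m2 = n2 = p2
        q = 180 - (180 - (h + j + m + m))    # q2
        return {"u1": u1, "u2": u2, "j": j, "158": d158, "150": d150,
                "160": d160, "1562": d1562, "1542": d1542,
                "m": m, "n": m, "p": m, "q": q}

    # Reproduces the Table 1 outputs:
    # 158a = 2.1423, 150a = 0.7677, 160a = 1.5920,
    # 1562a = 1.4455, 1542a = 2.8370, q2 = 180
    print(ring_camera_geometry(2, 21, 15, 27))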
[0062] The distances, angles and equations in Tables 1 and 2 will now be described with reference to Figures 2A and 2B. With reference to Figures 2A and 2B, line 111 can be thought of as a plane containing the virtual entrance pupil 145 and is perpendicular to the multi-camera system optical axis 113, where the optical axis 113 is contained in the plane of the page. The center most point of the virtual entrance pupil 145 is ideally located at the intersection of the plane 111 and the optical axis 113, where the plane 111 is perpendicular to the page displaying the figure. In actual fabrication, variations in components and positioning may result in the center point of the entrance pupil 145 not being at the intersection of the optical axis 113 and the plane 111; likewise, the actual location and alignment of the virtual entrance pupil center most point of camera 114a, as shown in Figure 2B, may not exactly coincide with the common virtual entrance pupil 145. In these cases we use the concept of "effective", or the equivalent wording "effectively", to mean that if tolerance requirements can be determined such that the intended requirements and/or purposes for the system, systems or item are fulfilled, then both the ideal case and the case within the aforementioned tolerances may be considered equivalent as to meeting the intended requirements and/or purposes. Hence, within tolerances, the virtual entrance pupil 145 effectively coincides with the virtual entrance pupil of camera 114a and with the center most point of the virtual entrance pupil of all of the cameras used in the multi-camera system, such as cameras 114a-d and 116a-d of the embodiment shown and/or described in Figures 1A-11 herein. Further, the optical axes of all the cameras, such as 114a-d and 116a-d, effectively intersect with the plane 111, the optical axis 113 and the multi-camera system common virtual entrance pupil center most point 145.
[0063] The meaning of "the current camera" will change for each of Tables 1 and 2. For Table 2, we will refer to the camera having the half angle field of view of h1 as being the current camera. The current camera as it pertains to Table 2 applies to the set of cameras 114a-d.
[0064] The
current camera and all of the cameras used for an embodiment
may each be a camera system containing multiple cameras or may be another type
of
camera that may be different than a traditional single barrel lens camera. In
some
embodiments, each camera system used may be made up of an array of cameras or
a
folded optics array of cameras.
TABLE 2
Inputs
(Distance 1521a)   4 mm
f1                 0 deg
h1                 15 deg
k1                 37.5 deg
Outputs
u1                 37.5 = k1 deg
u2                 -52.5 = -90 + u1 deg
j1                 60 = 90 - (f1 + 2 * h1) deg
(Distance 158a)    4 = (Distance 1521a) / cos(f1) mm
(Distance 150a)    0 = (Distance 158a) * sin(f1) mm
(Distance 160a)    2.435045716 = (Distance 158a) * cos(2 * h1 - u1 + j1) mm
(Distance 1561a)   2.96472382 = 2 * (Distance 160a) * sin(u1) mm
(Distance 1541a)   3.863703305 = 2 * (Distance 160a) * cos(u1) mm
m1                 52.5 = 90 - (h1 + j1 - u1) deg
n1                 52.5 = m1 deg
p1                 52.5 = n1 deg
q1                 180 = 180 - (180 - (h1 + j1 + p1 + m1)) deg
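The same illustrative sketch reproduces Table 2, since Table 2 uses the equations of Table 1 with the subscript-1 inputs:

    # First-ring (Table 2) inputs: 1521a = 4 mm, f1 = 0, h1 = 15, k1 = 37.5.
    # Expected outputs: 160a = 2.4350, 1561a = 2.9647, 1541a = 3.8637, q1 = 180.
    print(ring_camera_geometry(4, 0, 15, 37.5))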
[0065] Below we will refer to the term "first camera" because it is from the first ring of cameras. Similarly, we will refer to the "second camera" because it is from the second ring of cameras. In Figure 2A, the angles and distances of Table 1 are illustrated. The entrance pupil of the second camera 116a is offset from the virtual entrance pupil 145 according to Distance 1542a and Distance 1562a. Distance length 1542a represents the coordinate position between the optical axis 113 and the entrance pupil center point of the second camera 116a, where the distance 1542a is measured perpendicular to the optical axis 113. Here, the current camera is the second camera 116a.
[0066] Distance length 1562a represents the coordinate position between the plane 111 and a plane that contains the entrance pupil center point of the second camera 116a and is parallel to plane 111. Here, the current camera is the second camera 116a.
[0067] Still referring to Figure 2A, point 137 shown in Figure 2A for system 200a is located on the plane of the page showing Figure 2A and is a distance 150a from the optical axis 113 and a distance 1522a from the line formed by the intersection of plane 111 and the plane of the page for Figure 2A. For ease of explanation we will sometimes refer to line 111, which is to be understood as the line formed by the intersection of plane 111 and the plane of the page showing Figure 2A.
[0068] Planar light redirecting reflective mirror surface 136a is shown by the line formed by the intersection of the planar surface 136a and the plane of the page showing Figure 2A. For the purpose of explaining Figures 2A and 2B we will assume planar surfaces 134a and 136a are perpendicular to the plane of the page. However, it is important to point out that the planar surfaces 134a and 136a do not need to be perpendicular to the plane of the page.
[0069] When we
refer to line 136a it is to be understood we are referring to
the line formed by the intersection of planar surface 136a and the plane of
the page. Also,
when we refer to line 134a it is to be understood we are referring to the line
formed by the
intersection of planar surface 134a and the plane of the page.
[0070] Table 1 provides the angle k2, which is the clockwise rotation angle from the line 136a to a line that is parallel to the optical axis 113 and also contains point 137, where point 137 is also contained in the plane of the page and in line 136a. The field of view edges of camera 112 are shown by the two intersecting lines labeled 170a and 170b, where these two lines intersect at the center most point 145 of the entrance pupil of camera 112. The half angle field of view of camera 112 is f2, measured between the multi-camera optical axis 113 and each of the field of view edges 170a and 170b.
[0071] As shown in Figure 2A, camera 112 has its optical axis coinciding with line 113. The half angle field of view of camera 116a is h2 with respect to the camera 116a optical axis 117. The optical axis of camera 116a is shown being redirected off of light redirecting reflective mirror surface 136a. Assume the light redirecting reflective mirror surface 136a is perfectly flat and is a plane surface perpendicular to the plane of the page of Figure 2A. Further assume the light redirecting reflective mirror planar surface 136a fully covers the field of view of camera 116a. As shown in Figure 2A, the optical axis 117 intersects a point on the planar light redirecting reflective mirror surface 136a. Counter clockwise angle p2 is shown going from light redirecting reflective mirror surface 136a to the optical axis 117 of camera 116a. Based on the properties of light reflection off a mirror or equivalent light reflecting mirror surface, and the assumption that the lines shown in Figure 2A are contained in the plane of the page of Figure 2A, we find counter clockwise angles m2 and n2 are equal to p2. A light ray may travel along the optical axis 117 towards camera 116a within the plane of the page showing Figure 2A and reflect off the light redirecting reflective mirror equivalent surface 136a towards the center point of the entrance pupil of camera 116a, where the angles n2 and p2 must be equal based on the properties of light reflection off mirror equivalent surfaces. The optical axis 117 of camera 116a is shown extending past the light reflecting surface 136a towards the virtual entrance pupil center point 145, where the virtual entrance pupil center most point is effectively located. Counter clockwise rotation angle m2 can be shown to be equal to n2 based on trigonometry.
[0072] For all surfaces 136a-d and 134a-d shown we will assume, for the purpose of explaining the examples described herein, that these surfaces are planar and perpendicular to the plane of the page in the figures as well as in the descriptions.
[0073] From this it can be shown that an extended line containing the planar light redirecting reflective mirror surface 136a will perpendicularly intersect the line going from the entrance pupil center point of camera 112 to the entrance pupil center point of camera 116a. Hence the two line segments of length 160a can be shown to be equal.
[0074] It is possible the planar light redirecting reflective mirror surface 136a covers only part of the field of view of camera 116a. In this case not all the rays that travel from the object space towards the virtual camera entrance pupil, which contains point 145 at its center most point as shown in Figure 2A, will reflect off the planar portion of the light redirecting reflective mirror surface 136a that partially covers the field of view of camera 116a. From this perspective it is important to keep in mind that camera 116a has a field of view defined by the half angle field of view h2, the optical axis 117 and the location of its entrance pupil as described by lengths 1542a and 1562a and the legend shown in Figure 4. Within this field of view a surface such as the light reflecting planar portion of the light redirecting reflective mirror surface 136a may be partially in its field of view. The light rays that travel from the object space toward the entrance pupil of the virtual camera of camera 116a and reflect off the planar portion of light redirecting reflective mirror surface 136a will travel onto the entrance pupil of camera 116a, provided the planar portion of light redirecting reflective mirror surface 136a and cameras 112 and 116a are positioned as shown in Figure 2A, in accordance with the legend shown in Figure 4, the equations of Table 1 and the input values 1522a, f2, h2 and k2.
[0075] Figure 2B illustrates a side view of an example of an embodiment of a portion of the wide field of view camera configuration 300a including a first camera 114a. Notice that it does not include camera 112. This is because camera system 300a can be used in place of camera 112 shown in Figure 2A. The parameters, angles and values shown in Table 2 will place the camera 114a entrance pupil, optical axis 115 and the respective mirror 134a positions such that camera 114a will cover a portion of the camera 112 field of view. If we use the equations of Table 1 to calculate the positions of cameras 114b-d in the same way as we did for 114a, then it should be possible to capture images that collectively will include the field of view a of camera 112, provided the half field of view h1 is greater than or equal to f2 and the actual fields of view of cameras 114a-d are sufficiently wide so that when the collective images are stitched together the scene content of camera 112 will be within the scene content of the stitched together images of the camera system 300a. In this example camera system 300a will be used to replace camera 112, provided camera system 300a captures the same scene content within the circular field of view a of camera 112 as shown in Figure 2A. In a more general view, camera 112 may not be necessary if the images captured by cameras 114a-d and cameras 116a-d collectively contain, after the images are stitched together, the same scene content as that captured by camera 112 and cameras 116a-d after their images are stitched together. In this embodiment, the first camera 114a is the current camera as shown in Figure 2B.
[0076] The intended meaning of the phrase "scene content", and of similar phrases, is that the scene content relates to the light traveling in a path from points in the object space towards the camera system. The scene content carried by the light is contained in the light just before entering the camera system. The camera system may affect the fidelity of the image captured; that is, the camera system may alter the light, add artifacts and/or add noise to the light before or during the process of capturing an image from the light with the image detector. Other factors related to the camera system, and aspects outside of the camera system, may also affect the fidelity of the image capture with respect to the scene content contained in the light just before entering the camera system.
[0077] The above distances, angles and equations have a similar relationship as described above with respect to Figure 2A. Some of the inputs of Table 2 differ from the inputs of Table 1. In Figure 2B and Table 2, some of the distances have identification numbers and a subscript "a", such as 1521a, 1541a and/or 1561a, and some of the angles have a subscript "1". These subscripted distances and angles of Table 2 have a similar relationship to the subscripted distances and angles of Figure 2A and Table 1. For example, Figure 2A and Table 1 may show similar identification numbers with subscript "a", such as 1522a, 1542a and/or 1562a, and some of the angles may have subscript "2" instead of "1".
[0078] One way to design a multi-camera system will now be explained. One approach is to develop a multi-camera system using the model shown in Figure 2A, the legend shown in Figure 4 and the equations shown in Table 1. One of the first decisions is to determine whether the central camera 112 will be used. If the central camera 112 is not to be used, then the half angle field of view f2 should be set to zero. In the example presented in Tables 1 and 2 and Figures 2A and 2B, the half field of view angle f2 shown in Table 1 is not zero, so an actual central camera 112 is part of the schematic design shown in Figure 2A and described in Table 1. Next, the half angle field of view h2 may be selected based on other considerations those designing such a system may have in mind. The length 1522a, as shown in Figure 2A, will scale the size of the multi-camera system. One objective while developing a design is to assure that the sizes of the cameras that may or will be used will fit in the final structure of the design. The length 1522a can be changed during the design phase to find a suitable length accommodating the cameras and other components that may be used for the multi-camera system. There may be other considerations to take into account when selecting a suitable value for 1522a. The angle k2 of the light redirecting reflective mirror planar surface can be changed with the objective of finding a location for the entrance pupil center most point of camera 116a. The location of the entrance pupil center most point of camera 116a is provided by the coordinate positions 1542a and 1562a and the legend shown in Figure 4. The optical axis of camera 116a, in this example, is contained in the plane of the page, contains the entrance pupil center most point of camera 116a, and is rotated by an angle q2 counter clockwise about the center most point of the camera 116a entrance pupil with respect to a line parallel with line 111, where this parallel reference line also contains the center most point of the camera 116a entrance pupil.
[0079] One may want the widest multi-camera image obtainable by merging together all the images from each camera in the system, i.e. cameras 112 and 116a-d. In such a case it may be desirable to keep each camera and/or other components out of the fields of view of all the cameras, but it is not necessary to keep each camera or other components out of the fields of view of one or more cameras, because factors such as these depend on the decisions made by those designing or developing the camera system. One may need to try different inputs for 1522a, f2, h2, and k2 until the desired combined image field of view is achieved.
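As a hedged sketch of this trial-and-error step (reusing the illustrative ring_camera_geometry function above), one can sweep the mirror angle k2 and inspect where the camera 116a entrance pupil lands; the acceptance criterion is left to the designer:

    # Sweep k2 with 1522a, f2 and h2 fixed, printing the resulting pupil
    # coordinates (1542a, 1562a); a designer would pick the k2 whose layout
    # keeps the cameras and components out of the combined field of view.
    for k2 in range(21, 34, 3):
        g = ring_camera_geometry(2, 21, 15, k2)
        print(f"k2 = {k2:2d} deg -> pupil at ({g['1542']:.3f} mm, {g['1562']:.3f} mm)")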
[0080] Once a multi-camera system has been specified by inputs 1522a, f2, h2, and k2 according to Table 1 and Figure 2A, we have positions and arrangements for the cameras 112, 116a-d and light redirecting reflective mirrors 136a-d. Table 1 shows an example of input values for 1522a, f2, h2, and k2 and the resulting calculated values for the camera system example being described. Accordingly, one can use the values in Table 1 and the drawing shown in Figure 2A as a schematic to develop such a camera system.
[0081] Suppose we would like to replace camera 112 with a multiple camera arrangement. One way to do this is to use the model shown in Figure 2A and set the half angle value f2 to zero. Such a system is shown in Figure 2B, where camera 112 is not present. The center most point 145 of the virtual entrance pupil for camera 114a is shown in Figure 2B. Table 2 shows example input values for length 1521a and angles f1, h1, and k1 and the resulting calculated values using the equations of Table 1. The multi-camera system of cameras 114a-d in accordance with the camera system represented by Figure 2B and Table 2 should be able to observe the same scene content within the field of view a of camera 112. Accordingly, one should then be able to replace camera 112, shown in Figure 2A and described in Table 1, with the multi-camera system schematic example described by Figure 2B and Table 2. If the camera system described by Figure 2B and Table 2 can physically be combined with the multi-camera system described by Figure 2A and Table 1 without camera 112 being present, and where point 145 is the center most point of the virtual entrance pupil of all the cameras 114a-d and 116a-d, then we should have a multi-camera system that does not include a center camera 112 and that should be able to observe the same scene content as the multi-camera system shown in Figure 2A and described in Table 1 using the center camera 112 and cameras 116a-d. In this way we can continue to stack a multi-camera system on top of another multi-camera system while having the center most point of all the cameras' virtual entrance pupils effectively located at point 145 as shown in Figure 2A.
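Purely as an illustration of this stacking idea (again using the ring_camera_geometry sketch above, which is not part of the original disclosure), the first ring can be computed from the Table 2 inputs and the second ring from the Table 1 inputs; both rings share the virtual pupil at point 145 by construction:

    ring1 = ring_camera_geometry(4, 0, 15, 37.5)   # cameras 114a-d (f1 = 0: no central camera)
    ring2 = ring_camera_geometry(2, 21, 15, 27)    # cameras 116a-d around camera 112
    for name, g in (("ring 1", ring1), ("ring 2", ring2)):
        print(f"{name}: pupil offset {g['1542']:.3f} mm from axis 113, "
              f"{g['1562']:.3f} mm from plane 111")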
[0082] In the example shown in Figures 2A and 2B and Tables 1 and 2 it may be necessary to rotate the camera system shown in Figure 2B by an angle, such as 22.5 degrees, about the camera system optical axis 113 in order for cameras 114a-d and 116a-d to fit with one another. Figure 1A provides an example of such an arrangement.
[0083] One can think of the camera system containing cameras 114a-d as the first concentric ring about the multi-camera system's optical axis 113, as described by Figures 2A and 2B and Tables 1 and 2. Likewise, one can think of the camera system containing cameras 116a-d as the second concentric ring. One can continue to add concentric rings of cameras, where for each ring there is essentially a table like that shown in Table 1 and, additionally, the virtual entrance pupil center most point of all the cameras in the multi-camera system is effectively located at point 145 as shown in Figure 2A.
[0084] For example, once the designs for the first and second concentric rings are complete and aligned so they fit together, one can consider adding a third concentric ring using the same approach described above for rings 1 and 2. The process can continue in this way as long as the cameras can all fit with one another and meet the design criteria of the multi-camera system being designed and/or developed.
[0085] The shape of each concentric ring can be different from that of the other concentric rings. Given such flexibility, one could design a camera system using the principles above and create a system of cameras that follows the contour of a surface other than a flat surface, such as a polygonal surface, a parabolic shape, an elliptical shape or many other possible shapes. In such a case the individual cameras can each have different fields of view than the others, or in some cases they can have the same field of view. There are many ways to use the methods described above to capture an array of images. It is not necessary for the images of the cameras to overlap. The images can be discontinuous and still have the properties of being parallax free or effectively parallax free.
[0086] There may be more or fewer camera rings than the first ring, the second ring, the third ring and so on. By using more or fewer camera rings one may be able to devise, design or conceive of a wide field of view camera, a hemisphere wide field of view camera, an ultra wide field of view camera greater than a hemisphere, or as much of a spherical camera as may be desired or required. An actual design depends on the choices made while developing a multi-camera system. As previously stated, it is not necessary for any of the cameras to have the same field of view as any of the other cameras. The light redirecting reflective mirror surfaces do not all have to have the same shape, size or orientation with respect to their associated camera or cameras viewing that light redirecting reflective mirror surface. It should be possible to arrange a camera system, using the principles, descriptions and methods described herein and the light redirecting reflective mirrors, so that more than one camera can share the same light redirecting mirror system. It should be possible to use a non-planar light redirecting mirror surface to capture wide field of view images using the descriptions and methods described herein. It is also not necessary for all the cameras to fully overlap the fields of view of the neighboring images in order to have a multi-camera system described as being capable of capturing parallax free or effectively parallax free images.
[0087] One other aspect or feature of the model shown in Figure 2A concerns the optical axis 117 intersecting the light redirecting reflective mirror surface 136a: it can be shown that a multi-camera system such as that shown in Figure 2A will still be parallax free or effectively parallax free if the intersection point of the optical axis 117 is moved to any location on the planar light redirecting reflective mirror surface 136a. The intersection point is the point where the optical axis 117 of camera 116a intersects the optical axis of its virtual camera, and the intersection point is located on the planar light redirecting reflective mirror surface 136a. One can think of the virtual camera of camera 116a as a camera whose entrance pupil center most point is point 145 and whose optical axis intersects the light redirecting reflective mirror surface 136a at the same location at which the optical axis 117 of camera 116a intersects mirror surface 136a. In this way the virtual camera of 116a will move as the optical axis 117 of camera 116a intersects different locations on the mirror surface 136a. Also, the light redirecting reflective mirror surface 136a can be at any angle with respect to the plane of the page of Figure 2A. In this way camera 116a, which is the real camera in this case, is associated with a virtual camera having the same optical axis as that of camera 116a between the mirror surface 136a and the scene in the object space.
[0088] In a
multi-camera parallax free or effectively parallax free camera
system the fields of view of each of the cameras used do not have to be equal.
[0089] It may be possible to design a parallax free or effectively parallax free multi-camera system where the light redirecting reflective mirror surfaces, represented by light redirecting reflective mirror surface 136a in Figure 2A, are arranged in such a way that surface 136a is not planar but reflects or refracts light as part of the design of an overall camera system. The mirror surface may be accomplished in many ways. Those skilled in the art may know of some, such as using the total internal reflection properties of a material that has a planar or other contour shape. One may also use a material that refracts light, where the light reflects off a reflective material attached to the surface of the refractive material, and thus not have to depend on properties such as total internal reflection to achieve a light redirecting reflective mirror like surface.
[0090] Figure 3A illustrates a schematic 410 of one camera 428 of one example of an embodiment of a multiple camera configuration. With respect to Figure 3A, angles will be indicated using small alpha characters (e.g., j), distances will be indicated using distance designations (e.g., Distance 412) and points, axes, and other designations will be indicated using item numbers (e.g., 420). As shown below in Tables 1B and 2B, a number of inputs (Distance 412, z, f1-2, j) are used to determine a number of outputs (j, b, h, Distance 412, Distance 472, Distance 424a-b, Distance 418, Distance 416, e, c, d, a) for the configuration of schematic 410. The configuration of Figure 3A results in a camera with a sixty (60) degree dual field of view, provided that camera 428 does not block the field of view.
[0091] The
input parameters will now be described. Distance 412 represents
the distance from the virtual entrance pupil 420 of the camera 428 to the
furthest terminal
end of the reflective surface 450, which is at the point 452 of the prism.
Distance 412 can
be approximately 4.5 mm or less. In Figure 3A, distance 412 is 4 mm.
[0092] Angle z represents the angle, within the collective field of view of the camera configuration, between the optical axis 466 of the virtual field of view of the schematic 410 and a first edge 466 of the virtual field of view of the camera 428. In this embodiment, angle z is zero (0) because the optical axis 466 of the virtual field of view is adjacent to the first edge 466 of the virtual field of view of the camera 428. The virtual field of view of the camera 428 is directed towards the virtual optical axis 434 and includes the area covered by the angles f1-2. The virtual optical axis 466a of the entire multiple camera configuration (other cameras not shown) is a virtual optical axis of the combined array of multiple cameras. The virtual optical axis 466a is defined by the cooperation of at least a plurality of the cameras. The virtual optical axis 466a passes through the optical component 450a. A point of intersection 420a on the virtual optical axis 466a is defined by the intersection of optical axis 434a and virtual optical axis 466a.
[0093] The optical component 450a has at least four light redirecting surfaces (only one surface of the optical component 450a is shown for clarity; the optical component 450a represents the other light redirecting surfaces not shown in Figure 3A). At least four cameras (only camera 428a is shown for clarity; camera 428a represents the other cameras in the system illustrated in Figure 3A) are included in the imaging system. Each of the at least four cameras 428a is configured to capture one of a plurality of partial images of a target scene. Each of the at least four cameras 428a has an optical axis 432a aligned with a corresponding one of the at least four light redirecting surfaces of the optical component 450a. Each of the at least four cameras 428a has a lens assembly 224, 226 positioned to receive light representing one of the plurality of partial images of the target scene redirected from the corresponding one of the at least four light redirecting surfaces. Each of the at least four cameras 428a has an image sensor 232, 234 that receives the light after the light passes through the lens assembly 224, 226. A virtual optical axis 466a passes through the optical component 450a, with a point of intersection 420a of the optical axes of at least two of the at least four cameras 428a located on the virtual optical axis 466a.
[0094] Cooperation of the at least four cameras 428a forms a virtual camera 430a having the virtual optical axis 466a. The imaging system also includes a processing module configured to assemble the plurality of partial images into a final image of the target scene. The optical component 450a and each of the at least four cameras 428a are arranged within a camera housing having a height 412a of less than or equal to approximately 4.5 mm. A first set of the at least four cameras 428a cooperates to form a central virtual camera 430a having a first field of view, and a second set of the at least four cameras 428a is arranged to each capture a portion of a second field of view. The second field of view includes portions of the target scene that are outside of the first field of view. The imaging system includes a processing module configured to combine images captured of the second field of view by the second set of the at least four cameras 428a with images captured of the first field of view by the first set of the at least four cameras 428a to form a final image of the target scene. The first set includes four cameras 428a and the second set includes four additional cameras 428a, and the optical component 450a comprises eight light redirecting surfaces. The imaging system includes a substantially flat substrate, wherein each of the image sensors is positioned on the substrate or inset into a portion of the substrate. The imaging system includes, for each of the at least four cameras 428a, a secondary light redirecting surface configured to receive light from the lens assembly 224, 226 and redirect the light toward the image sensor 232, 234. The secondary light redirecting surface comprises a reflective or refractive surface. A size or position of one of the at least four light redirecting surfaces 450a is configured as a stop limiting the amount of light provided to a corresponding one of the at least four cameras 428a. The imaging system includes an aperture, wherein light from the target scene passes through the aperture onto the at least four light redirecting surfaces 450a.
[0095] Angles
f1-2 each represent half of the virtual field of view of the
camera 428. The combined virtual field of view of the camera 428 is the sum of
angles
f1-2, which is 30 degrees for this example.
[0096] Angle j
represents the angle between the plane parallel to the virtual
entrance pupil plane 460 at a location where the actual field of view of the
camera 428
intersects the reflective surface 450, which is represented as plane 464, and
a first edge
468 of the actual field of view of the camera 428. Here, angle j is 37.5
degrees.
TABLE 1B
Inputs
(Distance 412)   4 mm
z                0 deg
f1-2             15 deg
j                37.5 deg
[0097] The
output parameters will now be described. Angle j of the output
parameters shown in Table 2B is the same as angle j of the input parameters
shown in
Table 1B. Angle b represents the angle between the optical axis 466 of the
schematic 410
and the back side of the reflective surface 450. Angle h represents the angle
between the
virtual entrance pupil plane 460 and one edge (the downward projected edge of
the
camera 428) of the actual field of view of the camera 428.
[0098] Distance
412 is described above with respect to the input parameters of
Table 1B. Distance 472 represents the distance of half of the field of view at
a plane
extending between a terminal end 452 of the reflective surface 450 and the
edge 466 of
the virtual field of view of the camera 428 such that the measured Distance
472 is
perpendicular to the optical axis 434 of the virtual field of view of the
camera 428.
Distance 424a-b represents half the distance between the entrance pupil of the
camera 428
and the virtual entrance pupil 420. Distance 418 represents the distance
between the
virtual entrance pupil plane 460 and the plane of the entrance pupil of the
camera 428,
which is parallel to the virtual entrance pupil plane 460. Distance 416
represents the
shortest distance between the plane perpendicular to the virtual entrance
pupil plane 460,
which is represented as plane 466, and the entrance pupil of the camera 428.
[0099] Angle e
represents the angle between the optical axis 434 of the virtual
field of view for the camera 428 and the back side of the reflective surface
450. Angle c
represents the angle between the optical axis 434 of the virtual field of view
for the
camera 428 and the front side of the reflective surface 450. Angle d
represents the angle
between the front side of the reflective surface 450 and the optical axis 432
of the actual
field of view for the camera 428. Angle a represents the angle between the
optical axis of
the projected actual field of view for a camera opposite the camera 428 and
the optical
axis 432 of the projected actual field of view for the camera 428.
[0100] Point
422 is the location where the optical axis 432 of the actual field
of view for the camera 428 intersects the optical axis 434 of the virtual
field of view for
the camera 428. The virtual field of view for the camera 428 is as if the
camera 428 were
"looking" from a position at the virtual entrance pupil 420 along the optical
axis 434.
However, the actual field of view for the camera 428 is directed from the
actual entrance
pupil of the camera 428 along the optical axis 432. Although the actual field
of view of
the camera 428 is directed in the above direction, the camera 428 captures the
incoming
light from the virtual field of view as a result of the incoming light being
redirected from
the reflective surface 450 towards the actual entrance pupil of the camera
428.
TABLE 2B
Outputs
j                   37.5 deg
b                   -52.5 = -90 + j deg
h                   60 = 90 - (z + 2 * f1-2) deg
(Distance 412)      4 = (Distance 412) / cos(z) mm
(Distance 472)      0 = (Distance 412) * sin(z) mm
(Distance 424a-b)   2.435045716 = (Distance 412) * cos(2 * f1-2 - j + h) mm
(Distance 418)      2.96472382 = 2 * (Distance 424a-b) * sin(j) mm
(Distance 416)      3.863703305 = 2 * (Distance 424a-b) * cos(j) mm
e                   52.5 = 90 - (f1-2 + h - j) deg
c                   52.5 = e deg
d                   52.5 = c deg
a                   180 = 180 - (180 - (f1-2 + h + d + e)) deg
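Table 2B follows the same equations as Table 2 under a renaming of symbols (z in place of f1, f1-2 in place of h1, j in place of k1, and Distance 412 in place of Distance 1521a), so the earlier illustrative sketch reproduces it:

    # Expected Table 2B outputs: Distance 424a-b = 2.4350 mm,
    # Distance 418 = 2.9647 mm, Distance 416 = 3.8637 mm, a = 180 deg.
    print(ring_camera_geometry(4, 0, 15, 37.5))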
[0101] Figure 3B illustrates a schematic of two cameras 428b, 430b of an embodiment of a multiple camera configuration 410b. Figure 3B also represents a model upon which many different parallax free or substantially parallax free multi-camera embodiments can be conceived of, designed, and/or realized using the methods presented herein. Table 3 provides equations used to determine the distances and angles shown in Figure 3B based on the length 412b and the angles g2, f2 and k2.
TABLE 3
Inputs
(Distance 412b)   4 mm
g2                22.5 deg
f2                22.5 deg
k2                0 deg
Outputs
u1                0 = k2 deg
u2                -90 = -90 + u1 deg
j2                22.5 = 90 - (g2 + 2 * f2) deg
(Distance 434b)   4.329568801 = (Distance 412b) / cos(g2) mm
(Distance 455b)   1.656854249 = (Distance 434b) * sin(g2) mm
(Distance 460b)   1.656854249 = (Distance 434b) * cos(2 * f2 - u1 + j2) mm
(Distance 418b)   0 = 2 * (Distance 460b) * sin(u1) mm
(Distance 416b)   3.313708499 = 2 * (Distance 460b) * cos(u1) mm
e2                45 = 90 - (f2 + j2 - u1) deg
c2                45 = e2 deg
d2                45 = c2 deg
q2                135 = 180 - (180 - (f2 + j2 + d2 + e2)) deg
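Table 3 likewise follows from the same illustrative sketch, with g2 in the role of the central half angle and f2 in the role of the side-camera half angle:

    # Expected Table 3 outputs: Distance 434b = 4.3296 mm, 455b = 1.6569 mm,
    # 460b = 1.6569 mm, 418b = 0 mm, 416b = 3.3137 mm, q2 = 135 deg.
    print(ring_camera_geometry(4, 22.5, 22.5, 0))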
[0102] In
Figure 3B, the angles and distances of Table 3 are illustrated. The
central camera 430b and side camera 428b are shown. The entrance pupil of the
side
camera 428b is offset from the virtual entrance pupil 420b according to
Distance 416b and
Distance 418b. Distance 416b represents the distance between the optical axis
472b and
the entrance pupil center point of the side camera 428b, where the distance
416b is
measured perpendicular to the optical axis 472b.
[0103] Distance
418b represents the distance between the plane 460b and a
plane containing the entrance pupil center point of the side camera 428b and
is parallel to
plane 460b.
[0104] The
remaining distances and angles can be found in Table 3 and are
illustrated in Figure 3B.
[0105] Table 3 provides the angle k2 of the light redirecting surface 450b with respect to a line that intersects point 437 and is perpendicular to line 460b. Point 437 is located on a plane perpendicular to the plane of the page showing Figure 3B, and hence perpendicular to the multi-camera system optical axis 472b, and is at a distance 412b from the line 460b. The field of view of camera 430b is shown by the two intersecting lines labeled 434b, where these two lines intersect at the center point of the entrance pupil of camera 430b. The half angle field of view of camera 430b is g2, measured between the multi-camera optical axis 472b and the field of view edge 434b.
[0106] As shown in Figure 3B, camera 430b has its optical axis coinciding with line 472b. The half angle field of view of camera 428b is f2 with respect to the camera 428b optical axis 435b. The optical axis of the virtual camera for camera 428b is shown being redirected off of light redirecting surface 450b. Assume the light redirecting surface 450b is perfectly flat and is a plane surface perpendicular to the plane of the page on which Figure 3B is shown, and further assume the light redirecting planar surface fully covers the circular field of view of camera 428b. As shown in Figure 3B, the optical axis 435b intersects a point on the planar light redirecting surface 450b. Suppose now a ray of light is traveling from a point in the object space along the virtual camera's optical axis 435b. If there are no obstructions, it will intercept the light redirecting surface, reflect off the planar light redirecting surface 450b, and travel along the optical axis 435b of the camera 428b. The angles c2 and d2 will be equal based on the principles and theories of optics, and hence the angle e2 will equal c2. From this we can show that the planar light redirecting surface 450b will perpendicularly intersect the line going from the entrance pupil center point of camera 430b to the entrance pupil center point of camera 428b. Hence the two line segments of length 460b can be shown to be equal.
[0107] It is possible the planar light redirecting surface 450b covers only part of the field of view of camera 428b. In this case not all the rays that travel from the object space towards the virtual camera entrance pupil, which contains at its center the point 420b as shown in Figure 3B, will reflect off the planar portion of the light redirecting surface 450b that partially covers the field of view of camera 428b. From this perspective it is important to keep in mind that camera 428b has a field of view defined by the half angle field of view f2, the optical axis 435b and the location of its entrance pupil as described by lengths 416b and 418b. Within this field of view a surface such as the light reflecting planar portion of the light redirecting surface 450b may be partially in its field of view. The light rays that travel from the object space toward the entrance pupil of the virtual camera of camera 428b and reflect off the planar portion of light redirecting surface 450b will travel onto the entrance pupil of camera 428b, provided the planar portion of light redirecting surface 450b and cameras 430b and 428b are positioned as shown in Figure 3B, in accordance with the equations of Table 3 and the selected input values 412b, g2, f2 and k2.
[0108] Figure 4 illustrates an embodiment of a camera 20 shown in Figures 1A to 2B and 5-6. As shown in Figure 4, the center most point of the entrance pupil 14 is located on the optical axis 19, where the vertex of the Field of View (FoV) 16 intersects the optical axis 19. The embodiment of camera 20 is shown throughout Figures 1A to 2B and in Figures 5 and 6 as cameras 114a-d and 116a-d. The front portion of the camera 20 is represented by a short bar 15. The plane that contains the entrance pupil and point 14 is located at the front of 15; the front of the camera and the location of the entrance pupil are symbolized by 15. The short bar 15 sometimes may be shown as a narrow rectangular box or as a line in Figures 1 to 6. The center of the camera system 20 is the optics section 12, symbolizing the optical components used in the camera system 20. The image capture device is symbolized by 17 at the back of the camera system. The image capture device and/or devices are further described herein. In Figures 1A to 2B and in Figures 5 and 6, the entire assembly of the camera system represented by 20 in Figure 4 may be pointed to by using a straight or curved arrow line and a reference number near the arrow line.
[0109] Angle
designations are illustrated below the camera 20. Positive
angles are designated by a circular line pointing in a counterclockwise
direction.
Negative angles are designated by a circular line pointing in a clockwise
direction.
Angles that are always positive are designated by a circular line that has
arrows pointing
in both the clockwise and counterclockwise directions. The Cartesian
coordinate system
is shown with the positive horizontal direction X going from left to right and
the positive
vertical direction Y going from the bottom to top.
[0110] The
image sensors of each camera, as shown as 17 in Figure 4, and
represented as part of the cameras 112, 114a-d and 116a-d as shown throughout
the
Figures 1-6, in Figure 8 and Figure 9 as 336a-d 334a-d may include, in certain

embodiments, a charge-coupled device (CCD), complementary metal oxide
semiconductor sensor (CMOS), or any other image sensing device that receives
light and
generates image data in response to the received image. Each image sensor of
cameras
112, 114a-d, 116a-d and or of more concentric rings of cameras may include a
plurality of
sensors (or sensor elements) arranged in an array. Image sensors 17 as shown
in Figure 4
and represented in Figures 1A-6 and 8 and 9 can generate image data for still
photographs
and can also generate image data for a captured video stream. Image sensors 17
as shown
in Figure 4 and represented in Figures 1A-6 and 8 and 9 may be an individual
sensor
array, or each may represent arrays of sensors arrays, for example, a 3x1
array of sensor
arrays. However, as will be understood by one skilled in the art, any suitable
array of
sensors may be used in the disclosed implementations.
[0111] Image sensors 17, as shown in Figure 4 and represented in Figures 1A-6, 8 and 9, may be mounted on the substrate shown in Figure 8 as 304 and 306, or on one or more substrates. In some embodiments, all sensors may be on one plane by being mounted to a flat substrate, shown as an example in Figure 9 for substrate 336. Substrate 336, as shown in Figure 9, may be any suitable substantially flat material. The central reflective element 316 and lens assemblies 324, 326 may be mounted on substrate 336 as well. Multiple configurations are possible for mounting a sensor array or arrays, a plurality of lens assemblies, and a plurality of primary and secondary reflective or refractive surfaces.
[0112] In some embodiments, a central reflective element 316 may be used to redirect light from a target image scene toward the sensors 336a-d, 334a-d. Central reflective element 316 may be a reflective surface (e.g., a mirror) or a plurality of reflective surfaces (e.g., mirrors), and may be flat or shaped as needed to properly redirect incoming light to the image sensors 336a-d, 334a-d. For example, in some embodiments, central reflective element 316 may be a mirror sized and shaped to reflect incoming light rays through the lens assemblies 324, 326 to the sensors 336a-d, 334a-d. The central reflective element 316 may split light comprising the target image into multiple portions and direct each portion at a different sensor. For example, a first reflective surface 312 of the central reflective element 316 (also referred to as a primary light folding surface, as other embodiments may implement a refractive prism rather than a reflective surface) may send a portion of the light corresponding to a first field of view 320 toward the first (left) sensor 334a while a second reflective surface 314 sends a second portion of the light corresponding to a second field of view 322 toward the second (right) sensor 334a. It should be appreciated that together the fields of view 320, 322 of the image sensors 336a-d, 334a-d cover at least the target image.
[0113] In some embodiments in which the receiving sensors are each an array of a plurality of sensors, the central reflective element may be made of multiple reflective surfaces angled relative to one another in order to send a different portion of the target image scene toward each of the sensors. Each sensor in the array may have a substantially different field of view, and in some embodiments the fields of view may overlap. Certain embodiments of the central reflective element may have complicated non-planar surfaces to increase the degrees of freedom when designing the lens system. Further, although the central element is discussed as being a reflective surface, in other embodiments the central element may be refractive. For example, the central element may be a prism configured with a plurality of facets, where each facet directs a portion of the light comprising the scene toward one of the sensors.
[0114] After
being reflected off the central reflective element 316, at least a
portion of incoming light may propagate through each of the lens assemblies
324, 326.
One or more lens assemblies 324, 326 may be provided between the central
reflective
element 316 and the sensors 336a-d, 334a-d, and reflective surfaces 328, 330.
The lens
assemblies 324, 326 may be used to focus the portion of the target image which
is
directed toward each sensor 336a-d, 334a-d.
[0115] In some
embodiments, each lens assembly may comprise one or more
lenses and an actuator for moving the lens among a plurality of different lens
positions.
The actuator may be a voice coil motor (VCM), micro-electronic mechanical
system
(MEMS), or a shape memory alloy (SMA). The lens assembly may further comprise
a
lens driver for controlling the actuator.
[0116] In some
embodiments, traditional auto focus techniques may be
implemented by changing the focal length between the lens 324, 326 and
corresponding
sensors 336a-d, 334a-d, of each camera. In some embodiments, this may be
accomplished by moving a lens barrel. Other embodiments may adjust the focus
by
moving the central light redirecting reflective mirror surface up or down or
by adjusting
the angle of the light redirecting reflective mirror surface relative to the
lens assembly.
Certain embodiments may adjust the focus by moving the side light redirecting
reflective
mirror surfaces over each sensor. Such embodiments may allow the assembly to
adjust
the focus of each sensor individually. Further, it is possible for some
embodiments to
change the focus of the entire assembly at once, for example by placing a lens
like a
liquid lens over the entire assembly. In certain implementations,
computational
photography may be used to change the focal point of the camera array.
[0117] Fields of view 320, 322 provide the folded optic multi-sensor assembly 310 with a virtual field of view perceived from a virtual region 342, where the virtual field of view is defined by virtual axes 338, 340. Virtual region 342 is the region at which sensors 336a-d, 334a-d perceive and are sensitive to the incoming light of the target image. The virtual field of view should be contrasted with an actual field of view. An actual field of view is the angle at which a detector is sensitive to incoming light. An actual field of view is different from a virtual field of view in that the virtual field of view is a perceived angle which the incoming light never actually reaches. For example, in Figure 3, the incoming light never reaches virtual region 342 because the incoming light is reflected off reflective surfaces 312, 314.
[0118] Multiple side reflective surfaces, for example reflective surfaces 328 and 330, can be provided around the central reflective element 316 opposite the sensors. After the light passes through the lens assemblies, the side reflective surfaces 328, 330 (also referred to as secondary light folding surfaces, as other embodiments may implement a refractive prism rather than a reflective surface) can reflect the light (downward, as depicted in the orientation of Figure 3) onto the sensors 336a-d, 334a-d. As depicted, sensor 336b may be positioned beneath reflective surface 328 and sensor 334a may be positioned beneath reflective surface 330. However, in other embodiments, the sensors may be above the side reflective surfaces, and the side reflective surfaces may be configured to reflect light upward. Other suitable configurations of the side reflective surfaces and the sensors are possible in which the light from each lens assembly is redirected toward the sensors. Certain embodiments may enable movement of the side reflective surfaces 328, 330 to change the focus or field of view of the associated sensor.
[0119] Each sensor's field of view 320, 322 may be directed into the object space by the surface of the central reflective element 316 associated with that sensor. Mechanical methods may be employed to tilt the mirrors and/or move the prisms in the array so that the field of view of each camera can be directed to different locations on the object field. This may be used, for example, to implement a high dynamic range camera, to increase the resolution of the camera system, or to implement a plenoptic camera system. Each sensor's (or each 3x1 array's) field of view may be projected into the object space, and each sensor may capture a partial image comprising a portion of the target scene according to that sensor's field of view. As illustrated in Figure 2B, in some embodiments, the fields of view 320, 322 for the opposing sensor arrays 336a-d, 334a-d may overlap by a certain amount 318. To reduce the overlap 318 and form a single image, a stitching process as described below may be used to combine the images from the two opposing sensor arrays 336a-d, 334a-d. Certain embodiments of the stitching process may employ the overlap 318 for identifying common features when stitching the partial images together. After stitching the overlapping images together, the stitched image may be cropped to a desired aspect ratio, for example 4:3 or 1:1, to form the final image. In some embodiments, the alignment of the optical elements relating to each FOV is arranged to minimize the overlap 318 so that the multiple images are formed into a single image with minimal or no image processing required in joining the images.
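As a hedged sketch of this stitch-then-crop step (OpenCV's generic feature-based stitcher is used here only as a stand-in for the stitching module described above, and the file names are placeholders, not part of the original disclosure):

    import cv2

    # Load the partial images captured by the individual cameras
    # (placeholder file names, one per camera).
    partials = [cv2.imread(p) for p in
                ("cam_a.png", "cam_b.png", "cam_c.png", "cam_d.png")]

    # Feature-based stitching; overlapping regions supply the common
    # features used to register and blend the partial images.
    status, pano = cv2.Stitcher.create().stitch(partials)
    if status == cv2.Stitcher_OK:
        # Crop the stitched result to a 1:1 aspect ratio, as in the example.
        h, w = pano.shape[:2]
        side = min(h, w)
        y0, x0 = (h - side) // 2, (w - side) // 2
        cv2.imwrite("final_1to1.png", pano[y0:y0 + side, x0:x0 + side])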
D. Overview of Further Example Four and Eight Camera Systems
[0120] Figure 5 illustrates an embodiment of a side view cross-section of the eight camera system 500a. Entrance pupil locations for two of the cameras in each of the first and second rings are shown, and light rays reflecting off mirror surfaces 134a, 134c, 136a and 136c are shown. The entrance pupil of the camera 116a is vertically offset from the virtual entrance pupil center most point 145 according to Distance 1542a and Distance 1562a. The entrance pupil of the camera 114a is vertically offset from the virtual entrance pupil according to Distance 1541a and Distance 1561a. Likewise, the entrance pupil of the camera 116c is vertically offset from the virtual entrance pupil center most point 145 according to Distance 1542c and Distance 1562c. The entrance pupil of the camera 114c is vertically offset from the virtual entrance pupil according to Distance 1541c and Distance 1561c.
[0121] Figure 6 illustrates an embodiment of a side view cross-section of the four camera system. The entrance pupil center most point of the camera 114a is vertically offset from the virtual entrance pupil according to Distance 1541a and Distance 1561a. Likewise, the entrance pupil center most point of the camera 114c is vertically offset from the virtual entrance pupil according to Distance 1541c and Distance 1561c.
[0122] Figure
7A shows an example of the top view of a reflective element
160 that can be used as the multi mirror system 700a of Figure 1A. Figure 7A
further
illustrates 8 reflective surfaces 124a-d and 126a-d that can be used for
surfaces 134a-d
and 136a-d, respectively as shown in figures 2A, 2B, 5, 6 and 8. Surfaces 134a-
d are
associated with cameras 114a-d and are higher than the mirrors 136a-d. Mirrors
surfaces
136a-d are associated with cameras 116a-d. Figure 5 provides a side view
example for
the top view shown in Figure 7A. In figure 5 we show mirror surfaces 134a and
134c,
which represent the example surfaces 124a and 124c shown in Figure lA and
Figure 7A.
Likewise, surfaces 136a-d are associated with cameras 116a-d and are lower
than the
mirrors surfaces 134a-d as shown in Figures 2A, 2B, 5, 6 and 8. As shown in
Figures lA
and 7A the mirror surfaces 124a-d are rotated 22.5 about the multi-camera
system optical
axis 113, where the optical axis 113 is not shown in figures lA and 7A but is
shown in
figure 2A and 2B. In figure 7A, circles are shown around the mirror surfaces
124a-d and
elliptical surfaces are shown around mirror surfaces 126a-d. The elliptical
circles
symbolize the tilt of the field of view covered by for example camera 116a
taken together
-38-

CA 02952470 2016-12-14
WO 2015/196050
PCT/US2015/036648
with its associated mirror 126a. The tilt of the field of view for the camera-mirror combination 116a and 136a, according to Tables 1 and 2, is greater than that for the camera-mirror combination 114a and 134a. The circles and ellipses around the mirror surfaces 124a-d and 126a-d, as shown in Figure 7A, reflect the fields of view of these camera-mirror combinations. The overlapping regions represent an example of how the fields of view may overlap. The overlap represents scene content that may be within the fields of view of neighboring or other cameras in the multi-camera system.
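The facet arrangement can be illustrated numerically. The sketch below assumes a simple model, not spelled out in the disclosure, in which each ring of four mirror surfaces is evenly spaced in azimuth and the ring of surfaces 124a-d is offset 22.5 degrees from the ring of surfaces 126a-d, as described above; ring_azimuths is a hypothetical helper.

```python
import numpy as np

def ring_azimuths(n_facets: int, offset_deg: float) -> np.ndarray:
    """Azimuth, in degrees, of each mirror facet about the system optical axis."""
    return (offset_deg + np.arange(n_facets) * 360.0 / n_facets) % 360.0

# Surfaces 124a-d offset 22.5 degrees from surfaces 126a-d about axis 113:
ring_124 = ring_azimuths(4, offset_deg=22.5)  # [ 22.5, 112.5, 202.5, 292.5]
ring_126 = ring_azimuths(4, offset_deg=0.0)   # [  0.,   90.,  180.,  270. ]
```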
[0123] Figure 7A illustrates a reflective element 700a comprising a plurality of reflective surfaces (not shown separately). Each of the reflective surfaces can reflect light along optical axes such that each of the corresponding cameras can capture a partial image comprising a portion of the target image according to each camera-mirror combination field of view. The full field of view of the final image is denoted by dashed line 170 after cropping. The shape of the cropped edge 170 represents a square image with an aspect ratio of 1:1. The cropped image 170 can be further cropped to form other aspect ratios.
[0124] The multi-camera system can use techniques such as tilting the mirrors to point the optical axis of each camera-mirror combination in directions different from those used for the examples of Figures 2A and 2B and Tables 1 and 2. Such methods may enable arrangements that produce overlap patterns better suited to aspect ratios other than the 1:1 aspect ratio shown in Figures 1A and 7A.
[0125] The
fields of view 124a-d and 126a-d may share overlapping regions.
In this embodiment, the fields of view may overlap in certain regions with
only one other
field of view.
[0126] In other
regions, fields of view may overlap more than one other field
of view. The overlapping regions share the same or similar content when
reflected
toward the eight cameras. Because the overlapping regions share the same or
similar
content (e.g., incoming light), this content can be used by an image stitching
module to
output a target image. Using a stitching technique, the stitching module can
output a
target image to an image processor.
[0127] Figure 7B illustrates a side view of an embodiment of a portion of an eight camera configuration 710. The embodiment of Figure 7B shows a reflective
element 730 for an eight camera configuration free of parallax and tilt
artifacts.
Reflective element 730 can have a plurality of reflective surfaces 712a-c. In the embodiment of Figure 7B, reflective surfaces 712a-c are in the shape of prisms. Reflective element 730 is disposed at or near the center of the eight camera configuration, and is configured to reflect a portion of incoming light to each of the eight cameras (three cameras 718a-c are illustrated in Figure 7B for clarity of illustration). In some embodiments the reflective element 730 may comprise one component having at
least eight reflective surfaces. In some other embodiments, the reflective
element 730
may comprise a plurality of individual components, each having at least one
reflective
surface. The multiple components of the reflective element 730 may be coupled
together,
coupled to another structure to set their position relative to each other, or
both. The
reflective surfaces 712a, 712b, 712c can be separated from one another to be
their own
distinct parts. In another embodiment, the reflective surfaces 712a, 712b,
712c can be
joined together to form one reflective element 730.
[0128] In the illustrated embodiment, the portion of an eight camera configuration 710 has cameras 718a-c, each camera capturing a portion of a target image such that a plurality of portions of the target image may be captured. Cameras 718a and 718c are at the same or substantially the same distance (or height) 732 from the base of reflective element 730. Camera 718b is at a different distance (or height) 734 compared to the distance 732 of cameras 718a and 718c. As illustrated in Figure 7B, camera 718b is at a greater distance (or height) 734 from the base of reflective element 730 than that of cameras 718a and 718c. Positioning cameras 718a and 718c at a different distance from the base of reflective element 730 than camera 718b provides the advantage of capturing both a central field of view and a wide field of view. Reflective surface 712b, near the top region of reflective element 730, can reflect incoming light providing for a central field of view. Reflective surfaces 712a and 712c, near the base of reflective element 730, can reflect incoming light providing for a wide field of view.
[0129] Placing reflective surface 712b at a different angle than reflective surfaces 712a and 712c provides both a central field of view and a wide field of view. However, reflective surfaces 712a-c are not required to be placed at different distances or angles from the base of reflective element 730 to capture both a central field of view and a wide field of view.
[0130] Cameras
718a-c have optical axes 724a-c such that cameras 718a-c are
capable of receiving a portion of incoming light reflected from reflective
surfaces 712a-c
to cameras 718a-c. In accordance with Figure 1, similar techniques may be used
for
configuration 710 to capture a target image.
[0131] In another embodiment, an inner camera 718b creates a +/- 21 degree image using reflective surface 712b. Outer cameras 718a and 718c use the other reflective surfaces 712a and 712c to create a solution where multiple portions of a target image are captured. In this example reflective surface 712b has a tilted square shape. This provides a good point spread function (PSF) when it is uniform. Reflective surfaces 712a and 712c cover more area than reflective surface 712b but do not have a symmetrical shape. The reflective surfaces act as stops when they are smaller than the camera entrance pupil.
[0132] Figure 8 illustrates a cross-sectional view of cameras 114a and 116b of Figure 5 with a folded optics camera structure for each camera. As shown in Figure 8, a folded optics array camera arrangement can be used where a light redirecting reflective mirror surface such as 394a and 396b may be used to redirect the light downward towards a sensor 334a and upward towards a sensor 336b. In the schematic representation shown in Figure 8, the sensors 334a-d may be attached to one common substrate 304. Similarly, the sensors 336a-d may be attached to one common substrate 306. The substrates 304 and 306 in this embodiment, as shown schematically in Figure 8, may provide support and interconnections between the sensors 334a-d and the Sensor Assembly A 420a interface shown in Figure 10; similarly, the substrate 306 may provide support and interconnections between the sensors 336a-d and the Sensor Assembly B 420b interface. Other embodiments will be apparent to those skilled in the art and may be implemented in a different manner or by different technology. Greater or fewer concentric rings of cameras may be used in other embodiments; if more are added, the other sensor assembly interfaces 420c to 420n shown in Figure 10 may be used (sensor assembly interface 420c is not shown). The image sensors of the first set of array cameras may be disposed on a first substrate, the image sensors of the second set of array cameras may be disposed on a second substrate, and likewise for three or more substrates. The substrate can be, for example, plastic, wood, etc. Further, in some embodiments the first, second, or additional substrates may be disposed in parallel planes.
[0133] Figure 9
illustrates a cross-sectional side view of an embodiment of a
folded optic multi-sensor assembly. As illustrated in Figure 9, the folded
optic multi-
sensor assembly 310 has a total height 346. In some embodiments, the total
height 346
can be approximately 4.5 mm or less. In other embodiments, the total height
346 can be
approximately 4.0 mm or less. Though not illustrated, the entire folded optic multi-sensor assembly 310 may be provided in a housing having a corresponding interior height of approximately 4.5 mm or less, or approximately 4.0 mm or less.
[0134] The
folded optic multi-sensor assembly 310 includes image sensors
332, 334, reflective secondary light folding surfaces 328, 330, lens
assemblies 324, 326,
and a central reflective element 316 which may all be mounted (or connected)
to a
substrate 336.
[0135] The image sensors 332, 334 may include, in certain embodiments, a charge-coupled device (CCD), complementary metal oxide semiconductor (CMOS) sensor, or any other image sensing device that receives light and generates image data in response to the received light. Each sensor 332, 334 may include a plurality of sensors (or sensor elements) arranged in an array. Image sensors 332, 334 can generate image data for still photographs and can also generate image data for a captured video stream. Sensors 332 and 334 may each be an individual sensor array, or each may represent an array of sensor arrays, for example, a 3x1 array of sensor arrays. However, as will be understood by one skilled in the art, any suitable array of sensors may be used in the disclosed implementations.
[0136] The
sensors 332, 334 may be mounted on the substrate 336 as shown
in Figure 9. In some embodiments, all sensors may be on one plane by being
mounted to
the flat substrate 336. Substrate 336 may be any suitable substantially flat
material. The
central reflective element 316 and lens assemblies 324, 326 may be mounted on
substrate
336 as well. Multiple configurations are possible for mounting a sensor array
or arrays, a
plurality of lens assemblies, and a plurality of primary and secondary
reflective or
refractive surfaces.
[0137] In some
embodiments, a central reflective element 316 may be used to
redirect light from a target image scene toward the sensors 332, 334. Central
reflective
element 316 may be a reflective surface (e.g., a mirror) or a plurality of
reflective surfaces
(e.g., mirrors), and may be flat or shaped as needed to properly redirect
incoming light to
the image sensors 332, 334. For example, in some embodiments, central
reflective
element 316 may be a mirror sized and shaped to reflect incoming light rays
through the
lens assemblies 324, 326 to sensors 332, 334. The central reflective element
316 may
split light comprising the target image into multiple portions and direct each
portion at a
different sensor. For example, a first reflective surface 312 of the central
reflective
element 316 (also referred to as a primary light folding surface, as other
embodiments
may implement a refractive prism rather than a reflective surface) may send a
portion of
the light corresponding to a first field of view 320 toward the first (left)
sensor 332 while
a second reflective surface 314 sends a second portion of the light
corresponding to a
second field of view 322 toward the second (right) sensor 334. It should be
appreciated
that together the fields of view 320, 322 of the image sensors 332, 334 cover
at least the
target image.
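The redirection each reflective surface performs follows the standard planar-mirror formula r = d - 2(d.n)n for a ray direction d and unit mirror normal n. The sketch below applies it to show a ray travelling down the optical axis being folded 90 degrees by a 45-degree facet; the specific vectors are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Reflect a ray direction off a planar mirror with the given normal."""
    n = normal / np.linalg.norm(normal)
    return direction - 2.0 * np.dot(direction, n) * n

# A ray travelling straight down the optical axis hits a facet tilted
# 45 degrees and is folded 90 degrees toward a side-mounted lens and sensor:
incoming = np.array([0.0, 0.0, -1.0])
facet_normal = np.array([1.0, 0.0, 1.0])  # 45-degree facet
print(reflect(incoming, facet_normal))     # -> [1. 0. 0.]
```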
[0138] In some
embodiments in which the receiving sensors are each an array
of a plurality of sensors, the central reflective element may be made of
multiple reflective
surfaces angled relative to one another in order to send a different portion
of the target
image scene toward each of the sensors. Each sensor in the array may have a
substantially different field of view, and in some embodiments the fields of
view may
overlap. Certain embodiments of the central reflective element may have
complicated
non-planar surfaces to increase the degrees of freedom when designing the lens
system.
Further, although the central element is discussed as being a reflective surface, in other embodiments the central element may be refractive. For example, the central element may be a prism configured with a plurality of facets, where each facet directs a portion of the light comprising the scene toward one of the sensors.
[0139] After
being reflected off the central reflective element 316, at least a
portion of incoming light may propagate through each of the lens assemblies
324, 326.
One or more lens assemblies 324, 326 may be provided between the central
reflective
element 316 and the sensors 332, 334 and reflective surfaces 328, 330. The
lens
assemblies 324, 326 may be used to focus the portion of the target image which
is
directed toward each sensor 332, 334.
[0140] In some
embodiments, each lens assembly may comprise one or more
lenses and an actuator for moving the lens among a plurality of different lens
positions.
The actuator may be a voice coil motor (VCM), micro-electronic mechanical
system
(MEMS), or a shape memory alloy (SMA). The lens assembly may further comprise
a
lens driver for controlling the actuator.
[0141] In some
embodiments, traditional auto focus techniques may be
implemented by changing the focal length between the lens 324, 326 and
corresponding
sensors 332, 334 of each camera. In some embodiments, this may be accomplished
by
moving a lens barrel. Other embodiments may adjust the focus by moving the
central
light redirecting reflective mirror surface up or down or by adjusting the
angle of the light
redirecting reflective mirror surface relative to the lens assembly. Certain
embodiments
may adjust the focus by moving the side light redirecting reflective mirror
surfaces over
each sensor. Such embodiments may allow the assembly to adjust the focus of
each
sensor individually. Further, it is possible for some embodiments to change
the focus of
the entire assembly at once, for example by placing a lens like a liquid lens
over the entire
assembly. In certain implementations, computational photography may be used to
change
the focal point of the camera array.
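One common realization of the autofocus described above is a contrast-based sweep of the actuator. In the sketch below, camera.move_lens_to and camera.capture are hypothetical stand-ins for a device-specific driver interface, and the gradient-variance sharpness metric is one simple choice among many; none of these names come from the disclosure.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Variance of the image gradient: a simple contrast/focus metric."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.var(gx) + np.var(gy))

def autofocus(camera, lens_positions):
    """Sweep the lens actuator (e.g., a VCM) and keep the sharpest position."""
    best_pos, best_score = lens_positions[0], -1.0
    for pos in lens_positions:
        camera.move_lens_to(pos)             # hypothetical driver call
        score = sharpness(camera.capture())  # hypothetical driver call
        if score > best_score:
            best_pos, best_score = pos, score
    camera.move_lens_to(best_pos)
    return best_pos
```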
[0142] Fields
of view 320, 322 provide the folded optic multi-sensor assembly
310 with a virtual field of view perceived from a virtual region 342 where the
virtual field
of view is defined by virtual axes 338, 340. Virtual region 342 is the region
at which
sensors 332, 334 perceive and are sensitive to the incoming light of the
target image. The
virtual field of view should be contrasted with an actual field of view. An actual field of view is the angle at which a detector is sensitive to incoming light. An actual field of view differs from a virtual field of view in that the virtual field of view is a perceived angle from a region that incoming light never actually reaches. For example, in Figure 9, the incoming light never reaches virtual region 342 because the incoming light is reflected off reflective surfaces 312, 314.
[0143] Multiple
side reflective surfaces, for example, reflective surfaces 328
and 330, can be provided around the central reflective element 316 opposite
the sensors.
After passing through the lens assemblies, the side reflective surfaces 328,
330 (also
referred to as a secondary light folding surface, as other embodiments may
implement a
refractive prism rather than a reflective surface) can reflect the light
(downward, as
depicted in the orientation of Figure 9) onto the sensors 332, 334. As
depicted, sensor
332 may be positioned beneath reflective surface 328 and sensor 334 may be
positioned
beneath reflective surface 330. However, in other embodiments, the sensors may be above the side reflective surfaces, and the side reflective surfaces may be configured to
reflect light upward. Other suitable configurations of the side reflective
surfaces and the
sensors are possible in which the light from each lens assembly is redirected
toward the
sensors. Certain embodiments may enable movement of the side reflective
surfaces 328,
330 to change the focus or field of view of the associated sensor.
[0144] Each
sensor's field of view 320, 322 may be directed into the object
space by the surface of the central reflective element 316 associated with
that sensor.
Mechanical methods may be employed to tilt the mirrors and/or move the prisms
in the
array so that the field of view of each camera can be directed to different
locations on the
object field. This may be used, for example, to implement a high dynamic range
camera,
to increase the resolution of the camera system, or to implement a plenoptic
camera
system. Each sensor's (or each 3x1 array's) field of view may be projected
into the
object space, and each sensor may capture a partial image comprising a portion
of the
target scene according to that sensor's field of view. As illustrated in
Figure 9, in some
embodiments, the fields of view 320, 322 for the opposing sensor arrays 332,
334 may
overlap by a certain amount 318. To reduce the overlap 318 and form a single
image, a
stitching process as described below may be used to combine the images from
the two
opposing sensor arrays 332, 334. Certain embodiments of the stitching process
may
employ the overlap 318 for identifying common features in stitching the
partial images
together. After stitching the overlapping images together, the stitched image
may be
cropped to a desired aspect ratio, for example 4:3 or 1:1, to form the final
image. In some
embodiments, the alignment of the optical elements relating to each FOV is arranged to minimize the overlap 318 so that the multiple images are formed into a single
image with
minimal or no image processing required in joining the images.
[0145] As
illustrated in Figure 9, the folded optic multi-sensor assembly 310
has a total height 346. In some embodiments, the total height 346 can be
approximately
4.5 mm or less. In other embodiments, the total height 346 can be
approximately 4.0 mm
or less. Though not illustrated, the entire folded optic multi-sensor assembly
310 may be
provided in a housing having a corresponding interior height of approximately
4.5 mm or
less or approximately 4.0 mm or less.
[0146] As used
herein, the term "camera" may refer to an image sensor, lens
system, and a number of corresponding light folding surfaces; for example, the
primary
light folding surface 314, lens assembly 326, secondary light folding surface
330, and
sensor 334 are illustrated in Figure 9. A folded-optic multi-sensor assembly,
referred to
as an "array" or "array camera," can include a plurality of such cameras in
various
configurations.
E. Overview of Example Imaging System
[0147] Figure
10 depicts a high-level block diagram of a device 410 having a
set of components including an image processor 426 linked to one or more
cameras 420a-
n. The image processor 426 is also in communication with a working memory 428,

memory component 412, and device processor 430, which in turn is in
communication
with storage 434 and electronic display 432.
[0148] Device
410 may be a cell phone, digital camera, tablet computer,
personal digital assistant, or the like. There are many portable computing
devices in
which a reduced thickness imaging system such as is described herein would
provide
advantages. Device 410 may also be a stationary computing device or any device
in
which a thin imaging system would be advantageous. A plurality of applications
may be
available to the user on device 410. These applications may include
traditional
photographic and video applications, high dynamic range imaging, panoramic
photo and
video, or stereoscopic imaging such as 3D images or 3D video.
[0149] The
image capture device 410 includes cameras 420a-n for capturing
external images. Each of cameras 420a-n may comprise a sensor, lens assembly,
and a
primary and secondary reflective or refractive mirror surface for reflecting a
portion of a
target image to each sensor, as discussed above with respect to Figure 3. In
general, N
cameras 420a-n may be used, where N ≥ 2. Thus, the target image may be split
into N
portions in which each sensor of the N cameras captures one portion of the
target image
according to that sensor's field of view. It will be understood that cameras
420a-n may
comprise any number of cameras suitable for an implementation of the folded
optic
imaging device described herein. The number of sensors may be increased to
achieve
lower z-heights of the system or to meet the needs of other purposes, such as
having
overlapping fields of view similar to that of a plenoptic camera, which may
enable the
ability to adjust the focus of the image after post-processing. Other
embodiments may
have a field of view overlap configuration suitable for high dynamic range
cameras
enabling the ability to capture two simultaneous images and then merge them
together.
Cameras 420a-n may be coupled to the image processor 426 to communicate
captured
images to the working memory 428, the device processor 430, to the electronic
display
432 and to the storage (memory) 434.
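The splitting of the target image into N portions can be pictured as assigning each camera an overlapping interval of the scene. The helper below is purely illustrative: fov_slices and the even-spacing model are assumptions, since the actual per-camera fields of view are fixed by the mirror geometry described earlier.

```python
def fov_slices(n_cameras: int, overlap_deg: float, total_deg: float = 360.0):
    """Assign each of N cameras an azimuth interval, padded so that
    neighboring portions share overlap_deg degrees of scene content."""
    step = total_deg / n_cameras
    return [(i * step - overlap_deg / 2.0, i * step + step + overlap_deg / 2.0)
            for i in range(n_cameras)]

# Four cameras, each seam sharing 5 degrees of overlapping scene content:
print(fov_slices(4, overlap_deg=5.0))
# [(-2.5, 92.5), (87.5, 182.5), (177.5, 272.5), (267.5, 362.5)]
```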
[0150] The
image processor 426 may be configured to perform various
processing operations on received image data comprising N portions of the
target image
in order to output a high quality stitched image, as will be described in more
detail below.
Image processor 426 may be a general purpose processing unit or a processor
specially
designed for imaging applications. Examples of image processing operations
include
cropping, scaling (e.g., to a different resolution), image stitching, image
format
conversion, color interpolation, color processing, image filtering (for
example, spatial
image filtering), lens artifact or defect correction, etc. Image processor 426
may, in some
embodiments, comprise a plurality of processors. Certain embodiments may have
a
processor dedicated to each image sensor. Image processor 426 may be one or
more
dedicated image signal processors (ISPs) or a software implementation of a
processor.
[0151] As
shown, the image processor 426 is connected to a memory 412 and
a working memory 428. In the illustrated embodiment, the memory 412 stores
capture
control module 414, image stitching module 416, operating system 418, and
reflector
control module 419. These modules include instructions that configure the image processor 426 or device processor 430 to perform various image processing and
device
management tasks. Working memory 428 may be used by image processor 426 to
store a
working set of processor instructions contained in the modules of memory
component
412. Alternatively, working memory 428 may also be used by image processor 426
to
store dynamic data created during the operation of device 410.
[0152] As
mentioned above, the image processor 426 is configured by several
modules stored in the memories. The capture control module 414 may include
instructions that configure the image processor 426 to call reflector control
module 419 to
position the extendible reflectors of the camera in a first or second
position, and may
include instructions that configure the image processor 426 to adjust the
focus position of
cameras 420a-n. Capture control module 414 may further include instructions
that
control the overall image capture functions of the device 410. For example,
capture
control module 414 may include instructions that call subroutines to configure
the image
processor 426 to capture raw image data of a target image scene using the
cameras 420a-
n. Capture control module 414 may then call the image stitching module 416 to
perform
a stitching technique on the N partial images captured by the cameras 420a-n
and output a
stitched and cropped target image to imaging processor 426. Capture control
module 414
may also call the image stitching module 416 to perform a stitching operation
on raw
image data in order to output a preview image of a scene to be captured, and
to update the
preview image at certain time intervals or when the scene in the raw image
data changes.
[0153] Image
stitching module 416 may comprise instructions that configure
the image processor 426 to perform stitching and cropping techniques on
captured image
data. For example, each of the N sensors 420a-n may capture a partial image
comprising
a portion of the target image according to each sensor's field of view. The
fields of view
may share areas of overlap, as described above and below. In order to output a
single
target image, image stitching module 416 may configure the image processor 426
to
combine the multiple N partial images to produce a high-resolution target
image. Target
image generation may occur through known image stitching techniques. Examples
of
image stitching can be found in U.S. Patent Application No. 11/623,050 which
is hereby
incorporated by reference.
[0154] For
example, image stitching module 416 may include instructions to
compare the areas of overlap along the edges of the N partial images for
matching
features in order to determine rotation and alignment of the N partial images
relative to
one another. Due to rotation of partial images and/or the shape of the field
of view of
each sensor, the combined image may form an irregular shape. Therefore, after
aligning
and combining the N partial images, the image stitching module 416 may call
subroutines
which configure image processor 426 to crop the combined image to a desired
shape and
aspect ratio, for example a 4:3 rectangle or 1:1 square. The cropped image may
be sent to
the device processor 430 for display on the display 432 or for saving in the
storage 434.
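As an illustration of the stitch-then-crop flow performed by image stitching module 416, the sketch below uses OpenCV's high-level stitcher as a stand-in for the matching, alignment, and blending steps; the disclosure does not specify OpenCV or any particular library, and the center-crop mirrors the 4:3 example above.

```python
import cv2

def stitch_and_crop(partial_images, aspect_w=4, aspect_h=3):
    """Stitch N overlapping partial images, then center-crop to an aspect ratio."""
    stitcher = cv2.Stitcher_create()           # OpenCV 4.x
    status, pano = stitcher.stitch(partial_images)
    if status != 0:                            # 0 == cv2.Stitcher_OK
        raise RuntimeError(f"stitching failed with status {status}")
    h, w = pano.shape[:2]
    target = aspect_w / aspect_h
    if w / h > target:                         # too wide: trim columns
        new_w = int(h * target)
        x0 = (w - new_w) // 2
        return pano[:, x0:x0 + new_w]
    new_h = int(w / target)                    # too tall: trim rows
    y0 = (h - new_h) // 2
    return pano[y0:y0 + new_h]
```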
[0155]
Operating system module 418 configures the image processor 426 to
manage the working memory 428 and the processing resources of device 410. For
example, operating system module 418 may include device drivers to manage
hardware
resources such as the cameras 420a-n. Therefore, in some embodiments,
instructions
contained in the image processing modules discussed above may not interact
with these
hardware resources directly, but instead interact through standard subroutines
or APIs
located in operating system component 418. Instructions within operating
system 418
may then interact directly with these hardware components. Operating system
module
418 may further configure the image processor 426 to share information with
device
processor 430.
[0156] The
image processor 426 can provide image capture mode selection
controls to a user, for instance by using a touch-sensitive display 432,
allowing the user of
device 410 to select an image capture mode corresponding to either a standard FOV image or a wide FOV image.
[0157] Device
processor 430 may be configured to control the display 432 to
display the captured image, or a preview of the captured image, to a user. The
display
432 may be external to the imaging device 410 or may be part of the imaging
device 410.
The display 432 may also be configured to provide a view finder displaying a
preview
image for a user prior to capturing an image, or may be configured to display a
captured
image stored in memory or recently captured by the user. The display 432 may
comprise
an LCD or LED screen, and may implement touch sensitive technologies.
[0158] Device
processor 430 may write data to storage module 434, for
example data representing captured images. While storage module 434 is
represented
graphically as a traditional disk device, those with skill in the art would
understand that
the storage module 434 may be configured as any storage media device. For
example, the
storage module 434 may include a disk drive, such as a floppy disk drive, hard
disk drive,
optical disk drive or magneto-optical disk drive, or a solid state memory such
as a
FLASH memory, RAM, ROM, and/or EEPROM. The storage module 434 can also
include multiple memory units, and any one of the memory units may be
configured to be
within the image capture device 410, or may be external to the image capture
device 410.
For example, the storage module 434 may include a ROM memory containing system program instructions stored within the image capture device 410. The storage
module
434 may also include memory cards or high speed memories configured to store
captured
images which may be removable from the camera.
[0159] Although
Figure 10 depicts a device having separate components to
include a processor, imaging sensor, and memory, one skilled in the art would
recognize
that these separate components may be combined in a variety of ways to achieve particular design objectives. For example, in an alternative embodiment, the
memory
components may be combined with processor components to save cost and improve
performance. Additionally, although Figure 10 illustrates two memory
components,
including memory component 412 comprising several modules and a separate
memory
428 comprising a working memory, one with skill in the art would recognize
several
embodiments utilizing different memory architectures. For example, a design
may utilize
ROM or static RAM memory for the storage of processor instructions
implementing the
modules contained in memory component 412. The processor instructions may be
loaded
into RAM to facilitate execution by the image processor 426. For example,
working
memory 428 may comprise RAM memory, with instructions loaded into working
memory 428 before execution by the processor 426.
F. Overview of Example Image Capture Process
[0160] Figure
11 illustrates blocks of one example of a method 1100 of
capturing a wide field of view target image.
[0161] At block
1105, a plurality of cameras are provided and arranged in at
least a first set and a second set around a central optical element, for
example as
illustrated in Figures 7A and 7B. In some embodiments, greater or fewer than
the first
and second set of cameras can be provided. For example, the four camera
embodiment
described herein can include only a first ring of cameras.
[0162] At block
1110, the imaging system captures a center portion of the
target image scene using the first set of cameras. For example, this can be
done using the
first ring of cameras 114a-d.
[0163] At block
1115, the imaging system captures an additional portion of
the target image scene using the second set of cameras. For example, this can
be done
using the second ring of cameras 116a-d. The additional portion of the target
image scene
can be, for example, a field of view or partial field of view surrounding the
center portion.
[0164] At optional block 1120, the imaging system captures a further portion of the target image scene using a third set of cameras. For example, this can be done using a third ring of cameras, such as may be provided in a 12 camera
embodiment. The additional portion of the target image scene can be, for
example, a field
of view or partial field of view surrounding the center portion.
[0165] At block
1125, the center portion and any additional portions are
received in at least one processor. A stitched image is generated by the at
least one
processor that includes at least a portion of the center image and additional
portion(s).
For example, the processor can stitch the center portion captured by the first
set, the
additional portion captured by the second set, and any additional portions
captured by any
other sets, and then crop the stitched image to a desired aspect ratio in
order to form a
final image having a wide field of view.
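The blocks of method 1100 can be summarized in a short orchestration sketch. Every name here is a hypothetical placeholder: cam.capture stands in for the camera driver and stitcher for the stitching-and-cropping step performed by the at least one processor; the sketch only mirrors the block structure described above.

```python
def capture_wide_fov(first_ring, second_ring, stitcher, third_ring=None):
    """Capture per-ring portions of the scene, then stitch and crop them."""
    portions = [cam.capture() for cam in first_ring]        # block 1110
    portions += [cam.capture() for cam in second_ring]      # block 1115
    if third_ring is not None:                              # optional block 1120
        portions += [cam.capture() for cam in third_ring]
    return stitcher(portions)                               # block 1125
```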
G. Terminology
[0166]
Implementations disclosed herein provide systems, methods and
apparatus for multiple aperture array cameras free from parallax and tilt
artifacts. One
skilled in the art will recognize that these embodiments may be implemented in
hardware,
software, firmware, or any combination thereof.
[0167] In some
embodiments, the circuits, processes, and systems discussed
above may be utilized in a wireless communication device. The wireless
communication
device may be a kind of electronic device used to wirelessly communicate with
other
electronic devices. Examples of wireless communication devices include
cellular
telephones, smart phones, Personal Digital Assistants (PDAs), e-readers,
gaming systems,
music players, netbooks, wireless modems, laptop computers, tablet devices,
etc.
[0168] The wireless communication device may include one or more image sensors, two or more image signal processors, and a memory including instructions or modules for carrying out the CNR process discussed above. The device may also have
data, a processor loading instructions and/or data from memory, one or more
communication interfaces, one or more input devices, one or more output
devices such as
a display device and a power source/interface. The wireless communication
device may
additionally include a transmitter and a receiver. The transmitter and
receiver may be
jointly referred to as a transceiver. The transceiver may be coupled to one or
more
antennas for transmitting and/or receiving wireless signals.
[0169] The
wireless communication device may wirelessly connect to another
electronic device (e.g., base station). A wireless communication device may
alternatively
be referred to as a mobile device, a mobile station, a subscriber station, a
user equipment
(UE), a remote station, an access terminal, a mobile terminal, a terminal, a
user terminal,
a subscriber unit, etc. Examples of wireless communication devices include
laptop or
desktop computers, cellular phones, smart phones, wireless modems, e-readers,
tablet
devices, gaming systems, etc. Wireless communication devices may operate in
accordance with one or more industry standards such as the 3rd Generation
Partnership
Project (3GPP). Thus, the general term "wireless communication device" may
include
wireless communication devices described with varying nomenclatures according
to
industry standards (e.g., access terminal, user equipment (UE), remote
terminal, etc.).
[0170] The
functions described herein may be stored as one or more
instructions on a processor-readable or computer-readable medium. The term
"computer-
readable medium" refers to any available medium that can be accessed by a
computer or
processor. By way of example, and not limitation, such a medium may comprise
RAM,
ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be
used to store
desired program code in the form of instructions or data structures and that
can be
accessed by a computer. Disk and disc, as used herein, includes compact disc
(CD), laser
disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray
disc where disks
usually reproduce data magnetically, while discs reproduce data optically with
lasers. It
should be noted that a computer-readable medium may be tangible and non-
transitory.
The term "computer-program product" refers to a computing device or processor
in
combination with code or instructions (e.g., a "program") that may be
executed,
processed or computed by the computing device or processor. As used herein,
the term
"code" may refer to software, instructions, code or data that is/are
executable by a
computing device or processor.
[0171] The
methods disclosed herein comprise one or more steps or actions
for achieving the described method. The method steps and/or actions may be
interchanged with one another without departing from the scope of the claims.
In other
words, unless a specific order of steps or actions is required for proper
operation of the
method that is being described, the order and/or use of specific steps and/or
actions may
be modified without departing from the scope of the claims.
[0172] It
should be noted that the terms "couple," "coupling," "coupled" or
other variations of the word couple as used herein may indicate either an
indirect
connection or a direct connection. For example, if a first component is
"coupled" to a
second component, the first component may be either indirectly connected to
the second
component or directly connected to the second component. As used herein, the
term
"plurality" denotes two or more. For example, a plurality of components
indicates two or
more components.
[0173] The term
"determining" encompasses a wide variety of actions and,
therefore, "determining" can include calculating, computing, processing,
deriving,
investigating, looking up (e.g., looking up in a table, a database or another
data structure),
ascertaining and the like. Also, "determining" can include receiving (e.g.,
receiving
information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" can include resolving, selecting, choosing, establishing and the
like.
[0174] The
phrase "based on" does not mean "based only on," unless
expressly specified otherwise. In other words, the phrase "based on" describes
both
"based only on" and "based at least on."
[0175] In the
foregoing description, specific details are given to provide a
thorough understanding of the examples. However, it will be understood by one
of
ordinary skill in the art that the examples may be practiced without these
specific details.
For example, electrical components/devices may be shown in block diagrams in
order not
to obscure the examples in unnecessary detail. In other instances, such
components, other
structures and techniques may be shown in detail to further explain the
examples.
[0176] Headings
are included herein for reference and to aid in locating
various sections. These headings are not intended to limit the scope of the
concepts
described with respect thereto. Such concepts may have applicability
throughout the
entire specification.
[0177] It is
also noted that the examples may be described as a process, which
is depicted as a flowchart, a flow diagram, a finite state diagram, a
structure diagram, or a
block diagram. Although a flowchart may describe the operations as a
sequential process,
many of the operations can be performed in parallel, or concurrently, and the
process can
be repeated. In addition, the order of the operations may be re-arranged. A
process is
terminated when its operations are completed. A process may correspond to a
method, a
function, a procedure, a subroutine, a subprogram, etc. When a process
corresponds to a
software function, its termination corresponds to a return of the function to
the calling
function or the main function.
[0178] The
previous description of the disclosed implementations is provided
to enable any person skilled in the art to make or use the present invention.
Various
modifications to these implementations will be readily apparent to those
skilled in the art,
and the generic principles defined herein may be applied to other
implementations
without departing from the spirit or scope of the invention. Thus, the present
invention is
not intended to be limited to the implementations shown herein but is to be
accorded the
widest scope consistent with the principles and novel features disclosed
herein.
Administrative Status

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2015-06-19
(87) PCT Publication Date 2015-12-23
(85) National Entry 2016-12-14
Dead Application 2020-08-31

Abandonment History

Abandonment Date Reason Reinstatement Date
2017-06-19 FAILURE TO PAY APPLICATION MAINTENANCE FEE 2017-07-10
2019-06-19 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2016-12-14
Reinstatement: Failure to Pay Application Maintenance Fees $200.00 2017-07-10
Maintenance Fee - Application - New Act 2 2017-06-19 $100.00 2017-07-10
Maintenance Fee - Application - New Act 3 2018-06-19 $100.00 2018-05-17
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
QUALCOMM INCORPORATED
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Abstract 2016-12-14 2 77
Claims 2016-12-14 5 199
Drawings 2016-12-14 16 420
Description 2016-12-14 53 2,984
Representative Drawing 2016-12-14 1 30
Cover Page 2017-01-19 2 59
Reinstatement / Maintenance Fee Payment 2017-07-10 2 83
Reinstatement / Maintenance Fee Payment 2017-07-10 2 82
Office Letter 2017-07-13 1 27
Maintenance Fee Correspondence 2017-07-25 1 24
Refund 2017-08-15 1 23
International Search Report 2016-12-14 2 55
National Entry Request 2016-12-14 3 61