Patent 3005452 Summary

(12) Patent: (11) CA 3005452
(54) English Title: METHODS AND SYSTEMS FOR CONTAINER FULLNESS ESTIMATION
(54) French Title: PROCEDES ET SYSTEMES D'ESTIMATION DE REMPLISSAGE DE CONTENEUR
Status: Granted
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 5/50 (2006.01)
  • G06Q 50/28 (2012.01)
  • G06T 7/62 (2017.01)
  • G01F 17/00 (2006.01)
  • G06T 5/00 (2006.01)
  • G06T 7/00 (2017.01)
(72) Inventors:
  • ZHANG, YAN (United States of America)
  • WILLIAMS, JAY J. (United States of America)
  • O'CONNELL, KEVIN J. (United States of America)
  • TASKIRAN, CUNEYT M. (United States of America)
(73) Owners:
  • SYMBOL TECHNOLOGIES, LLC (United States of America)
(71) Applicants:
  • SYMBOL TECHNOLOGIES, LLC (United States of America)
(74) Agent: PERRY + CURRIER
(74) Associate agent:
(45) Issued: 2020-07-14
(86) PCT Filing Date: 2016-11-10
(87) Open to Public Inspection: 2017-05-26
Examination requested: 2018-05-15
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2016/061279
(87) International Publication Number: WO2017/087244
(85) National Entry: 2018-05-15

(30) Application Priority Data:
Application No. Country/Territory Date
14/944,860 United States of America 2015-11-18
14/978,367 United States of America 2015-12-22

Abstracts

English Abstract

A method and apparatus for receiving a depth frame from a depth sensor oriented towards an open end of a shipping container, the depth frame comprising a plurality of grid elements that each have a respective depth value, identifying one or more occlusions in the depth frame, correcting the one or more occlusions in the depth frame using one or more temporally proximate depth frames, and outputting the corrected depth frame for fullness estimation.


French Abstract

L'invention concerne un procédé et un appareil permettant de recevoir une trame de profondeur en provenance d'un capteur de profondeur orienté vers une extrémité ouverte d'un conteneur d'expédition, la trame de profondeur comprenant une pluralité d'éléments de grille ayant chacun une valeur de profondeur respective, à identifier une ou plusieurs occlusions dans la trame de profondeur, à corriger ladite occlusion dans la trame de profondeur à l'aide d'une ou plusieurs trames de profondeur proches temporellement, et à délivrer la trame de profondeur corrigée pour l'estimation de remplissage.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
We claim:
1. A method comprising:
receiving a three-dimensional (3D) point cloud from a depth sensor that is oriented towards an open end of a shipping container, the point cloud comprising a plurality of points that each have a respective depth value;
segmenting the received 3D point cloud among a plurality of grid elements;
calculating a respective loaded-container-portion grid-element volume for each grid element based on at least a respective grid-element area and a respective loaded-container-portion grid-element depth value for each respective grid element that is determined based at least in part on the difference between (i) a depth dimension of the shipping container and (ii) an unloaded-container-portion depth value for the corresponding grid element;
calculating a loaded-container-portion volume of the shipping container by aggregating the calculated respective loaded-container-portion grid-element volumes;
calculating an estimated fullness of the shipping container based on the loaded-container-portion volume and a capacity of the shipping container; and
outputting the calculated estimated fullness of the shipping container.
2. The method of claim 1, wherein the plurality of grid elements collectively forms a two-dimensional (2D) grid image that corresponds to a plane that is parallel to the open end of the shipping container, each grid element having a respective grid-element area.
3. The method of claim 1, wherein determining the unloaded-container-portion depth value for a given grid element comprises:
assigning a grid-element depth value to the given grid element based on the depth values of the points in the point cloud that correspond to the given grid element; and
determining the unloaded-container-portion depth value for the given grid element based at least in part on the difference between (i) the assigned grid-element depth value for the given grid element and (ii) an offset depth value corresponding to a depth between the 3D depth sensor and a front plane of the shipping container.
4. The method of claim 3, wherein assigning the grid-element depth value for the given grid element based on the depth values of the points in the point cloud that correspond to the given grid element comprises assigning as the grid-element depth value for the given grid element a minimum value from among the depth values of the points in the point cloud that correspond to the given grid element.
5. The method of claim 3, wherein assigning the grid-element depth value for the given grid element based on the depth values of the points in the point cloud that correspond to the given grid element comprises assigning as the grid-element depth value for the given grid element an average value of the depth values of the points in the point cloud that correspond to the given grid element.
6. The method of claim 1, wherein the depth dimension of the shipping container is a grid-element-specific depth dimension that is based on a corresponding grid element in a reference empty-container point cloud.
7. The method of claim 6, wherein the reference empty-container point cloud reflects a back wall of the shipping container being a flat surface.
8. The method of claim 6, wherein the reference empty-container point cloud reflects a back wall of the shipping container being a curved surface.
9. The method of claim 2, further comprising cleaning up the 2D grid image prior to determining a respective loaded-container-portion grid-element depth value for each grid element.
10. The method of claim 1, wherein the depth sensor has an optical axis and an image plane, the method further comprising:
prior to segmenting the received point cloud among the plurality of grid elements, rotating the received 3D point cloud to align (i) the optical axis with a ground level and (ii) the image plane with an end plane of the shipping container.
11. The method of claim 10, wherein the rotating the point cloud is based on an offline calibration process using the ground level and the end plane as reference.
12. The method of claim 1, further comprising determining the capacity of the shipping container based at least in part on the received 3D point cloud.
13. The method of claim 1, further comprising:
receiving an optical image of the shipping container; and
determining the capacity of the shipping container based at least in part on the received optical image.
14. The method of claim 13, wherein determining the capacity of the shipping container based at least in part on the received optical image comprises:
determining at least one physical dimension of the shipping container from the received optical image; and
determining the capacity of the shipping container based on the at least one determined physical dimension.
15. The method of claim 13, wherein determining the capacity of the shipping container based at least in part on the received optical image comprises:
using optical character recognition (OCR) on the at least one received optical image to ascertain at least one identifier of the shipping container; and
using the at least one ascertained identifier of the shipping container to determine the capacity of the shipping container.
16. The method of claim 1, wherein each grid element has sides substantially equal to 5 millimeters (mm) in length.
17. The method of claim 1, wherein each grid element is substantially square in shape, and wherein a grid-element side length is an adjustable parameter.
18. A system comprising:
a depth sensor that is oriented towards an open end of a shipping container;
a communication interface;
a processor; and
data storage containing instructions executable by the processor for causing the system to carry out a set of functions, the set of functions including:
receiving a three-dimensional (3D) point cloud from the depth sensor, the point cloud comprising a plurality of points that each have a respective depth value;
segmenting the received 3D point cloud among a plurality of grid elements;
calculating a respective loaded-container-portion grid-element volume for each grid element based on at least a respective grid-element area and a respective loaded-container-portion grid-element depth value for each respective grid element that is determined based at least in part on the difference between (i) a depth dimension of the shipping container and (ii) an unloaded-container-portion depth value for the corresponding grid element;
calculating a loaded-container-portion volume of the shipping container by aggregating the calculated respective loaded-container-portion grid-element volumes;
calculating an estimated fullness of the shipping container based on the loaded-container-portion volume and a capacity of the shipping container; and
outputting the calculated estimated fullness of the shipping container.
19. The system of claim 18, wherein:
the plurality of grid elements collectively forms a two-dimensional (2D) grid image that corresponds to a plane that is parallel to the open end of the shipping container, each grid element having a respective grid-element area.
20. A method comprising:
receiving a three-dimensional (3D) point cloud from a depth sensor that is oriented towards an open end of a shipping container, the point cloud comprising a plurality of points that each have a respective depth value, wherein the depth sensor has an optical axis and an image plane;
rotating the received 3D point cloud to align (i) the optical axis with a ground level and (ii) the image plane with an end plane of the shipping container;
segmenting the rotated 3D point cloud among a plurality of grid elements that collectively form a two-dimensional (2D) grid image that corresponds to a plane that is parallel to the open end of the shipping container, each grid element having a respective grid-element area;
determining a respective loaded-container-portion grid-element depth value for each grid element at least in part by comparing (i) the respective portion of the projected point cloud that is overlaid by the respective grid-element area of the respective grid element with (ii) a respective corresponding portion of a reference empty-container point cloud;
calculating a respective loaded-container-portion grid-element volume for each grid element based on at least the respective grid-element area and the respective loaded-container-portion grid-element depth value for each respective grid element;
calculating a loaded-container-portion volume of the shipping container by aggregating the calculated respective loaded-container-portion grid-element volumes;
determining a capacity of the shipping container based at least in part on the received depth data;
calculating an estimated fullness of the shipping container based on the loaded-container-portion volume and the determined capacity of the shipping container; and
outputting the calculated estimated fullness of the shipping container.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHODS AND SYSTEMS FOR CONTAINER FULLNESS ESTIMATION
BACKGROUND
[0001] Efficient loading of containers is a key element to successful
distribution in
the transportation and logistics industry. Ensuring that each container is
loaded
efficiently throughout the loading process is vital to successful
distribution. However,
the inability to verify that each container meets this goal has been a problem
in the
industry.
[0002] There is a need for real-time monitoring or measurements of the
containers
during the loading process. This functionality could provide good business
value to
vendors through loading optimization.
[0003] Accordingly, there is a need for methods and systems for automatic fullness estimation of containers, and for detecting, and correcting for, occlusions to maintain an accurate fullness estimation of the containers.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0004] The accompanying figures, where like reference numerals refer to
identical
or functionally similar elements throughout the separate views, together with
the
detailed description below, are incorporated in and form part of the
specification, and
serve to further illustrate embodiments disclosed herein, and explain various
principles
and advantages of those embodiments.
[0005] FIG. 1 depicts a shipping container, in accordance with some
embodiments.
[0006] FIG. 2A depicts a flat back surface of a shipping container, in
accordance
with some embodiments.
[0007] FIG. 2B depicts a curved back surface of a shipping container, in
accordance
with some embodiments.
[0008] FIG. 3 depicts a loaded-container point cloud, in accordance with
some
embodiments.
[0009] FIG. 4 depicts a segmented loaded-container point cloud, in
accordance
with some embodiments.
[0010] FIG. 5 depicts an expanded-grid-element view of a segmented loaded-
container point cloud, in accordance with some embodiments.

[0011] FIG. 6 depicts an architectural view of an example computing
device, in
accordance with some embodiments.
[0012] FIG. 7 depicts a first example method, in accordance with some
embodiments.
[0013] FIG. 8 depicts a shipping container having an optically readable
identifier,
in accordance with some embodiments.
[0014] FIG. 9 depicts a second example method, in accordance with some
embodiments.
[0015] FIG. 10 depicts an example scenario for detecting occlusions in a
shipping
container, in accordance with some embodiments.
[0016] FIG. 11 depicts an example sub-process for detecting close
occlusions, in
accordance with some embodiments.
[0017] FIG. 12 depicts an example sub-process for detecting far
occlusions, in
accordance with some embodiments.
[0018] FIG. 13 depicts an example of temporal analysis, in accordance with
some
embodiments.
[0019] FIGs. 14A and 14B depict examples of graphed container-fullness-
estimation results without and with occlusion correction, respectively, in
accordance
with some embodiments.
[0020] Skilled artisans will appreciate that elements in the figures are
illustrated for
simplicity and clarity and have not necessarily been drawn to scale. For
example, the
dimensions of some of the elements in the figures may be exaggerated relative
to other
elements to help to improve understanding of embodiments disclosed herein.
[0021] The apparatus and method components have been represented where
appropriate by conventional symbols in the drawings, showing only those
specific
details that are pertinent to understanding the embodiments disclosed herein
so as not
to obscure the disclosure with details that will be readily apparent to those
of ordinary
skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0022] One embodiment takes the form of a process that includes (a)
receiving a
three-dimensional (3D) point cloud from a depth sensor that is oriented
towards an open
end of a shipping container, where the point cloud includes a plurality of
points that
each have a respective depth value, (b) segmenting the received 3D point cloud
among
a plurality of grid elements, (c) calculating a respective loaded-container-
portion grid-
element volume for each grid element, (d) calculating a loaded-container-
portion
volume of the shipping container by aggregating the calculated respective
loaded-
container-portion grid-element volumes, (e) calculating an estimated fullness
of the
shipping container based on the loaded-container-portion volume and a capacity
of the
shipping container; and (f) outputting the calculated estimated fullness of
the shipping
container.
[0023] Another embodiment takes the form of a system that includes a depth sensor, a communication interface, a processor, and data storage containing instructions executable by the processor for causing the system to carry out at least the functions described in the preceding paragraph.
[0024] In at least one embodiment, the plurality of grid elements
collectively forms
a two-dimensional (2D) grid image that corresponds to a plane that is parallel
to the
open end of the shipping container, where each grid element has a respective
grid-
element area, and the method further includes determining a respective loaded-
container-portion grid-element depth value for each grid element, where
calculating the
respective loaded-container-portion grid-element volume for each grid element
is based
on at least the respective grid-element area and the respective loaded-
container-portion
grid-element depth value for each respective grid element.
[0025] In at least one embodiment, the method further includes determining
an
unloaded-container-portion depth value for each grid element, and determining
a
respective loaded-container-portion grid-element depth value for each grid
element is
based at least in part on the difference between (i) a depth dimension of the
shipping
container and (ii) the determined unloaded-container-portion depth value for
the
corresponding grid element.
[0026] In at least one embodiment, assigning the grid-element depth value
for the
given grid element based on the depth values of the points in the point cloud
that
correspond to the given grid element includes assigning as the grid-element
depth value
for the given grid element a minimum value from among the depth values of the
points
in the point cloud that correspond to the given grid element.

[0027] In at least one embodiment, assigning the grid-element depth value
for the
given grid element based on the depth values of the points in the point cloud
that
correspond to the given grid element includes assigning as the grid-element
depth value
for the given grid element an average value of the depth values of the points
in the point
cloud that correspond to the given grid element.
[0028] In at least one embodiment, the depth dimension of the shipping
container
is a grid-element-specific depth dimension that is based on a corresponding
grid
element in a reference empty-container point cloud. In at least one such
embodiment,
the reference empty-container point cloud reflects a back wall of the shipping
container
being a flat surface. In at least one other such embodiment, the reference
empty-
container point cloud reflects a back wall of the shipping container being a
curved
surface.
[0029] In at least one embodiment, the method further includes cleaning up
the 2D
grid image prior to determining a respective loaded-container-portion grid-
element
depth value for each grid element.
[0030] In at least one embodiment, the depth sensor has an optical axis
and an
image plane, and the method further includes, prior to segmenting the received
point
cloud among the plurality of grid elements, rotating the received 3D point
cloud to align
(i) the optical axis with a ground level and (ii) the image plane with an end
plane of the
shipping container.
[0031] In at least one embodiment, rotating the point cloud is based on an
offline
calibration process using the ground level and the end plane as reference.
[0032] In at least one embodiment, the method further includes determining
the
capacity of the shipping container based at least in part on the received 3D
point cloud.
[0033] In at least one embodiment, the method further includes (i)
receiving an
optical image of the shipping container and (ii) determining the capacity of
the shipping
container based at least in part on the received optical image. In at least
one such
embodiment, determining the capacity of the shipping container based at least
in part
on the received optical image includes (i) determining at least one physical
dimension
of the shipping container from the received optical image and (ii) determining
the
capacity of the shipping container based on the at least one determined
physical
dimension. In at least one other such embodiment, determining the capacity of
the
shipping container based at least in part on the received optical image
includes (i) using
optical character recognition (OCR) on the at least one received optical image
to
ascertain at least one identifier of the shipping container and (ii) using the
at least one
ascertained identifier of the shipping container to determine the capacity of
the shipping
container.
[0034] In at least one embodiment, each grid element has sides
substantially equal
to 5 millimeters (mm) in length.
[0035] In at least one embodiment, each grid element is substantially
square in
shape, and a grid-element side length is an adjustable parameter.
[0036] One embodiment takes the form of a method that includes receiving a
depth
frame from a depth sensor oriented towards an open end of a shipping
container, where
the depth frame is projected to a 2D grid map which includes a plurality of
grid elements
that each have a respective depth value; identifying one or more occlusions in
the depth
frame; correcting the one or more occlusions in the depth frame using one or
more
temporally proximate depth frames; and outputting the corrected depth frame
for
fullness estimation.
[0037] In some embodiments, the one or more occlusions includes a missing-
data
occlusion. In some embodiments, identifying the missing-data occlusion
includes
(i) generating a binarization map delineating between (a) grid elements for
which the
respective depth value is valid and (b) grid elements for which the respective
depth
value is not valid and (ii) identifying the missing-data occlusion as a
cluster of grid
elements in the binarization map for which the respective depth value is not
valid. In
some embodiments, identifying the missing-data occlusion further includes
confirming
that the identified cluster of grid elements exceeds a predetermined occlusion-
size
threshold. In some embodiments, identifying the missing-data occlusion further

includes performing edge detection on the cluster of grid elements. In some
embodiments, identifying the missing-data occlusion further includes
performing
contour identification on the cluster of grid elements.
[0038] In some embodiments, the one or more occlusions includes a moving
occlusion. In some embodiments, the moving occlusion is associated with a
single grid
element in the plurality of grid elements. In some embodiments, identifying
the moving
occlusion includes identifying a threshold depth change in the single grid
element
between the depth frame and at least one temporally proximate depth frame. In
some
embodiments, identifying the moving occlusion includes identifying that the
depth
value associated with the single grid element decreases with respect to
previous frames
and then increases in succeeding frames in less than a threshold amount of
time across
multiple depth frames.
[0039] In some embodiments, the one or more occlusions includes a
discontinuous
occlusion. In some embodiments, identifying the discontinuous occlusion
includes
identifying a cluster of grid elements having a collective depth value that is
more than
a threshold difference less than a depth value of a loaded-portion boundary of
the
shipping container. In some embodiments, identifying the discontinuous
occlusion
includes confirming that the identified cluster of grid elements exceeds a
predetermined
occlusion-size threshold. In some embodiments, identifying the discontinuous
occlusion further includes performing edge detection on the cluster of grid
elements. In
some embodiments, identifying the discontinuous occlusion further includes
performing contour identification on the cluster of grid elements.
[0040] In some embodiments, the grid elements are pixels. In some
embodiments,
the grid elements are groups of pixels.
[0041] In some embodiments, the one or more identified occlusions
corresponds to
an occlusion set of the grid elements in the depth frame, and correcting the
one or more
occlusions in the depth frame using one or more temporally proximate depth
frames
includes overwriting the occlusion set in the depth frame with data from
corresponding
non-occluded grid elements from one or more of the temporally proximate depth
frames.
[0042] In some embodiments, identifying the one or more occlusions
includes
analyzing a buffer of depth frames, where the buffer includes the received
depth frame.
[0043] One embodiment takes the form of a system that includes a depth
sensor
oriented towards an open end of a shipping container, a communication
interface, a
processor, and data storage containing instructions executable by the
processor for
causing the system to carry out a set of functions, where the set of functions
includes:
receiving a depth frame from the depth sensor, where the depth frame includes
a
plurality of grid elements that each have a respective depth value;
identifying one or
more occlusions in the depth frame; correcting the one or more occlusions in
the depth
frame using one or more temporally proximate depth frames; and outputting the
corrected depth frame for fullness estimation.
[0044] Moreover, any of the variations and permutations described herein
can be
implemented with respect to any embodiments, including with respect to any
method
embodiments and with respect to any system embodiments. Furthermore, this
flexibility
and cross-applicability of embodiments is present in spite of the use of
slightly different
language (e.g., process, method, steps, functions, set of functions, and the
like) to
describe and/or characterize such embodiments.
[0045] Before proceeding with this detailed description, it is noted that the entities, connections, arrangements, and the like that are depicted in (and described in connection with) the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements (that may in isolation and out of context be read as absolute and therefore limiting) can only properly be read as being constructively preceded by a clause such as "In at least one embodiment, ...." And it is for reasons akin to brevity and clarity of presentation that this implied leading clause is not repeated ad nauseam in this detailed description.
[0046] FIG. 1 depicts a shipping container, in accordance with some
embodiments.
In particular, FIG. 1 depicts (i) a shipping container 102 and (ii) a depth
sensor 104 that
is oriented towards an open end of the shipping container 102. In various
different
examples, the shipping container 102 could be designed for travel by truck,
rail, boat,
plane, and/or any other suitable mode or modes of travel. Moreover, as is more
fully
discussed herein, the shipping container 102 could have any of a number of
different
shapes; a substantially rectangular shape (i.e., a rectangular cylinder) is
depicted by way
of example in FIG. 1. As depicted in FIG. 1, the shipping container 102
contains objects
(e.g., boxes and/or other packages) 106. The shipping container 102 may have a
number
of different surfaces, perhaps flat, perhaps curved, among numerous other
possibilities
that could be listed here.
[0047] There are a number of types of depth sensor 104 that could be used, perhaps one that includes an RGB sensor, perhaps Leap Motion, perhaps Intel Perceptual Computing, perhaps Microsoft Kinect, among numerous other possibilities that could
be listed here. There are also a number of depth-sensing techniques that could
be
implemented by the depth sensor 104, perhaps using stereo triangulation,
perhaps using
time of flight, perhaps using coded aperture, among numerous other
possibilities that
could be listed here. As one example, the depth sensor 104 could be mounted to
a wall
or column or the like in a given shipping warehouse, and the shipping
container 102
could be positioned on the back of a truck, and then driven (e.g., backed)
into a position
such that the depth sensor 104 is oriented towards an open end of the shipping
container
102, as is depicted in FIG. 1.
[0048] As mentioned above, different shipping containers could have
different
shapes. Two examples are shown in FIGs. 2A and 2B. In particular, FIG. 2A
depicts
(i) a flat back wall (i.e., surface) 202 of a shipping container and (ii) a
depth sensor 204,
whereas FIG. 2B depicts (i) a curved back wall (i.e., surface) 206 of a
shipping
container and (ii) a depth sensor 208. And certainly numerous other examples
of
shipping-container shapes could be presented here.
[0049] FIG. 3 depicts a loaded-container point cloud, in accordance with
some
embodiments. In particular, FIG. 3 depicts a 3D point cloud 302. As a general
matter,
the depth sensor that is oriented at the open end of the shipping container
may gather
depth information in a given field of view and transmit that information to a
system that
may be equipped, programmed, and configured to carry out the present systems
and
methods. That set of information (i.e., points) is referred to herein as being
a 3D point
cloud (or at times simply a point cloud); each point in such a cloud
corresponds to a
perceived depth at a corresponding point in the field of view of the depth
sensor.
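By way of non-limiting illustration, such a point cloud can be represented as a simple numeric array. The following Python sketch (using the numpy library; the coordinate and depth values are invented for illustration) shows one such representation:

    import numpy as np

    # A 3D point cloud as an N x 3 array: each row is one point in the depth
    # sensor's field of view, with columns (x, y, depth). Values are in
    # meters and purely illustrative.
    point_cloud = np.array([
        [0.10, 0.20, 3.0],
        [0.15, 0.20, 3.0],
        [0.55, 0.40, 1.0],  # a closer point, e.g., the face of a loaded box
    ])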
[0050] Returning to FIG. 3, an outline 304 of a shipping container is
shown, as are
outlines 306A, 306B, and 306C of example packages in the example shipping
container.
These outlines 304 and 306A-C are intended to generally correspond to the
shipping
container 102 and the packages 106 that are depicted in FIG. 1, in order to
help the
reader to visualize an example real-world scenario from which the example
point cloud
302 could have been derived, gathered, or the like. Moreover, for purposes of
illustration, each point in the point cloud 302 is shown in FIG. 3 as having
an integer
number that corresponds to an example depth value (in, e.g., example units
such as
meters). In actual implementations, any number of points could be present in
the point
cloud 302, as the various points that are depicted in FIG. 3 as being part of
the point
cloud 302 are for illustration and are not meant to be comprehensive.
[0051] Moreover, as is more fully discussed below, in some embodiments the
depth
sensor that is oriented towards an open end of the shipping container has a
vantage point
with respect to the open end of the shipping container that is not aligned
with the center
of the open end of the shipping container in one or more dimensions. That is,
the depth
sensor and the shipping container might be relatively positioned such that the
depth
sensor is looking to some extent from one side or the other and could be
vertically off
center (e.g., elevated) as well. So, for example, the depth sensor may be
positioned
higher and to the right of the center of the plane that corresponds with the
open end of
the shipping container.
[0052] As is more fully described below, the present disclosure includes
segmentation and projection of the received point cloud into a number of grid
elements
in a 2D grid map that collectively correspond to the open end of the shipping
container.
In cases where the depth sensor happens to be positioned square to the open
end of the
shipping container and vertically centered on that open end as well, this
segmentation
and projection step can be proceeded to without first performing one or more
geometric
rotations. In other cases, however, prior to carrying out the below-described
segmentation step and the various other steps that are subsequent to that, the
present
systems and methods include a step of one or more geometric rotations in
accordance
with the relative positions of the depth sensor and the open end of the
shipping container.
Such relative position can be pre-programmed into the system, or could
otherwise be
determined using depth sensors, optical cameras, and/or other suitable
equipment.
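By way of non-limiting illustration, one possible implementation of such a rotation is sketched below in Python; it assumes that an offline calibration supplies pitch and yaw angles, and the function name and parameters are illustrative only:

    import numpy as np

    def rotate_point_cloud(points, pitch_rad, yaw_rad):
        """Rotate an N x 3 point cloud so that the sensor's optical axis is
        level with the ground and its image plane is parallel to the end
        plane of the shipping container."""
        # Rotation about the x-axis (pitch) levels the optical axis.
        rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(pitch_rad), -np.sin(pitch_rad)],
                       [0.0, np.sin(pitch_rad), np.cos(pitch_rad)]])
        # Rotation about the y-axis (yaw) squares the image plane with the
        # container's end plane.
        ry = np.array([[np.cos(yaw_rad), 0.0, np.sin(yaw_rad)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(yaw_rad), 0.0, np.cos(yaw_rad)]])
        return points @ (ry @ rx).T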
[0053] FIG. 4 depicts a segmented loaded-container point cloud, in
accordance
with some embodiments. In particular, FIG. 4 depicts a segmented 3D point
cloud 402,
which may be generated in a number of different ways, such as edge-based
segmentation, surface-based segmentation, and/or scanline-based segmentation,

among numerous other possibilities that may be listed here. Moreover, it is
noted that
FIG. 4 depicts the segmented point cloud 402 after any necessary rotations
were
performed to account for the relative positions and alignments of the depth
sensor and
the open end of the shipping container.

[0054] As described above, in at least one embodiment, the point cloud 402
is
segmented among a plurality of grid elements, which collectively form a 2D
grid image
that corresponds to a plane that is parallel to the open end of the shipping
container.
Each grid element has a respective grid-element area. In FIG. 4, the grid
elements are
shown as being substantially square (e.g., 5 mm by 5 mm), though this is by
way of
example and not limitation, as any suitable dimensions and/or shapes could be
used as
deemed suitable by those of skill in the art for a given implementation.
Moreover, in
some embodiments, the side length of the grid elements is an adjustable
parameter. In
some cases, this parameter is set to be as small a value as the associated
depth sensor
allows and/or is capable of. Indeed, the resolution of the depth sensor plays
a role in
whether estimates of container fullness are overestimates or underestimates.
As can be
seen in FIG. 4, one example grid element 404 is highlighted by way of example.
The
grid element 404 is depicted as including ten total points from the segmented
point
cloud 402; four of those ten points have a depth value of 1 (e.g., 1 meter),
five of those
ten points have a depth value of 2 (e.g., 2 meters), and one of those ten
points has a
depth value of 3 (e.g., 3 meters). This number of points in grid element 404
and these
respective depth values are provided purely by way of example and for
illustration, and
in no way for limitation.
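A minimal sketch of such a segmentation, assuming the point cloud has already been rotated so that x and y lie in the plane parallel to the open end, might bucket points into square grid elements as follows (the 5 mm side length mirrors the example above):

    import numpy as np

    def segment_into_grid(points, side_m=0.005):
        """Group an N x 3 point cloud (x, y, depth) into square grid
        elements, returning a dict that maps a (row, col) grid index to the
        list of depth values of the points falling in that element."""
        cols = np.floor(points[:, 0] / side_m).astype(int)
        rows = np.floor(points[:, 1] / side_m).astype(int)
        grid = {}
        for r, c, depth in zip(rows, cols, points[:, 2]):
            grid.setdefault((r, c), []).append(depth)
        return grid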
[0055] FIG. 5 depicts an expanded-grid-element view of a segmented loaded-
container point cloud, in accordance with some embodiments. In particular,
FIG. 5
depicts a segmented 3D point cloud 502 (though zoomed out too far to depict
individual
points) and an expanded grid element 504. The expanded grid element 504
includes, by
way of example only, the same set of ten points that are in the grid element
404 of
FIG. 4, albeit in a different arrangement; i.e., there are ten total points,
including four
points having a depth value of 1, five points having a depth value of 2, and
one point
having a depth value of 3.
[0056] In connection with various embodiments, the grid element 504 is
assigned
a characteristic depth value based on the depth values of the points in the
subsection of
the 3D point cloud that is found in the particular grid element 504. From
among those
depth values, the characteristic depth value for the grid element could be a
minimum
value, a mode (i.e., most commonly occurring) value, an average value, or some
other
possibility. Using the example data that is present in FIG. 5: if the minimum
value were
used, then the characteristic depth value for the grid element 504 would be 1;
if the
mode value were used, then the characteristic depth value for the grid element
504
would be 2; if the average value were used, then the characteristic depth
value for the
grid element 504 would be 1.7 (or 2 if rounded to the nearest whole number).
And
certainly numerous other possible implementations could be listed here. As is
described
more fully below, the characteristic depth value that is assigned to a given
grid element
is then used, along with the area of that grid element, to calculate a loaded-
portion
volume for that particular grid element.
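The three candidate reductions can be sketched as follows; which one to use is an implementation choice, and the function below (shown with the example data of FIG. 5) is illustrative rather than prescriptive:

    import numpy as np

    def characteristic_depth(depths, method="minimum"):
        """Reduce a grid element's depth values to one characteristic value.
        For [1, 1, 1, 1, 2, 2, 2, 2, 2, 3]: minimum -> 1, mode -> 2,
        average -> 1.7."""
        depths = np.asarray(depths)
        if method == "minimum":
            return depths.min()
        if method == "mode":
            values, counts = np.unique(depths, return_counts=True)
            return values[counts.argmax()]  # most commonly occurring value
        if method == "average":
            return depths.mean()
        raise ValueError("unknown method: " + method)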
[0057] FIG. 6 depicts an architectural view of an example computing
device, in
accordance with some embodiments. The example computing device 600 may be
configured to carry out the functions described herein, and as depicted
includes a
communications interface 602, a processor 604, data storage 606 (that contains
program
instructions 608 and operational data 610), a user interface 612, peripherals
614, and a
communication bus 616. This arrangement is presented by way of example and not

limitation, as other example arrangements could be described here.
[0058] The communication interface 602 may be configured to be operable
for
communication according to one or more wireless-communication protocols, some
examples of which include LMR, LTE, APCO P25, ETSI DMR, TETRA, Wi-Fi,
Bluetooth, and the like. The communication interface 602 may also or instead
include
one or more wired-communication interfaces (for communication according to,
e.g.,
Ethernet, USB, and/or one or more other protocols.) The communication
interface 602
may include any necessary hardware (e.g., chipsets, antennas, Ethernet
interfaces, etc.),
any necessary firmware, and any necessary software for conducting one or more
forms
of communication with one or more other entities as described herein.
[0059] The processor 604 may include one or more processors of any type
deemed
suitable by those of skill in the relevant art, some examples including a
general-purpose
microprocessor and a dedicated digital signal processor (DSP).
[0060] The data storage 606 may take the form of any non-transitory
computer-
readable medium or combination of such media, some examples including flash
memory, read-only memory (ROM), and random-access memory (RAM) to name but
a few, as any one or more types of non-transitory data-storage technology
deemed
suitable by those of skill in the relevant art could be used. As depicted in
FIG. 6, the
data storage 606 contains program instructions 608 executable by the processor
604 for
carrying out various functions described herein, and further is depicted as
containing
operational data 610, which may include any one or more data values stored by
and/or
accessed by the computing device in carrying out one or more of the functions
described
herein.
[0061] The user interface 612 may include one or more input devices
(a.k.a.
components and the like) and/or one or more output devices (a.k.a. components
and the
like.) With respect to input devices, the user interface 612 may include one
or more
touchscreens, buttons, switches, microphones, and the like. With respect to
output
devices, the user interface 612 may include one or more displays, speakers,
light
emitting diodes (LEDs), and the like. Moreover, one or more components (e.g.,
an
interactive touchscreen and display) of the user interface 612 could provide
both user-
input and user-output functionality.
[0062] The peripherals 614 may include any computing device accessory,
component, or the like, that is accessible to and useable by the computing
device 600
during operation. In some embodiments, the peripherals 614 includes a depth
sensor.
In some embodiments, the peripherals 614 includes a camera for capturing
digital video
and/or still images. And certainly other example peripherals could be listed.
[0063] FIG. 7 depicts a first example method, in accordance with some
embodiments. In particular, FIG. 7 depicts a method 700 that includes steps
702, 704,
706, 708, 710, and 712, and is described below by way of example as being
carried out
by the computing system 600 of FIG. 6, though in general the method 700 could
be
carried out by any computing device that is suitably equipped, programmed, and

configured.
[0064] At step 702, the computing system 600 receives a 3D point cloud
from a
depth sensor that is oriented towards an open end of a shipping container. The
point
cloud includes a plurality of points that each have a respective depth value.
As described
above, if necessary due to the respective positioning and alignment of the
depth sensor
and the open end of the shipping container, the computing system 600, upon
receiving
the 3D point cloud, may rotate the received 3D point cloud to align (i) an
optical axis
of the depth sensor with a ground level and (ii) an image plane of the depth
sensor with
an end plane of the shipping container. This rotating of the received point
cloud may
be based on a calibration process (e.g., an offline calibration process) that
uses the
ground level and the end plane as reference.
[0065] At step 704, the computing system 600 segments the 3D point cloud
that
was received at step 702 among a plurality of grid elements. As described
above, those
grid elements could be substantially rectangular (e.g., square) in shape, and
they may
collectively form a 2D grid image that corresponds to a plane that is parallel
to the open
end of the shipping container, where each grid element has a respective grid-
element
area.
[0066] At step 706, the computing system 600 calculates a respective
loaded-
container-portion grid-element volume for each grid element. The computing
system
600 may do so by first determining a respective loaded-container-portion grid-
element
depth value for each grid element, and then determining each respective loaded-

container-portion grid-element volume for each grid element by multiplying the

particular grid element's area by the particular grid element's respective
loaded-
container-portion grid-element depth value. In some embodiments, the computing

system 600 cleans up the 2D grid image prior to determining a respective
loaded-
container-portion grid-element depth value for each grid element.
[0067] As to how the computing system 600 may determine a particular grid
element's respective loaded-container-portion grid-element depth value, in one

embodiment the computing system 600 determines an unloaded-container-portion
depth value for the particular grid element, and then determines the
respective loaded-
container-portion grid-element depth value for the particular grid element
based at least
in part on the difference between (i) a depth dimension of the shipping
container and
(ii) the determined unloaded-container-portion depth value for the
corresponding grid
element. Thus, for example, if the computing system 600 determined that the
unloaded-
container-portion depth value of a given grid element was 3 meters and knew
that the
depth dimension of the shipping container was 50 meters, the computing system
600
could determine that the loaded-container-portion depth value for the given
grid
element was 47 meters.
[0068] As to how the computing system 600 may determine the unloaded-
container-portion depth value for a given grid element, in some embodiments
the
computing system 600 assigns a characteristic grid-element depth value to the
given
grid element based on the depth values of the points in the point cloud that
correspond
to the given grid element. As described above, some options for doing so
include
selecting a minimum value, a mode value, and an average value. A maximum value

could also be selected, though this would tend to lead to underloading of
containers by
overestimating their fullness, which would be less than optimally efficient.
[0069] Upon assigning a characteristic grid-element depth value to the
given grid
element, the computing system 600 may then determine the respective unloaded-
container-portion depth value for the given grid element based at least in
part on the
difference between (i) the assigned characteristic grid-element depth value
for the given
grid element and (ii) an offset depth value corresponding to a depth between
the 3D
depth sensor and a front plane of the shipping container. Thus, if the depth
sensor
registers an absolute value of, e.g., 7 meters as a depth value for a given
point or grid
element and it is pre-provisioned or runtime determined that the depth sensor
is 4 meters
from the front plane of the open end of the shipping container, the computing
system
600 may consider the unloaded-container-portion depth value for that grid
element to
be 3 meters. And certainly numerous other examples could be listed.
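The arithmetic of the preceding paragraphs can be condensed into a short sketch; the worked values (a 7-meter sensor reading, a 4-meter standoff, and a 50-meter container depth) reproduce the examples above, and the function names are illustrative:

    def unloaded_depth(sensor_depth_m, sensor_to_front_plane_m):
        """Depth of empty space measured from the container's front plane,
        e.g., a 7 m reading minus a 4 m standoff gives 3 m."""
        return sensor_depth_m - sensor_to_front_plane_m

    def loaded_grid_element_volume(container_depth_m, unloaded_m, area_m2):
        """Loaded depth is the container's depth dimension minus the
        unloaded depth (e.g., 50 m - 3 m = 47 m); volume is depth x area."""
        return (container_depth_m - unloaded_m) * area_m2

    # Example for one 5 mm x 5 mm grid element, using the values above.
    volume_m3 = loaded_grid_element_volume(50.0, unloaded_depth(7.0, 4.0),
                                           0.005 * 0.005)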
[0070] In some cases, the depth dimension of the shipping container that
is used to
derive a loaded-container-portion depth value from an unloaded-container-
portion
depth value for a given grid element is a grid-element-specific depth
dimension that is
based on a corresponding grid element in a reference empty-container point
cloud. As
described above, the back wall could be flat or curved, as depicted in FIGs.
2A and 2B,
and the grid-element-specific depth dimension for a given grid element could
accordingly reflect this. A reference point cloud could be gathered using an
empty
shipping container of the same type, and that reference point cloud could be
stored in
data storage and recalled, perhaps on a grid-element-by-grid-element basis to
perform
the herein-described calculations.
[0071] At step 708, the computing system 600 calculates a loaded-container-

portion volume of the shipping container by aggregating the respective loaded-
container-portion grid-element volumes that were calculated at step 706,
giving a result
that corresponds to what volume (in, e.g., cubic meters) of the shipping
container has
been loaded. It is noted that loaded in this context essentially means no
longer available
for loading. Thus, empty space that is now inaccessible due to packages being
stacked
in the way would be counted as loaded right along with space in the shipping
container
that is actually occupied by a given package.
[0072] At step 710, the computing system 600 calculates an estimated
fullness of
the shipping container based on (i) the loaded-container-portion volume that
was
calculated at step 708 and (ii) a capacity of the shipping container. In
particular, the
estimated fullness of the shipping container may be calculated as the loaded-
portion
volume of the shipping container divided by the capacity of the shipping
container. The
capacity of the shipping container could be determined in multiple different
ways, some
of which are described below.
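A minimal sketch of steps 708 and 710 taken together, again for illustration only:

    def estimated_fullness(grid_element_volumes_m3, capacity_m3):
        """Aggregate the per-grid-element loaded volumes (step 708) and
        divide by the container's capacity (step 710)."""
        loaded_volume_m3 = sum(grid_element_volumes_m3)
        return loaded_volume_m3 / capacity_m3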
[0073] In one embodiment, the computing system 600 determines the capacity
of
the shipping container based at least in part on the received 3D point cloud.
Thus, the
3D point cloud may be indicative of the dimensions of the shipping container
such that
the capacity of the shipping container can be determined. In another
embodiment, the
computing system receives an optical image of the shipping container, and
determines
the capacity of the shipping container based at least in part on the received
optical image.
This could include determining actual dimensions of the shipping container
from the
optical image, and could instead or in addition include extracting an
identifier of the
shipping container from the optical image, perhaps using optical character
recognition
(OCR), and then querying a local or remote database using that identifier in
order to
retrieve dimension and/or capacity data pertaining to the particular shipping
container.
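One way the identifier-based lookup could be sketched is shown below; the identifiers, capacities, and in-memory table are hypothetical placeholders for whatever local or remote database a given deployment queries:

    # Hypothetical table mapping container identifiers to capacities (m^3).
    CONTAINER_CAPACITY_M3 = {
        "ABCU1234565": 33.2,
        "XYZU7654321": 67.7,
    }

    def capacity_from_identifier(ocr_text):
        """Resolve a container's capacity from an OCR-extracted identifier."""
        return CONTAINER_CAPACITY_M3[ocr_text.strip().upper()]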
[0074] It is noted that, in some embodiments, the system may determine
that the
entire interior of the shipping container is not visible to the depth sensor,
perhaps due
to the relative location and arrangement of the depth sensor and the shipping
container.
In such instances, the system may define a volume of interest (VOI) as being
the part
of the interior of the container that is visible to the depth sensor. The
system may in
some such instances calculate the estimated fullness of the container to be the loaded
portion of the VOI divided by the capacity (i.e., total volume) of the VOI. In
other
embodiments, the system may simply assume that any internal portion of the
shipping
container that cannot be seen with the depth camera is loaded, and in such
cases may
still calculate the estimated fullness as the loaded portion of the entire
shipping
container divided by the total capacity of the entire shipping container. And
certainly
other example implementations could be listed here as well.
[0075] At step 712, the computing system 600 outputs the calculated
estimated
fullness of the shipping container, perhaps to a display, perhaps to a data
storage,
perhaps using wireless and/or wired communication to transmit the calculated
estimated fullness of the shipping container to one or more other devices or
systems,
and/or perhaps to one or more other destinations.
[0076] FIG. 8 depicts a shipping container having an optically readable
identifier
in accordance with some embodiments. In particular, FIG. 8 depicts a container
802, an
indicia 804 (e.g., bar code or alphanumeric identifier), and an optical reader
806. There
are several different types of optical readers 806 that may be used, such as a
barcode
scanner, a camera, and/or the like. In one embodiment, the optical reader 806
acquires
an alphanumeric identifier of the container using OCR. The computing
system may
then use that acquired alphanumeric identifier of the container to query a
database for
dimension data pertaining to the shipping container. And certainly other
example
implementations could be listed here as well.
[0077] In some instances, there may be one or more moving or stationary
occlusions (e.g., package loaders, stray packages, etc.) between the 3D depth
sensor
and the loaded portion of the container. Some occlusions cause underestimates
of
container fullness, perhaps by being so close to the 3D depth sensor so as to
create gaps
in the point-cloud data. Some occlusions cause overestimates of container
fullness,
perhaps by being so close to actually loaded packages so as to be confused for
(e.g.,
clustered with) those loaded packages. Thus, as a general matter, unless
properly
detected and corrected for, the presence of occlusions can result in erroneous
estimation
of the fullness of the shipping container.
[0078] FIG. 9 depicts a second example method, in accordance with some
embodiments. In particular, FIG. 9 depicts a method 900, which includes the
steps of
receiving, at step 902, a depth frame from a depth sensor oriented towards an
open end
of a shipping container, where the depth frame includes a plurality of grid
elements that
each have a respective depth value. The method 900 further includes
identifying, at step
904, one or more occlusions in the depth frame. In some instances, only one or
more
far occlusions are detected. In some instances, only one or more close
occlusions are
detected. In some instances, both far and close occlusions are detected. The
method 900
further includes correcting, at step 906, the one or more occlusions in the
depth frame
using one or more temporally proximate depth frames, and outputting, at step
908, the
corrected depth frame for fullness estimation.
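A minimal sketch of the correction at step 906, assuming each depth frame is a 2D array of grid-element depth values accompanied by a boolean occlusion mask, and assuming neighbor frames are supplied in order of temporal proximity:

    import numpy as np

    def correct_occlusions(frame, occluded, neighbor_frames):
        """Overwrite occluded grid elements with data from temporally
        proximate frames. frame: 2D depth array; occluded: boolean mask of
        identified occlusions; neighbor_frames: (depth, mask) pairs."""
        corrected = frame.copy()
        remaining = occluded.copy()
        for nb_frame, nb_occluded in neighbor_frames:
            usable = remaining & ~nb_occluded  # occluded here, valid there
            corrected[usable] = nb_frame[usable]
            remaining &= ~usable
        return corrected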
[0079] FIG. 10 depicts an example scenario for detecting occlusions in a
shipping
container, in accordance with some embodiments. In particular, FIG. 10 depicts
an
example scenario in which a depth sensor 1030 is configured to collect depth
data while
oriented towards a shipping container 1000. In the example moment of the
depicted
scenario, there are two occluding objects: a close occluding object 1005 and a
far
occluding object 1015. As shown, close occlusions may be caused by occluding
objects
close to depth sensor 1030 (e.g., loaders, unloaded packages, see FIG. 10
object 1005).
In some embodiments, close occlusions appear as gaps or holes (no data (or no
valid
data)) in the 3D depth data, which may result in underestimated fullness, as
less than a
complete set of 3D depth volume data is processed. As close occlusions often
present
as gaps in data, they may also be referred to as "missing-data occlusions." As
shown,
an object 1005 is within the depth sensor's minimum detection range, and the
depth
sensor may therefore not provide any data for areas blocked by object 1005.
This gap
in the 3D depth data may cause the system to omit the volume occupied by
region 1010
while calculating fullness, thus resulting in an underestimate of the shipping-
container
fullness. In other embodiments, some depth sensors 1030 may output the minimum

range distance for any object detected within the minimum range, which would
result
in an over-estimation, as the system may assume packages are loaded in region
1010.
And certainly other example scenarios could be listed here as well.
[0080] In some embodiments, detecting missing-data occlusions includes
carrying
out sub-process 1100 as shown in FIG. 11. As shown, sub-process 1100 includes
the
steps of receiving a projected 2D image of grid elements at step 1101 and
creating a
binarization map at step 1102, performing at least one morphological opening
at step
1104, performing edge detection of the at least one morphological opening at
step 1106,
and determining occlusion contours based on the detected edges at step 1108.
In some
embodiments, the binarization map delineates between (i) grid elements for
which the
respective depth value is valid and (ii) grid elements for which the
respective depth
value is not valid (i.e., a map of valid data points and invalid (e.g.,
missing) data points).
In some embodiments, performing the morphological opening in step 1104
includes
identifying a cluster of grid elements in the binarization map for which the
respective
depth value is invalid. In some embodiments, the identified cluster of grid
elements
may need to exceed a predetermined occlusion-size threshold of grid elements
to be
determined to be a morphological opening, and thus a (suspected) close
occluding
object.
[0081] In some embodiments, the edge detection performed at step 1106 may
be
Canny edge detection. In some embodiments, performing the edge detection at
step
1106 may include determining the set of grid elements in the identified
cluster that
define the edges of the 2D grid image after morphological opening is
performed. In
some embodiments, this is done on a grid element-by-grid element basis. In
some
embodiments, the grid elements are single pixels in the point cloud. In some
embodiments, the grid elements are groups of pixels, and may be averaged
together (or
otherwise characterized using a single number, as described above). In some
embodiments, determining the occlusion contour at step 1108 includes forming
an
occlusion mask (including occlusion location, contour length, and a mask
image, in
some embodiments) based on grid elements that have been identified as edges in
the
previous step. In some embodiments, the occlusion contours are based on
contour
length and aspect ratio. Lastly, the occlusion location, contour length, and
mask image
may be output for occlusion correction (e.g., in step 906 of FIG. 9).
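Sub-process 1100 maps naturally onto standard image-processing primitives. The sketch below uses OpenCV for the morphological opening, Canny edge detection, and contour extraction; the kernel size and contour-length threshold are illustrative assumptions:

    import cv2
    import numpy as np

    def missing_data_occlusions(depth_grid, min_contour_len=50.0):
        """Return occlusion contours for clusters of grid elements with no
        valid depth data. depth_grid: 2D float array, NaN where invalid."""
        # Step 1102: binarization map (255 = invalid depth, 0 = valid).
        invalid = np.isnan(depth_grid).astype(np.uint8) * 255
        # Step 1104: morphological opening removes speckle, keeps clusters.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        opened = cv2.morphologyEx(invalid, cv2.MORPH_OPEN, kernel)
        # Step 1106: edge detection on the opened binarization map.
        edges = cv2.Canny(opened, 100, 200)
        # Step 1108: contour identification, filtered by contour length.
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours
                if cv2.arcLength(c, False) >= min_contour_len]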
[0082] The second type of occluding object discussed herein is the far occluding
object (see, e.g., FIG. 10, object 1015). In some embodiments, far occluding
objects
may include either or both of two different types of far occlusions: moving
occlusions
and discontinuous occlusions. Far occlusions may be caused by occluding
objects that
are further away from depth sensor 1030 (i.e., closer to the loaded packages
in the
container) as compared with the occluding objects characterized in this
description as close occlusions. Far occlusions may result in a
calculated
fullness that is overestimated. In some embodiments, the method for
calculating the
fullness assumes the shipping container has been loaded from back to
front. Thus,
if there is a discontinuous occlusion (e.g., a loader or a package that has
not been packed
yet), the system may assume that there are packages in the region 1020 behind
the
occluding object 1015, when in reality some of the space behind the object
1015 may
be unoccupied.
[0083] FIG. 12 illustrates a sub-process of detecting discontinuous
occlusions
(generally at 1205) and moving occlusions (generally at 1210).
[0084] As mentioned above, in some embodiments, it is assumed that
packages are
loaded from back to front in the shipping container; and accordingly, in some
embodiments, identifying a discontinuous occlusion includes identifying, at
step 1207,
a cluster of grid elements from a single frame of a 3D point cloud, where the
cluster of
grid elements has depth values that are smaller, by more than a threshold
difference, than a depth value of a loaded-portion boundary of the shipping
container (i.e., the
depth
values for the loaded packages). In some embodiments, the cluster of
discontinuous
occluding points is identified using clustering techniques that are commonly
known to
those of ordinary skill in the art. Similar to identifying close occlusions,
in some
embodiments, identifying discontinuous occlusions may include finding clusters
with
location and geometric constraints such as cluster width, length, and aspect
ratio in step
1209, and may further involve confirming that the identified cluster of grid
elements
exceeds a predetermined occlusion-size threshold. In some embodiments,
identifying
the discontinuous occlusion includes performing edge detection on the cluster
of grid
elements. In some embodiments, identifying the discontinuous occlusion
includes
performing contour identification on the cluster of grid elements. In some
embodiments,
the grid elements are single pixels, while in other embodiments the grid
elements are
groups of pixels.
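
As a rough illustration of steps 1207 and 1209, the sketch below clusters candidate grid elements with connected-component labeling and then applies size and aspect-ratio constraints. The depth margin, area threshold, and aspect limit are illustrative assumptions, not values from this disclosure.

```python
import numpy as np
import cv2  # OpenCV 4.x assumed

def detect_discontinuous_occlusions(grid_depth, boundary_depth,
                                    depth_margin=0.5, min_area=400,
                                    max_aspect=4.0):
    # Step 1207: grid elements whose depth is more than `depth_margin`
    # less than the loaded-portion boundary are candidate occluding points.
    candidate = (grid_depth < boundary_depth - depth_margin).astype(np.uint8) * 255

    # Step 1209: cluster the candidates and keep only clusters satisfying
    # the geometric constraints (size, width/length aspect ratio).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
    occlusion_mask = np.zeros_like(candidate)
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        aspect = max(w, h) / max(1, min(w, h))
        if area >= min_area and aspect <= max_aspect:
            occlusion_mask[labels == i] = 255
    return occlusion_mask
```
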
[0085] In some instances, objects (e.g., a loader) that are close to the
loaded
packages (i.e., far occlusions) may not be detected in single-frame analysis,
and
therefore temporal analysis (performed in step 1213) may be used to detect
moving
occlusions. In some embodiments, the transient nature of a given object may be
used in
identifying that object as being a moving occlusion. In some embodiments, this
transient nature may be perceived as depth values changing too much between two
adjacent frames for a given grid element, which may indicate movement rather
than a permanently loaded package in the corresponding location. In some
embodiments, it may be useful to note that, because packages are loaded in the
shipping container from back to front, the depth values of the grid elements in
the projected 2D image should progressively and consistently decrease from the
point of view of the depth sensor. In some embodiments, the
temporal analysis step includes identifying that the depth value associated
with a single
grid element decreases with respect to previous frames and then increases in
succeeding
frames in less than a threshold amount of time across multiple depth frames,
consistent
with what would occur if a transient object passed through the field of view
of the 3D
depth sensor.
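
The following is a minimal sketch of this per-grid-element temporal test, assuming depth values sampled over successive frames; the dip threshold and frame limit are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

def is_moving_occlusion(depth_history, dip_threshold=0.8, max_dip_frames=5):
    """depth_history: depth values of one grid element over successive frames."""
    d = np.asarray(depth_history, dtype=float)
    baseline = d[0]
    dipped_at = None
    for t in range(1, len(d)):
        if dipped_at is None:
            # Sudden decrease: something moved in front of the packages.
            if baseline - d[t] > dip_threshold:
                dipped_at = t
        # Depth recovered: the object moved out of the line of sight.
        elif d[t] >= baseline - dip_threshold:
            return (t - dipped_at) <= max_dip_frames
    return False

# The FIG. 13 pattern -- a dip at t3 that recovers by t4 -- is flagged:
print(is_moving_occlusion([3.0, 3.0, 1.2, 3.0, 3.0]))  # True
```
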
[0086] FIG. 13 depicts an example of temporal analysis, in accordance with
some
embodiments. In particular, FIG. 13 depicts a graph of depth values of an
example
individual grid element in five temporally proximate depth frames, depicted as
corresponding with time intervals t1-t5. In some embodiments, and without
limitation, each time interval may be 1/10 of a second. As shown, the depth
value at t3 differs by more than an example threshold depth change from at
least one of the t2 depth value and the t4 depth value, and therefore the grid
element in depth frame t3 may be
determined to be
part of a moving occlusion. In some embodiments, detecting a moving occlusion
includes analyzing multiple grid elements in proximity to the detected moving
occlusion grid element in order to detect a full far occluding object. In some
embodiments, the estimated fullness level may be subject to a predetermined
change threshold between adjacent depth frames; i.e., if a change in the
estimated fullness level exceeds a predetermined limit, it may indicate the
presence of a loader, for example. In
other words, if a loader is relatively near the depth sensor in a region of
the shipping
container that has not been loaded yet (but not so near to the depth sensor as
to cause
missing data), there may be a large spike in shipping-container fullness
estimation if
that transient occlusion were not detected and corrected for.
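
One simple guard consistent with this paragraph, sketched here under assumed threshold values rather than any rule stated in this disclosure, is to compare successive fullness estimates against a change limit:

```python
def fullness_spike(prev_fullness, curr_fullness, max_step=0.05):
    # A jump in estimated fullness between adjacent depth frames beyond a
    # preset limit suggests a transient occluder (e.g., a loader) rather
    # than genuinely loaded packages.
    return (curr_fullness - prev_fullness) > max_step
```
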
[0087] In some embodiments, the one or more identified occlusions
correspond to
an occlusion set of the grid elements in the depth frame, and correcting the
one or more
occlusions in the depth frame using one or more temporally proximate depth
frames
includes overwriting the occlusion set in the depth frame with data from
corresponding
non-occluded grid elements from one or more of the temporally proximate depth
frames.
In other words, the non-occluded grid elements of the temporally nearest depth
frame may be used to fill in the occlusion set of grid elements in the current
occluded depth frame.
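
A minimal sketch of this fill-in step, assuming per-frame boolean occlusion masks (all names are illustrative):

```python
import numpy as np

def correct_occlusions(current, occlusion_mask, proximate_frames):
    """current: 2D depth array; occlusion_mask: True where occluded;
    proximate_frames: list of (frame, mask) pairs, nearest in time first."""
    corrected = current.copy()
    remaining = occlusion_mask.copy()
    for frame, mask in proximate_frames:
        # Overwrite only elements still occluded here but valid there.
        fill = remaining & ~mask
        corrected[fill] = frame[fill]
        remaining &= ~fill
        if not remaining.any():
            break  # every occluded element has been filled in
    return corrected
```
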
[0088] FIGs. 14A and 14B depict examples of shipping-container-fullness
estimation without occlusion correction and with occlusion correction,
respectively. As
shown, the x-axis of FIGs. 14A and 14B represents time, while the y-axis
represents
the current shipping-container fullness estimation. As shown, FIG. 14A includes
results in which occlusions are not corrected for (such as between times
~150-225). Methods described
herein detect and correct for these occlusions, and a more accurate shipping-
container
fullness estimation over time is achieved, as shown by the smoothly increasing
curve
of FIG. 14B.
[0089] All occlusion scenarios cause the loss of valid 3D measurements of the
packages occluded by loaders or other transiting or stationary objects in front
of the packages. This yields inaccurate fullness estimation (either under- or
overestimation).
[0090] This disclosure proposes solutions for each of these two types of
occlusions. In cases of close occlusions (which result in underestimation of
container fullness), gaps are detected from the 3D depth data. Several
geometric constraints, including the contour length and aspect ratio of the
gaps, are used to identify true occlusions.
In cases of far occlusions (which result in overestimation of container
fullness),
clustering and temporal analysis may be used to identify such occlusions.
[0091] Container-fullness level needs to be estimated reliably even when
occlusions are present. The 3D depth data are corrected based on temporal
analysis of
multiple loading frames after the occlusions are identified. Specifically,
each frame is
compared with its adjacent frames, and the occluded areas are "filled" with
data from
corresponding non-occluded areas from adjacent frames. The fullness level is
then
estimated from the corrected data.
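
For completeness, a deliberately simplified fullness computation over the corrected data, not the exact estimator of this disclosure, could read each grid element's depth as free space along the container length:

```python
import numpy as np

def estimate_fullness(corrected_depth, container_length):
    # Each grid element's depth is read as free space between the sensor
    # and the nearest package; fullness is the complementary fraction.
    free_fraction = np.clip(corrected_depth / container_length, 0.0, 1.0)
    return 1.0 - free_fraction.mean()  # 0.0 = empty, 1.0 = full

# Packages 2.5 m from the sensor everywhere in a 10 m container -> 0.75:
print(estimate_fullness(np.full((10, 10), 2.5), container_length=10.0))
```
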
[0092] In the foregoing specification, specific embodiments have been
described.
However, one of ordinary skill in the art appreciates that various
modifications and
changes can be made without departing from the scope of the disclosure as set
forth in
the claims below. Accordingly, the specification and figures are to be
regarded in an
illustrative rather than a restrictive sense, and all such modifications are
intended to be
included within the scope of present teachings.
[0093] The benefits, advantages, solutions to problems, and any element(s)
that
may cause any benefit, advantage, or solution to occur or become more
pronounced are
not to be construed as critical, required, or essential features or elements
of any or all
the claims.
[0094] Moreover, in this document, relational terms such as first and
second, top
and bottom, and the like may be used solely to distinguish one entity or
action from
another entity or action without necessarily requiring or implying any actual
such
relationship or order between such entities or actions. The terms "comprises,"
"comprising," "has," "having," "includes," "including," "contains,"
"containing," or any other variation thereof, are intended to cover a
non-exclusive inclusion,
such that a
process, method, article, or apparatus that comprises, has, includes, contains
a list of
elements does not include only those elements but may include other elements
not
expressly listed or inherent to such process, method, article, or apparatus.
An element
preceded by "comprises ...a", "has ...a", "includes ...a", or "contains ...a"
does not,
without more constraints, preclude the existence of additional identical
elements in the
process, method, article, or apparatus that comprises, has, includes, contains
the
element. The terms "a" and "an" are defined as one or more unless explicitly
stated
otherwise herein. The terms "substantially", "essentially", "approximately",
"about" or
any other version thereof, are defined as being close to as understood by one
of ordinary
skill in the art, and in one non-limiting embodiment the term is defined to be
within
10%, in another embodiment within 5%, in another embodiment within 1% and in
another embodiment within 0.5%. The term "coupled" as used herein is defined
as
connected, although not necessarily directly and not necessarily mechanically.
A device
or structure that is "configured" in a certain way is configured in at least
that way, but
may also be configured in ways that are not listed.
[0095] It will be appreciated that some embodiments may comprise
one or
more generic or specialized processors (or "processing devices") such as
microprocessors, digital signal processors, customized processors and field
programmable gate arrays (FPGAs) and unique stored program instructions
(including
both software and firmware) that control the one or more processors to
implement, in
conjunction with certain non-processor circuits, some, most, or all of the
functions of
the method and/or apparatus described herein. Alternatively, some or all
functions
could be implemented by a state machine that has no stored program
instructions, or in
one or more application specific integrated circuits (ASICs), in which each
function or
some combinations of certain of the functions are implemented as custom logic.
Of
course, a combination of the two approaches could be used.
[0096] Moreover, an embodiment can be implemented as a computer-readable
storage medium having computer readable code stored thereon for programming a
computer (e.g., comprising a processor) to perform a method as described and
claimed
herein. Examples of such computer-readable storage mediums include, but are
not
limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic
storage device,
a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an
EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically
Erasable Programmable Read Only Memory) and a Flash memory. Further, it is
expected that one of ordinary skill, notwithstanding possibly significant
effort and
many design choices motivated by, for example, available time, current
technology,
and economic considerations, when guided by the concepts and principles
disclosed
herein will be readily capable of generating such software instructions and
programs
and ICs with minimal experimentation.
[0097] The Abstract of the Disclosure is provided to allow the reader to
quickly
ascertain the nature of the technical disclosure. It is submitted with the
understanding
that it will not be used to interpret or limit the scope or meaning of the
claims. In
addition, in the foregoing Detailed Description, it can be seen that various
features are
grouped together in various embodiments for the purpose of streamlining the
disclosure.
This method of disclosure is not to be interpreted as reflecting an intention
that the
claimed embodiments require more features than are expressly recited in each
claim.
Rather, as the following claims reflect, inventive subject matter lies in less
than all
features of a single disclosed embodiment. Thus the following claims are
hereby
incorporated into the Detailed Description, with each claim standing on its
own as a
separately claimed subject matter.

Administrative Status


Title Date
Forecasted Issue Date 2020-07-14
(86) PCT Filing Date 2016-11-10
(87) PCT Publication Date 2017-05-26
(85) National Entry 2018-05-15
Examination Requested 2018-05-15
(45) Issued 2020-07-14

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $210.51 was received on 2023-10-19


Upcoming maintenance fee amounts

Description Date Amount
Next Payment if standard fee 2024-11-12 $277.00
Next Payment if small entity fee 2024-11-12 $100.00

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Request for Examination $800.00 2018-05-15
Application Fee $400.00 2018-05-15
Maintenance Fee - Application - New Act 2 2018-11-13 $100.00 2018-10-23
Maintenance Fee - Application - New Act 3 2019-11-12 $100.00 2019-11-05
Final Fee 2020-08-27 $300.00 2020-05-06
Maintenance Fee - Patent - New Act 4 2020-11-10 $100.00 2020-10-21
Maintenance Fee - Patent - New Act 5 2021-11-10 $204.00 2021-10-20
Maintenance Fee - Patent - New Act 6 2022-11-10 $203.59 2022-10-24
Maintenance Fee - Patent - New Act 7 2023-11-10 $210.51 2023-10-19
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
SYMBOL TECHNOLOGIES, LLC
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

List of published and non-published patent-specific documents on the CPD.



Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Final Fee 2020-05-06 1 46
Cover Page 2020-06-30 1 34
Representative Drawing 2018-05-15 1 5
Representative Drawing 2020-06-30 1 3
Abstract 2018-05-15 2 68
Claims 2018-05-15 9 296
Drawings 2018-05-15 10 142
Description 2018-05-15 23 1,179
Representative Drawing 2018-05-15 1 5
Patent Cooperation Treaty (PCT) 2018-05-15 1 40
International Search Report 2018-05-15 5 127
Declaration 2018-05-15 1 29
National Entry Request 2018-05-15 5 138
Voluntary Amendment 2018-05-15 8 260
Claims 2018-05-16 6 220
Cover Page 2018-06-14 1 37
Change to the Method of Correspondence 2019-01-02 3 160
PCT Correspondence 2019-01-02 3 159
PCT Correspondence 2019-03-01 3 148
Examiner Requisition 2019-03-26 6 295
Amendment 2019-09-25 9 360
Claims 2019-09-25 5 195