Patent 2862421 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2862421
(54) English Title: METHOD OF RECLAIMING PRODUCTS FROM A RETAIL STORE
(54) French Title: PROCEDE DE REPRISE DE PRODUITS D'UN MAGASIN DE DETAIL
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 30/08 (2012.01)
  • G06Q 30/00 (2012.01)
(72) Inventors :
  • BONNER, BRENT BRACEWELL (United States of America)
  • CARMACK, ROBERT EDWARD (United States of America)
  • JONES, TITUS ARTHUR (United States of America)
  • MEISER, DOUGLAS STEVEN (United States of America)
  • NORRIS, CHRISTOPHER DALE (United States of America)
  • WELLIKOFF, NATHANIEL (United States of America)
  • ZETTLER, MICHAEL JOHN (United States of America)
(73) Owners :
  • SUNRISE R&D HOLDINGS, LLC. (United States of America)
(71) Applicants :
  • SUNRISE R&D HOLDINGS, LLC. (United States of America)
(74) Agent: OYEN WIGGS GREEN & MUTALA LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2013-01-09
(87) Open to Public Inspection: 2013-07-18
Examination requested: 2017-10-19
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2013/020854
(87) International Publication Number: WO2013/106446
(85) National Entry: 2014-06-27

(30) Application Priority Data:
Application No. Country/Territory Date
13/346,335 United States of America 2012-01-09

Abstracts

English Abstract

A method of reclaiming products from a retail store includes passing a plurality of products through an object identification system, capturing images of the plurality of products with object sensors as the products pass through the object identification system, processing the images to identify an indicium for each product, transmitting the indicium for each product through a wireless communication network to a logic engine, sorting the plurality of products based on the indicia to produce a bundled lot of products, producing a unique identifier for the bundled lot, the unique identifier having a machine readable code, and communicating the unique identifier of the bundled lot and the indicia of the products in the bundled lot to an auctioneer for initiation of a direct-to-consumer auction for auctioning of the bundled lot directly to a group of bidders.


French Abstract

L'invention concerne un procédé de reprise de produits, d'un magasin de détail, qui consiste à faire passer une pluralité de produits devant un système d'identification d'objet, à capturer des images de la pluralité de produits avec des capteurs d'objet lorsque les produits passent devant le système d'identification d'objet, à traiter les images pour identifier un indice pour chaque produit, à transmettre l'indice pour chaque produit sur un réseau de communication sans fil à un moteur logique, à trier la pluralité de produits sur la base des indices pour produire un lot groupé de produits, à produire un identificateur unique pour le lot groupé, l'identificateur unique ayant un code lisible par machine, et à communiquer l'identificateur unique du lot groupé et les indices des produits dans le lot groupé à un commissaire-priseur pour initier une vente aux enchères s'adressant directement aux consommateurs pour vendre aux enchères le lot groupé directement à un groupe d'enchérisseurs.

Claims

Note: Claims are shown in the official language in which they were submitted.


WHAT IS CLAIMED IS:
1. A method of reclaiming products from a retail store, the method comprising:
passing a plurality of products through an object identification system;
capturing images of the plurality of products with object sensors as the products pass through the object identification system;
processing the images to identify an indicium for each product;
transmitting the indicium for each product through a wireless communication network to a logic engine;
sorting the plurality of products based on the indicia to produce a bundled lot of products;
producing a unique identifier for the bundled lot, the unique identifier having a machine readable code; and
communicating the unique identifier of the bundled lot and the indicia of the products in the bundled lot to an auctioneer for initiation of a direct-to-consumer auction for auctioning of the bundled lot directly to a group of bidders.
2. The method of claim 1, further comprising storing the indicium of each product in a database, and associating the indicium of each product with the unique identifier of the associated bundled lot in the database.
3. The method of claim 2, further comprising saving auction end details in the database for each product sold in the direct-to-consumer auction.
4. The method of claim 3, wherein the auction end details comprise the unique identifier, a total cost of the plurality of products in the bundled lot of products, a winning bid amount, and a date and time the direct-to-consumer auction closed.
5. The method of claim 1, wherein the indicium comprises a bar code and a plurality of characters, and the processing the images comprises identifying the characters using an algorithm selected from the group consisting of an optical character recognition algorithm and a matching algorithm that is based on a comparison between character shape and a library comprising selected possible character shapes.

6. The method of claim 1, further comprising transmitting at least one image for each product in the bundled lot through the wireless communication network to the logic engine, and communicating the at least one image with the unique identifier and the indicia of products in the bundled lot.
7. The method of claim 1, wherein a purchaser of the bundled lot is selected from a group of bidders in the direct-to-consumer auction.
8. The method of claim 7, wherein the purchaser is a best bidder of the group of bidders.
9. A method of reclaiming products from a retail store, the method comprising:
scanning a plurality of products, one at a time, with an object identification system;
producing a bar code for each product based on said scanning;
transmitting the bar code for each product through a wireless communication network to a logic engine;
sorting the plurality of products based on the bar codes to produce a bundled lot of products;
producing a unique identifier for the bundled lot, the unique identifier having a machine readable code; and
communicating the unique identifier of the bundled lot and the bar codes of the products in the bundled lot to an auctioneer for initiation of a direct-to-consumer auction for auctioning of the bundled lot directly to a group of bidders.
10. The method of claim 9, further comprising storing the bar code of each product in a database, and associating the bar code of each product with the unique identifier of the associated bundled lot in the database.
11. The method of claim 10, further comprising saving auction end details in the database for each product sold in the direct-to-consumer auction.
12. The method of claim 11, wherein the auction end details comprise the unique identifier, a total cost of the plurality of products in the bundled lot of products, a winning bid amount, and a date and time the direct-to-consumer auction closed.

13. The method of claim 9, further comprising:
capturing at least one image of each product with object sensors as the products are scanned with the object identification system;
transmitting the at least one image for each product in the bundled lot through the wireless communication network to the logic engine; and
communicating the at least one image with the unique identifier and the bar codes of the products in the bundled lot.
14. The method of claim 9, wherein a purchaser of the bundled lot is selected from a group of bidders in the direct-to-consumer auction.
15. The method of claim 14, wherein the purchaser is a best bidder of the group of bidders.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHOD OF RECLAIMING PRODUCTS FROM A RETAIL STORE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority from U.S. Application Serial No. 13/346,335, filed January 9, 2012, which is a continuation-in-part of U.S. Application Serial No. 13/032,086, filed February 22, 2011 and issued as U.S. Patent No. 8,108,265 on January 31, 2012, which is a divisional of U.S. Application Serial No. 12/732,750, filed on March 26, 2010 and issued as U.S. Patent No. 7,917,405 on March 29, 2011, which claims priority to, cross-references and incorporates by reference in full U.S. Provisional Patent Application 61/163,644, filed on March 26, 2009. U.S. Application Serial No. 12/732,750 is a continuation-in-part of and claims priority to U.S. Application Serial No. 12/408,581, filed March 20, 2009 and issued as U.S. Patent No. 7,742,952 on June 22, 2010, which is a continuation-in-part of U.S. Application Serial No. 12/353,817, filed January 14, 2009 and issued as U.S. Patent No. 7,734,513 on June 8, 2010, and a continuation-in-part of U.S. Application Serial No. 12/353,760, filed January 14, 2009 and issued as U.S. Patent No. 7,739,157 on June 15, 2010, each of which is a continuation-in-part application of U.S. Application Serial No. 12/172,326, filed July 14, 2008 and issued as U.S. Patent No. 7,672,876 on March 2, 2010. This application is also a continuation-in-part of U.S. Application Serial No. 13/047,532, filed March 14, 2011 and currently pending, which claims priority to U.S. Provisional Application No. 61/430,804, filed January 7, 2011, and U.S. Provisional Application No. 61/313,256, filed March 12, 2010. All of the above-referenced applications are incorporated by reference herein in their entireties.
FIELD
[0002] The present invention generally relates to a method of reclaiming product from a retail store. More specifically, the present invention relates to a method of reverse logistics using an auction to sell products directly to consumers from a store instead of using a third party reclamation company.
BACKGROUND
[0003] Typically, a product moves through each step of a supply chain to bring the product closer to an end consumer. Products usually move from a manufacturer to a warehouse, to a distributor, to a retail store, and finally to a consumer. Sometimes a product must move at least one step backwards in the supply chain for a number of reasons. In some cases, the product shipped to the store or purchased by the consumer is the wrong product, size, shape, color, type, and/or kind. Also, a consumer may not be satisfied with a product once it is purchased and may want to return it. Regardless of the reason, there are many occasions when products are returned to retailers, wholesalers, or manufacturers through reverse logistics. It is estimated that reverse logistics costs account for almost one percent of the total United States gross domestic product.
[0004] In addition to the above examples of products that are returned by consumers to the store from which they purchased the product, there are many products that reach the store, but are never displayed for a potential consumer's purchase. Moreover, some products are displayed in a store for potential purchase, but for a variety of reasons, the products are never sold, which is the final step of the supply chain. There are a plethora of reasons a product is sent backwards through the supply chain without having been purchased and then returned to the store and/or distributor. These reasons include without limitation the following: products that are defective, products nearing an expiration date, products that are damaged, products that are discontinued, surplus products, products on recall, and products delivered as part of a promotion. The process of moving products backward through the supply chain at least one step in the chain is commonly known as "reverse logistics".
[0005] Generally, the goal of a reverse logistics process is to move these unsold or returned products through the supply chain in reverse order to recover residual product value. Typically, retailers can return products to suppliers for a small credit. Today, retailers are often forced to bear the cost and financial loss of products that cannot be sold to a secondary retailer and cannot be returned to the supplier for a small credit. Businesses utilizing reverse logistics processes are concerned about how frequently the reverse logistics process must be used because of the high cost and attendant profit loss.
[0006] In today's marketplace, many retailers treat the return of products and reverse logistics processes as individual, disjointed transactions. The challenge for retailers is to recover the greatest amount of value spent on unsalable products in a manner that provides quick, efficient, and cost-effective collection, reclamation, and resale.
[0007] Many retailers make use of a third-party reverse logistics company to assist with the reverse logistics process. Third-party reverse logistics providers see that up to seven percent (7%) of an enterprise's gross sales are captured by return costs. Third-party reverse logistics providers can realize between about twelve percent (12%) and about fifteen percent (15%) profit on their business.
[0008] While embodiments of the invention disclosed herein extend to a wide variety of retail applications, embodiments of the present invention are particularly well-suited, but not limited, to retail grocery stores. In the grocery environment, reverse logistics is typically not applied on a small scale, due to the relatively low cost of individual items. When a grocery product is damaged or discontinued, on recall, or approaching or past an expiration date, it is removed from the shelf and checked out of the store's inventory system, which begins the reverse logistics process.
[0009] Under the typical reverse logistics process, the unique identifiers or barcodes of products unfit for sale are scanned out of the store's inventory system and the products are sorted into a collection of unsalable products. Eventually, these unsalable products are placed into boxes or onto skids, which are shipped via truck to the nearest distribution center of the store. At the distribution center, similar shipments of unsalable products are received from multiple stores throughout the region, and consolidated onto pallets.
[0010] Next, the pallets of boxes of unsalable products are shipped to a reclamation center. The reclamation center does not know the identity of the unsalable products it is receiving because in the typical reclamation process, neither the retailer nor the distribution center tracks the identity of each product placed onto the pallet. Similarly, in the typical reclamation process, neither the retailer nor the distribution center tracks the condition of each unsalable product placed onto the pallet. The reclamation center processes consolidated shipments of unsalable products from various companies, including the store, and handles them appropriately. For instance, upon initial arrival at the reclamation center, unsalable products are examined. Leaking and otherwise heavily damaged products are disposed of by the reclamation center. The remaining unsalable products are sorted according to the disposition service requested by the product manufacturer. While some unsalable products may be returned to the product manufacturer for a small amount of credit, other unsalable products might be donated to charity services, disposed of at a loss to the store, destroyed if on recall, or be grouped with similar unsalable products (coffee products, for example) and then sold off by the pallet to secondary retailers.
[0011] Current methods use an ad-hoc approach to the reverse logistics processes. In other words, no standardized method exists for packaging, locating packages during the process, or shipping unsalable products. What is needed therefore is a method for retailers to standardize packaging, locating unsalable products, and shipping unsalable products all from the store, rather than shipping the unsalable products to one or more third parties.
[0012] Regardless of the specific form of disposition, reclamation centers typically charge retailers handling and storage fees for each item handled. Such fees typically range from about twenty-five cents to about thirty-five cents, depending upon the associated agreements. The fees vary according to the disposition path of the unsalable products, and a penalty fee is charged if an undamaged unsalable product is delivered to the reclamation center. Typically, a fee is assessed even if the unsalable product is ultimately destroyed or disposed of at the reclamation center. Many inefficiencies exist in the typical reverse logistics procedure, including loss of product value, theft, loss of product, inefficient boxing and packaging, and inefficiencies caused by several instances of shipping between retailers, warehouses, distribution centers, reclamation centers, and manufacturers.
[0013] Despite the substantial financial stake companies have in the current reverse logistics process, the processes suffer from a number of defects and inefficiencies that are addressed by the present invention. The first common inefficiency in contemporary reverse logistics processes is an inefficient use of resources. Many companies use reverse logistics methods which are not standardized and suffer from excessive costs associated with shipping, boxing, excessive product handling, and inventory management. Second, buyers of these pallets of unsalable products typically have no way of knowing the contents of the pallets with any level of precision. Third, many unsalable products are exempted from the current method of reselling items packaged in large pallets.
[0014] Current reverse logistics methods often severely limit the market of potential buyers of unsalable products, because the unsalable products are sold only in large pallets, which are only useful for large entities like discount stores and such, which have the means and demand for products in bulk. Accordingly, current reclamation methods are not able to be downsized in scale to the sale of a box to a single customer. What is therefore needed is a scalable reclamation method to sell directly to individuals and which may eliminate the use of third party reclamation companies.
[0015] Also, in current reverse logistics processes, many unsalable products are ineligible for reclamation by the original product manufacturers. While some product manufacturers do not participate at all, others opt for up-front negotiation of reclamation credit. Furthermore, some unsalable products are exempted from the reclamation process because they are hazardous, other products are exempted from the reclamation process because they are perishable, and some private label products effectively offer no return for unsalable products. What is therefore needed is a reverse logistics method that can easily categorize the various types of unsalable products to determine which ones are auctionable direct to the consumer from the retail establishment itself.
[0016] Current reverse logistics processes also make it extremely difficult, if not impossible, to record and access certain information in compliance with record keeping requirements of FDA regulations and other government requirements, such as The Bioterrorism Act. For example, if a store knowingly sells products to wholesalers or other businesses, then the store is required to maintain certain records including the name and address of the firm buying the products, telephone and fax numbers (as well as email addresses, if available) of the purchaser, type of food (including brand name and specific product name), date of sale, quantity and type of packaging (e.g., 12 oz. cans), immediate transporter to buyer, and lot codes from the manufacturer. Because existing reverse logistics processes are unable to record this information, stores are limited to selling to end users (consumers), donating unsalable products, or failing to meet the government regulations. Therefore, what is needed is a reverse logistics method that can maintain and access suitable records to demonstrate compliance with such federal regulations.
[0017] In a variety of environments, it may be useful to identify objects and to read coded information related to those objects. For example, point-of-sale (POS) systems make use of bar code readers to identify products to be purchased. Likewise, shipping, logistics and mail sorting operations may make use of automated identification systems. Depending on the context, coded information may include prices, destinations, or other information relating to the object on which the code is placed. In general, it is useful to reduce a number of errors or exceptions that require human intervention in the operation.
SUMMARY
[0018] Accordingly, the present invention relates to one or more methods of reverse logistics. More specifically, the present invention is in the technical field of handling and recovering at least some of the value of unsalable products in a store. Under the present invention a large number of unsalable products will be sold through an electronic, virtual auction or through some other, more rudimentary auction means. The methods disclosed herein provide ways for retailers to standardize packaging, locate unsalable products, and ship unsalable products from the store directly to consumers, rather than shipping the unsalable products to one or more third parties or middlemen.
[0019] According to an aspect of the present invention, there is provided a method of reclaiming products from a retail store. The method includes passing a plurality of products through an object identification system; capturing images of the plurality of products with object sensors as the products pass through the object identification system; processing the images to identify an indicium for each product; transmitting the indicium for each product through a wireless communication network to a logic engine; sorting the plurality of products based on the indicia to produce a bundled lot of products; producing a unique identifier for the bundled lot, the unique identifier having a machine readable code; and communicating the unique identifier of the bundled lot and the indicia of the products in the bundled lot to an auctioneer for initiation of a direct-to-consumer auction for auctioning of the bundled lot directly to a group of bidders.
[0020] In an embodiment, the method further comprises storing the indicium of each product in a database, and associating the indicium of each product with the unique identifier of the associated bundled lot in the database.
[0021] In an embodiment, the indicium comprises a bar code and a plurality of characters, and the processing the images comprises identifying the characters using an algorithm selected from the group consisting of an optical character recognition algorithm and a matching algorithm that is based on a comparison between character shape and a library comprising selected possible character shapes.
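
Purely as an illustration of the shape-matching alternative named above, the following Python sketch compares an extracted character bitmap against a library of selected possible character shapes. The bitmap format, library contents, and similarity threshold are assumptions for illustration; the patent does not specify an implementation.

```python
# Illustrative sketch only: a matching algorithm that compares a character's
# shape against a library of selected possible character shapes, as an
# alternative to optical character recognition. The 4x4 bitmaps and the
# similarity threshold are invented for this example.
CHARACTER_LIBRARY = {
    "0": (0,1,1,0, 1,0,0,1, 1,0,0,1, 0,1,1,0),
    "1": (0,0,1,0, 0,1,1,0, 0,0,1,0, 0,1,1,1),
    # ... the remaining characters would be loaded from the shape library
}

def match_character(shape, min_score=0.8):
    """Return the library character whose shape best matches `shape`, or
    None if nothing is similar enough (an exception left for human review)."""
    best_char, best_score = None, 0.0
    for char, template in CHARACTER_LIBRARY.items():
        score = sum(a == b for a, b in zip(shape, template)) / len(template)
        if score > best_score:
            best_char, best_score = char, score
    return best_char if best_score >= min_score else None
```
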
[0022] In an embodiment, the method further comprises transmitting at least one image for each product in the bundled lot through the wireless communication network to the logic engine, and communicating the at least one image with the unique identifier and the indicia of products in the bundled lot.
[0023] According to an aspect of the present invention, there is provided a method of reclaiming products from a retail store. The method includes scanning a plurality of products, one at a time, with an object identification system; producing a bar code for each product based on said scanning; transmitting the bar code for each product through a wireless communication network to a logic engine; sorting the plurality of products based on the bar codes to produce a bundled lot of products; producing a unique identifier for the bundled lot, the unique identifier having a machine readable code; and communicating the unique identifier of the bundled lot and the bar codes of the products in the bundled lot to an auctioneer for initiation of a direct-to-consumer auction for auctioning of the bundled lot directly to a group of bidders.
[0024] In an embodiment, the method further comprises storing the bar code of each product in a database, and associating the bar code of each product with the unique identifier of the associated bundled lot in the database.
[0025] In an embodiment, the method further comprises capturing at least one image of each product with object sensors as the products are scanned with the object identification system; transmitting the at least one image for each product in the bundled lot through the wireless communication network to the logic engine; and communicating the at least one image with the unique identifier and the bar codes of the products in the bundled lot.
[0026] In an embodiment of methods according to aspects of the invention, the methods further comprise saving auction end details in the database for each product sold in the direct-to-consumer auction.
[0027] In an embodiment of methods according to aspects of the invention, the auction end details comprise the unique identifier, a total cost of the plurality of products in the bundled lot of products, a winning bid amount, and a date and time the direct-to-consumer auction closed.
[0028] In an embodiment of methods according to aspects of the invention, a purchaser of the bundled lot is selected from a group of bidders in the direct-to-consumer auction.
[0029] In an embodiment of methods according to aspects of the invention, the purchaser is a best bidder of the group of bidders.
[0030] The above summary section is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description section. The summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] These and other features will become better understood with regard to the following description, pending claims and accompanying drawings, in which like reference numerals identify like elements and in which:
[0032] Figure 1 provides a flowchart representing an embodiment of a method of collecting and profiting from auctionable products according to the present invention;
[0033] Figure 2 provides a flowchart of an embodiment of a pre-auction process;
[0034] Figure 3 provides a flowchart of an embodiment of an auction method;
[0035] Figure 4 provides a flowchart of an embodiment of a post-auction method;
[0036] Figure 5 schematically illustrates an embodiment of a system for item identification;
[0037] Figure 6A is an oblique view of an embodiment of a system for item identification;
[0038] Figure 6B is an oblique view of the system of Figure 6A;
[0039] Figure 7A is an oblique right side view of an embodiment of a system for item identification;
[0040] Figure 7B is a top plan view of an embodiment of a system for item identification;
[0041] Figure 7C is a right elevation view of an embodiment of a system for item identification;
[0042] Figure 8A is a left elevation view of an embodiment of a system for item identification;
[0043] Figure 8B is an oblique left side view of an embodiment of a system for item identification;
[0044] Figure 9A is an oblique cutaway left side view of an embodiment of a system for item identification;
[0045] Figure 9B is a cutaway left elevation view of an embodiment of a system for item identification;
[0046] Figure 10A is a cutaway left elevation view of an embodiment of a system for item identification;
[0047] Figure 10B is an oblique cutaway top view of an embodiment of a system for item identification;
[0048] Figure 11A is an oblique cutaway left side view of an embodiment of a system for item identification;
[0049] Figure 11B is a cutaway left elevation view of an embodiment of a system for item identification;
[0050] Figures 12-16 are data flow diagrams illustrating data flow through an embodiment of a system for item identification and its subsystems;
[0051] Figure 17 is a timing diagram illustrating output of certain sensors in an embodiment of a system for item identification;
[0052] Figure 18 is a data flow diagram illustrating data flow through an embodiment of a subsystem of a system for item identification; and
[0053] Figure 19 is a data flow diagram illustrating data flow through an embodiment of a subsystem of a system for item identification.
DETAILED DESCRIPTION
[0054] The embodiments disclosed herein provide reverse logistics methods for retailers to standardize packaging, locate unsalable products, sell auctionable products to consumers directly from the retail establishment, and ship substantially unsalable products from a store, rather than ship the substantially unsalable products to one or more middlemen, consequently making reverse logistics more efficient. Thus, as used herein the term "unsalable" refers to items for consumer use typically sold in a retail channel that for one or more reasons are no longer salable in the retail channel and/or are eligible for sale outside of the retail channel. An example of an unsalable product is a product that is close to or has passed its due date for retail sale. Another example of an unsalable product is a product that may continue to be sold in its original retail channel but that its manufacturer or retailer determines is a suitable candidate for treatment or resale through reverse logistics.
[0055] In certain embodiments, individual consumers purchase unsalable products directly from the store at discounted prices through typical forward logistics means. The store therefore sells its products to a wider target audience and recovers a larger percentage of its investment in the unsalable products. The disclosed method(s) cut out one or more entities acting as middlemen in the reverse logistics supply chain. Selected embodiments additionally reduce the cost of reverse logistics, generate additional revenue, and are environmentally friendly.
[0056] Some embodiments reduce expenses associated with the transport and handling of unsalable products. Due to the nature of auctions, and specifically that the bidders assume responsibility for the cost of shipping, stores benefit by no longer paying to transport unsalable products back to distribution centers and reclamation centers with the possibility of zero return on the residual value of the unsalable products. Many businesses do not currently profit from unsalable products moving through the traditional reverse logistics system because reimbursements from manufacturers and donation tax credits simply alleviate financial burdens associated with traditional reverse logistics.
[0057] In addition to lessening the expense associated with traditional reverse logistics, embodiments have the potential to generate increased revenue through real-time virtual auctions of typically substantially unsalable products. Revenue is increased by tapping previously untapped markets in the current reverse logistics processes. Additionally, under current reclamation methods, for many unsalable products, such as private-label brands and corporate-brand products, the return on capital expended is effectively zero. Indeed, some manufacturers do not participate in reimbursement schemes for the return of their unsalable products. The unsalable products of these manufacturers can be sold through the methods disclosed herein at a profit, or at least at a larger return of capital, in contrast to the severe losses typically generated.
[0058] In some embodiments, the method reuses boxes already present in the store, making such embodiments particularly environmentally friendly. For example, banana boxes, which are used to deliver large quantities of bananas to grocery stores, are plentiful. Reusing these previously used boxes reduces or even eliminates the expense associated with destroying or disposing of these used boxes, as well as reducing or even eliminating the expense of purchasing new boxes for the sole purpose of packaging the auctionable products that will be going through the reverse logistics system. Moreover, these embodiments are also more environmentally friendly than current reverse logistics methods because they require fewer instances of shipping between the retailer, distribution center, reclamation center, and manufacturer.
[0059] The embodiments disclosed herein can be used in conjunction with components of other reverse logistics methods. Also, stores can retain alternate reclamation procedures for products, such as hazardous products, which should not be sold through the reverse logistics process. Items not suitable for sale through the reverse logistics process can include spoiled items or items which otherwise pose a hazard to the consumer and require disposal of or reclamation through some other process. Unsalable products that are eligible to be auctioned off are referred to herein as "auctionable products."
[0060] In an embodiment, the store uses a reverse logistics company. The store and the reverse logistics company exchange data during several key points of the reverse logistics procedure. For example, key communications between the store and the reverse logistics company occur prior to an auction, during an auction, and after an auction, though communication can take place at other times as determined by the specific application.
[0061] Prior to an auction, the store transmits auction start details to the logistics company. In an embodiment, details that the store communicates to the reverse logistics company prior to the auction include one or more unique box identifiers (UBI's), the weight of each box, a contents list, and photographs (actual or stock) of auctionable products contained in the box.
[0062] During an auction, the reverse logistics company obtains specific information, included in the auctionable product data, about the auctionable products in the auction. In one embodiment, the reverse logistics company uses and decodes universal product codes (UPC's) to access a product image and information database. The information database is stored on a central computer, or logic engine, that is part of a communications multi-network of the store. For each item, the exemplary information database includes, but is not limited to, the following: the auctionable product's bar code, a description or title for the item, the expected or actual item weight, photos of the item from an image database, the cost of an item, and a sale price of the item. Still other information that can be stored in the information database includes a categorical type of the auctionable product, including the brand name and specific product name (e.g., ABC Brand Chicken Noodle Soup), the quantity and type of packaging (e.g., 12 oz. can), the expiration date (if any), and the manufacturer's lot code of the auctionable product. Other information can be included in the information database, and some of the stated information can be excluded from the database, depending on the specific application contemplated.
[0063] After an auction, the reverse logistics company communicates information regarding the conclusion of the auction to the store. In an embodiment, the reverse logistics company transmits auction end details to the store. The auction end details include, but are not limited to, the following: the UBI, the date and time the auction closed (date of sale), the name and shipping address of the winning bidder, telephone and fax numbers of the winning bidder, email address of the winning bidder, the shipping preference of the winning bidder, the winning bid amount, the total cost of auctionable products in the lot, and the total sales value of items in the lot. In some embodiments, the auction end details are saved into the product information database for each product sold in the auction. This information is useful, for example, to show compliance with certain FDA regulations and The Bioterrorism Act. In addition, the information in the product information database may be analyzed to determine the best assortment of auctionable products in a given lot.
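
The auction end details enumerated above could likewise be captured in a simple record and associated with each product sold. The sketch below carries the same caveat: all names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of the auction end details transmitted back to the store, as
# enumerated above; saving them per product supports the record-keeping
# discussed in connection with FDA regulations and The Bioterrorism Act.
@dataclass
class AuctionEndDetails:
    ubi: str                       # unique box identifier of the lot
    closed_at: datetime            # date and time the auction closed
    winner_name: str
    winner_address: str
    winner_phone: str
    winner_email: str
    shipping_preference: str
    winning_bid: float
    lot_total_cost: float          # total cost of auctionable products in the lot
    lot_total_sales_value: float   # total sales value of items in the lot

def save_auction_end_details(database, details):
    """Attach the auction end details to every product record sold under
    the lot's UBI; `database` maps a UBI to its list of product records."""
    for product in database.get(details.ubi, []):
        product["auction_end"] = details
```
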
[0064] Figure 1 provides an overview of an embodiment of a reverse logistics method 1100 according to the present invention. As described in Step 1102, the reverse logistics process begins when the store, and more specifically a store associate, identifies unsalable products as auctionable products to be sold in the reverse logistics process. All unsalable products are evaluated (by software on a computer, a handler, an associate, or a manager, for example) to determine if they are suitable to be processed through the reverse logistics system. For example, unsalable items that are hazardous are unsuitable for the reverse logistics system, though other qualities could make unsalable products unsuitable as well. Examples of unsalable products are products that are approaching or past their due dates for retail sale, although other products could be deemed auctionable products, such as those with dented cans or torn packaging. Such unsalable products can be located on store shelves positioned around the retail store, or they can be located at a customer service counter, for example. The term "store shelves" as used herein is intended to be broad and includes but is not limited to actual shelves in a store that hold products for purchase, stand-alone displays that contain products (for example, positioned at the end of a row of a store shelf), refrigeration units, bakery displays, pharmaceutical displays, fresh vegetable displays, and any other product-bearing displays typically used in a retail store, wherever positioned in a store and/or on a store's premises. Furthermore, unsalable products can be identified at any point of the forward supply chain, including in the store or at a warehouse before the items even arrive at the store. Selected unsalable products that are eligible to be auctioned are identified by retailers as auctionable products, and are sorted into one or more bundled lots. Each bundled lot is assigned to a box.
[0065] In Step 1104, the store assigns each box a unique box identifier (UBI). In Step 1106, a store associate scans the UBI associated with a particular box and bundled lot using a product scanning device associated with the store associate's wireless end device. Alternatively, Step 1106 could be performed automatically. In Step 1108, the scanned UBI is transmitted from the wireless end device through a communications multi-network to a central computer, or logic engine.
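
A minimal sketch of Steps 1104-1108 follows. The UBI format, the message encoding, and the function names are assumptions; "send" stands in for whatever transport the communications multi-network provides.

```python
import json
import uuid

# Sketch of Steps 1104-1108: assign a unique box identifier (UBI), transmit
# the scanned UBI toward the logic engine, and open an electronic file for
# it on the logic-engine side. All names and formats are assumptions.
def assign_ubi(store_id):
    """In its simplest form the UBI is just a number or mark that uniquely
    identifies a box; here, a store prefix plus a random suffix."""
    return "%s-%s" % (store_id, uuid.uuid4().hex[:8].upper())

def transmit_scanned_ubi(ubi, send):
    """Wireless-end-device side: send the scanned UBI through the network."""
    send(json.dumps({"type": "ubi_scan", "ubi": ubi}))

electronic_files = {}

def open_electronic_file(message):
    """Logic-engine side: open an electronic file for the received UBI."""
    ubi = json.loads(message)["ubi"]
    electronic_files.setdefault(ubi, {"contents": [], "weight": None})
```

Wiring the pieces together in-process, transmit_scanned_ubi(assign_ubi("STORE042"), open_electronic_file) exercises the whole path.
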
[0066] The communications multi-network comprises: (1) at least two mesh communication networks; (2) at least two non-mesh communication networks, such as at least two server networks; (3) at least one non-mesh communication network and at least one mesh communication network through which the location tracking device operates; or (4) two or more other types of communication networks known to persons with skill in the art. In other words, the communications multi-network comprises two or more dissimilar types of communication networks or two or more similar types of communication networks. In embodiments, at least one of the at least two communication networks operates as a ZIGBEE (a registered service mark of the ZigBee Alliance for communication networking) communication network. In selected embodiments, the communications multi-network is a single network architecturally, but functionally operates as two or more differently functioning networks. For example, there may be a single communications multi-network that functions as a star communication network and a mesh communication network at the same time.
[0067] Alternatively, the method operates using a wireless communication network. In embodiments, the wireless communication network operates according to the 802.11 wireless communication protocol. In embodiments, the wireless communication network operates according to the 802.15 wireless communication protocol. Such wireless communication networks are typically found in and useful for communication within large structures such as warehouses, hotels, hospitals, and stores. It is to be understood that embodiments which are described in accordance with the use of a communications multi-network can also function using a wireless communication network.
[0068] The communications multi-network is managed by a logic engine. To be clear, the term "logic engine" as used herein means one or more electronic devices comprising a switch and a server. Though the embodiments described herein reference "a logic engine," it is contemplated that multiple logic engines can be used to perform the same function within the communications multi-network. A logic engine is also used in embodiments operating with a wireless communication network. The logic engine includes hardware such as one or more server-grade computers, including without limitation the location tracking server, but also includes the ability to perform certain computational functions through software. The term "computational functions" as used herein means any and all microprocessor or microcontroller based computational tasks or routines commonly known in the art to occur in a computer or computer-like device that comprises software, memory, and a processor. Mechanisms known in the art other than software can be used provided that the mechanism allows the logic engine to go through logic functions to provide location calculations, evaluations, conduct timing, etc.
[0069] In addition to managing the communications multi-network, the logic engine also routes, organizes, manages, and stores data received from other members of the communications multi-network. In embodiments, the communications multi-network includes at least one star communication network through which non-location data is transferred to the logic engine and at least one mesh communication network through which location data is transferred to the logic engine. The logic engine locates auctionable products, which are those products for purchase that are eligible to be auctioned and located on store shelves, in the retail store. Auctionable product data is produced as a result of the shopper using a scanning device to scan a product code of each said auctionable product. In an embodiment, auctionable product data is transmitted over the at least one star communication network portion of the communications multi-network to the logic engine.
[0070] The logic engine is additionally capable of performing the functions of the switch, gateway server, and other store servers. Other store servers include associate task managing servers, computer assisted ordering system computers, in-store processors (ISP server), location tracking servers, commerce servers, or other store computers. Further, the logic engine serves as the retail establishment's main database, including product description databases and shopper profile databases. The logic engine also provides network notification, data prioritization, event prioritization, and other functions. Referring back to Step 1108 of Figure 1, the central computer or logic engine opens an electronic file for the UBI.
[0071] In Step 1110, the auctionable products are scanned and organized into boxes to create a contents list. Each auctionable product is scanned before or as it is placed into a box to create an electronic record to track the auctionable products. More specifically, a product code of the auctionable product, e.g., a barcode or UPC code, is scanned to produce auctionable product data. In some embodiments, the scanning is performed automatically, e.g. using a scan tunnel system, such as the object identification system 25 discussed in further detail below. In some embodiments, the products are scanned by the store associate using his scanning device as the products are organized into boxes. In embodiments in which store associates scan the auctionable products, store associates may use product scanning devices as they locate auctionable products on store shelves and gather those auctionable products into bundled lots. The product scanning devices, in some embodiments, are associated with wireless end devices, and are in communication with the logic engine through the communications multi-network.
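
Step 1110 amounts to accumulating scans into a per-lot contents list; a minimal sketch, with invented names, follows.

```python
# Sketch of Step 1110: each auctionable product is scanned before or as it
# is placed into its box, and the scans accumulate into a contents list per
# bundled lot. The in-memory dict stands in for the logic engine's storage.
contents_lists = {}

def record_scan(ubi, product_code):
    """Append one scanned product code (barcode or UPC) to the contents
    list kept under the lot's unique box identifier."""
    contents_lists.setdefault(ubi, []).append(product_code)

record_scan("STORE042-1A2B3C4D", "012345678905")  # example values, invented
```
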
[0072] In some embodiments, location tracking devices associated with product scanning devices provide improved location data to the store through the communications multi-network. In these embodiments, the communications multi-network includes at least one mesh communication network for the communication of location data regarding scanning devices throughout the store, and at least one star communication network for communicating non-location data, e.g. auctionable product data, to the scanning devices. The location data is transmitted through the mesh communication network to the logic engine and the non-location data is transmitted through the star communication network between the logic engine and other members of the communications multi-network, such as for example the scanning devices and wireless end devices.
[0073] In embodiments, the logic engine performs ray tracing calculations and location calculations to determine the location of a location tracking device associated with a product scanning device in relation to information routers of the mesh communication network of the communications multi-network. In some embodiments, the logic engine stores location data on products (product location data and auctionable product data) and operators within a store (operator location data). The location tracking device can be tracked continuously as the store associate moves through the store gathering auctionable items, or location data can be produced each time the store associate scans the product code of an auctionable product. In either case, the location data is transmitted to the logic engine through the mesh communication network of the communications multi-network, and the scanning device is said to be tracked through the store. Further details relating to the tracking of a location tracking device through a communications multi-network are found in U.S. Application Serial No. 12/172,326, filed on July 14, 2008 and issued as U.S. Patent No. 7,672,876 on March 2, 2010, U.S. Application Serial No. 12/408,581, filed on March 20, 2009 and issued as U.S. Patent No. 7,742,952 on June 22, 2010, U.S. Application Serial No. 12/353,817, filed on January 14, 2009 and issued as U.S. Patent No. 7,609,140 on October 27, 2009, and U.S. Application Serial No. 12/353,760, filed on January 14, 2009 and issued as U.S. Patent No. 7,739,157 on June 15, 2010, the relevant disclosures of each of which are fully incorporated by reference.
[0074] In other embodiments, the store, specifically through the logic engine, is aware of the location of each product or each group of products, known herein as auctionable product locations, because the store employees have recorded the locations of each group of products in a product database as they stocked the items in the store. The locations of the product groups are given coordinates on a product location map, just as nearly all other physical elements of the store are assigned coordinates on a two-dimensional X and Y grid positioned over, or juxtaposed on top of, the store map. In an embodiment, the store, through a logic engine, is aware of the precise location of over about eighty percent of the products on display in said retail establishment. In alternative embodiments, the store is aware of the precise locations of a majority of the products on display in said retail establishment. Thus, with the knowledge of the auctionable product locations, the logic engine creates auctionable product location data when each auctionable product is scanned using the product scanning device, and the store can track the operator of the product scanning device as he moves throughout the store.
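
The product location map described above is, in essence, a lookup from a product group to X and Y coordinates on the store grid. A sketch, with invented groups and coordinates:

```python
# Sketch of the product location map described above: product groups are
# assigned coordinates on a two-dimensional X/Y grid juxtaposed on the store
# map. The groups and coordinates here are invented examples.
product_location_map = {
    "canned-soup": (12.0, 34.5),
    "coffee": (18.5, 7.0),
}

def auctionable_product_location(product_group):
    """Return the recorded (x, y) shelf coordinates for a product group,
    or None if the group's location has not been recorded."""
    return product_location_map.get(product_group)
```
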
[0075] Regardless of whether the auctionable products are scanned by an associate or automatically, the auctionable product data is transmitted to the store's central computer. In embodiments, the auctionable product data is transmitted through the star communication network of the communications multi-network. The logic engine organizes the auctionable product data into a contents list for each bundled lot. Embodiments of the contents list created in Step 1110 include additional pieces of information regarding the auctionable products. The additional information regarding the auctionable products is stored in an information database, which could be part of the logic engine, or could be another server connected to the communications multi-network.
[0076] Certain additional details are intended for store purposes only while other details are provided to the auctioneer and bidding parties. For instance, the contents list, including stock photographs and descriptions of each product, would be appropriate to share with the auctioneer and bidding parties, while other details, such as the weight of the item, original price for consumers, the price the store paid for the item, the original supplier, and the reason why the unsalable product did not go forward through the supply chain, would remain confidential to the store. Then, in Step 1112, the contents list(s) and additional details, if any, are associated with the box's UBI and the electronic file created for the UBI on the logic engine or central computer.
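
The split between shared and confidential details could be realized as simply as filtering a box's electronic file before it leaves the store; a sketch, with assumed key names:

```python
# Sketch of the confidentiality split described above: only selected fields
# of a box's details are shared with the auctioneer and bidding parties.
# The key names are assumptions.
PUBLIC_FIELDS = {"ubi", "contents_list", "stock_photos", "descriptions",
                 "total_box_weight"}

def auctioneer_view(box_details):
    """Return only the details appropriate to share with the auctioneer;
    item weights, prices paid, suppliers, and removal reasons stay with
    the store."""
    return {k: v for k, v in box_details.items() if k in PUBLIC_FIELDS}
```
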
[0077] In Step 1114, photographs of each auctionable product in each box are captured for use in an Internet-based auction. In some embodiments, the photographing is conducted by an associate, while in other embodiments, the photographing can be accomplished automatically. In still other embodiments, photographing each item is not included in the method. Therefore, the step of photographing each item is entirely optional, providing a visual aid to the bidding party. The use of photographs is particularly beneficial during resale of damaged products, allowing a potential purchaser to easily assess the extent of the damage. Step 1114 can take place before or during Step 1110, when the barcodes of each auctionable product are scanned and the auctionable products are placed into the boxes.
[0078] In Step 1116, the total weight of the box is calculated for use in calculating the shipping costs to be paid by the best bidder. The total weight is sent through the communications multi-network to the logic engine, where the logic engine associates the total weight with the electronic file containing all information regarding the UBI of the box just weighed. In embodiments, the total weight of the box is transferred through the star communication network of the communications multi-network.
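
As a sketch of Step 1116, with an invented flat shipping rate standing in for a real carrier quote:

```python
# Sketch of Step 1116: compute the total box weight, associate it with the
# lot's electronic file, and estimate the shipping cost the best bidder
# would pay. The per-ounce rate is an invented placeholder.
def record_total_weight(electronic_file, item_weights_oz):
    electronic_file["weight"] = sum(item_weights_oz)
    return electronic_file["weight"]

def estimate_shipping_cost(total_weight_oz, rate_per_oz=0.05):
    """Rough estimate only; in practice a carrier would be queried."""
    return round(total_weight_oz * rate_per_oz, 2)
```
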

[0079] In Step 1118, additional details such as the date and time the box was assembled are calculated and associated with the electronic file. The date and time are automatically generated by the logic engine when the total weight of the box is taken and are associated with the electronic file containing all information regarding the UBI of the box just weighed.
[0080] In Step 1120, the handler seals each completed box and places each box into temporary storage where it remains during the auction. In some embodiments, the storage facility is at the same location the boxes are filled. In alternative embodiments, the boxes are temporarily stored at a third party facility or the reverse logistics company is responsible for the temporary storage of the filled boxes.
[0081] In Step 1122, the store communicates the UBI and appropriate associated details to the auctioneer. For example, in some embodiments, the UBI, details of the box contents, and the total weight of each completed box of auctionable products are sent to the auctioneer, although the actual information sent to the auctioneer varies. In embodiments, the store sends at least the UBI and the box weight, which is necessary to calculate the best bidder's shipping costs. In embodiments the auctioneer is the store, while in other embodiments the auctioneer is software or a third party auctioneering entity, e.g. a reverse logistics company. Further, the term "auctioneer" is interchangeable with the term "third party reverse logistics company".
[0082] In Step 1124, the auctioneer receives the UBI and associated box information needed to conduct the Internet-based auction in order to sell the lot, and in Step 1126, conducts the auction. The auctioneer creates and/or maintains an interactive auction website and adds the lot to the website. This website may be on the Internet or on an in-store Intranet, which is accessible to shoppers in a store via the communications multi-network. In embodiments, the website includes the contents list, any additional details, the total weight of the box, and an estimate of the anticipated shipping costs. The information included on the website will vary depending on the specific application contemplated. The lot is made available for a specified amount of time, such as hours or days, depending on the contents of the lot and the specific application, and interested bidders place bids. Upon the expiration of the allotted time for the auction, the highest bidder, or best bidder, is identified and notified.
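
The close of the auction in Step 1126 reduces to picking the highest bid once the allotted time expires; a sketch, with an assumed bid structure:

```python
from datetime import datetime

# Sketch of the end of Step 1126: once the allotted time expires, the
# highest bidder (the best bidder) is identified. Bids are assumed to be
# dicts with at least a numeric "amount" key.
def close_auction(bids, closes_at, now=None):
    """Return the winning bid after the auction closes, or None if the
    auction is still open or drew no bids."""
    now = now or datetime.utcnow()
    if now < closes_at or not bids:
        return None
    return max(bids, key=lambda bid: bid["amount"])
```
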
[0083] In Step 1128, the auctioneer sends the best bidder the total cost, including shipping. In embodiments, the best bidder is responsible for paying for the box, shipping fees, and any associated costs, although any of these costs can be shifted to another party depending on the specific application contemplated.
[0084] In Step 1130, the auctioneer makes shipping arrangements, sends shipping details to the store, specifically the central computer (logic engine), and notifies the store that the auction is closed and that the box is ready for shipment. The store uses the UBI to locate the particular box to be shipped. The shipping step varies widely from embodiment to embodiment. For example, the shipping can be done by the store, the reverse logistics company, or the third party storage facility. Also, the reverse logistics company or the store can arrange to either have the box delivered to the buyer or the buyer can pick up the box at the store. In Step 1132, the box is delivered to the buyer.
[0085] In some embodiments, the auctioneer sends a summary for each lot
sold through the
Internet-based auction to the retailer for post-sale analysis (step not
shown). For example, the
retailer compares the summary received with the additional details that
remained with the store as
confidential information to determine the profit or loss on the lot.
[0086] Figure 2 provides a flowchart of an embodiment of the method of
preparing for an
auction. To begin, the store or warehouse collects items that cannot be sold
through traditional
forward logistics. Steps 1202-1208 demonstrate several types of items that the
store will
routinely consider selling via the reverse logistics process. Items that the
store will typically
consider for processing through reverse logistics include but are not limited
to returned items
(collected in Step 1202), discontinued items (collected in Step 1204),
refurbished items (collected
in Step 1206), and damaged items (collected in Step 1208).
[0087] As previously discussed, some unsalable products are not suitable
for reverse logistics
for a number of reasons. For instance, hazardous products and products that
have passed their
expiration date normally will not be identified as auctionable products and
will not be sent
through reverse logistics. Therefore, in Step 1210, the store determines the
suitability of each
item for reverse logistics. This determination can be made by a store
associate, manager, or
computer program, and should be made on a case-by-case basis. Items deemed
unsuitable for the
reverse logistics process enter an alternative reclamation or disposal
procedure at the store's
election, as seen in Step 1212. Products suitable for reverse logistics, or
auctionable products, are
set aside and grouped into bundled lots.
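The suitability determination of Step 1210 might, in a computer-program embodiment, reduce to a filter of the following kind; the item fields here are assumptions for illustration, and real determinations would remain case-by-case.

    from datetime import date

    def is_auctionable(item):
        """Exclude hazardous and expired items from reverse logistics."""
        if item.get("hazardous"):
            return False
        expiry = item.get("expiration_date")
        if expiry is not None and expiry < date.today():
            return False
        return True

    items = [{"sku": "A1", "hazardous": False},
             {"sku": "B2", "expiration_date": date(2012, 1, 1)}]
    auctionable = [i for i in items if is_auctionable(i)]     # grouped into lots
    rejected = [i for i in items if not is_auctionable(i)]    # Step 1212
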
[0088] The store prepares storage units for the lot of items to be sold
through the reverse
logistics process. In some embodiments, banana boxes or the like are used to
store the
auctionable products to be sold through the reverse logistics process. As
shown in Step 1214,
each box is prepared for auction by having a unique box identifier (UBI)
assigned to it prior to
being loaded with products. In some embodiments, the UBI is assigned to the
box after it is
loaded. In some embodiments, each store has a unique system for identifying
boxes in the reverse
logistics process, while in other embodiments, a universal system is used to
identify boxes. In its
simplest form, the UBI is a number or mark by which the particular box can be
specifically
identified, though the UBI may be a more complex code containing details such
as those
describing the contents of the box.
[0089] In Step
1216, items set aside after being deemed appropriate for reverse logistics are
designated as auctionable products, or reclaimed items. In Step 1218, the
product codes of the
auctionable products are scanned as they are placed in boxes. In Step 1220,
once the box is full,
the total box weight of each lot is calculated. In some embodiments, the box
need not actually be
full, but can be deemed complete at any point during the packing, depending on
the contents of
the box, the number of auctionable products available, and other factors that
will be obvious to
one having skill in the art. Then, in Step 1222, each box is sealed and
temporarily stored until the
auction results are available. It should be noted, however, that in
alternative embodiments, the
box is sealed before the total weight is calculated. The particular order of
these steps depends on
the specific application contemplated.
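Steps 1216 through 1222 might be sketched as follows, under assumed data structures (the weigh() callback standing in for the scale read-out); as noted above, the weighing and sealing steps can be performed in either order.

    def pack_box(ubi, product_codes, weigh):
        """Scan auctionable products into a box record, then weigh and seal it."""
        box = {"ubi": ubi, "product_codes": []}
        for code in product_codes:
            box["product_codes"].append(code)   # Step 1218: scanned as placed
        box["total_weight_lbs"] = weigh()       # Step 1220: total box weight
        box["sealed"] = True                    # Step 1222: seal for storage
        return box

    box = pack_box("STORE042-BOX-00017",
                   ["012345678905", "036000291452"],
                   weigh=lambda: 31.5)
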
[0090] In Step
1224, the store communicates the UBI and any additional details regarding the
box and its contents to the auctioneer for the Internet-based auction. To be
clear, the auctioneer
and the reverse logistics company can be one entity or separate entities. In
embodiments, the
store communicates to the auctioneer the UBI of the box to be sold, the box's
actual weight, a
contents list of items in the box, and one or more photographs of items in the
box. The auctioneer
initiates the auction process (shown in Figure 3).
[0091] Figure 3
provides a flowchart of an embodiment of an auction process. The Internet-
based auction begins with the receipt of information (including the box's UBI
and other useful
information) by the third party reverse logistics company, as shown in Step
1226. In
embodiments the third party reverse logistics company is owned by the retailer
or is a division of
the retailer, and is often located onsite in the retail establishment. In
other embodiments, the third
party reverse logistics company is truly a third party to the retailer, and
may or may not be
located onsite at the store. Further, the term "third party reverse logistics
company" is
interchangeable with the term "auctioneer."
[0092] In Step 1228, the reverse logistics company initiates the Internet-
based auction. In the
selected embodiment, the reverse logistics company uses bar codes or product
codes to access
product images and information contained in an information database. The
information in the
database includes, but is not limited to, the auctionable product's product
code, a description or
title for each auctionable product, each auctionable product's weight, a
photograph of each
auctionable product, the retail cost of each auctionable product, and the sale
price of each
auctionable product, for example. The information database, in embodiments, is
stored on the
logic engine or central computer of the communications multi-network.
[0093] In Step
1230, the auction runs for a predefined and reasonable amount of time. The
specific amount of time that the auction runs depends on factors including web
traffic, the receipt
of bids from customers, the time of day the auction is taking place, and the
particular products
being offered for purchase. The reverse logistics company should have
experience in conducting
such Internet auctions in order to ensure that the store attains the maximum
profit from each
auction.
[0094] In Step
1232, after the predetermined amount of time for the auction has elapsed, the
best bidder is deemed the winning bidder and buyer of the box and is contacted
by the reverse
logistics company with the total cost. The total cost passed on to the buyer
includes the cost of the
box, the shipping costs, and any other additional fees, although this can vary
depending on the
specific application contemplated. For example, in some embodiments, the
shipping could be
paid by the store or the reverse logistics company in order to facilitate
consumer purchasing.
Then, in Step 1234, the reverse logistics company communicates the result of
the auction to the
store. In some embodiments, the auction results include contact information
for the buyer and the
final sale price. In other embodiments, the auction results include other
details such as the
auction's start and end times, number of bidders in the auction, and the total
run-time of the
auction. After the auction is closed (Step 1235), the post-auction process, an
embodiment of
which is detailed in Figure 4, is initiated.
[0095] Figure 4 provides a flowchart of an example embodiment of a post-
auction process.
The post-auction process begins in Step 1236 when the store receives the
auction results from the
reverse logistics company. In the selected embodiment, the auction results
received from the
reverse logistics company include the UBI, date and time the auction closed,
the shipping address
of the winning bidder in the form of a shipping label, the winning bid amount,
the total cost of
items in the box, and the total sales value of the items in the box. This
information will vary
depending on the specific application contemplated, and can include more or
less information.
[0096] Then in Step 1238, the store uses the UBI to retrieve the box from
temporary storage.
In Step 1240, the store contacts the buyer to arrange shipping. In some
embodiments, the reverse
logistics company allows the best bidder to choose a shipping method on-line
when the auction
closes. As shown in Steps 1242 and 1244 respectively, the process is complete
when the buyer
picks up the box from the store or the store delivers the box to the buyer.
These methods of
delivery are merely exemplary, and almost any shipping method known could be
used in the

reverse logistics process, depending on the specific application contemplated.
Certain buyers
might prefer retrieving the box from the store, saving the shipping costs,
particularly if the
buyer intended on going to the store anyway. Stores might also prefer this
because it not only
brings the buyer into the store, creating an opportunity for him to make
additional purchases, but also relieves the store of shipping responsibility.
[0097] In some embodiments, a reverse logistics tracking database is
created and used to
track shipping trends at the store. For example, the database can trigger
pickups to be scheduled
whenever the inventory of "ready-to-ship" boxes at a store reaches a
predefined level, or once per
predefined period of time, whichever comes first. In other words, if only one
box were ready for
shipment, a shipment would not be scheduled until the end of the week,
allowing other boxes to
accumulate and maximizing the productivity of the shipper pickup.
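The trigger rule described here might be sketched as follows; the threshold of ten boxes and the seven-day period are assumed example values for the predefined level and predefined period.

    from datetime import date, timedelta

    def pickup_due(ready_box_count, last_pickup, threshold=10, period_days=7):
        """Schedule a pickup at a box-count threshold or once per period,
        whichever comes first."""
        if ready_box_count >= threshold:
            return True                    # inventory trigger
        return date.today() - last_pickup >= timedelta(days=period_days)
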
[0098] In some embodiments, the method of direct-to-consumer reverse
logistics as
described in detail above may include shoppers physically located within a
retail establishment
undergoing typical shopping, where products are flowing forward through the
supply chain. In
such embodiments, shoppers access the communications multi-network of the
store through a
mobile phone or a wireless end device, either of which must be equipped with the
applicable hardware or software to access the communications multi-network. Shoppers
participate in the
auction via their mobile phone or wireless end device which accesses the
information through the
star communication network of the communications multi-network. Certain
auctions may be
available only to shoppers in the store and certain auctions may be available
to shoppers in the
store and to viewers accessing the auction via a website on the Internet. When
an in-store
shopper is identified as the best bidder of an auction, the transaction is
closed within the physical
boundaries of the store. In these instances, the best bidder has the
opportunity to check out from
the store with his traditional shopping purchases as well as the bundled lot
he purchased and
avoid shipping costs. In some embodiments, stores may provide lounges and
entertainment areas
where shoppers relax within the store while bidding via the communications
multi-network on
auctionable products offered in bundled lots via these auctions.
[0099] In some embodiments of the method of direct-to-consumer reverse
logistics, the logic
engine of the communications multi-network tracks location data pertaining to
products,
including those identified as unsalable products, through the mesh
communication network. The
logic engine collects, routes and analyzes the location data regarding
unsalable products to
provide information to the store which can be used to minimize the number of
unsalable products
in the future. In other words, if a large number of unsalable products are
identified as coming from a particular location in the store, the store may be
able to identify the source of the problem causing these typically purchasable
products to be identified as unsalable.
For example, if meat
or cheese from a particular coffin-style refrigerator is continually being
labeled as unsalable,
store personnel may learn that a sign above the refrigerator is redirecting
air currents from the

store's HVAC vents, causing the refrigerator not to operate properly and
resulting in spoilage. Thus,
the method of direct-to-consumer reverse logistics may benefit the store in
many unexpected
ways.
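As an illustrative sketch only (the event record format is assumed), the location analysis described here amounts to aggregating unsalable-product reports by in-store location and flagging locations whose counts stand out.

    from collections import Counter

    unsalable_events = [("aisle-7-coffin-cooler", "cheddar"),
                        ("aisle-7-coffin-cooler", "salami"),
                        ("aisle-2-shelf-3", "crackers")]

    by_location = Counter(loc for loc, _ in unsalable_events)
    suspect = [loc for loc, n in by_location.items() if n >= 2]
    print(suspect)  # ['aisle-7-coffin-cooler'] -> investigate this fixture
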
[00100] Figure 5 schematically illustrates an object identification system
25 that may be used
in embodiments of the method of direct-to-consumer reverse logistics described
above in which
the scanning of the auctionable products is automatic. One or more auctionable
products or items
20 to be identified and sorted are placed on a transport system to be carried
through a sensing
volume 240. In the notional embodiment shown here, the transport system is a
conveyor belt 31.
As a practical matter, the transport system may be made up of more than one
conveyor belt to
allow for additional control over item flow through the sensing volume. In an
embodiment, as
illustrated in Figure 7A, three belts are used: an in-feed conveyor belt, onto
which the items to be
identified are loaded; a sensing volume conveyor belt, which moves the items
through the
sensing volume 240; and an out-feed conveyor belt, which takes items away from
the sensing
volume 240 for further processing. In, for example, a retail environment,
"further processing"
may include sorting, reverse logistics processing, and other processing
known to those
having skill in the art. In some embodiments, the transport system includes
only the sensing
volume conveyor belt. Other belts, such as the in-feed conveyor belt or the
out-feed conveyor
belt can be added depending on the specific application contemplated.
[00101] As illustrated in the schematic diagram of Figure 5, the transport
system may be
treated as if it were an infinite transport path. As will be described in
detail below, in an
embodiment, the item identification system may be designed in such a way that
the processing
algorithms treat each segment of belt as if it were a unique location and any
item associated with
that segment is consistently treated as if it were at that location. In this
regard, the item
identification system 25 may have no information regarding how or when items
are placed on the
belt and no information regarding what happens to them after they leave the
sensing volume 240.
In an embodiment, system 25 may assign linearly increasing location values to
each segment of
the essentially endless conveyor belt 31 as it enters sensing volume 240,
analogous to a street
address, and the system may act as though the street has an unbounded length.
An item
associated with a particular street address may be assumed to remain there.
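A minimal sketch of this addressing scheme, with all class and method names hypothetical: each belt segment entering the sensing volume is issued a monotonically increasing address, and an item, once associated with an address, is assumed to remain there.

    class TransportPath:
        """An effectively infinite transport path of addressed belt segments."""

        def __init__(self):
            self.next_address = 0   # grows without bound, like street numbers
            self.items = {}         # address -> item; items assumed to stay put

        def segment_enters_sensing_volume(self):
            self.next_address += 1
            return self.next_address

        def associate_item(self, item, address):
            self.items[address] = item
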
[00102] Alternately, instead of moving objects through a fixed sensing volume,
the volume
could be scanned along fixed locations. That is, rather than a conveyor belt
31 moving objects,
the sensing volume could, in effect, drive down the street, looking at the items
distributed at the ever-
increasing street address. For example, this could be applied in a warehouse
environment in
which a sensing device is driven along aisles and senses items arrayed on
shelves.
[00103] The conveyor belt 31 is equipped with a transport location physical
sensor 122.
Transport location physical sensor 122 measures the position of the conveyor
belt 31 relative to a
fixed reference location in the sensing volume of the system 25. In some
embodiments, the
transport location physical sensor 122 is an encoder associated with a roller
of the sensing
volume conveyor belt. The transport location physical sensor 122 produces a
pulse every time
the essentially endless conveyor belt 31 moves by a fixed incremental distance
relative to the
sensing volume 240.
[00104] By way of example, a rotary encoder may include delineations
corresponding to 1 mil
incremental movements of the conveyor belt 31. In principle, each delineation
produces a single
count in an ever-increasing accumulation, but in an embodiment, a number of
counts may be
aggregated for each system count. As an example, each system count may
correspond to five
nominal detector counts. Additionally, it may be useful to be able to account
for slippage or
other events that can cause a reverse movement of the belt. In this regard,
one such approach
would employ a quadrature encoder in which a pair of encoder outputs are out
of phase with each
other by 90°. In this approach, a direction may be assigned to the belt motion
on the basis of a
determination as to which of the two outputs occurred first.
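The quadrature determination and count aggregation might be sketched as follows; the five-to-one aggregation matches the example given above, and the edge-handling convention is an assumption about one common wiring.

    class QuadratureCounter:
        """Accumulate signed detector counts; channels A and B are 90 degrees
        out of phase, so the level of B at an A edge gives the direction."""
        COUNTS_PER_SYSTEM_COUNT = 5

        def __init__(self):
            self.detector_counts = 0

        def on_a_edge(self, a_level, b_level):
            # With this convention, forward motion leaves A != B just after an
            # A transition; reverse motion (e.g., belt slippage) leaves A == B.
            self.detector_counts += 1 if a_level != b_level else -1

        @property
        def system_counts(self):
            return self.detector_counts // self.COUNTS_PER_SYSTEM_COUNT
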
[00105] The sensing volume 240 is the volume of space through which the
transport system
carries the items 20, and is delineated by the combined sensing regions/fields-
of-view of a
number of item parameter sensors 220, including, but not limited to, the item
isolator 140.
[00106] Sensing volume 240 includes a number of parameter sensors 220 for
sensing items 20
traveling through it. Some embodiments have at least two different parameter
sensors 220: an
item isolator and an indicia reading system which includes one or more indicia
sensors. In
embodiments, additional parameter sensors, such as a dimension sensor and/or a
weight sensor
may be included. Parameter sensors may be understood as being the physical
sensors, which
convert some observable parameter into electrical signals, or the physical
sensor in combination
with an associated parameter processing function, which transforms raw data
(initial sensing
data) into digital values used in further processing. The parameter processors
can be co-located
and/or embedded with the physical sensors or can be software modules running
in parallel with
other modules on one or more general purpose computers.
[00107] In an embodiment, the output values measured by parameter sensors 220
are
transferred to other software modules in the processors. This transfer may be,
in an embodiment,
asynchronous. Data from the parameter sensors 220 are associated with location
information
provided by the transport system location sensor and sent to two processing
modules: the item
description compiler 200, which performs the process of matching all parameter
values collected
for a particular item to create an item description, and the item
identification processor 300,
which queries a product description database to try to find a match between
the item description
and a product, and outputs either a product identification or an exception
flag. Optionally, the
system 25 may include an exception handler (shown in Figure 19).
[00108] An embodiment of an item identification system 25 is illustrated in
Figure 6A. As
shown, a sensing volume is within an upper housing 28. A lower housing 26 acts
as a structural
base for support of the sensing volume conveyor belt (as shown in Figure 7A),
the transport
location physical sensor 122, and many of the optical and mechanical
components of the system
25, including without limitation an upward looking line-scan camera 88. As
will be appreciated,
a line-scan camera has a substantially planar field of view, though it is not
strictly planar in the
mathematical sense, but rather is essentially a thin rectangle having a low
divergence.
[00109] In embodiments, the sensing volume 240 may be partially enclosed
such that the
enclosing walls form a tunnel structure. As illustrated in Figure 6A, a tunnel
structure is formed
by the upper housing 28, providing convenient locations onto which elements of
the various
sensors may be attached, as well as reducing the possibility of undesirable
intrusions into the
sensing volume 240 by miscellaneous hands and objects. In the embodiment shown
in Figure
6A, the upper housing 28 is used as a structural base for support of the laser
stripe generator 119,
the area camera 152, the first area camera mirror 48, the second area camera
mirror 49,
illumination sources 40, the load cells 175, a light curtain generator 12, and
various other optical
and mechanical components.
[00110] The area camera 152 is aimed to observe the path of a line of laser
light, a laser stripe,
projected downward towards the transport system and any items thereon in its
field of view.
There is a known angle between the laser stripe generator 119 and the area
camera 152 which
causes the image of the laser stripe in the field of view of the area camera
152 to be displaced
perpendicular to the laser stripe in proportion to the height of the item on
which the laser stripe is
projected.
[00111] As illustrated in Figure 6B, a first load cell 175A, a second load
cell (not seen from
this perspective), a third load cell 175C and a fourth load cell (not seen
from this perspective) are
positioned to measure a load on the belt. Six line-scan cameras, including but
not limited to a
lower right out-feed end line-scan camera 80 and an upward looking line-scan
camera 88, are
shown mounted on the lower housing 26 in Figure 6B. In an embodiment, the
system 25
includes eleven line-scan cameras arranged at various positions and various
attitudes to fully
cover the sensing volume within the upper housing. In an embodiment, each
camera has a
position and attitude that are sufficiently well-known that a location of a
detected item can be
determined to within less than about 1/4 in. (i.e., less than about 1 degree
of arc). In this regard,
the cameras may be precision mounted within a structural module such that
mounting the
structural module to a frame member of the system provides precise information
regarding the
direction in which the camera is pointed. In an embodiment, some or all of the
cameras may
include a polarizing filter to reduce specular reflection from packaging
materials that can tend to
obscure bar codes. In this configuration, it may be useful to increase light
output from the light
sources in order to compensate for light loss due to the polarizing filters.
[00112] The line-scan cameras are structured and arranged such that they have
a field of view
that includes line-scan camera mirrors. A first lower right out-feed end line-
scan mirror 92 is
shown in Figure 6B, as an example of a line-scan mirror. The first lower right
out-feed end line-
scan mirror 92 reflects light from other line-scan mirrors (shown in Figure
7A) into the lower
right out-feed end line-scan camera 80, so that the lower right out-feed end
line-scan camera 80
produces line-scan data about the item when it arrives within its field of
view on the sensing
volume conveyor belt 32 (not visible in Figure 6B, see Figure 7A). Also shown
in Figure 6B is a
right-side downward looking illumination source 128.
[00113] In an embodiment, the conveyor belt may be about 20 inches wide and
travel at a
speed of about eighty feet per minute, or about sixteen inches per second. As
will be appreciated,
the speed of travel may be selected in accordance with the further processing
operations to be
performed on items after identification. For example, a grocery application
may require a
relatively slow belt speed to allow for a clerk to perform bagging tasks while
a package sorting
application may allow for a higher belt speed as sorted packages may be
mechanically handled.
[00114] As illustrated in Figure 6B, the upper housing may be used as a
structural base for
support of the area camera 152, the first area camera mirror 48, the second
area camera mirror
49, illumination sources 40, and various of the optical and mechanical
components of the system
25.
[00115] Figure 7A illustrates right side camera optics usable to create
images of a first item
20A and a second item 20B. The first item 20A is shown having a front side 21,
a top side 22
and a left side 23. While not shown in Figure 7A, the first item 20A also has
a bottom side, a
back side and a right side. While illustrated as a grocery product box in the
Figures, the first item
20A could take the form of any item suitable for passage through the sensing
volume in
accordance with a selected application.
[00116] In the illustrated embodiment, first item 20A and the second item 20B
are transported
into the sensing volume by an in-feed conveyor belt 30 in the direction of
motion toward the exit
end of the in-feed conveyor belt 30 and toward the in-feed end of the sensing
volume conveyor
belt 32. The first item 20A and the second item 20B are transported through
the sensing volume
by sensing volume conveyor belt 32 in the direction of motion toward the exit
end of the sensing
volume conveyor belt 32 and toward the in-feed end of the out-feed conveyor
belt 34,
[00117] Upon entering the sensing volume, objects to be identified pass
through a light curtain
generated by light curtain generator 12 as best seen in Figure 8B. In the
illustrated
embodiment, the light curtain 10 is projected down towards a gap 36 between
the sensing volume
conveyor belt 32 and the in-feed conveyor belt 30 and is reflected by a mirror
14 to a detector 16,
The light curtain generator may be, for example, a bar including a linear
array of LEDs, arranged
to provide a substantially planar sheet of light. The light curtain detector
16 may include a linear
array of photodetectors that detect the light curtain projected by the LEDs.
In order to improve
the spatial resolution and reduce false negative readings at the
photodetectors, the LEDs and
detectors are sequentially activated in pairs. This approach tends to reduce
the effects of
potential stray light from one LED entering the detectors despite the presence
of an object in the
viewing field.
[00118] When an object passes through the curtain, it casts a shadow on the
photodetectors,
providing information on a width of the object passing through the light
curtain. A series of
measurements of this type can be used as one set of parameters for identifying
the object. In an
embodiment, the spatial resolution of the light curtain generator/detector set
will be on the order
of a few mm, though in principle, finer or coarser measurements may be useful,
depending on the
application. For the grocery application, a finer resolution may be required
in order to
distinguish similar product packages.
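A sketch of the width measurement, under an assumed detector pitch of 2 mm (consistent with resolution on the order of a few mm as stated above): the width is taken as the span of shaded photodetectors.

    def shadow_width_mm(blocked, pitch_mm=2.0):
        """blocked: one boolean per photodetector in the linear array."""
        shaded = [i for i, s in enumerate(blocked) if s]
        if not shaded:
            return 0.0
        return (shaded[-1] - shaded[0] + 1) * pitch_mm

    print(shadow_width_mm([False, True, True, True, False]))  # 6.0 mm
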
[00119] As seen in Figure 7A, illumination sources 40 illuminate the
sensing volume
conveyor belt 32. A lower right out-feed end line-scan camera 80 has a field
of view focused on
a first lower right out-feed end line-scan mirror 92. The first lower right
out-feed end line-scan
mirror 92 reflects light from a second lower right out-feed end line-scan
mirror 93, which reflects
light from a third lower right out-feed end line-scan mirror 94. The third
lower right out-feed end
line-scan mirror 94 reflects light from the sensing volume conveyor belt 32.
Thus, the lower
right out-feed end line-scan camera 80 focuses its field of view on the
sensing volume conveyor
belt 32, capturing line-scan data about the first item 20A and the second item
20B as they are
transported in the direction of motion along the sensing volume conveyor belt
32. Also shown is
upper right in-feed end line-scan camera 83, which likewise images the sensing
volume conveyor
belt 32.
[00120] The lower right out-feed end line-scan camera 80 is operatively
connected to an
image processor, collecting the line-scan data. The image processor determines
a parameter

value of the first item 20A and a parameter value of the second item 20B being
transported
through the sensing volume.
[00121] In an embodiment, the image processor is the indicia reader. After
the indicia reader
collects the line-scan data corresponding to the first item 20A, it attempts
to identify the first
item's indicium 24A on the front side 21 of the first item 20A. In the
illustrated case, there is no
identification code on the front side of the item, so in operation the indicia
reader will fail to
identify the first item's indicium 24A based on the front side image. However,
the indicia reader,
receiving line-scan data from either the lower right out-feed end line-scan
camera 80 or the upper
right out-feed end line-scan camera 81, may successfully capture and identify
the second item's
indicium 24B.
[00122] A lower right in-feed end line-scan camera 82 has a field of view
focused on a first
lower right in-feed end line-scan mirror 95. The first lower right in-feed end
line-scan mirror 95
reflects light from a second lower right in-feed end line-scan mirror 96,
which reflects light from
a third lower right in-feed end line-scan mirror 97. The third lower right in-
feed end line-scan
mirror 97 reflects light off of the sensing volume conveyor belt 32. Thus, the
lower right in-feed
end line-scan camera 82 focuses its field of view on the sensing volume
conveyor belt 32,
capturing line-scan data about the first item 20A and the second item 20B
being transported in
the direction of motion along the sensing volume conveyor belt 32. After the
indicia reader
collects the line-scan data corresponding to the first item 20A, it identifies
an indicium 24A on
the left side 23 of the first item 20A.
[00123] In an embodiment, the line-scan cameras may be triggered by signals
derived from a
transport location physical sensor to capture a line-scan datum once for every
five thousandths of
an inch of travel of the conveyor belt 32. That is, when using an encoder
having a 1 mil interval,
each five intervals will constitute one system count, and one line scanned
image will be captured.
[00124] Turning to Figure 7B, right side camera optics are illustrated and
include, but are not
limited to, the lower right in-feed end line-scan camera 82 and the lower
right out-feed end line-
scan camera 80, The right side camera optics capture light from the
illumination source 40
reflected back into the field of view of the right side camera optics on one
or more line-scan
mirrors. The line-scan mirrors shown in Figure 7B include the second lower
right out-feed end
line-scan mirror 93, the third lower right out-feed end line-scan mirror 94,
the second lower right
in-feed end line-scan mirror 96, and the third lower right in-feed end line-
scan mirror 97, though
more or fewer mirrors can be included depending on the specific application
contemplated.
[00125] Figure 7B also shows the upper right out-feed end line-scan camera 81
and the upper
right in-feed end line-scan camera 83 imaging the sensing volume conveyor belt
32, and when
the in-feed conveyor belt 30 delivers the first and second items 20A and 20B
to the sensing
volume conveyor belt 32, these line-scan cameras will image the items as well.
Eventually, the
first and second items 20A and 20B will be out of sight of the upper right out-
feed end line-scan
camera 81 and the upper right in-feed end line-scan camera 83 when they are
passed along to the
out-feed conveyor belt 34.
[00126] In an embodiment, the line-scan cameras may be mounted horizontally to
reduce dust
build-up on the camera lenses. Folding mirrors may be used to provide selected
field of view
geometries to allow these horizontally mounted cameras to observe the sensing
volume from
different angles.
[00127] To achieve a desired depth of focus for each line-scan camera along
with a fine image
resolution to read indicia, the optical path for each line-scan camera should
be several feet from
each item 20 in the sensing volume. To allow for long optical paths without
unduly expanding
the size of the system 25, each line-scan camera's optical path may be folded,
for example by
line-scan mirrors 93, 94, 96, and 97.
[00128] Because the width of the field of view for each line-scan camera
expands linearly as
the optical distance from the line-scan camera increases, line-scan mirrors
that are optically
closer to the first item 20A and second item 20B may be wider than the belt
width in the line scan
direction. As will be appreciated, for an imaging field at a 45 degree angle
to the belt, the field
width is √2 times the belt width, and the mirror must be sufficiently wide
to subtend that field.
However, because each line-scan camera only images a narrow line sensing
volume, about five
thousandths of an inch in certain embodiments, each line-scan mirror can be
very short in the
perpendicular direction. In some embodiments, each line-scan mirror is just a
fraction of an inch
tall. The line-scan mirrors are made of glass about one quarter of an inch
thick and about one
inch tall. In a device having a 20 inch wide sensing volume, the line scan
mirrors may have
widths from about eight inches to about thirty inches, depending on the portion
of the sensing volume for which that camera is responsible. The line-scan
mirrors allow the
optical paths for the
bottom, top, and side perspectives of the fields of view of the line-scan
cameras to be folded,
while maintaining relatively narrow top and side walls, about seven inches
thick in an
embodiment.
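A worked check of the 45 degree geometry stated above, in inches: for the 20 inch belt, the field width at 45 degrees is 20·√2, about 28.3 inches, which falls within the stated eight-to-thirty-inch range of mirror widths.

    import math

    belt_width_in = 20.0
    field_width_in = belt_width_in * math.sqrt(2)  # field at 45 degrees to belt
    print(round(field_width_in, 1))                # 28.3
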
[00129] Each line-scan camera produces line-scan data from light reflected off
of the items 20
traveling through the sensing volume. In an embodiment, with the nominal speed
of all of the
conveyor belts and imaging resolution, the line-scan cameras operate at about
three thousand two
hundred lines per second, corresponding to exposure times of about three
hundred microseconds.
With typical line-scan camera technology, these short exposure times
necessitate fairly bright
illumination to yield high-contrast images. For reasonable energy and
illumination efficiencies,
an illumination source 40 may be selected to provide intense illumination with
low divergence,
focused along each line-scan camera's optical perspective.
[00130] Figure 7C illustrates the right side camera optics. The right side
camera optics
include, but are not limited to, the lower right out-feed end line-scan camera
80, the upper right
out-feed end line-scan camera 81, the lower right in-feed end line-scan camera
82, and the upper
right in-feed end line-scan camera 83, which are each connected to the lower
housing 26 of the
system 25. The right side camera optics are shown focused using line-scan
mirrors. In this
embodiment, the first lower right out-feed end line-scan mirror 92 reflects
light from the second
lower right out-feed end line-scan mirror 93, which reflects light from the
third lower right out-
feed end line-scan mirror 94, which reflects light from the sensing volume
conveyor belt 32.
Furthermore, the first lower right in-feed end line-scan mirror 95 reflects
light from the second
lower right in-feed end line-scan mirror 96, which reflects light from the
third lower right in-feed
end line-scan mirror 97, which reflects light from the sensing volume conveyor
belt 32. Light
falls on the sensing volume conveyor belt 32 from the illumination source 40
mounted on the
upper housing 28.
[00131] When the first item 20A and the second item 20B exit the out-feed end
of the in-feed
conveyor belt 30, they enter the in-feed end of the sensing volume conveyor
belt 32 and pass
through the fields of view of the right side camera optics, and line-scan data
is generated that
corresponds to the first item 20A and the second item 20B. The first item 20A,
bearing the
indicium 24A, and the second item 20B, bearing the indicium 24B, exit the
sensing volume
when they are transported from the sensing volume conveyor belt 32 and onto
the in-feed end of
the out-feed conveyor belt 34. The multiple line-scan cameras, each with its
own perspective,
capture multiple images of the first item 20A and the second item 20B before
they exit the
sensing volume. The line-scan data generated is used by the system 25 to
recognize parameters
for each item as discussed further below.
[00132] An upward looking line-scan camera 88 is mounted on the lower housing
26, as
illustrated in Figure 8A. In this Figure, the item 20 travels from left to
right along the in-feed
conveyor belt 30 through the sensing volume 240. A belt gap 36 is provided
between the in-feed
conveyor belt 30 and the sensing volume conveyor belt 32. Upward looking line-
scan camera
illumination source 41 provides an intense illumination of the belt gap 36
with low divergence,
allowing upward looking line-scan camera 88 to yield a high-contrast image.
[00133] The upward looking line-scan camera 88 produces images from light
traveling through the belt gap 36 and onto the upward looking line-scan mirror
98. The
light is generated
by the upward looking line-scan camera illumination source 41 and is reflected
off of item 20 as
it travels from in-feed conveyor belt 30 over belt gap 36 and onto the sensing
volume conveyor
belt 32.
[00134] In addition
to providing an image of item 20 for later analysis by the indicia reader,
the upward looking line-scan camera 88 provides unobstructed images of the
bottom of item 20.
While analysis by the indicia reader can identify an indicium on the bottom of
item 20, the
dimensioning sensor uses the unobstructed images of the bottom of item 20 to
help refine the
measurements of item 20. Thus, in embodiments including upward looking line-
scan camera 88,
items of disparate heights (such as first item 20A and second item 20B shown
in Figures 7A and
7C) can be placed adjacent to one another on the in-feed conveyor belt 30
without the item
isolator treating the items of disparate heights as a single item having a
more complex geometry.
[00135] As shown in Figure 8B, the upward looking line-scan camera optical
components,
including upward looking line-scan camera illumination source 41, upward
looking line-scan
mirror 98, and upward looking line-scan camera 88, are located within the
lower housing 26 of
the system 25. In the illustrated embodiment, the optical path of upward
looking line-scan
camera 88 is folded only once, off of upward looking line-scan mirror 98. In
other words, light
reflected off of item 20 as the light crosses through the belt gap 36 is
reflected off of upward
looking line-scan mirror 98 to upward looking line-scan camera 88. As
described previously, the
item 20 is positioned over the belt gap 36 when the item 20 is transferred
from the in-feed
conveyor belt 30 to the sensing volume conveyor belt 32.
[00136] As will be
appreciated, the upward looking camera is a dark field detector. That is, in
the absence of an object in its measurement area, it will receive little or no
reflected light, and the
image will be dark. When an object is present in the measurement area,
reflected light from the
illumination source 41 will be reflected back to the camera. In contrast, the
light curtain
described above is a bright field detector. When no object is present, the
image is bright, while
when an object is present, the image field is shaded by the object, causing it
to appear as a dark
object in the detector.
[00137] Working in conjunction with each other, the two systems allow for
detection and
measurement of objects that may be difficult to detect with one or the other
approach. For
example, an object that is relatively dark, and/or a poor reflector may be
difficult for the upward
looking camera to distinguish from the dark background field. Similarly, an
object that is
relatively transparent may not produce sufficient contrast to be detected by
the light curtain. The
inventors have determined that a good rate of object singulation can be
obtained when using the
two sensors in combination with the laser stripe generator 119 described
below.
[00138] As seen in Figure 9A, a transport location sensor includes, but is
not limited to, an in-
feed conveyor belt 30, a sensing volume conveyor belt 32, an out-feed conveyor
belt 34, and a
transport location physical sensor 122.
[00139] A weight sensor, also seen in Figure 9A, includes, but is not
limited to, at least one
load cell (175A-D in Figure 16), previously mentioned in the context of Figure
6B. In an
embodiment, the weight sensor includes four load cells. The set of four load
cells supports the
sensing volume conveyor belt 32 and its associated mechanical structure
(motor, rollers, the belt,
etc.). In some embodiments, the weight sensor also includes three object
sensors, shown herein
as an in-feed conveyor belt object sensor 173A, a sensing volume entrance
object sensor 173B,
and a sensing volume exit object sensor 173C. In some embodiments, each object
sensor is
placed about two tenths of an inch above the transport location sensor 122. In
some
embodiments, the object sensors are light sources and photodetector pairs in
which the optical
path between the light source and the photodetector is interrupted in the
presence of an object,
such as item 20. Other object sensors are well known in the art, and can be
used depending on
the specific application contemplated.
[00140] Item 20 is transported toward the sensing volume along the in-feed
conveyor belt 30
of the transport location sensor. In an embodiment, as the item 20 approaches
the sensing
volume, the in-feed conveyor belt object sensor 173A detects that item 20 is
about to enter the
sensing volume. Item 20 passes over belt gap 36 as it is transferred from in-
feed conveyor belt
30 to sensing volume conveyor belt 32, and the sensing volume entrance object
sensor 173B
ascertains that the item 20 has entered the sensing volume. Similarly, the
sensing volume exit
object sensor 173C detects when item 20 exits the sensing volume and is
transferred from
sensing volume conveyor belt 32 to out-feed conveyor belt 34. However, the
existence and
particular location of each object sensor vary depending on the specific
application
contemplated.
[00141] When, as in Figure 9A, no items are located on sensing volume conveyor
belt 32, the
load cells measure the total weight of the sensing volume conveyor belt 32.
Then, as one or more
items 20 are transferred to the sensing volume conveyor belt 32, the load
cells measure the
weight of the sensing volume conveyor belt 32 and the weight of the one or
more items 20. Each
load cell converts the force (weight) into a measurable electrical signal,
which is read out as a
load cell voltage. Since the electrical signal output of each load cell is on
the order of millivolts,
the signals of the load cells are amplified and digitized by load cell
amplifiers (not shown).
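A sketch of the net weight computation, with an assumed linear calibration (volts per pound) standing in for the load cell amplifiers: the four amplified readings are summed and the empty-belt (tare) weight is subtracted.

    def net_item_weight_lbs(cell_voltages, volts_per_lb, tare_lbs):
        """Sum calibrated load cell readings and remove the belt's own weight."""
        gross_lbs = sum(v / volts_per_lb for v in cell_voltages)
        return gross_lbs - tare_lbs

    # Four load cells reading 4.0 V total at 0.04 V/lb -> 100 lbs gross;
    # a 98 lb belt assembly leaves 2 lbs of items on the belt.
    print(round(net_item_weight_lbs([0.9, 1.1, 1.0, 1.0], 0.04, 98.0), 2))
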
[00142] As seen in Figure 9B, the weight sensor includes, but is not
limited to, the set of
object sensors (173A, 173B, and 173C) and the load cells. The sensing volume
entrance object

sensor 173B is located just inside the upper housing 28 of the sensing volume
and above the belt
gap (indicated in Figure 8A by reference number 36) between in-feed conveyor
belt 30 and
sensing volume conveyor belt 32. Similarly, the sensing volume exit object
sensor 173C is
located just inside the upper housing 28 of the sensing volume and above the
out-feed conveyor
belt 34. The in-feed conveyor belt object sensor 173A is located above the in-
feed conveyor belt
30 upstream of the sensing volume. While Figure 9B depicts the in-feed
conveyor belt object
sensor 173A as being close to the sensing volume, the distance between the in-
feed conveyor belt
object sensor 173A and the sensing volume can vary depending on the specific
application
contemplated.
[00143] Figure 9B also shows that load cells 175A and 175C are located
inside the lower
housing 26 of the sensing volume. Load cells 175B and 175D (as depicted in
Figure 16) are not
visible in this view as they are blocked by load cells 175A and 175C,
respectively. The load cells
support sensing volume conveyor belt 32 and its associated mechanical parts,
enabling the set of
load cells to measure the weight of the sensing volume conveyor belt 32 and
items thereon, if
any.
[00144] As seen in Figure 9B, the transport location physical sensor 122,
in the illustrated
embodiment a rotary encoder, is located close to a load cell 175C. The
transport location
physical sensor 122 is connected to the sensing volume conveyor belt 32 and a
digital counter in
one of the system processors. As the sensing volume conveyor belt 32 is
rotated by the motor,
the encoder wheel turns, allowing the transport sensor processor to record the
movement of the
sensing volume conveyor belt 32. The displacement of the conveyor belt from an
arbitrary
starting location is defined as the transport system location. The transport
sensor processor
generates the transport system location on the conveyor belt for each
transport sensor pulse
generated by the transport location physical sensor 122, though as mentioned
above, in practice
a number of sensor pulses may together constitute a system count, in order to
provide appropriate
intervals. The signals from the transport location physical sensor 122 are
also used to trigger the
line-scan cameras described herein to take images. In an embodiment, the
transport system
location is the along-track co-ordinate of the item, wherein the along-track
co-ordinate system is
established in keeping with a virtual sensing volume conveyor belt that is
infinitely long. When
the system 25 receives the object position of the item 20 from the in-feed
conveyor belt object
sensor 173A, it generates the transport system location corresponding with the
along-belt co-
ordinate of the item 20.
[00145] As illustrated in Figures 10A and 10B, an embodiment of the dimension
sensor
includes, but is not limited to, a laser stripe generator 119, at least one
laser mirror (shown herein
as a first laser mirror 99, a second laser mirror 100 and a third laser mirror
101), an area camera
152, one or more area camera mirrors (shown herein as first area camera mirror
48 and second
area camera mirror 49), an upward looking line-scan camera (shown with
reference number 88 in
Figures 8A and 8B), and at least one parameter processor (not shown) for
processing the
parameter values generated from the area-camera images from the area camera
152 and line-scan
data from the upward looking line-scan camera.
[00146] Laser stripe generator 119 projects a laser stripe upward to the
first laser mirror 99.
As will be appreciated, a number of types of optical elements are capable of
converting a laser
beam into a stripe, including, for example, a cylindrical lens, a prism, or
conic mirrors. The laser stripe is reflected from the first laser
mirror 99 to the second
laser mirror 100 and onto the third laser mirror 101. The third laser mirror
101 projects the laser
stripe downward from the top of the sensing volume onto the sensing tunnel
conveyor belt 32. In
a particular embodiment, laser stripe generator 119 uses a holographic optical
element and a laser
diode to generate the laser stripe. In an embodiment, the laser diode is an
infrared laser diode,
and the area camera 152 is a CCD camera configured to detect infrared
radiation. In a particular
embodiment, a low pass filter or a band pass filter configured to
preferentially allow infrared
radiation to pass while attenuating an amount of visible light is placed over
the CCD.
[00147] Item 20 is transported through the system from left to right along the
transport system
in the direction of motion from the in-feed conveyor belt 30 to the sensing
volume conveyor belt
32 to the out-feed conveyor belt 34. It is transferred from in-feed conveyor
belt 30 to sensing
volume conveyor belt 32, which transports it through the sensing volume. Area
camera 152 has a
pyramid-shaped field of view which looks down on sensing tunnel conveyor belt
32 after it is
folded by first area camera mirror 48 and second area camera mirror 49. While
the field of view
of area camera 152 is depicted in Figures 10A and 10B as being folded by first
and second area
camera mirrors 48 and 49, the number of mirrors used to fold the field of view
of area camera
152 is merely by way of example, and can vary depending on the specific
application
contemplated. The laser stripe is projected onto the sensing volume conveyor
belt 32 within the
field of view of area camera 152. Item 20 is transported through the sensing
volume on sensing
volume conveyor belt 32, passing through the point at which the laser stripe
is projected onto the
sensing volume conveyor belt 32 from above. At that point, the area camera
captures area-
camera images of item 20 and the laser stripe reflecting off of the item.
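The triangulation can be sketched as follows; modeling the stripe's image displacement as height times the tangent of the known angle is an assumed simplification of the actual camera geometry.

    import math

    def height_from_displacement(displacement_in, angle_deg):
        """Invert displacement = height * tan(angle) to recover item height."""
        return displacement_in / math.tan(math.radians(angle_deg))

    print(round(height_from_displacement(2.0, 30.0), 2))  # about 3.46 inches
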
[00148] In the embodiment illustrated in Figure 11A, the system 25 includes
a left-side
downward looking line-scan camera 89 and a right-side downward looking line-
scan camera 90.
The field of view of left-side downward looking line-scan camera 89 is folded
by left-side
downward looking line-scan camera mirrors (first left-side downward looking
line-scan camera
mirror 105, second left-side downward looking line-scan camera mirror 106,
third left-side
downward looking line-scan camera mirror 107, and fourth left-side downward
looking line-scan
camera mirror 108) before being projected down onto sensing volume conveyor
belt 32 at an
angle that captures the top side of item 20 and the back-side of item 20 as
the item 20 passes
through the sensing volume front-side first from the in-feed conveyor belt 30
to the sensing
volume conveyor belt 32 to the out-feed conveyor belt 34, as shown in the
illustrated
embodiment.
[00149] The field of view of right-side downward looking line-scan camera 90
is folded by
right-side downward looking line-scan camera mirrors (first right-side
downward looking line-
scan camera mirror 123, second right-side downward looking line-scan camera
mirror 124, third
right-side downward looking line-scan camera mirror 125, and fourth right-side
downward
looking line-scan camera mirror 126) before being projected down onto sensing
volume
conveyor belt 32 at an angle that captures images of the top side of item 20
and the front-side of
item 20 as the item 20 passes through the sensing volume front-side first.
[00150] Right-side downward looking illumination source 128 provides an
intense
illumination of the sensing volume conveyor belt 32 with low divergence,
allowing right-side
downward looking line-scan camera 90 to yield a high-contrast image.
Similarly, left-side
downward looking illumination source (not shown in Figure 11A) provides an
intense
illumination of the sensing volume conveyor belt 32 with low divergence,
allowing left-side
downward looking line-scan camera 89 to yield a high-contrast image.
[00151] As shown in Figure 11B the field of view of left-side downward looking
line-scan
camera 89 is folded first by first left-side downward looking line-scan camera
mirror 105, then
by second left-side downward looking line-scan camera mirror 106. It is then
further folded by
third left-side downward looking line-scan camera mirror 107 and fourth left-
side downward
looking line-scan camera mirror 108. Fourth left-side downward looking line-
scan camera
mirror 108 projects the field of view of left-side downward looking line-scan
camera 89 down
onto the sensing volume conveyor belt 32. Item 20 is transported along in-feed
conveyor belt 30
onto sensing volume conveyor belt 32 which will transport the item 20 through
the sensing
volume after it completes its journey over the in-feed conveyor belt 30. As
item 20 is transported
through the sensing volume, it is brought into the field of view of left-side
downward looking
line-scan camera 89, and the left-side downward looking line-scan camera 89
captures images in
the form of line-scan data of the item 20.
[00152] Similarly, the field of view of right-side downward looking line-
scan camera is folded
first by first right-side downward looking line-scan camera mirror, then by
second right-side
downward looking line-scan camera mirror, It is then further folded by third
right-side
downward looking line-scan camera mirror 125 and fourth right-side downward
looking line-
scan camera mirror 126. Fourth right-side downward looking line-scan camera
mirror 126
projects the field of view of right-side downward looking line-scan camera
down onto the
sensing volume conveyor belt 32. As item 20 is transported through the sensing
volume, it is
brought into the field of view of right-side downward looking line-scan
camera, and the right-
side downward looking line-scan camera captures images, line-scan data, of the
item. Once the
item 20 has completed its journey over the sensing volume conveyor belt, it
passes onto the out-
feed conveyor belt 34. In some embodiments, some parameter sensors are able to
continue
sensing the item 20 as it travels on the out-feed conveyor belt 34.
[00153] Figure 12 illustrates a dataflow for use in an embodiment of a system
25, organized as
moving from top horizontal slices to bottom horizontal slices of an
asynchronous, data driven
architecture of the system. That is, in the embodiment, there may be no
universal clock within
the system; sensors and processors output their results as soon as the data is
available, and the
data flows are, in general, unidirectional. In an embodiment, information is
conveyed between
processes by TCP/IP network messages, and within processes via shared memory.
[00154] As will be discussed in greater detail below, Figure 13 illustrates
the same elements
grouped in parallel, sensing sensors/processes, namely a transport location
sensor 120, one or
more indicia reader(s) 130, a dimension sensor 150, an item isolator 140, and
a weight sensor
170, to emphasize that each physical sensor and associated parameter processor
may operate
autonomously from the other physical sensors and parameter processors. Figure
12, on the other
hand, is organized so that data flows from the data source level to the
parameter processor level
to the geo-parameter matching level to the final stage, product
identification, which is the stage
where the items that have been sensed in the sensing volume are either
identified as products or
flagged as exceptions. Each level in the hierarchy of an embodiment will be
addressed in turn
below.
[00155] The first data source is a transport system location sensor 120,
typically comprising a
transport system location physical sensor 122 and a transport sensor processor
127, as shown in
Figure 13. In one embodiment, transport system location physical sensor 122 is
a rotary encoder
attached to a belt roller. As shown in Figure 13, the initial sensing data
from transport system
location physical sensor 122 is a count increment, the transport sensor pulse
D147 (each of which
may represent more than one sensor pulse), which is sent to a transport sensor
processor 127.
Transport sensor processor 127 performs a simple summation and scaling process
to convert
transport sensor pulses D147 into transport system location values D148.
Transport system
location values are distributed to each of the other parameter processors so
that the parameter
processors can associate a transport system location with each measured
parameter value. In
some embodiments, transport sensor processor 127 also uses the transport
sensor pulses D147 to
generate line-scan camera trigger signals D142 and area camera trigger signals
D151 for the
various line-scan cameras 132 and an area camera 152 respectively. By
triggering the cameras
based on transport system movement, rather than at fixed time intervals, the
system may avoid
repeatedly recording images of the same field.
[00156] The second data source illustrated in Figure 12 is area camera 152.
Area camera 152
is positioned to observe the path of a line of laser light projected downward
towards sensing
volume conveyor belt and any items thereon. As described previously, there is
a known angle
between the laser projector and the area camera which causes the image of the
line of laser light
in the camera to be displaced perpendicular to the line, in proportion to the
height of the item on
which the line is projected. The data from area camera 152 is sent to item
isolating parameter
processor 144 and dimension estimator 154.
[00157] The third data source in the system illustrated in
Figure 12 is a set of line-
scan cameras 132. The primary function of the line-scan cameras 132 is to
provide input to
indicia parameter processor(s) 134. In an embodiment, there are eleven line-
scan cameras 132,
which have been determined by the inventors to provide full coverage of the
sensing volume,
with adequate imaging resolution. Other embodiments can be implemented with
fewer or greater
numbers of line-scan cameras, depending on the performance goals of the
designer, the size and
shape of the sensing volume, the resolution of the cameras and other factors.
[00158] The fourth illustrated data source is an in-motion scale 172
comprising, in an
embodiment, three object sensors 173A, 173B and 173C (shown in at least Figure
9B) and four
analog load cells 175A, 175B, 175C, and 175D (shown in at least Figure 16). The load cells are
The load cells are
disposed in the load path supporting the sensing volume conveyor belt. Each
load cell generates
an electrical signal in proportion to the compression force applied to the load
cell. The signals from
all the load cells and all the object sensors are sent to weight generator
174.
[00159] The data sources described above are included in one particular
embodiment and
should not be construed as exhaustive. Other data sources can easily be
included in a system of
this type, depending on the parameters to be monitored. For example, infrared
sensors could
provide measurements of item temperature or color imagers could be used as
data sources to
measure a spatial distribution of colors on package labels.
[00160] Returning to Figure 12, the second stage of the data flow
architecture contains the
parameter processors. Each data source has one or more associated parameter processor(s) to transform the initial sensing data into parameter values, which are then used
by an item
identification processor to identify the item. In an embodiment, these
parameter processors
comprise an item isolating parameter processor 144, a dimension estimator 154,
an indicia
parameter processor 134, and a weight generator 174. In Figure 12, an optional
image processor
183 is depicted as a parameter processor.
[00161] The first processor shown in Figure 12 is the item isolating
parameter processor 144.
Functionally, item isolating parameter processor 144 includes an item
distinguishing system, an
item locator, and an item indexer. The item isolating parameter processor 144
allows the system
to operate on multiple items in close proximity to each other in the sensing
volume. The item
isolating parameter processor 144, in some embodiments, uses data collected
near the entrance to
the sensing volume and performs four functions: first, the item isolating
parameter processor 144
recognizes that an object (which may be one or more items) has entered the
sensing volume;
second, the item distinguishing system determines how many distinct items make
up the object
that entered the sensing volume; third, the item indexer assigns a Unique Item
Index value (UII)
to each distinct item (the UII is simply a convenient name for the particular
item); and fourth, the
item locator associates a two-dimensional location in the plane of the bottom
of the sensing
volume (for example, the plane of the conveyor belt) with each item that has
been identified and
assigned a UII.
[00162] If all items entering the sensing volume are well separated in the
along-transport
direction (i.e., they are singulated), there may be no need for the item
isolating parameter
processor 144, as all parameter values will be associated with the only item
in the sensing
volume. When items are not singulated, however, the item isolating parameter
processor 144
determines how many items are in close proximity to each other and assigns
each item a UII
associated with its transport system location.
[00163] Item isolating parameter processor 144 outputs a UII and transport system location D148 when it has isolated an item. The unique item index (UII) value, as its
name suggests, may
simply be a sequentially generated index number useful for keeping track of
the item. This data
is provided to dimension estimator 154 and an item description compiler 200.
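A trivial sketch of such an indexer follows; the counter-based implementation is an assumption consistent with the sequential numbering described above, and the function name is hypothetical.

    import itertools

    _uii_counter = itertools.count(1)  # sequentially generated index numbers

    def assign_uii(transport_location):
        """Return (UII, transport system location D148) for a newly
        isolated item; the UII is only a bookkeeping name for the item."""
        return next(_uii_counter), transport_location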
[00164] Although item isolation may be a separate logical function in the
system, the
computer processing embodiment of item isolating parameter processor 144 in
particular
embodiments may work in close conjunction with dimension estimator 154, with
internal data
being transferred back and forth between the functions. The item isolating
parameter processor
144 in this approach functions as part of the dimension estimator 154
processing to recognize the
difference between one large item and an aggregation of multiple smaller, closely spaced items, and to instruct the dimension estimator 154 to estimate the dimensions of the
one or more than
one item respectively.
[00165] The dimension estimator 154 receives data from area camera 152, from a
selected
line-scan camera 132 (the upward-looking camera in one embodiment) and from
the transport
sensor processor, which includes the transport system location sensor 120. In
addition, working
in conjunction with the item isolating parameter processor 144, dimension
estimator 154 receives
information about how many items are in the area camera's field of view and
where they are. It
will be understood that while isolation and dimensioning may be logically
distinct functions, they
may share a number of processing operations and intermediary results and need
not be entirely
distinct computer processes.
[00166] In one embodiment, dimension estimator 154 estimates the length,
height, and width
of the dimensions of the item, ignoring the fact that the item may have a
complex (non-
rectangular) shape. That is, in this approach, estimator 154 calculates a
smallest rectangular box
into which the item would fit. The dimension estimator 154 can be configured
to estimate
parameter values regarding the general shape of the item (cylindrical,
rectangular solid, necked
bottle shape, etc.), the item's orientation on the transport system, and
details concerning the
item's three-dimensional coordinates in the sensing volume. The calculated
parameter values,
along with the transport system location of the item to which they apply, are
sent to the item
description compiler 200 as soon as they are calculated.
[00167] There is one indicia parameter processor 134 associated with each
line-scan camera
132. Together they form an indicia reader 130, as shown in greater detail in
Figure 14. As will
be appreciated, the indicia parameter processors may be individual devices or
may be virtual
processors, for example respective modules running on a common processor.
Indicia parameter
processor 134 examines the continuous strip image produced by line-scan camera
132 until it
identifies the signature of an indicium (typically a bar code such as a UPC).
Furthermore, the
indicia parameter processor 134 attempts to convert the indicia image into the
underlying code,
which can later be compared by the item description processor with the product
description
database to determine a product code that uniquely identifies the product. In
addition to
outputting the product code to the item description compiler 200, the indicia
parameter processor
134 outputs the apparent location of the indicia in camera-centric
coordinates.
[00168] As will be appreciated, additional methods are available for
determining an indicia
parameter. For example, many bar codes include numerical indicia in addition
to the coded
numbers that make up the code. In this regard, optical character recognition
(OCR) or a similar
approach may be used to recognize the numbers themselves, rather than decoding
the bars. In the
case where the indicia are not bar codes at all, but rather written
identifying information, again
OCR may be employed to capture the code. In principle, OCR or other word
recognizing
processes could be used to read titles or product names directly as well.
[00169] Where, as with bar codes, there are a limited number of possible
characters and a
limited number of fonts expected to be encountered, simplifying assumptions
may be made to
assist in OCR processes and allow for a character matching process. A library
may be built,
incorporating each of the potential characters or symbols and rather than a
detailed piece-by-
piece analysis of the read character shape, the shape can be compared to the
library members to
determine a best match.
[00170] Furthermore, because in a typical environment there are fewer
likely combinations
than there are possible combinations, it is possible that a partially readable
code can be checked
against likely codes to narrow down the options or even uniquely identify the
code. By way of
example, for a retailer stocking tens of thousands of items, each having a 10
digit UPC, there are
10^10 possible combinations but only 10^4 combinations that actually correspond
to products in the
retailer's system. In this case, for any given partially read code, there may
be only one or a few
matches to actual combinations. By comparing the partial code to a library of
actually-in-use
codes, the system may eliminate the need to generate an exception, or it may
present an operator
with a small number of choices that can be evaluated, which may be ranked by
order of
likelihood based on other parameters or other available information.
Alternately, the partial
match information may be passed as a parameter to the product identification
module and
evaluated along with other information to determine the correct match. In an
embodiment, more
than one bar code reader software module may be employed using different
processing
algorithms to process the same read data, and the results from each module can
be compared or
otherwise integrated to arrive at an agreed upon read, or on a most-likely
read where there is no
agreement.
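The partial-read check described above can be sketched as follows; the wildcard convention ('?' marking an unreadable digit) and the function names are assumptions for illustration.

    def candidate_matches(partial_code, codes_in_use):
        """Return the actually-in-use codes consistent with a partial read.
        One match resolves the read; a few can be ranked or offered to an
        operator; zero matches falls back to the exception path."""
        def consistent(code):
            return len(code) == len(partial_code) and all(
                p in ('?', c) for p, c in zip(partial_code, code))
        return [code for code in codes_in_use if consistent(code)]

Because only on the order of 10^4 of the 10^10 possible codes are in use by such a retailer, even a read with several unresolved digits typically yields only a handful of candidates.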
[00171] For weight parameters, the in-motion scale 172 generates a signal
proportional to the
sum of the weights of the items on the scale. For singulated items, where only
one item is in the
active sensing volume at a time, the weight generator 174 may sum the signals
from the in-
motion scale 172 (the load cells, in the illustrated embodiment) and apply a
transformation to
convert voltage to weight. For non-singulated items, where more than one item
can be in the
sensing volume simultaneously (i.e., closely spaced along the sensing volume
conveyor belt),
weight generator 174 has two opportunities to estimate the weight of
individual items:
immediately after the item enters the sensing volume, and immediately after
the item exits the
sensing volume. The object sensors of the in-motion scale 172 are provided to
inform weight
generator 174 when items have entered or exited the in-motion scale 172. The
object sensors are
incorporated into the in-motion scale 172 so its operation may be conducted
independently of
other parameter sensors.
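A minimal sketch of the voltage-to-weight conversion and the entry/exit weight estimate follows, assuming a linear calibration; the gain, offset, and function names are hypothetical.

    VOLTS_TO_WEIGHT_GAIN = 25.0    # assumed calibration gain
    VOLTS_TO_WEIGHT_OFFSET = 0.0   # assumed tare offset

    def scale_weight(cell_voltages):
        """Weight generator: sum the load cell signals, convert to weight."""
        return (VOLTS_TO_WEIGHT_GAIN * sum(cell_voltages)
                + VOLTS_TO_WEIGHT_OFFSET)

    def item_weight_from_step(total_before, total_after):
        """For non-singulated items, an individual weight may be estimated
        from the step change in total scale weight when the object sensors
        report that an item has entered (or exited) the in-motion scale."""
        return total_after - total_before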
[00172] As with the data sources, the list of parameter processors above is by way of example, not an exhaustive listing. For instance, Figure 12 includes an
optional image processor
183. Furthermore, it should be appreciated that any one of the parameter
processors described
herein may be omitted in particular embodiments. For example, where the size,
shape and
indicia parameters are sufficient to identify objects in the sensing volume,
there may be no need
to include weight parameters.
[00173] Geometric-parameter matching is the process of using the known
geometry of the
various physical sensors and the fields-of-view in which they collected their
initial sensing data
to match the measured parameter values with the item to which the parameter
values apply. The
item description compiler 200 is the processor that collects all the
asynchronous parameter data
and makes the association with the appropriate item. As the name suggests, the
output of the
item description compiler 200 may be referred to as an item description
associated with the item.
The item description is a compilation of parameter values collected by
parameter processors for
an item measured in the sensing volume.
[00174] After the item description compiler 200 has built an item
description for a particular
item, the item description may be passed to an item identification processor
300, which performs
the product identification function. In practice, while there may be a number
of available item
description fields, it is possible to identify items without completing every
field of the item
description. For example, if a weight measurement was too noisy or the
indicium was hidden
from view, smudged, or otherwise unreadable, the item description may still be
sent to the item
identification processor 300 rather than being stuck at the geometric-
parameter matching level at
the item description compiler 200. The item description compiler 200 can
decide, for example,
that having only the digital indicia data is enough data to pass on to the
item identification
processor 300, or it can determine that the item has moved out of the sensing
volume and no
more parameter values will be forthcoming from the parameter processors.
[00175] By way of example, item identification processor 300 may receive an
item description
from item description compiler 200. Using the parameter values data in the
item description, the
item identification processor forms a query to a product description database,
which in turn
returns a product identification and a list of the expected parameter values for that product, along with any ancillary data (such as standard deviations on those parameter values).
[00176] Item identification processor 300 decides if the item matches the
product with a high
enough degree of certainty. If the answer is yes, the product identification
datum D233 is output;
if the answer is no, the item may be identified with an exception flag D232.
The
identification/exception decision logic can vary from simple to complex in various embodiments. At the simple end of the logic scale, the item identification processor could flag any item for which the weight did not match the weight of the product described by the UPC. At the complex end of the logic scale, the item identification processor can incorporate fuzzy logic, which is a
form of non-Boolean algebra employing a range of values between true and false
that is used in
decision-making with imprecise data, as in artificial intelligence systems.
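The simple end of that scale can be sketched as follows, flagging any item whose weight is inconsistent with the product looked up from its UPC; the tolerance rule (k standard deviations) and names are illustrative assumptions.

    def identify_or_flag(measured_weight, expected_weight, expected_sigma,
                         k=3.0):
        """Return True for a product identification (D233); False raises
        an exception flag (D232) for the item."""
        return abs(measured_weight - expected_weight) <= k * expected_sigma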
[00177] Optionally, various exception handling routines 320 can be invoked.
These routines
can be as rudimentary as doing nothing or lighting a light for a human to
observe, or they can be
more complex. For example, item identification processor 300 could be
instructed to act as
though the read indicium is in error by one or more digits and to re-query the
product description
database with variations on the read indicium.
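That re-query routine might be sketched as below, assuming single-digit errors; the generator name is hypothetical.

    def one_digit_variations(read_code):
        """Yield every code differing from the read indicium in exactly
        one digit, for re-querying the product description database."""
        for i, old in enumerate(read_code):
            for d in "0123456789":
                if d != old:
                    yield read_code[:i] + d + read_code[i + 1:]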
[00178] Optionally, each successful product identification can be used to
update the product
description database. That is to say, every successful identification
increases the statistical
knowledge of what a product looks like to the system 25. Also optionally,
information relating to
exception flags D232 can also be added to the history database 350 for
improvement of the
system 25.
[00179] Figure 13 illustrates an embodiment of a data flow for the same
elements as shown in
Figure 12, with a slightly different notional grouping and arrangement. The
illustrated data
sources are a transport location sensor 120, one or more indicia reader(s)
130, a dimension sensor
150, an item isolator 140, and a weight sensor 170, to emphasize that each
physical sensor and
associated parameter processor operates autonomously from the other physical
sensors and
parameter processors.
[00180] The transport system location sensor 120, in some embodiments,
includes the
transport system location physical sensor 122 and a transport sensor processor
127. In some
embodiments, such as the one shown in Figure 13, the transport location
physical sensor 122
takes the form of a rotary encoder associated with a belt roller. The initial
sensing data from
transport system location physical sensor 122 is a count increment, the
transport sensor pulse
D147, which is sent to the transport sensor processor 127. The transport
sensor processor 127
then performs a summation and scaling process to convert transport sensor
pulses D147 to transport system location values D148. As described above, the system may
treat the conveyor
belt as being essentially continuous and the transport system location is
essentially the distance
along the (continuous) conveyor belt from some arbitrary starting point.
[00181] In a particular embodiment, this distance is measured in increments of about five-
thousandths of an inch, and may be referred to as an x-coordinate. In an
embodiment, transport
sensor processor 127 also uses the transport sensor pulses D147 to generate
line-scan trigger
signals D142 and area camera trigger signals D151 for the various line-scan
cameras and an area
camera respectively. By triggering the cameras based on transport system
movement, rather than
at fixed time intervals, the system 25 may avoid repeatedly recording images
of the same field.
Thus, the output of the transport sensor processor 127 includes the line-scan
trigger D142, the
area camera trigger D151, and the transport system location D148.
[00182] Aside from a
set of conventional dedicated motor controllers, transport sensor
processing includes converting input belt commands D50 (e.g., stop, start, speed) received from the weight sensor 170 into motor controller signals; converting the transport system sensor pulses D147 into transport system location values D148; and transmitting
that value to the
various parameter processors, including without limitation the item isolating
parameter processor
144, the dimension estimator 154, the indicia parameter processor 134, the
weight generator 174,
and, optionally, the image processor 183, wherein each parameter processor may be as illustrated and described in relation to Figure 12, above.
[00183] It will be noted that transport sensor processor 127 may communicate directly with the various cameras to send them frame triggers.
[00184] The transport system location D148 output from the transport system
location sensor
120 is provided to the item isolator 140, the dimension sensor 150, the
indicia reader 130, the
weight sensor 170, any optional image processors 183 (shown in Figure 12), and
the item
description compiler 200.
[00185] A set of one or more line-scan cameras, which are included in the indicia reader 130, is triggered by the line-scan trigger D142. As shown in Figure 13, the line-scan trigger D142 triggers the line-scan cameras to produce line-scan data, which initiates
activity within the item
isolator 140, the dimension sensor 150, and the indicia reader 130. The
activity initiated by the
line-scan trigger D142 will be fully described below in the descriptions of
Figure 14, which
describes the indicia reader 130, and Figure 15, which describes the item
isolator 140 and the
dimension sensor 150. Similarly, the area camera trigger D151 may trigger
activity in the area
cameras which output area camera data to item isolator 140 and the dimension
sensor 150, which
is described in more detail in accordance with Figure 15.
[00186] In an embodiment, there is one indicia reader 130 associated with
each line-scan
camera, which may be a virtual indicia reader. Indicia reader 130 examines the
continuous strip
image produced by its line-scan camera until it identifies the signature of a
pre-determined
indicium (typically a bar code such as a UPC) at which time it decodes the
indicia image into a
digital indicia value D159. Additionally, indicia reader 130 outputs the
apparent location D236
of the indicia in camera-centric co-ordinates. The digital indicia data D159,
item location on the
transport system D148 and indicia location in camera-centric co-ordinates D236
are transferred
from the indicia reader 130 to the item description compiler 200.
[00187] In some embodiments, indicia reader 130 may, on occasion, receive
image retrieval
requests D149 from the item description compiler 200, whereby indicia reader
130 extracts an
image subframe D234 containing the indicia from the continuous strip image.
The extracted
images of the identified indicia are transferred to a history database 350.
The history database
350 is an optional element of the system that may be used for post-analysis,
and image retrieval
is similarly optional.
[00188] Note that each of the line-scan cameras may detect indicia at
different times, even for
a single item. For example, items lying on the sensing volume conveyor belt
with an indicium
pointing up are likely to have at least two line-scan cameras record the image
of the indicium (for
example, the left-side and right-side downward looking line-scan cameras),
possibly at different
times. These two images of the UPC will be processed as each datum arrives at
its respective
indicia reader, with the two UPC values and associated camera-centric co-
ordinates being sent to
the item description compiler 200 asynchronously.
[00189] Returning to Figure 13, item isolator 140 receives the line-scan
camera trigger D142
and the transport system location D148 from the transport system location
sensor 120. Item
isolator 140 outputs a unique item index (UII) value D231 with the associated
item's transport
system location D148 to the item description compiler 200 only when it has
isolated an item.
The UII value is provided internally to the dimension estimator 154 (shown in
Figures 12 & 15)
and externally to the item description compiler 200 as soon as it is
available.
[00190] Although a separate logical function in the system, the item isolator
140 computer
processing in embodiments of the system may work in conjunction with the
dimension sensor
150 and/or the light curtain assembly. Essentially, the item isolator A)
assists the dimension
estimator 154 (shown in Figures 12 & 15) processing to recognize the
difference between one
large item and more than one item positioned close together in the sensing
volume, and B)
instructs the dimension estimator 154 to estimate the dimensions of the one or
more than one
item respectively.
[00191] The dimension sensor 150 receives the area camera trigger D151, and
the transport
system location D148 from the transport system location sensor 120. The area
camera, which is
part of the dimension sensor 150, upon receipt of the area camera trigger
D151, generates area
camera image data and provides the area camera image data to the dimension
estimator 154. In
addition, working in conjunction with item isolator 140, the dimension sensor
150 collects
information about the number of items in the area camera's field of view and
where the items are.
The dimension sensor 150, specifically the dimension estimator, combines
multiple frames from
area camera 152 to estimate the locus of points that form the surfaces of each
item using a
triangulation process. The dimension sensor 150, including the processing of
the dimension
estimator, is described in greater detail in accordance with Figure 15.
[00192] The dimension sensor 150 further transforms the estimated item
surfaces to determine
a bounding box for each individual item. That is, it calculates a smallest
rectangular volume that
would hold each item. In an embodiment, the length, height, and width of this
bounding box are
considered to be the dimensions of the item, ignoring any non-rectangular
aspects of its shape.
Similarly, a more complex bounding box may be calculated, treating respective
portions of the
item as bound by respective bounding boxes. In this approach, each object is
rendered as an
aggregation of parameters representing box structures, but the overall shape
of the item is
somewhat preserved. Collateral parameters, such as the item's orientation and
3-dimensional co-
ordinates on the sensing volume conveyor belt, are also calculated in one
embodiment. Further,
the dimension sensor 150 can, at the discretion of the user, estimate
parameter values regarding
the general shape of the item (cylindrical, rectangular solid, necked bottle
shape, etc.) by
calculating higher order image moments. These parameter values, along with the
transport
system location of the item to which they apply, are the dimensioning data
D166 transmitted to
item description compiler 200. As an optional step, the dimension sensor 150
outputs some
intermediate data, such as closed height profiles D247, to history database
350.
[00193] In an embodiment, a disambiguation functionality may be included
that provides
additional approaches to handling closely spaced items that are identified by
the system as a
single object. In this regard, for each object profiled by the dimension
sensor, in addition
to providing a master profile for each item, multiple subordinate height
profiles may be
generated. The subordinate profiles can be generated, for example, by running
a blob detection
operation over the master profile to determine whether subordinate regions
exist. Where
subordinate profiles are detected, both the master and subordinate profiles
may be published with
the item description for use by other subsystems. If no subordinate profiles
are detected, only the
master profile is published.
[00194] For cases in which subordinate profiles are detected, and multiple indicia are read for
the object having subordinate profiles, a disambiguation process based on the
subordinate
profiles may be run. In this process, the subordinate profiles are used along
with a limited
universe of potential item identifications. In
particular, only those item identifications
corresponding to the indicia read for the object are used. Once the universe
of potential matches
is limited in this way, matching can proceed in accordance with the approaches
described in
relation to the several embodiments described herein. If the result of this
matching process yields
subordinate items that are all uniquely identifiable, the subordinate items
are published in place
of the multi-read and the master item is discarded. If unique reads are not
obtained, the multiple
read object may be published for further analysis by the system as is.
[00195] Weight
sensor 170 is the last sensor shown in Figure 13. As previously discussed, an
embodiment of the weight sensor 170 includes the in-motion scale 172 and
weight generator 174
(shown in Figure 12), which sums the signals from the in-motion scale and
applies a
transformation to convert voltage to weight data. For non-singulated items,
where more than one
item can be in the sensing volume simultaneously (i.e., closely spaced along
the sensing volume
conveyor belt), weight sensor 170 has two opportunities to estimate the weight
of individual
items: immediately after the item enters the sensing volume, and immediately
after it exits the
sensing volume. The object sensors of the in-motion scale provide the weight
sensor 170 with
information on when items have entered or exited the in-motion scale, which is
used by the
weight generator to determine the weight data D191 corresponding with
individual items when
there are multiple items located on the sensing volume conveyor belt at the
same time. When
multiple items overlap as they enter or exit the sensing volume, the weight
sensors produce an
aggregate weight for the overlapping items. The weight sensor 170 transfers
weight data D191,
which is the item weight and item's location on the transport system, to the
item description
compiler 200. Optionally, the continuous stream of weight data D191 is sent to the history database 350 in Step D190. The weight sensor 170 also delivers belt control commands
D50 to the
transport system motor controllers, as will be described below.
[00196] As indicated in the descriptions of Figures 12 and 13, in one
embodiment, the item
description compiler 200 receives data from all the various parameter sensors.
Item description
compiler 200 conducts geometric-parameter matching, which is the process of
using the known
geometry or the various physical sensors and their fields-of-view to match the
measured
parameter values with the item that was in their fields-of-view at the
moment(s) the
measurements were made.
[00197] An item description (the output of item description compiler 200)
is compiled by
matching the measured parameter value with the item known to be in the
particular sensor's
field-of-view. As described above, where each sensor's field of view is known,
for example
relative to a fixed reference point in the transport system, it is possible to
associate an instance of
item detection with a particular location. From time to time it may be useful
to calibrate the
system by imaging an item having known geometry and/or indicia, for example an
open box of a
known size and having indicia located at known locations thereon.
[00198] As an example, a line-scan camera looking straight down on the belt
might have a
field of view described as a straight line across the sensing volume conveyor
belt, with the center
of the line at the center of the sensing volume conveyor belt in the across
motion dimension and
six inches downstream from a reference point defined for the item description
compiler 200.
[00199] In this example, indicia reader 130 determines that UPC 10001101110
was read
starting at 200 pixels from the left end of the line scan camera's field of
view, at the instant that
the transport system location was 20,500 inches from its initialization point.
Using known
information regarding the camera parameters and the camera's geometric
relationship to the
sensing volume conveyor belt, item description compiler 200 can determine that
the UPC was
observed 1 inch from the left of the sensing volume conveyor belt and at a
transport system
location of 20,494 inches. The item description compiler 200 then associates
this UPC with the
item (with an arbitrary UII, 2541 as an example) that was observed to be
closest to transport
system location 20,494 inches. Similarly, when the weight sensor, specifically the weight generator, reports weight data D191 for an item loaded onto the in-
motion scale at
transport system location 20,494, item description compiler 200 associates
that weight data D191
with item UII 2541.
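The coordinate arithmetic of this example can be sketched as follows; the pixel pitch and camera offset are illustrative values consistent with the numbers above, and the function names are assumptions.

    CAMERA_DOWNSTREAM_OFFSET_IN = 6.0  # field of view 6 inches downstream
    PIXEL_PITCH_IN = 0.005             # assumed inches per pixel at the belt

    def indicia_belt_location(pixel_index, location_at_read):
        """Map a camera-centric read position into belt coordinates:
        200 pixels -> 1 inch cross-belt; 20,500 - 6 -> 20,494 along-belt."""
        return (pixel_index * PIXEL_PITCH_IN,
                location_at_read - CAMERA_DOWNSTREAM_OFFSET_IN)

    def nearest_item(along_belt, item_locations):
        """Associate the read with the item (UII -> location D148) observed
        closest to the computed transport system location."""
        return min(item_locations,
                   key=lambda uii: abs(item_locations[uii] - along_belt))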
[00200] The geo-parameter matching process is generally more complex than this
simple
example, and makes use of knowledge of the full three-dimensional field of
sensing of each
physical sensor. In one embodiment, the full three-dimensional geometry of all
of the sensor's
respective fields of sensing may be compiled into a library for use by the
item description
compiler 200. The library is used by the description compiler 200 to associate
items and sensed
parameters. Thus, in an embodiment, it is the full three-dimensional location
of each item (for
example a set of transverse, longitudinal, and rotational coordinates of the
item) combined with
the item's height, width, and depth that are used in the compilation of a
complete item
description of each item. Because no two items can exist in the same physical
space, transport
system location D148 and the bounding box description of each item may be used
by the item
description compiler 200 for matching parameter values to the correct item.
[00201] The item identification proceeds as described above in the sections
labeled
Geometric-Parameter Matching and Product Identification. In the example of a
reverse logistics
environment, once the product is identified, the item identification processor
300 transfers the
product identification data D233 to a logic engine 400, where the data may
ultimately be
forwarded to an auctioneer, a distribution center, a manufacturer, or other
entity. Alternative
uses for the system are contemplated other than in reverse logistics retail
systems and processes.
For instance, the system could be employed in forward logistics, where product
identifications
are sent to a point of sale (POS) system, as described in U.S. Application
Serial No. 13/032,086,
filed February 22, 2011.
[00202] In an embodiment, a configuration and monitoring process keeps track
of and updates
system calibration data while continually monitoring the activity of each
software process. Each
process can be configured to issue a regular heartbeat signal. If the
heartbeat from a particular
parameter processor or subsystem is not received after a period of time, the
configuration and
monitoring process can kill and restart that particular parameter processor.
In embodiments
employing an asynchronous dataflow architecture, killing and restarting any
one process does not
generally affect any other process or require re-synchronizing with a clock
signal. However,
some items passing through the system during the re-boot might not be
identified, in which case
they may be handled by the normal exception procedures.
[00203] The file transfer process is responsible for moving lower-priority,
generally large,
data files across the network from the various parameter sensors to the
history database 350,
when this optional database is included. The file transfer process manages the
transfer of large
files including, but not limited to, the line-scan images produced as part of
the indicia reader
processing, the height profiles generated by the dimension estimator, and
weight transducer data
streams. If file transfers took place indiscriminately, high-priority, real-
time data transfers such
as line-scan data streaming could be interrupted by lower-priority data
transfers. The file transfer
process manages those potential conflicts.
[00204] In an embodiment, each real-time file transfer process, which is
used for large, low-
priority (LLP) data sets/files, first stores the LLP data locally on the hard
drive of the parameter
processor where the data sets are created. On a regular basis, approximately
every three hundred
milliseconds, the file transfer process running on the one or more computers
hosting that
parameter processor checks for newly-deposited LLP data and sends the data
over the network to
the history database, which may be associated with the item identification
processor for
convenience. Data is transmitted in a metered fashion, with limited packet
sizes and enforced
packet-to-packet transmission delays, so average network bandwidth is
minimally reduced.
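A minimal sketch of such metering follows; the packet size, delay value, and callback are illustrative assumptions, not parameters from this specification.

    import time

    PACKET_SIZE = 64 * 1024   # assumed limited packet size, in bytes
    PACKET_DELAY_S = 0.02     # assumed enforced packet-to-packet delay

    def send_metered(data, send_packet):
        """Send LLP data in limited-size packets with enforced delays so
        average network bandwidth is only minimally reduced."""
        for offset in range(0, len(data), PACKET_SIZE):
            send_packet(data[offset:offset + PACKET_SIZE])
            time.sleep(PACKET_DELAY_S)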
[00205] The configuration parameters for the file transfer process reside in a
configuration
database. Configuration information such as packet sizes, transmission delays,
and IP and
destination server addresses are saved in the database. The file transfer
process uses standard file
transfer protocol, and is implemented in an embodiment using the cURL open-
source library.
[00206] Figure 14 is an information flow diagram for an embodiment of the
indicia reader. In
an embodiment of the system 25, there are eleven line-scan cameras, and as
previously noted,
there is one (virtual) indicia reader 130 logically associated with each line-
scan camera, even
though all of the indicia reader processing in practice may occur on the same
physical processor.
The indicia reader 130 performs three functions: identifying and decoding any
captured indicia
and, optionally, extracting indicia images from the continuous strip image
collected by the line-
scan camera 132. Thus, each indicia reader 130 in the embodiment effectively
operates as a bar
code decoder. In the embodiment, the eleven indicia sensors together define a
four pi steradian
indicia reading system. Each indicia reader 130 comprises a parameter
processor programmed to
identify indicia in the line-scan data captured by each of the line-scan
cameras 132 and to
interpret the indicia into digital indicia data. As previously described, each
line-scan camera 132
receives a line-scan trigger D142 based on the motion of the transport system.
[00207] A line-scan datum is the output from a single field of a line-scan
camera array 131.
Each line-scan datum D181 collected by the line-scan camera array 131 is
transferred to a line-
scan camera buffer 133, which is internal to line-scan camera 132. The line-
scan camera buffer
133 compiles the line-scan data D181 together into packages of two hundred
line-scan data,
which may be referred to as image swaths D237.
[00208] In an embodiment, the nominal imaging resolution at the item for each
4,096 pixel
line-scan camera 132 is approximately two hundred dpi. Thus, an image swath of
two hundred
line-scan data corresponds to an approximately one inch by twenty inch field-
of-view. Each line-
scan camera may be configured to transfer individual image swaths from the
camera to a circular
acquisition buffer 135 in the indicia parameter processor 134. It should be
noted that image
swaths D237 are used to transfer data between the line-scan camera 132 and the
indicia
parameter processor 134 for communication efficiency only; the data processing
in indicia
parameter processor 134 is performed on a line-by-line basis. Further, it
should be noted that
line-scan camera buffer 133 collects and saves line-scan data every time the
transport system has
moved by the defined trigger increment, independent of the presence of an
item in the sensing
volume.
[00209] As discussed above, each image swath D237 is tagged with a relevant
transport
system location D148 value, where, generally, one location value is all that
is needed for each
200 line swath. Image swaths D237 are concatenated in the circular acquisition
buffer 135 to re-
form their original, continuous strip image format. Consequently, even if an
item or an indicium
on an item spans multiple image swaths D237, the item or the indicium can be
processed in its
entirety after additional image swaths D237 are received by the circular
acquisition buffer 135.
In an embodiment, the circular acquisition buffer 135 is configured to hold
20,000 lines of
camera data.
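A sketch of such a buffer follows, matching the 20,000-line capacity noted above; the per-line storage of a location tag is an assumption about representation, not the disclosed structure itself.

    from collections import deque

    class CircularAcquisitionBuffer:
        """Concatenates image swaths back into a continuous strip image so
        an indicium spanning swath boundaries can be processed whole."""

        def __init__(self, capacity_lines=20000):
            self.lines = deque(maxlen=capacity_lines)  # oldest lines drop

        def append_swath(self, swath_lines, transport_location):
            # One transport system location value D148 tags the whole swath.
            for line in swath_lines:
                self.lines.append((transport_location, line))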
[00210] Indicia reader 130 extracts data from buffer 135 and examines line-
scan data D181
line by line, in a signature analysis process 136, in both the "cross-track"
(within each line) and
the "along track" (one line to the next) directions, to find the signature
characteristics of a
predetermined indicia format. For example, UPC bar codes can be recognized by
their high-low-
high intensity transitions. During signature analysis 136, identified indicia
are extracted from the
line-scan data and the extracted indicia D158 transferred to a decoding logic
process 137.
Decoding logic process 137 converts image data into a machine-readable digital
indicia value
D159. OMNIPLANAR software (trademark registered to Metrologic Instruments,
Inc.;
acquired by Honeywell in 2008) is an example of software suitable to perform
the indicia
identification and decoding in the indicia reader. As will be appreciated,
multiple parallel or
serial logic processes may be employed to allow for redundant identification.
In this regard,
where a first approach to identification and decoding of a code is
unsuccessful, a second
approach may prove fruitful.
[00211] In an embodiment, items are generally marked with indicia wherein
the indicia
conform to various pre-determined standards. Examples of indicia capable of
being read by the
decoding logic process 137 include, but are not limited to, the following: EAN-8, EAN-13, UPC-A, and UPC-E one-dimensional bar codes that capture 8-, 12-, and 13-digit Global Trade Item Numbers (GTIN).
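All of these formats carry the standard GTIN modulo-10 check digit, which a decoder can use to validate a read; the following sketch is illustrative of that standard check, not of the decoding logic process 137 itself.

    def gtin_check_digit_ok(code):
        """code: string of digits with the check digit last. Weights
        alternate 3 and 1 starting from the digit nearest the check."""
        digits = [int(c) for c in code]
        body, check = digits[:-1], digits[-1]
        total = sum(d * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(reversed(body)))
        return (10 - total % 10) % 10 == check

    assert gtin_check_digit_ok("036000291452")  # well-known UPC-A example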
[00212] It will be understood that the indicia reader 130 may operate
continuously on the line-
scan data. In the context of a bar code reader, when a high-low pattern is
observed in the line
scan the software attempts to identify it as an indicium. If it is identified
as such, the software
then decodes the full indicium into a digital indicia value. In particular
embodiments, the line-
scan data presented to the decoding logic process 137 is monochromatic, so the
decoding logic
process 137 relies on lighting and other aspects of the optical configuration
in the line-scan data
to present information with sufficient contrast and resolution to enable
decoding indicia printed
according to UPC/EAN standards.
[00213] The output from decoding logic process 137 contains three data: the
digital indicia
value D159, the transport system location D148 corresponding to the one or
more line-scan data
in which the indicia was identified, and the indicia location in camera-
centric coordinates D236.
In this regard, the camera-centric coordinates could describe a two
dimensional area occupied by
the entire indicium. Alternately, a particular X-Y location, for example a
centroid of the
indicium image, a particular corner, or an edge, could be assigned to that
indicium.
[00214] Besides identifying and decoding indicia, a second, optional,
function of the indicia
reader 130 is to extract images of individual items as requested by the item
description compiler
200, and to transfer these images, the extracted image subframes D234, to the
history database
The item description compiler 200 issues an image retrieval request D149,
along with the
transport system location describing where the item bearing the indicia was
located in the field of
view of the line-scan camera 132, causing a region extract process 138 to send
out the image
retrieval request D149 to retrieve the appropriate subframe D234 from the
circular acquisition
buffer 135. Region extract process 138 then performs JPEG compression of the
extracted
subframe D234, and transmits it via the file transfer process to history
database 350.
[00215] Turning to Figure 15, an information flow diagram of an embodiment of
the
dimension sensor 150 and the item isolator 140 is provided. The dimension
sensor 150 functions
primarily for item dimensioning, or measuring the spatial extent of individual
items, while the
item isolator 140 functions primarily for item isolation, or sorting out or
distinguishing the items
entering the sensing volume. For example, if two boxes enter the sensing
volume in close
proximity, the item isolator 140 informs the rest of the system that there are
two items to identify,
and the dimension sensor 150 informs the system of the size, orientation, and
location of each of
the two items. As has been mentioned, these two processes operate in close co-
ordination
although they are performing distinctly different functions. Since the
dimensioning process
actually starts prior to the item being fully identified by the item isolator
140, the dimension
sensor 150 will be addressed before the item isolator 140. In an
embodiment, both the
dimension sensor 150 and the item isolator 140 utilize the output of one of
the line-scan cameras
132A and the area camera 152.
[00216] In an embodiment, dimension sensor 150 includes the area camera 152
and upward-
looking line-scan camera 132A. The dimension estimator 154 (the parameter
processor portion
of dimension sensor 150) receives data from area camera 152, upward-looking
line-scan camera
132A, and transport system location sensor 120 (shown in Figure 12).
[00217] The main function of dimension sensor 150 is item dimensioning. During
the height
profile cross-section extraction process 153 and the aggregation process 155,
the dimension
sensor 150 combines multiple frames from area camera 152 to estimate the locus
of points that
form the surfaces of each item using a triangulation process. As implemented
in one
embodiment, a laser line generator continuously projects a line of light onto the
sensing volume
conveyor belt (and any item thereon). The line is projected from above and
runs substantially
perpendicular to the belt's along-track direction. In operation, the line of
light will run up and
over any item on the belt that passes through its field of view. Triggered by
the area camera
trigger D151, area camera 152 records an image of the line of light. There is
a known, fixed
angle between the laser line generator projection axis and the area camera's
optical axis so the
image of the line of light in area camera 152 will be displaced perpendicular
to the length of the
line by an amount proportional to the height of the laser line above the
reference surface, which
may conveniently be defined as the upper surface of the conveyor belt. That
is, each frame from
area camera 152 is a line of light, apparently running from one edge of the
belt to the other, with
wiggles or sideways steps, the wiggles and steps indicative of a single height
profile of the items
on the belt.
[00218] Triggered by
the area camera trigger D151, the area camera 152 provides an area
camera image datum (a single image) every time the sensing volume
conveyor belt
moves by the selected count interval. In some embodiments, the contrast of
this height profile
may be enhanced through the use of an infrared laser and a band pass filter, positioned in front of area camera 152, selected to preferentially pass infrared light.
With the filter in place,
the output of the area camera 152 is area camera image data D46, which
contains a two-
dimensional image showing only the displacement of the laser stripe as it
passes over the item.
[00219] The area
camera 152 takes snapshots of the laser stripe that is projected across the
sensing volume conveyor belt (edge to edge) by the laser stripe generator. The
area camera
image data D46 and the transport system location D148 value when the area
camera image data
D46 was recorded, are distributed to item isolating parameter processor 144
and dimension
estimator 154, which operate in close coordination.
[00220] A height
profile cross-section extraction process 153 extracts a height profile cross-
section D257 from the area camera image data D46 by determining the lateral
displacement of
the laser stripe, which was projected by the laser line generator over the
item. When there is an
angle between the laser stripe projection direction and the viewing angle of
area camera 152, the
image of the stripe is displaced laterally whenever the stripe is intercepted
by a non-zero height
item. The dimension estimator 154 uses a triangulation algorithm to calculate
height profile
cross-section D257 of the item along the original (undisplaced) path of that
linear stripe. Note
that the height profile cross-section D257 is a height map of everything on
the belt at the
locations under the laser stripe.
[00221] Height profile cross-section D257 is represented by a collection of
height data points,
which are herein referred to as hixels. Each hixel represents the height (z)
of a point in an (x,y)
position grid. As shown in Figure 7A, the y-coordinate represents the cross-
belt position, the x-
coordinate represents the along-belt position, and the z-coordinate represents
height. Height
profile cross-section extraction process 153 is applied to each frame of area
camera 152, the
camera being triggered each time the transport system moves by a predetermined
distance, about
0.005 inches in one embodiment.
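The per-frame extraction can be sketched as follows; the use of numpy, the brightest-row stripe finder, and the scale constants are illustrative assumptions layered on the triangulation described above.

    import numpy as np

    INCHES_PER_PIXEL = 0.01   # assumed image scale at the belt
    STRIPE_ANGLE_TAN = 0.577  # tan of the assumed projector/camera angle

    def cross_section(frame, baseline_rows):
        """frame: 2-D filtered image showing only the laser stripe.
        baseline_rows: stripe row per column for an empty belt. Returns
        one row of hixels (height per cross-belt position) obtained by
        triangulation from the stripe's displacement."""
        stripe_rows = frame.argmax(axis=0)       # brightest row per column
        displacement_px = stripe_rows - baseline_rows
        heights = displacement_px * INCHES_PER_PIXEL / STRIPE_ANGLE_TAN
        return np.clip(heights, 0.0, None)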
[00222] The resulting sequence of height profile cross-sections are combined
into groups by
an aggregation process 155 to build closed height profiles D247. The
aggregation process 155 is
based on a pre-defined minimum association distance. If the distance between
any two hixels is
less than this association distance, they are considered to belong to the same
group. A closed
height profile D247 is created once there are no more hixels arriving from
height profile cross-
section extraction process 153 that can plausibly be associated with the
group. In other words, a
closed height profile D247 comprises all of the non-zero height points on the
belt that could
plausibly be part of a single item. It should be noted that a closed height
profile D247 may
actually comprise two or more close together items.
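The association rule might be sketched as below; the distance value and the rectangular neighborhood test are assumptions, and a production grouping would be incremental rather than batch.

    MIN_ASSOCIATION_DISTANCE = 0.25  # assumed, in inches

    def group_hixels(hixel_positions):
        """hixel_positions: (x, y) points with non-zero height. Points
        closer than the minimum association distance share a group; each
        finished group is the footprint of a closed height profile D247."""
        groups = []
        for x, y in hixel_positions:
            near = [g for g in groups if any(
                abs(x - gx) <= MIN_ASSOCIATION_DISTANCE and
                abs(y - gy) <= MIN_ASSOCIATION_DISTANCE for gx, gy in g)]
            merged = [(x, y)]
            for g in near:
                merged.extend(g)
                groups.remove(g)
            groups.append(merged)
        return groups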
[00223] Each closed height profile D247 is compared to pre-determined minimum
length and
width dimensions to ensure that it represents a real item and not just a few noise-generated
hixels. When available, closed height profiles D247 are sent to the dimension
parameter
estimation process 157 and the dimension merging process 145. Closed height
profiles D247 are
optionally sent to history database 350.
[00224] In an embodiment, the height profile may be smoothed to account for
sensor noise. In
this approach, once a height profile is assembled for a single object,
portions of the profile that
appear to be outliers may be removed. As will be appreciated, removal of
apparent outliers prior
to profile assembly could eliminate portions of an actual object that are
separated by a
discontinuity, for example a mug handle may appear as an object separate from
the mug body in
a particular viewing plane. However, once the profile is assembled, this type
of discontinuity
would tend to be resolved, allowing for smoothing to be performed without
destroying
information about discontinuous object regions.
[00225] It may also be useful to include a zeroing or belt-floor
determination function for the
height profiling system. During ordinary use, the belt will continuously pass
through the laser
stripe projection, and the system should measure a zero object height. In
theory, the belt floor
may be measured using a running average height measurement, and that
measurement may be
used as a dynamic threshold that is subtracted from or added to the measured
height of objects
passing along the conveyor. In practice, it may be difficult to distinguish an
empty belt from a
belt carrying short items, which could throw off the zero measurement if
treated as an empty belt.
One solution is to use a predetermined height threshold and for the system to
treat anything less
than the threshold height as an empty belt. Even if a real object passes
through the system, its
effects will be smoothed as a result of the running averaging. This may allow
for removal of
slow-varying portions of the signal while preserving high-frequency information.
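A minimal belt-floor tracker along these lines follows; the threshold and smoothing constant are assumptions chosen only to show the structure of the approach.

    EMPTY_BELT_THRESHOLD_IN = 0.2  # below this, treat the belt as empty
    ALPHA = 0.001                  # assumed slow running-average constant

    class BeltFloor:
        def __init__(self):
            self.zero = 0.0

        def update(self, measured_height):
            """Fold apparent empty-belt readings into a slowly varying
            zero reference and return the zeroed object height."""
            if measured_height < EMPTY_BELT_THRESHOLD_IN:
                self.zero += ALPHA * (measured_height - self.zero)
            return measured_height - self.zero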
[00226] The second data source for dimension sensor 150 is a selected line-
scan camera 132A
(where the suffix "A" indicates the selected camera), wherein the selected
camera is, in this
example, specifically the upward-looking line-scan camera array 131A. Camera
132A produces
line-scan data after receiving line-scan trigger signals D142. The line-scan
data is then sent to a
line-scan camera buffer 133A, as described above for indicia reader 130.
[00227] As has already been mentioned, many of the same data processing
functions are used
for dimensioning and item isolation. Thus, the line-scan camera buffer 133A
outputs image
swaths to the circular acquisition buffer 135A, which is illustrated in Figure
15 as being disposed
in item isolator parameter processor 144. Also, as one of skill in the art
will recognize, the
various data processing steps illustrated herein are grouped as belonging to a
particular processor
(e.g., item isolating parameter processor, dimension estimator, etc) for
convenience of
explication only and such grouping is not intended to indicate in which
physical processing unit
such processing steps occur.
[00228] The upward looking line-scan camera is disposed to observe the bottom
of items on
the sensing volume conveyor belt. This camera is aligned to image through the
small gap
between the in-feed conveyor belt and the sensing volume conveyor belt. Unlike
the other line-
scan cameras, the upward looking line-scan camera does not need a large depth-
of-focus because
it is generally observing a consistent plane. That is, the bottom of every
item tends to be
approximately in the plane of the sensing volume conveyor belt. In general,
each line scan
comprises some dark pixels (where no item is over the gap) and some
illuminated pixels (where
part of an item is over the gap). The silhouette generator 141, in the item
isolator parameter
processor 144 processes the line-scan data D181 received from the circular
acquisition buffer
135A line-by-line and determines if the intensity of any of the pixels exceeds
a predetermined
threshold. Pixels that exceed the threshold are set to the binary level of
high while those below
the threshold are set to binary low, viz., 0. Any line containing at least one
high value is called a
silhouette D242. (A line without a high value is a null silhouette.) It will
be understood that
any silhouette may contain information about multiple items. The silhouette
D242 produced by
silhouette generator 141 is sent to an outline generator 143, which is the
logical process for
building bottom outlines.
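The silhouette generator's thresholding step can be sketched in a few lines; the 8-bit threshold value is an assumption.

    INTENSITY_THRESHOLD = 128  # assumed 8-bit intensity threshold

    def make_silhouette(scan_line):
        """Binarize one line of upward-camera pixels; return the binary
        line as a silhouette D242, or None for a null silhouette."""
        binary = [1 if px > INTENSITY_THRESHOLD else 0 for px in scan_line]
        return binary if any(binary) else None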
[00229] In conjunction with the upward looking line-scan camera, the light
curtain assembly
also observes the gap 36 and objects passing over it. As described above, pair-
wise scans of the
LEDs and photodiodes detect shadowed portions of the scanned line. Because the
light curtain is
a bright field detector, its silhouettes correspond not to bright pixels, as
in the upward looking
line-scan camera, but rather to dark pixels. For many objects, both detectors
will mark the same
silhouette positions. However, for certain objects, one of the two detectors
may fail to observe
the item. For example, the light curtain may fail when a transparent object
passes through its
field of view, while the camera may fail when confronted with an object that
is a poor reflector.
In one embodiment, the two silhouettes can be subjected to a Boolean OR
operation so that if
either or both detectors identify an object, the object is noted by the
system. Alternately, the two
systems can operate independently, and each produce its own set of parameters
for evaluation by
the system.
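The OR operation of the first option can be sketched in one line; the function name is an assumption.

    def fused_silhouette(camera_line, curtain_line):
        """An object noted by either the upward-looking camera or the
        light curtain is kept in the fused silhouette."""
        return [a | b for a, b in zip(camera_line, curtain_line)]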
[00230] The sequence of silhouettes is combined into clusters by an aggregation process, similar to the generation of groups described above, that takes place in outline generator 143. The outline
generator 143 is based on a defined minimum association distance. If the
distance between any
two high pixels in the sequence of silhouettes is less than this association
distance, they are
considered to belong to the same cluster. Thus, a cluster includes both pixels
along a scan line
and pixels in adjacent scan lines. The bottom outline D244 of each such pixel
cluster is
computed by taking slices along the x- (along-belt) and y- (cross-belt)
directions, and by finding
the first and last transitions between cluster pixels and background for each
row and column,
That is, if there are gaps between cluster pixels along a row or column the
processor skips these
transitions because there are more pixels in the same cluster further along
the row or column.
This bottom outline definition assumes that items are generally convex. When
this approach to
extracting outlines is used, holes inside items will be ignored. The bottom
outline D244 is used
during the dimension merging process 145. For a system incorporating both a
light curtain and a
line scan camera, there may be two bottom outlines D244, or alternately, the
two acquired data
sets can be used in tandem to define a single bottom outline D244. For the
purposes of the
following discussion and associated Figures, either outline separately or both
together are
referred to as D244, and the singular should be understood as comprehending
the plural.
[00231] The bottom outline D244 is used in some embodiments to refine the
dimension
understanding of each item. For example, as described above, the laser stripe
viewed by area
camera 152 is at an angle to the sensing volume. Because of the angle, tall
items may shadow
adjacent short items. Information from the upward looking line-scan camera may
allow the
dimensioner and item isolator to more reliably detect those shadowed items and
report their
bottom outlines in the x and y dimensions.
[00232] Before
calculating length, width, and height of the smallest bounding box enclosing
an item during the dimension parameter estimation process 157, the closed
height profile D247
may be mathematically rotated (in the plane of the conveyor belt) to a
standard orientation during
the dimension merging process 145. In some embodiments, the closed height
profile D247 is
projected on the x-y plane (i.e., the conveyor belt plane) to correlate with
the set of transverse,
longitudinal, and rotational co-ordinates of the bottom outline D244. The
first and second
moments of these points are calculated, from which the orientation of the
major and minor axes
are derived. The closed height profile D247 may then be mathematically rotated
such that those
axes are aligned with respect to the rows and columns of a temporary image
buffer, thereby
simplifying calculations of the item's length and width.
[00233] The item's length may be defined as the larger of the two dimensions
in the x-y plane
while the width is defined as the smaller. The item's height is also
calculated by histogramming
all the item's height data from the closed height profile and finding the
value near the peak (e.g.,
the 95th percentile).
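A minimal sketch of the moment-based rotation described above and of these length, width, and height definitions, assuming the profile is supplied as flat coordinate arrays (all names hypothetical; the max-min extents here are refined by the percentile approach discussed later):

```python
import numpy as np

def principal_angle(xs: np.ndarray, ys: np.ndarray) -> float:
    """Major-axis orientation from the first and second moments of the
    points projected on the belt (x-y) plane."""
    x0, y0 = xs.mean(), ys.mean()                    # first moments (centroid)
    mxx = ((xs - x0) ** 2).mean()                    # central second moments
    myy = ((ys - y0) ** 2).mean()
    mxy = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2.0 * mxy, mxx - myy)

def box_dimensions(xs, ys, heights):
    """Rotate to the standard orientation, then take length as the larger
    in-plane extent, width as the smaller, and height as the value near the
    histogram peak (here the 95th percentile)."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    theta = principal_angle(xs, ys)
    c, s = np.cos(-theta), np.sin(-theta)            # align axes with rows/cols
    xr = c * xs - s * ys
    yr = s * xs + c * ys
    d1, d2 = xr.max() - xr.min(), yr.max() - yr.min()
    return max(d1, d2), min(d1, d2), np.percentile(heights, 95)
```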
[00234] For subsequent validation of the item during the dimension merging
process 145,
additional moments can be computed describing the item's height. After
rotating the closed
height profile D247, the three-dimensional second moments are calculated. In
calculating these
moments, the item is considered to be of uniform density, filled from the top
of the measured
height to the belt surface. The dimension system generates parameters
including, but not limited
to, second moments, which are distinct from those used to determine the item's
orientation, and
the width, length, and height, which are stored in a history database. These
parameters, along
with the weight information from the weight sensor and the indicia from the
indicia reader, are
used for validating the item.
[00235] Once a
bottom outline D244 is complete (in the sense that no more pixels will be
associated with this group of pixels), feature extraction is performed to
determine the item's
orientation, length, and width. In some embodiments, pixels along the outline
(perimeter) of a
cluster on the x-y plane (i.e., the sensing volume conveyor belt plane) are
analyzed. Pixels
within the outline are treated as filled, even if there are holes within the
interior of the actual
item. The first and second moments of these points are calculated, and the
orientations of the
major and minor axes are derived. The bottom outline D244 is then
mathematically rotated such
that those axes are aligned with respect to the rows and columns of a
temporary image buffer,
simplifying calculations of the bottom outline's length and width. The bottom
outline's length,
width, orientation, and second moment, collectively known herein as merged
data D256, are sent
to the item isolation process 146 and the dimension parameter estimation
process 157.
[00236] The bottom outlines D244 and the closed height profiles D247 are also
used in the
dimension parameter estimation process 157. The dimension parameter estimation
process 157
also receives the UII value D231 along with the corresponding transport system
location D148
regarding an item.
[00237] In the dimension parameter estimation process 157, the dimension
estimator 154
receives the bottom outline D244, the UII value D231 with the transport system
location D148,
and the closed height profile D247 to determine a bounding box for each
individual item. In
some embodiments, because noise from even a single stray pixel could adversely
change the
measurement, an item's length, width, and height are not based on the maximum
extent of the
aggregated pixels. Instead, the dimension merging process 145 computes a
histogram of the number
of pixels in each of the three dimensions, after the item has been
rotated to the standard
orientation. The distances are computed between about the one-percentile and
about the ninety-nine-
percentile boundaries to give the length, width, and height of the item.
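The percentile-based extent can be illustrated in isolation; this sketch assumes pixel coordinates along one axis and is not the literal implementation:

```python
import numpy as np

def robust_extent(coords, lo: float = 1.0, hi: float = 99.0) -> float:
    """Extent along one axis between the ~1st and ~99th percentile
    boundaries, so a single stray pixel cannot inflate the measurement."""
    p_lo, p_hi = np.percentile(np.asarray(coords), [lo, hi])
    return float(p_hi - p_lo)
```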
[00238] If an item does not produce a bottom outline, the only dimensioning
data produced by
the item is a closed height profile. This can occur, for example, if the
bottom of the item is very
dark, as perhaps with a jar of grape jelly, though the supplemental use of the
light curtain will
tend to address this issue. Feature extraction and item isolation are
performed solely on the
closed height profile when the closed height profile D247 is the only
dimensioning data
produced. If light curtain data and closed height profile are available and
camera data is not, then
those two may be used.
[00239] If a group has one or more bottom outlines D244 and one or more closed
height
profiles D247, there are several choices for extracting features. In an
embodiment, the system
may ignore the bottom outlines and only operate on the basis of the closed
height profiles. In
other words, in this approach, bottom outlines are only used to assist in the
interpretation of
dimensioning data collected from closed height profiles. Feature extraction
based on multiple
closed height profiles is performed just as it is for a single closed height
profile, but using data
from the group of closed height profiles.
[00240] Finally, if the dimension parameter estimation process 157 has not
received a closed
height profile D247 corresponding with transport system location value D148,
the dimension
parameter estimation process 157 will have only the bottom outline D244 to
determine the
dimensioning data D166 for the item. For example, a greeting card has a height
too short to be
detected by the dimension sensor 150. Therefore, the height of the item is set
to zero, and the
item's length and width are determined solely from the bottom outline. The
length and width are
calculated by rotating and processing the bottom outline's x,y data as
described above for the
dimension estimator 154 using first and second moments. When no closed height
profile is
available, a three-dimensional second moment is not calculated.
[00241] Periodically, the dimension parameter estimation process 157 checks
the transport
system location D148, and sends collected dimensioning data D166 to the item
description
compiler 200 when it determines that there are no further closed height
profiles D247 or bottom
outlines D244 to be associated with a particular item. The dimension estimator
154 also uses the
data to estimate various dimensioning data D166 including, but not limited to,
parameter values
regarding the general shape of the item (cylindrical, rectangular solid,
necked bottle shape, etc.),
the item's orientation on the transport system, and details concerning the
item's three-
dimensional co-ordinates on the sensing volume conveyor belt. In this
embodiment, the
dimension sensor 150 is also capable of calculating other parameter values
based on the size and
shape of the item. The various dimensioning data D166 along with the transport
system location
D148 values of the items, are sent to the item description compiler 200 as
they are calculated.
[00242] Figure 15 also shows the Item Isolator 140, which may allow the system
to operate on
non-singulated items. In operation, the item isolator 140 recognizes that
something (one or more
items) has entered the sensing volume. During the dimension merging process
145, when the
closed height profiles D247 and bottom outlines D244 overlap spatially (i.e.,
they are at least
partially merged) they may be associated with a single item, and the item
isolator 140 may be
said to have isolated an item passing through the sensing volume. In the item
isolation process
146, the item isolator 140 merges the closed height profile D247 with the
bottom outline D244,
generating merged data D256. Due to the way bottom outline D244 and closed
height profile
D247 descriptions are created, all bottom outlines D244 are mutually
disjoint spatially, and all
closed height profiles D247 are mutually disjoint spatially. The dimension
merging process
145 waits for an event. The dimension merging process 145 stores and keeps
track of closed
height profiles D247 and bottom outlines D244 as they are received. When a new
closed height
profile D247 is received, the dimension merging process 145 checks it against
the collection of
bottom outlines D244 to see if the closed height profile D247 and a particular
bottom outline
D244 overlap spatially. Closed height profiles D247 and bottom outlines D244
that overlap
spatially are placed into one group. The dimension merging process 145 does
not check the
closed height profile D247 against other closed height profiles because they
are, by definition,
disjoint. Similarly, after a new bottom outline D244 is received, it is
checked against the
collection of the closed height profiles D247 received to see if the bottom
outline D244 overlaps
any closed height profile D247.
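One way to sketch this event-driven grouping, treating profiles and outlines as opaque objects and assuming an overlaps() predicate for spatial overlap on the belt (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    outlines: list = field(default_factory=list)   # bottom outlines D244
    profiles: list = field(default_factory=list)   # closed height profiles D247

def on_new_profile(profile, groups, overlaps):
    """Event handler for a newly received closed height profile D247.

    The profile is checked only against stored bottom outlines; profiles are
    never checked against other profiles, which are disjoint by construction.
    A newly received outline is handled symmetrically, checking only against
    stored profiles.
    """
    for group in groups:
        if any(overlaps(profile, outline) for outline in group.outlines):
            group.profiles.append(profile)
            return group
    group = Group(profiles=[profile])     # no overlap found: start a new group
    groups.append(group)
    return group
```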
[00243] During the dimension merging process 145, the item isolator 140
matches the
transport system location D148 values of the bottom outline D244 with any
closed height profile
D247 that shares substantially the same transport system location D148
values. At this point, the
item isolator 140 recognizes the bottom silhouette of the item and recognizes
the height of
substantially every point of the item, and is ready to deliver the merged data
D256 to the item
isolation process 146.
[00244] Second, the
item isolator 140 determines how many distinct items comprise the object
that entered the sensing volume. In certain cases, several individual items
are mistaken for a
single item in one or the other data sets. The purpose of the item isolation
process 146 is to
determine when closed height profile D247 and bottom outlines D244 represent
the same single
item and when they represent multiple items.
[00245] Third, the
item isolator 140, specifically the item indexer, assigns a Unique Item
Index value (UII) D231 to each distinct item, and, fourth, along with the UII
D231, the item
isolator 140 identifies the two-dimensional location of the item (the
transport system location
D148 value). With knowledge of the merged data D256, likely belonging to a
single item, the
item isolator 140 assigns a UII value D231 to the merged data D256 with known
transport system
location D148 values. The item isolation process 146 results in the UII value
D231 along with
the transport system location D148 being communicated to the dimension
parameter estimation
process 157 for further processing by the dimension estimator. The
dimension parameter
estimation process 157 receives the UII value D231, the merged data D256 with
known transport
system location D148 values, and outputs the dimensioning data D166 with the
UII value D231
(and the transport system location) to other parts of the system (particularly
the item description
compiler 200 as shown in Figures 12 & 13).
[00246] Item isolation process 146 improves the reliability of system
output. In an
embodiment, a failure of the item isolator 140 stops all system operations,
because the system
cannot ascertain the number of items in the sensing volume or the location of
those items, and,
therefore, does not know what to do with the data from the parameter sensors.
However, failure
of only a portion of the item isolation system need not stop the system. The
item isolation
process 146 allows the item isolator 140 to continue to function if the upward
looking line-scan
camera stops functioning, using light curtain data and/or closed height
profiles D247 for each
item.
[00247] Conversely, if the dimension estimator 154 fails and the upward
looking line-scan
camera outline detection and/or the light curtain continues to function,
bottom outlines D244 but
no closed height profiles D247 will be reported. The system may continue to
operate in a
degraded mode since the heights of items are not available for item
identification. However,
determination of item weight, length, and width is still possible, and items
will not generally go
through the sensing volume undetected, even if the number of exceptions is
increased.
[00248] Referring now to Figure 16, a schematic illustration of weight
sensor 170 is shown.
Weight sensor 170 includes an in-motion scale 172 and a weight generator 174.
In-motion scale
172 includes object sensors (in-feed conveyor belt object sensor 173A, sensing
volume entrance
object sensor 173B, and sensing volume exit object sensor 173C are shown) and
load cells 175A,
175B, 175C, and 175D.
[00249] Object sensors, such as in-feed conveyor belt object sensor 173A,
sensing volume
entrance object sensor 173B, and sensing volume exit object sensor 173C, allow
the weight
generator to track which items are on the in-motion scale 172 at a given time.
Sensing volume
entrance object sensor 173B is positioned near the in-feed end of the sensing
volume. Sensing
volume exit object sensor 173C, positioned near the out-feed end of the
sensing volume, along
with sensing volume entrance object sensor 173B provides loading information
to enable the
system to accurately calculate the weight of multiple items in the sensing
volume at a given time.
In-feed conveyor belt object sensor 173A is positioned several inches upstream
from the in-feed
end of the sensing volume conveyor belt and enables an optional operating mode
in which the in-
feed conveyor belt can be stopped.
[00250] To put it another way, the inclusion of object sensors enables the
system to estimate
the weight of most of the individual items by combining the instantaneous
total weight on the
sensing volume conveyor belt (not shown in Figure 16) with the item's
transport system location
D148 values. However, in some embodiments, accurate weight data D191 cannot be
measured
by the weight generator 174 when items enter the sensing volume while other
items are exiting,
Therefore, in these embodiments, object sensors may be employed to prevent
simultaneous
loading and unloading of items from in-motion scale 172. In other words,
object position logic
176, upon receiving transport system location D148 and data from in-feed
conveyor belt object
sensor 173A, sensing volume entrance object sensor 173B, and sensing volume
exit object sensor
173C, can determine that an item will be entering the sensing volume at the
same time that an
item will be exiting the sensing volume and can signal the transport system to
hold back from
passing any new items to the sensing volume if there is an item about to
depart from the sensing
volume. In other embodiments, the object position logic can also stop the
sensing volume
conveyor belt if, for example, the scale has not had time to settle after
loading a new item. The
object position logic 176 transmits start and stop signals D115 to the average
and differencing
process 178 where the logic calculates the average of and the changes in
initial sensing data
received from load cells 175 to ensure that calculations are performed at the
proper time.
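The gating decision described here might be sketched as follows; the flags, settling signal, and command strings are illustrative assumptions rather than disclosed interfaces:

```python
def belt_command(item_entering: bool, item_exiting: bool,
                 scale_settled: bool) -> str:
    """Decision sketch for object position logic 176 (all names assumed).

    item_entering/item_exiting are derived from object sensors 173A-173C
    combined with the transport system location D148; scale_settled reflects
    whether the in-motion scale has settled after the last load.
    """
    if item_entering and item_exiting:
        return "STOP_INFEED_BELT"    # never load and unload simultaneously
    if not scale_settled:
        return "STOP_SENSING_BELT"   # let the scale settle before reading
    return "RUN"
```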
[00251] It will be noted that stopping and starting the conveyor belts to
hold back items from
loading into/unloading from the sensing volume has no negative effects on the
measurements
made by the system; from the perspective of the sensing volume, stopping the in-
feed conveyor
belt only spreads out items on the sensing volume conveyor belt, while stopping
the sensing
volume conveyor belt puts all of the digital processing steps into a suspended
mode that may be
restarted when the belt is restarted.
[00252] As shown in Figure 16, object position logic 176 additionally uses
the information
received from the object sensors along with the transport system location D148
to issue belt
control commands D50. These commands are sent to the transport system location
sensor 120
(Figure 13) wherein, in one embodiment, the motor controllers reside. For
example, using the
information received from sensing volume exit object sensor 173C, object position
logic 176 can
determine that an item is about to exit the sensing volume. In order to
prevent an item from
entering the sensing volume at the same time, object position logic 176 can
send a belt control
command D50 to stop the in-feed conveyor belt from continuing to transport
items toward the
sensing volume. Additionally, or alternatively, belt control commands D50 can
include
increasing or decreasing the speed of the conveyor belts in order to limit the
number of items that
an operator of the system 25 can physically place on the in-feed conveyor
belt. Similarly, in
some embodiments, the in-motion scale 172 may require periodic self-
calibration time during
which no items are permitted on the scan tunnel conveyor belt, allowing it to
return to its tare
weight in order to maintain accuracy. This calibration condition is achieved
by stopping the in-
feed conveyor belt. Other belt control commands D50 can be transmitted by
object position logic
176, depending on the specific application contemplated.
[00253] Load cells 175A, 175B, 175C, and 175D are disposed in the load path
and typically
support the sensing volume conveyor belt (not shown in Figure 16, but shown in
at least Figure
6B). Each load cell generates an electrical signal proportional to the
compression force applied
to the load cell. In some embodiments, the signals from load cells 175A, 175B, 175C, and 175D
are digitized
at a high sample rate (e.g., 4000 samples per second) before being
transmitted for processing
by weight generator 174.
[00254] The high sample rate load cell samples are received by the
summation process 177,
wherein the signals from the load cells are summed and scaled to represent the
total weight data
of the in-motion scale 172 and any items on the in-motion scale 172. The total
weight data D190
from the summation process 177 is optionally sent to the history database in
step D190.
Additionally, this sum is low-pass filtered (or averaged) to improve the
signal-to-noise ratio and
give a more accurate total weight in the average-and-differencing process 178.
The number of
digital samples included in the average calculated during the average-and-
differencing process
178 is limited by the number of samples taken while the weight on the in-
motion scale 172 is
stable. For example, if only one item were loaded onto the sensing volume
conveyor belt, the
stable period extends from the moment the one item is fully on the sensing
volume conveyor belt
until the moment the item begins to move off of the sensing volume conveyor
belt. When more
than one item is on the sensing volume conveyor belt at a given time the
stable periods are
limited to the times when no item is being loaded onto or moving off of the
sensing volume
conveyor belt. In a noise-free environment, the weight generator could
identify stable periods by
the data alone. However, the weight generator typically operates in the
presence of some, if not a
significant amount of, noise. Object sensors 173A, 173B, and 173C, therefore,
inform the weight
generator (via object position logic 176) when items are loading or unloading
from the sensing
volume conveyor belt for appropriate averaging. It should be noted that
although the language
herein suggests temporal considerations, in an embodiment the system process
does not include a
clock signal, but rather is only clocked by incremental movements of the scan
tunnel conveyor
belt. Thus, a stable period can be extended by stopping the scan tunnel
conveyor belt and the
actual number of samples in the average will continue to increase at the data
sample rate (4000
samples per second in one embodiment).
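A compact sketch of the summation, stable-period averaging, and differencing steps, assuming samples arrive as an (n_samples, 4) array and that object position logic supplies the stable window boundaries (all names hypothetical):

```python
import numpy as np

def total_weight(load_cell_samples: np.ndarray, scale: float) -> np.ndarray:
    """Summation process 177: sum the load cell channels (shape: n_samples x 4)
    and scale the result to total weight data D190."""
    return scale * load_cell_samples.sum(axis=1)

def stable_average(total: np.ndarray, start: int, stop: int) -> float:
    """Averaging step of process 178: average (low-pass) the summed signal
    over one stable period [start, stop), as delimited by the start/stop
    signals D115 from object position logic 176."""
    return float(total[start:stop].mean())

def item_weight(avg_after: float, avg_before: float) -> float:
    """Differencing step: a loaded item's weight is the change in the stable
    averages across the loading transition."""
    return avg_after - avg_before
```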
[00255] Additionally, average-and-differencing process 178, as commanded by
the start and
stop signals D115, performs a differencing operation between weight values
obtained before an
item is loaded onto/unloaded from scale 172 and after an item is loaded
onto/unloaded from scale
172. The weight values thus obtained are assigned to the item or items
loaded onto/unloaded
from scale 172 during the instant transition. There are several alternative
approaches to
performing the differencing function that may be used to achieve essentially
the same weight
data D191. The selection among these alternatives is generally determined by
the available
hardware and digital processing resources and by operating conditions (e.g.,
load cell signal-to-
noise ratio, load cell drift, etc.). One particular approach is discussed
below in conjunction with
Figure 17.
[00256] Returning to Figure 16, weight values D191A are transferred from
average-and-
differencing process 178 to an assign-weight process 179, wherein weight
values D191A are
combined with object position data D113, which is data that was generated by
object position
logic 176. It should be noted that object position logic 176 cannot identify
individual items in an
overlap condition. Object positions D113 are determined by combining the off
and on signals
from the object sensors with the transport system locations D148. The
combination of item
weights and object positions are the item weight data D191. For non-overlapped
items the item
weight data is the weight of the item; for overlapped items the item weight
data is the combined
weight of the more than one item. Item weight data D191 is passed on to the
item description
compiler 200. Optionally, the continuous stream of total weight data D190 is
sent to the history
database 350 (as shown in Figure 12).
[00257] As mentioned above, various approaches are available to calculate
the weight of
individual items on scale 172. Figure 17 provides timing diagrams depicting
schematically each
output from an element of an embodiment of the weight sensor 170 that is
schematically
illustrated in Figure 16. The first data line at the top of Figure 17 provides
an example of an
output of summation process 177. The second data line of Figure 17 provides an
example of an
output of the in-feed conveyor belt object sensor 173A. The third data line of
Figure 17 provides
an example of an output of the sensing volume entrance object sensor 173B. The
fourth data line
of Figure 17 provides an example of an output of the sensing volume exit
object sensor 173C.
The first data line of Figure 17 illustrates the changing, summed, digitized
load cell signals as a
function of time, where constant transport system speed is assumed. The
second, third and fourth
data lines of Figure 17 show the (binary) output of the three object sensors.
[00258] In the second data line of Figure 17, item A is first detected
by the in-feed
conveyor belt object sensor 173A, at the third to fourth time interval. While
item A remains on
the in-feed conveyor belt (as shown detected by in-feed conveyor belt object
sensor 173A), the
first data line shows that the weight sensor 170 does not detect a weight
value as shown by the
constant weight value of zero from the start of the clock to the fifth time interval.
As item A enters the
sensing volume conveyor belt, shown in the third data line from the fifth
to the sixth time
interval, the sensing volume entrance object sensor 173B detects the presence
of item A. Item
A's weight is recorded by the weight generator, as shown from about point
(5,0) to about point
(6,3) on the first data line in Figure 17. After item A has completely crossed
the belt gap and is
entirely located on the sensing volume conveyor belt, the weight sensor 170
shows the weight of
item A as static, from about point (6,3) to about point (11.5, 3). Cued by
object position logic 176,
the average-and-differencing process 178 averages load cell signals during the
first indicated
acceptable averaging window and takes the difference between the weight value
3, obtained at
the end of said first acceptable averaging window, and the weight value 0,
obtained just prior to
item A loading onto the scale (as indicated by object sensors 173A and 173B).
[00259] As shown in the second data line, from the nine-and-a-half time
interval after the start of
the system to nearly the eleventh time interval, the in-feed conveyor belt
object sensor 173A
detects the presence of another item, B, on the in-feed conveyor belt. As item
B enters the
sensing volume on sensing volume conveyor belt, sensing volume entrance object
sensor 173B
detects item B's presence from about 11.5 to about 13.5 on the x-axis
time scale. The
total weight of item A and item B is recorded by the weight sensor 170, as
shown from about
point (11.5,3) to about point (13.5, 9) on the first data line. After item B
has completely crossed
the belt gap and is entirely located on the sensing volume conveyor belt, the
total weight of item
A and item B is static, from about point (13.5, 9) to about point (20, 9).
Cued by object position
logic 176, the average-and-differencing process 178 averages load cell signals
during the second
indicated acceptable averaging window and takes the difference between the
weight value 9,
obtained at the end of said second acceptable averaging window, and the weight
value 3,
obtained previously for item A. That is, since the weight sensor 170 knows
that item A weighs
about three units, and the aggregate weight of item A and item B is nine
units, then the system
calculates that item B weighs about six units.
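The arithmetic of this walkthrough can be replayed in a few lines; the helper mirrors the hypothetical differencing step sketched earlier, and the numeric values come from the text:

```python
def item_weight(avg_after: float, avg_before: float) -> float:
    # differencing step, as in the earlier sketch (hypothetical helper)
    return avg_after - avg_before

avg_empty   = 0.0   # stable average before item A loads (Figure 17)
avg_after_a = 3.0   # stable average once item A is fully on the belt
avg_after_b = 9.0   # stable average once item B has also loaded

assert item_weight(avg_after_a, avg_empty) == 3.0    # item A: about three units
assert item_weight(avg_after_b, avg_after_a) == 6.0  # item B: about six units
# Unload verification: the total drops 9 -> 6 when A exits (A ~ 3 units),
# then 6 -> 0 when B exits (B ~ 6 units).
```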
[00260] As shown in the fourth data line of Figure 17, from the twentieth
time interval to the
twenty-first time interval after the start of the system, the sensing volume
exit object sensor 173C
detects the presence of item A exiting the sensing volume on the sensing
volume conveyor belt.
As item A leaves the sensing volume on the out-feed conveyor belt, the weight
sensor 170
detects a diminishing weight value from about point (20, 9) to about point
(21, 6). The weight
sensor 170 can thus verify the weight of item A. Since the weight value
dropped from about nine
units to about six units when item A left the sensing volume, item A weighs
about three units.
[00261] After item A has completely traveled out of the sensing volume and is
entirely located
on the out-feed conveyor belt, the weight sensor 170 shows the weight of item
B as static, from
about point (21, 6) to about point (27, 6). Again, the weight sensor 170 can
verify its first
calculation of the weight value for item B by detecting a static weight value
of about six units
during the period of time that only item B is detected on the sensing volume
conveyor belt. As
shown in the fourth data line, from the twenty-seventh time interval to the
twenty-ninth time
interval after the start of the system, the sensing volume exit object sensor
173C detects the
presence of item B exiting the sensing volume on the sensing volume conveyor
belt. As item B
leaves the sensing volume on the out-feed conveyor belt, the weight sensor 170
detects a
diminishing weight value from about point (27, 6) to about point (29, 0).
The drop of about six units
verifies that the item that just left the sensing volume (item B) weighs about six
units.
[00262] Load cell weight sensors often exhibit zero offset drifts over time
and temperature
variations. This potential drift is shown schematically in the first data line
of Figure 17 for time
intervals beyond 29. In one embodiment of the system, this drift is reset
automatically during
periods in which no items are on the scale, as cued by object position logic
176.
[00263] The calculation approach described above may fail to operate properly
when one item
is loaded onto the scale at the same time that a second item is unloaded. To
avoid this condition,
in one embodiment of object position logic 176, an AND condition for in-feed
conveyor belt object
sensor 173A and sensing volume exit object sensor 173C generates a command to
stop the in-
feed conveyor belt until the exiting item has cleared the sensing volume. This
belt motor control
command D50 may be transmitted to transport sensor processor 127 (Figure 13),
where the
motor controllers reside for convenience.
[00264] As has been mentioned, there are multiple alternative approaches to
process the total
weight signals D190 to estimate the weight of individual items when they are
non-singulated on
the scale, generally including making weight estimates before, during, and/or
after each item
enters and/or leaves the scale. In addition, there are alternative approaches
that, under certain
operating conditions, can estimate the weight of individual items even if they
are partially
overlapped. For example, consider the total weight values illustrated in the
first data line of
Figure 17. The slopes of the transition lines between the acceptable averaging
windows are
proportional to the weights of the items loading onto or unloading from the
scale. When there
are two partially overlapping items loading onto the scale, the slope of the
transition line changes
as the number of items being loaded changes. Thus, in a noise-free environment
it is a trivial
exercise to apportion the total weight measured during the stable period to
the two overlapping
items that loaded onto the scale.
[00265] Figure 18 is a data flow diagram for an item description compiler
200 conducting the
geometric merging process. The item description compiler 200 aggregates the
parameter values
corresponding to an individual item into an item description, wherein the
parameter values are
received from the various parameter processors. In the embodiment depicted in
Figure 18, the
parameter values are shown as the UII value D231, dimensioning data D166,
weight data D191,
and digital indicia data D235, but other parameter values are contemplated
herein. Each
parameter value, as presented to the description compiler, includes its
corresponding transport
system location values D148. The item description compiler 200 uses these
location values to
match parameter values that apply to a single item. That set of matched
parameter values is the
item description. The item description, when judged to be complete by the item
description
compiler 200, is then provided to the product identification processor.
[00266] The item description compiler 200 uses a geometric-based data
association technique,
using the object association library described above to aggregate the
asynchronously produced
item parameter values. Time can be used to correlate the various parameter
values with a unique
item but, because the various parameter values may have been produced at
different times as the
item moved through the scan tunnel, and because belt velocity may not be
constant, this approach
can be difficult to implement. However, the transport system location at which
each item is
disposed is a fixed parameter associated with that one item (once it enters
the scan tunnel), as is
the transport system location value, relative to a known reference location at
which each sensor
makes its measurement. Therefore, each measured parameter value can be matched
to the item
that was at the sensor's location at the moment of measurement.
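A sketch of this location-based matching, assuming the queue holds (location, value) pairs and a tolerance reflecting the spatial uncertainty discussed below; all names are hypothetical:

```python
def match_parameters(item_location: float, queue: list, tol: float) -> list:
    """Geo-parameter matching sketch: collect every spatially-transformed
    parameter value whose tagged transport system location falls within tol
    of the item's location, removing it from the queue."""
    matched = [(loc, val) for loc, val in queue
               if abs(loc - item_location) <= tol]
    for pair in matched:
        queue.remove(pair)       # matched values leave the random-access queue
    return [val for _, val in matched]
```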
[00267] During system operation, the transport location sensor 120 (shown
in Figure 13) is
continually supplying a transport system location value to each parameter
processor. Each
parameter processor tags the parameter values it produces with the transport
system location
value corresponding to the instant its initial sensing data was collected.
Additionally, item
isolator 140 and dimension sensor 150 (both shown in Figures 13 & 15) provide
a full three-
dimensional location for each isolated item, meaning that they provide the
item description
compiler 200 with the mathematical description of where the surfaces of each
item are in camera
space. The library of calibration data 250 is a record of where in physical
space each sensing
element in each parameter sensor is aimed. The transformation process 202
converts the
mathematical description of the surfaces of each item from camera space to
physical space with
accurate spatial (x,y,z) positioning information.
[00268] The transformation process 202 uses detailed knowledge of each
parameter sensor's
three-dimensional field of view (e.g., the vector describing where each pixel
on each line-scan
camera is pointed in three-dimensional space). With that information, the item
description
compiler 200 can associate data from the multiple parameter sensors with the
item that was at a
particular transport system location, as long as the spatial uncertainty of
each measurement
coordinate can be kept sufficiently small. In an embodiment, all spatial
measurements are known
to accuracies generally less than about two-tenths of an inch. The smallest
features requiring
spatial association are the indicia, which in practice measure at least about
six tenths of an inch in
their smallest dimension even with minimum line widths smaller than the about
ten mils
specified by the GS1 standard. Consequently, even the smallest indicia can be
uniquely
associated with the spatial accuracies of the embodiment described.
[00269] The first step in being able to spatially associate parameter
values with a particular
item is to calibrate the absolute spatial positions of each parameter sensor's
measurements. For
example, the left-front line-scan camera's indicia reader transmits each
digital indicia value,
along with the line scan camera's pixel number of the center of the indicia
and the transport
system location value D148 at which the camera was triggered when reading the
first corner of
the indicia. The item description compiler receives that information and
transforms the pixel
number and transport system location into absolute spatial co-ordinates.
[00270] For the indicia reader, pixels corresponding to the four extreme
points defining the
edges of the visible plane inside the sensing volume are identified by
accurately positioning two
image targets, one at each end of a given camera (at the extreme ends of the
sensing volume), and
as close to the line-scan camera as possible, within the sensing volume. The
pixels imaging those
targets define the two near-end points of the visible image plane. The process
is repeated for the
two extreme points at the far-end of each line-scan camera's field of view.
[00271] For example, for the side line-scan cameras, targets are placed
just above the sensing
tunnel conveyor belt and at the maximum item height, as close to the input
mirror as possible
inside the sensing volume. The same targets are imaged at the far end of that
line-scan camera's
range. The (x,y,z) co-ordinates for each test image target are recorded, along
with the particular
camera and camera pixel number where the image of each target appears. The
three co-ordinates
define the imaging plane for that camera. Through interpolation or
extrapolation, the imaging
ray for any pixel of that line-scan camera can be derived from those
four points, and that
line-scan camera's reported three co-ordinates of where it saw an indicium
can be mapped to the optical ray
along which the indicium was imaged.
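Under the assumption of linear interpolation between the four calibration targets, the per-pixel imaging ray might be sketched like this (names and geometry simplified; not the literal calibration code):

```python
import numpy as np

def pixel_ray(pixel: int, n_pixels: int, near_lo, near_hi, far_lo, far_hi):
    """Imaging ray for one line-scan camera pixel, by linear interpolation
    between the four calibration targets: two at the near end of the visible
    plane (near_lo, near_hi) and two at the far end (far_lo, far_hi), each
    an (x, y, z) point."""
    t = pixel / (n_pixels - 1)                       # position along the line
    near = (1 - t) * np.asarray(near_lo) + t * np.asarray(near_hi)
    far = (1 - t) * np.asarray(far_lo) + t * np.asarray(far_hi)
    direction = far - near                           # ray: near + s * direction
    return near, direction / np.linalg.norm(direction)
```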
[00272] Accurate spatial (x,y,z) positioning information is known for each
image target during
geometric calibration. In some embodiments, the coordinate system is as
illustrated in Figures
7A and 7B. The geometric calibration is performed manually, without making use
of data from
the transport location sensor, although the dimension estimator uses that data
for its own
processing. However, automated geometric calibration is also possible, using
data from the
transport location sensor. In an embodiment, the geometric calibration data is
stored in a library
250. However, it should be clear that geometric calibration data D201 is not a
required element
in all embodiments. In those embodiments where it is present, the geometric
calibration data
D201 is transferred from the library 250 to the transformation process 202
within the item
description compiler 200.
[00273] Although the line-scan camera ray alone does not uniquely define
the exact point in
space where the indicium was located, the line-scan camera ray intersects the
three-dimensional
representation of the item itself, as provided by the item isolator 140 and
dimension estimator
150. Together, the line-scan camera rays and the three-dimensional item
representations create a
one-to-one correspondence between indicia and items.
[00274] Another parameter sensor using a level of geometric calibration is
the weight
estimator. In the described embodiment, the weight estimator obtains item X-
axis position
information from its object sensors. That is, in terms of Figure 17, the
weight estimator assigns a
weight value to item A or B based on the output of at least the sensing volume
entrance object
sensor, which indicated where along the virtual belt the items were first
loaded onto the scale.
The object sensor positions can be manually calibrated by simply measuring
their distances
relative to the dimension estimator co-ordinates, or automatically calibrated
using moving
calibration items and instantaneous transport system locations reported by the
transport location
sensor.
[00275] It will be noted that in the illustrated embodiment items are
loaded onto the in-motion
scale 172 before they are observed by area camera 152. Similarly, the upward
looking line-scan
camera 88 (shown at least on Figure 8A) might read an item's indicium before
it is observed by
area camera 152. Thus, weight measurements and indicia readings may be made
before the
dimension sensor 150 and item isolator 140 (schematically shown in Figure 15)
have determined
what items are in the scan tunnel. Indeed, the system's product identification
function would
perform as well with dimension sensor 150 and upward-looking line scan camera
132A located at
the end of the scan tunnel as it does with those sensors located at the front
of the scan tunnel.
The frontward location of these two sensors is preferred only to minimize the
processing lag
required to produce an identification. That is, the product identification can
be produced sooner
after the item leaves the tunnel when the data is collected at the front of
the tunnel than at the
end of the tunnel.
[00276] The weight estimator only knows the X-axis location of the items it
weighs. Two
items that overlap side-by-side (i.e., have common X locations but different Y
locations) on the
in-motion scale may be difficult to weigh individually. Thus, the reported
weight in this instance
is an aggregate weight of all the side-by-side items at that transport system
location (x value).
When a weight value arrives at the item description compiler 200 (shown
schematically in Figure
18) with a transport system location that matches more than one item, the item
description
compiler 200, in some embodiments, adds that weight value to each item's item
description
D167, along with an indication that it is an aggregate weight. In other
embodiments, the unique
item identifier(s) for the other side-by-side items are also added to the item
description D167, for
reasons described below.
[00277] The various parameter values that are transformed through the
transformation process
202 become spatially-transformed parameter values D70, which are then
delivered to an
information queue 207. The information queue 207 is a random access buffer,
that is, it does not
operate in a first in first out system. Because there are generally multiple
items on the sensing
volume conveyor belt, and because each parameter sensor sends its sensed
parameters as soon as
it recognizes them, the information queue 207 at any point in time contains
spatially-transformed
parameter values D70 from multiple items arranged in their order of arrival.
Because, for
example, the latency between the time an item's indicium physically passes
through a line-scan
camera's field-of-view and the time the indicia reader produces the
corresponding indicia value
is highly variable, it is even possible that some spatially-transformed
parameter values D70 may
not be recognized or interpreted until long after the item has exited the
system 25.
[00278] The item description compiler 200 seeks to determine which of the
reported spatially-
transformed parameter values D70 in the information queue 207 was measured on
the surface or
at the location of the item through the process of geometric merging or geo-
parameter matching.
[00279] The data merging process of the item identification processor 300
depends on the
dimension sensor 150 and the item isolator 140. The item isolator 140
determines what items are
in the sensing volume (and gives them a unique tracking number, the UII) and
the dimension
sensor 150 creates dimensioning data, including but not limited to the closed
height profiles with
the corresponding bottom outlines. Together, the data from the item isolator
140 and the
dimension sensor 150 form the baseline entry in the item description D167
being created in the
item description compiler 200. Other parameter values are identified as
belonging to the item
and are added to the item description D167. In some embodiments, the data
merging process 215
receives transport system locations D148 and delivers image retrieval requests
D149 to the
region extract process 138 of the indicia reader 130 shown in Figure 14.
[00280] As mentioned above, parameter values are received by the item
description compiler
200 from the various parameter sensors, undergo transformations 202 and are
temporarily placed
in an information queue 207. As the item description compiler 200 builds an
item description
D167 through having the data merging process 215 match spatially-transformed
parameter values
D70 with the same transport system locations D148, it sends a data request
D169 to the
information queue 207 to remove the spatially-transformed parameter value D70
from the
information queue 207 to place it in the appropriate item description D167.
Thus, spatially-
transformed parameter values D70 are continuously added to and deleted from
the information
queue 207.
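Continuing the earlier sketches, the build-and-remove cycle could look like this; match_parameters() is the hypothetical helper shown above, and the dictionary keys are assumptions:

```python
def build_item_description(baseline: dict, queue: list, tol: float) -> dict:
    """Data merging sketch: start from the baseline entry (UII, bottom
    outline, closed height profile) and pull matching spatially-transformed
    values out of the information queue into the item description D167."""
    description = dict(baseline)
    description["parameters"] = match_parameters(
        baseline["transport_location"], queue, tol)
    return description
```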
[00281] Finally, the item description D167 is sent out to the item
identification processor 300.
The item description compiler 200 sends the item description D167 file to item
identification
processor 300 at a point in the processing based on one or more selected
criteria. The criteria
may include, for example, sending the item description D167 when the current
transport system
location exceeds the item location by more than about 25% of the length of the
sensing volume.
In an embodiment, the send criterion may correspond to a belt position less
than or equal to a
particular distance from the end of the output belt.
[00282] Some parameter values are never associated with any item and may be
referred to as
orphan values. Orphan values are created if, for example, a parameter value is
delayed by a
processor reboot or if the transport system location D148 value has a defect.
Likewise, where an
item moves relative to the conveyor, for example a rolling bottle or can,
certain values may be
orphaned. An accumulation of unmatched parameter values in the information
queue 207 has the
tendency to impair system performance. In some embodiments, the item
description compiler
200 can include functionality for deleting parameter values from the
information queue 207 over
a certain selected time period. The determination to delete parameter values
depends on whether
the virtual location of a new spatially-transformed parameter value D70 arriving
in the information
queue 207 is significantly beyond the length of the out-feed conveyor belt,
for example. This
condition would indicate that the orphan value is associated with an item long
gone from the
sensing volume.
[00283] Figure 19 is a data flow diagram for the item identification
processor 300. The item
description compiler 200 creates an item description D167 for each item
isolated by the item
isolator. The item identification processor 300 opens a file for each item
description D167
provided to it by the item description compiler 200. The item description D167
includes a list of
all the available measured parameter values collected by the system. The basic
function
performed by item identification processor 300 is to compare item description
D167 to a set of
product descriptions, stored in the product description database 310, and to
decide according to
pre-determined logic rules if the item is one of those products. In some
embodiments, the
product descriptions in product description database 310 contain the same sort
of information
about the products as have been collected about the items. Typically, product
descriptions
include digital indicia values, weight data, and dimensioning data about the
products. In some
embodiments, the product description may comprise other parameter values of
the products,
statistical information about the various parameters (for example, the
standard deviation of the
weight), digital photographs of each product, etc.
[00284] In an embodiment, a polygonal representation of an item can be
generated for the
focal plane space of each camera. Thus, for each object, there are multiple
polygons generated
corresponding to each of the camera views of that object. By way of example,
for a system
having seven perspectives, seven polygons would be generated and stored for
use in the merging
process as described below.
[00285] The item identification processor 300 attempts to determine a best
match between the
unknown item's parameter values and the database of (known) product
parameters. In some
embodiments, the indicia value (typically the UPC), is used as the primary
database query.
Assuming an exact indicia match is found in the product description database,
the item
identification processor 300 examines the remaining parameter values to decide
if the item is the
product represented by the indicia. This is a validation that the UPC has not
been misread or
destroyed. As described above, partial UPCs (or other codes) may be further
evaluated to narrow
a number of choices of possible items, and in an embodiment, a small number of
choices can be
passed to an operator for resolution.
[00286] The item description D167 is provided to a formulate-database-query
process 305,
which compares available item parameters to determine, based on, for example, a
given indicium,
weight, and height, what the item is. When a query D209 has been formulated,
the formulate-
database-query process 305 delivers it to the product description database
310, which in turn
provides a query result D210 to a product identification logic process 312.
Product identification
logic 312 compares query result D210, which is a product description, with the
original item
description D167 to decide if the two descriptions are similar enough to
declare an identification.
[00287] The item identification processor 300 is preprogrammed with a set
of logic rules by
which this validation is performed. In some embodiments, the rules are
deterministic (for
example, the weight must be within x% of the nominal weight for the product).
In other
embodiments, the rules can be determined using, for example, fuzzy logic.
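A deterministic rule of the kind described can be sketched with assumed names; the composite check and the 5% default are illustrative assumptions, not disclosed values:

```python
def weight_within_tolerance(measured: float, nominal: float, pct: float) -> bool:
    """One deterministic validation rule of the kind described: the measured
    weight must be within pct percent of the product's nominal weight."""
    return abs(measured - nominal) <= nominal * pct / 100.0

def validate(item: dict, product: dict, pct: float = 5.0) -> bool:
    """Hypothetical composite check: the indicia already matched; accept the
    identification only if the remaining parameters also agree."""
    return weight_within_tolerance(item["weight"],
                                   product["nominal_weight"], pct)
```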
[00288] Fuzzy or adaptive logic may find particular use in product
identification logic 312 to
address unusual situations. For example, some items will have multiple digital
indicia values and
certain products will be known to have multiple visible indicia, since
multiple line-scan cameras
produce images of each item and since some items have two or more distinct
indicia (e.g., a
multi-pack of water, where each bottle may have one bar code, and the multi-
pack case may have
a different bar code). In this example, fuzzy logic may perform better than a
strict rule that
governs how conflicting information is handled.
[00289] Although in some embodiments the digital indicia value may be a
preferred parameter
value for the database lookup, there are instances in which the formulate-
database-query process
305 uses one or more of the other parameter values in a first attempt to try
to identify an item.
For example, where indicia are misread or have been partially or fully
obscured from the line-
scan camera, the formulate-database-query process 305 is programmable to use
the other
parameter values previously described to accurately identify the item as a
product. For example,
if the weight, shape, and size of the item have been measured with a high
degree of certainty and
a few of the digits of the bar code were read, these data may provide a
sufficiently unique
product identification.
[00290] The output of product identification logic 312 is either a product
description with a
probability of identification or an exception flag which indicates that no
matching product
description was found. A lack of match may occur, for example, where an item
is scanned that
had never been entered into the database. This output is transferred to a
product/exception
decision process 314 in which a programmable tolerance level is applied. If
the probability of
identification is above this tolerance, the product identification data D233
and the UII value
D231 are output. In typical embodiments, the identification output is
delivered to a logic engine
400. On the other hand, if the probability of identification is below the
tolerance level, then
product/exception decision process 314 associates an exception flag D232 with
the UII D231.
Optionally, in some embodiments, when an item is flagged as an exception the
UII D231 is
delivered to an exception handler 320. The optional exception handler 320 can
include doing
nothing (e.g., letting the customer have this item for free), providing an
indicator to a system
operator to take action, or it could involve performing an automatic rescan.
[00291] Another optional function that is part of the item identification
processor is the ability
to update the product description database based on the new item's parameter
values. For
example, the mean and standard deviation of the weight of the product, which
are typical
parameters stored in product description database 310, can be refined with the
new weight data
collected each time that particular product is identified. In some
embodiments, the item
identification processor 300 updates its product description database 310 with
every parameter
value it receives regarding items passing through the sensing volume. The
database update
process 313 receives UII D231 and item description D167 from the formulate
database query 305
process and performs the database update when it receives the product
description D233 and UII
D231 from product/exception decision process 314. Database update process 313
also receives
notice when UII D231 is an exception (flag D232) so that it can purge
inaccurate product
descriptions D167 associated with the exception UII D231.
[00292] Prior to multi-read disambiguation, the Merger employs a single-
pass "best match"
algorithm for assigning barcodes to an item at its scheduled output position
(i.e., the Y belt
position at which the Merger sends information for an item to the output
subsystem for
subsequent transmission to the POS). The best match algorithm for barcodes
takes as input 1) a
single item for which output is to be generated, 2) an item domain consisting
of all items to be
considered when identifying the best barcode-to-item match - the output item
is also part of this
domain, and 3) a barcode domain consisting of all barcodes available to be
assigned to the output
item.
[00293] The algorithm works by visiting each barcode in the barcode domain,
in turn, and
computing a matching metric (Figure of Merit - FOM) between the barcode and
all items in the
supplied item domain. Once all barcode-to-item associations have been
computed, the algorithm
discards all associations with FOM values that are below a specific threshold
(this threshold may
be arrived at heuristically, and may be updated in accordance with real-world
performance, either
as a user setting or automatically). All remaining barcode-to-item
associations are then sorted
according to distance along the camera ray and the association with the
shortest distance is
considered to be the best match (the logic being that it is not likely to read
a barcode on an item
that is behind another item - thus, the barcode closest to the camera lens is
more likely to be
properly associated with the front item). If the item identified as the best
match is the same as the
output item, the barcode is assigned to the output item. Otherwise, the
barcode is not assigned.
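A sketch of this single-pass best-match assignment, with fom() and ray_distance() assumed as supplied callables and the threshold treated as a tunable setting (all names hypothetical):

```python
def best_match_assign(output_item, item_domain, barcodes, fom, ray_distance,
                      threshold):
    """fom(barcode, item) is the matching metric (Figure of Merit);
    ray_distance(barcode, item) is the distance along the camera ray."""
    assigned = []
    for barcode in barcodes:
        # Score against every item in the domain; drop sub-threshold pairs.
        candidates = [(ray_distance(barcode, item), item)
                      for item in item_domain
                      if fom(barcode, item) >= threshold]
        if not candidates:
            continue
        # Shortest camera-ray distance wins: a barcode is unlikely to be read
        # on an item hidden behind another, so the nearest item is preferred.
        _, best = min(candidates, key=lambda pair: pair[0])
        if best is output_item:
            assigned.append(barcode)
    return assigned
```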
[00294] By using the object identification system 25 described above to
identify unsalable
products to be grouped into lots in the reverse logistics method 1100
described above and
illustrated in Figure 1, the speed at which the products are identified may be
increased. In an
embodiment, the information generated by the object identification system may
be used to sort
the items as the items exit the scan tunnel based on disposition information
assigned in the
system to each product identifier (e.g., bar code). Such disposition
information may be based, for
example, on authorizations by product owners in accordance with vendor
agreements.
[00295] In addition, the object identification system 25 may be used to
capture high resolution
images of the products as the products pass through the system, which may
eliminate the need for
a return to vendor or hold for vendor sort if the intent of the vendor having
the product held or
returned is to investigate and determine the cause of damages to the product,
particularly if there
has been a package change. In an embodiment, the high resolution images
generated by the
object identification system may be uploaded to an on-line auction site for
customer bidding, so
customers may see images of the actual items being bid on, rather than stock
photos associated
with the items.
[00296] While in the foregoing specification this invention has been
described in relation to
certain particular embodiments thereof, and many details have been set forth
for purpose of
illustration, it will be apparent to those skilled in the art that the
invention is susceptible to
alteration and that certain other details described herein can vary
considerably without departing
from the basic principles of the invention. In addition, it should be
appreciated that structural
features or method steps shown or described in any one embodiment herein can
be used in other
embodiments as well.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2013-01-09
(87) PCT Publication Date 2013-07-18
(85) National Entry 2014-06-27
Examination Requested 2017-10-19
Dead Application 2022-07-12

Abandonment History

Abandonment Date Reason Reinstatement Date
2021-07-12 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document - section 124 $100.00 2014-06-27
Application Fee $400.00 2014-06-27
Maintenance Fee - Application - New Act 2 2015-01-09 $100.00 2014-12-17
Maintenance Fee - Application - New Act 3 2016-01-11 $100.00 2015-12-31
Maintenance Fee - Application - New Act 4 2017-01-09 $100.00 2017-01-06
Request for Examination $800.00 2017-10-19
Maintenance Fee - Application - New Act 5 2018-01-09 $200.00 2017-12-18
Maintenance Fee - Application - New Act 6 2019-01-09 $200.00 2018-12-18
Maintenance Fee - Application - New Act 7 2020-01-09 $200.00 2020-01-03
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
SUNRISE R&D HOLDINGS, LLC.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Amendment 2019-12-05 2 47
Amendment 2020-01-31 4 77
Examiner Requisition 2020-05-19 7 403
Amendment 2020-07-06 34 1,044
Change to the Method of Correspondence 2020-07-06 3 76
Claims 2020-07-06 13 345
Amendment 2020-10-29 4 125
Abstract 2014-06-27 2 93
Claims 2014-06-27 3 94
Drawings 2014-06-27 25 537
Description 2014-06-27 72 4,371
Representative Drawing 2014-06-27 1 47
Cover Page 2014-10-14 2 70
Amendment 2017-07-07 1 29
Request for Examination 2017-10-19 1 31
Maintenance Fee Payment 2017-12-18 1 33
Amendment 2018-02-26 6 272
Examiner Requisition 2018-08-16 7 321
Amendment 2018-10-18 1 32
Maintenance Fee Payment 2018-12-18 1 33
Amendment 2019-02-15 163 8,324
Description 2019-02-15 72 4,450
Claims 2019-02-15 4 142
Amendment 2016-08-17 1 28
Amendment 2019-05-23 3 67
Examiner Requisition 2019-07-26 5 319
Amendment 2019-09-04 3 59
Amendment 2019-10-15 28 898
Claims 2019-10-15 12 296
PCT 2014-06-27 4 175
Assignment 2014-06-27 19 466
Prosecution-Amendment 2014-10-22 1 26
Fees 2014-12-17 1 33
Prosecution-Amendment 2015-04-21 1 24
Amendment 2015-08-14 2 50
Amendment 2015-10-06 1 27
Fees 2015-12-31 1 33
Amendment 2016-01-05 1 31
Amendment 2016-03-04 1 35
Amendment 2016-05-11 2 54
Amendment 2016-11-08 1 33
Fees 2017-01-06 1 33
Amendment 2017-02-24 1 32