Patent 3053894 Summary

(12) Patent Application: (11) CA 3053894
(54) English Title: DEFECT PREDICTION USING HISTORICAL INSPECTION DATA
(54) French Title: PREDICTION DE DEFAUTS A L'AIDE DE DONNEES HISTORIQUES D'INSPECTION
Status: Compliant
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 50/04 (2012.01)
  • G06N 20/00 (2019.01)
  • G06N 3/02 (2006.01)
  • G06Q 10/04 (2012.01)
(72) Inventors:
  • CAO, HAN KY (Viet Nam)
  • NGUYEN, BINH THANH (Viet Nam)
  • PHAM, KHANH NAM (Viet Nam)
(73) Owners:
  • INSPECTORIO INC. (United States of America)
(71) Applicants:
  • INSPECTORIO INC. (United States of America)
(74) Agent: BORDEN LADNER GERVAIS LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2019-09-03
(41) Open to Public Inspection: 2021-01-19
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
62/876,239 United States of America 2019-07-19

Abstracts

English Abstract



Defect prediction using historical inspection data is provided. In various embodiments, historical inspection data of a factory is received. The inspection data comprises indications of defects in one or more product produced in the factory. A plurality of features is extracted from the inspection data. The plurality of features is provided to a defect prediction model. The defect prediction model comprises a trained classifier or a collaborative filter. An indication is obtained from the defect prediction model of a plurality of defects likely to occur in the one or more product.


Claims

Note: Claims are shown in the official language in which they were submitted.



CLAIMS:

1. A system comprising:
a computing node comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform a method comprising:
receiving historical inspection data of a factory, the inspection data comprising indications of defects in one or more product produced in the factory;
extracting a plurality of features from the inspection data;
providing the plurality of features to a defect prediction model, wherein the defect prediction model comprises a trained classifier or a collaborative filter;
obtaining from the defect prediction model an indication of a plurality of defects likely to occur in the one or more product.
2. The system of claim 1, wherein the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide a consensus output.
3. The system of claim 1, wherein the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide an ensemble output.
4. The system of claim 1, wherein the indications of defects in one or more product comprise indications of defects in a predetermined product style, product line, or product category.


5. The system of claim 1, wherein the indications of defect in one or more product comprise a plurality of defect names and a defect rate corresponding to each of the plurality of defect names.
6. The system of claim 1, wherein the plurality of features comprises:
attributes of a past inspection at the factory,
attributes of the one or more product, or
attributes of the defects in the one or more product.
7. The system of claim 1, wherein the trained classifier comprises an artificial neural network.
8. The system of claim 7, wherein the artificial neural network comprises a deep neural network.
9. The system of claim 1, wherein the collaborative filter comprises a neighborhood model or a latent factor model.
10. The system of claim 1, wherein the plurality of defects comprises a predetermined number of most likely defects.
11. The system of claim 1, the method further comprising pre-processing the data.
12. The system of claim 11, wherein pre-processing the data comprises aggregating the data.
13. The system of claim 12, wherein pre-processing the data further comprises filtering the data.
14. The system of claim 1, wherein extracting the plurality of features from the data comprises applying a mapping from a defect name to one or more standardized defect names from a predetermined nomenclature, for each of the indications of defects.


15. The system of claim 1, wherein the historical inspection data comprises a plurality of product names, and wherein extracting the plurality of features from the data comprises applying a mapping from each of the plurality of product names to a standardized product name from a predetermined nomenclature.
16. The system of claim 1, wherein the method further comprises:
anonymizing the historical inspection data of the factory.
17. The system of claim 1, wherein the data further comprise performance history of the factory.
18. The system of claim 1, wherein the data further comprise geographic information of the factory.
19. The system of claim 1, wherein the data further comprise product data of the factory.
20. The system of claim 1, wherein the data further comprise brand data of inspected products of the factory.
21. The system of claim 1, wherein the data span a predetermined time window.
22. The system of claim 1, wherein providing the plurality of features to the defect prediction model comprises sending the plurality of features to a remote defect prediction server, and obtaining from the defect prediction model an indication of a plurality of defects comprises receiving an indication of a plurality of defects from the defect prediction server.
23. The system of claim 1, wherein extracting the plurality of features comprises applying a dimensionality reduction algorithm.


24. The system of claim 1, wherein the indication of a plurality of defects likely to occur comprises a list of a plurality of defects likely to occur at the factory.
25. The system of claim 24, wherein the list comprises a defect name, defect rate, and defect description for each of the plurality of defects.
26. The system of claim 24, wherein the list comprises a list of a plurality of defects likely to occur in a particular purchase order, product, product style, product line, or product category.
27. The system of claim 22, wherein obtaining the indication of the plurality of defects further comprises providing the indication to a user.
28. The system of claim 27, wherein providing the indication to a user comprises sending the indication to a mobile or web application.
29. The system of claim 28, wherein said sending is performed via a wide area network.
30. The system of claim 1, wherein the trained classifier comprises a support vector machine.
31. The system of claim 1, wherein obtaining the indication from the defect prediction model comprises applying a gradient boosting algorithm.
32. The system of claim 1, wherein the method further comprises:
measuring performance of the defect prediction model by comparing the indication of a plurality of defects to a ground truth indication of a plurality of defects;
optimizing parameters of the defect prediction model according to the performance.


33. The system of claim 32, wherein optimizing the parameters of the defect prediction model comprises modifying hyperparameters of a trained machine learning model.
34. The system of claim 32, wherein optimizing the parameters of the defect prediction model comprises replacing a first machine learning algorithm with a second machine learning algorithm, the second machine learning algorithm comprising hyperparameters configured to improve the performance of the defect prediction model.
35. A computer program product for defect prediction, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising:
receiving historical inspection data of a factory, the inspection data comprising indications of defects in one or more product produced in the factory;
extracting a plurality of features from the inspection data;
providing the plurality of features to a defect prediction model, wherein the defect prediction model comprises a trained classifier or a collaborative filter;
obtaining from the defect prediction model an indication of a plurality of defects likely to occur in the one or more product.
36. A method comprising:
receiving historical inspection data of a factory, the inspection data comprising indications of defects in one or more product produced in the factory;
extracting a plurality of features from the inspection data;
providing the plurality of features to a defect prediction model, wherein the defect prediction model comprises a trained classifier or a collaborative filter;
obtaining from the defect prediction model an indication of a plurality of defects likely to occur in the one or more product.
37. The method of claim 36, wherein the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide a consensus output.
38. The method of claim 36, wherein the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide an ensemble output.
39. The method of claim 36, wherein the indications of defects in one or more product comprise indications of defects in a predetermined product style, product line, or product category.
40. The method of claim 36, wherein the indications of defect in one or more product comprise a plurality of defect names and a defect rate corresponding to each of the plurality of defect names.
41. The method of claim 36, wherein the plurality of features comprises:
attributes of a past inspection at the factory,
attributes of the one or more product, or
attributes of the defects in the one or more product.
42. The method of claim 36, wherein the trained classifier comprises an artificial neural network.


43. The method of claim 42, wherein the artificial neural network comprises a deep neural network.
44. The method of claim 36, wherein the collaborative filter comprises a neighborhood model or a latent factor model.
45. The method of claim 36, wherein the plurality of defects comprises a predetermined number of most likely defects.
46. The method of claim 36, the method further comprising pre-processing the data.
47. The method of claim 46, wherein pre-processing the data comprises aggregating the data.
48. The method of claim 47, wherein pre-processing the data further comprises filtering the data.
49. The method of claim 36, wherein extracting the plurality of features from the data comprises applying a mapping from a defect name to one or more standardized defect names from a predetermined nomenclature, for each of the indications of defects.
50. The method of claim 36, wherein the historical inspection data comprises a plurality of product names, and wherein extracting the plurality of features from the data comprises applying a mapping from each of the plurality of product names to a standardized product name from a predetermined nomenclature.
51. The method of claim 36, further comprising:
anonymizing the historical inspection data of the factory.
52. The method of claim 36, wherein the data further comprise performance history of the factory.


53. The method of claim 36, wherein the data further comprise geographic information of the factory.
54. The method of claim 36, wherein the data further comprise product data of the factory.
55. The method of claim 36, wherein the data further comprise brand data of inspected products of the factory.
56. The method of claim 36, wherein the data span a predetermined time window.
57. The method of claim 36, wherein providing the plurality of features to the defect prediction model comprises sending the plurality of features to a remote defect prediction server, and obtaining from the defect prediction model an indication of a plurality of defects comprises receiving an indication of a plurality of defects from the defect prediction server.
58. The method of claim 36, wherein extracting the plurality of features comprises applying a dimensionality reduction algorithm.
59. The method of claim 36, wherein the indication of a plurality of defects likely to occur comprises a list of a plurality of defects likely to occur at the factory.
60. The method of claim 59, wherein the list comprises a defect name, defect rate, and defect description for each of the plurality of defects.
61. The method of claim 59, wherein the list comprises a list of a plurality of defects likely to occur in a particular purchase order, product, product style, product line, or product category.


62. The method of claim 57, wherein obtaining the indication of the plurality of defects further comprises providing the indication to a user.
63. The method of claim 62, wherein providing the indication to a user comprises sending the indication to a mobile or web application.
64. The method of claim 63, wherein said sending is performed via a wide area network.
65. The method of claim 36, wherein the trained classifier comprises a support vector machine.
66. The method of claim 36, wherein obtaining the indication from the defect prediction model comprises applying a gradient boosting algorithm.
67. The method of claim 36, further comprising:
measuring performance of the defect prediction model by comparing the indication of a plurality of defects to a ground truth indication of a plurality of defects;
optimizing parameters of the defect prediction model according to the performance.
68. The method of claim 67, wherein optimizing the parameters of the defect prediction model comprises modifying hyperparameters of a trained machine learning model.
69. The method of claim 67, wherein optimizing the parameters of the defect prediction model comprises replacing a first machine learning algorithm with a second machine learning algorithm, the second machine learning algorithm comprising hyperparameters configured to improve the performance of the defect prediction model.


Description

Note: Descriptions are shown in the official language in which they were submitted.


DEFECT PREDICTION USING HISTORICAL INSPECTION DATA
BACKGROUND
[0001] Embodiments of the present disclosure relate to defect prediction, and more specifically, to defect prediction using historical inspection data.

BRIEF SUMMARY

[0002] According to embodiments of the present disclosure, methods of and computer program products for defect prediction are provided. In various embodiments, historical inspection data of a factory is received. The inspection data comprises indications of defects in one or more product produced in the factory. A plurality of features is extracted from the inspection data. The plurality of features is provided to a defect prediction model. The defect prediction model comprises a trained classifier or a collaborative filter. An indication is obtained from the defect prediction model of a plurality of defects likely to occur in the one or more product.

[0003] In some embodiments, the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide a consensus output. In some embodiments, the defect prediction model comprises a trained classifier and a collaborative filter, the trained classifier and the collaborative filter being configured to provide an ensemble output.
[0004] In some embodiments, the indications of defects in one or more product comprise indications of defects in a predetermined product style, product line, or product category. In some embodiments, the indications of defect in one or more product comprise a plurality of defect names and a defect rate corresponding to each of the plurality of defect names.

Page 1 of 52
CA 3053894 2019-09-03
[0005] In some embodiments, the plurality of features comprises: attributes of a past inspection at the factory, attributes of the one or more product, or attributes of the defects in the one or more product.
[0006] In some embodiments, the trained classifier comprises an artificial neural network. In some embodiments, the artificial neural network comprises a deep neural network. In some embodiments, the collaborative filter comprises a neighborhood model or a latent factor model. In some embodiments, the plurality of defects comprises a predetermined number of most likely defects.
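A neighborhood-model collaborative filter of the kind mentioned above can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation: the factory names, defect rates, and the choice of cosine similarity are all assumptions made for the example.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length defect-rate vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_rate(target, neighbors, defect_idx):
    """Neighborhood model: similarity-weighted average of the
    neighbors' observed rates for one defect."""
    weights = [cosine(target, n) for n in neighbors]
    num = sum(w * n[defect_idx] for w, n in zip(weights, neighbors))
    den = sum(weights)
    return num / den if den else 0.0

# Rows: factories; columns: per-defect rates (illustrative values only).
history = [
    [0.10, 0.05, 0.00],
    [0.12, 0.04, 0.02],
]
new_factory = [0.11, 0.05, 0.00]  # rate for defect index 2 is unobserved
print(predict_rate(new_factory, history, 2))
```

A latent factor model would instead factor the factory-by-defect rate matrix into low-rank factors; the neighborhood form is shown here only because it is the shorter sketch.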
[0007] In some embodiments, the method further comprises pre-processing the data. In some embodiments, pre-processing the data comprises aggregating the data. In some embodiments, pre-processing the data further comprises filtering the data. In some embodiments, extracting the plurality of features from the data comprises applying a mapping from a defect name to one or more standardized defect names from a predetermined nomenclature, for each of the indications of defects. In some embodiments, the historical inspection data comprises a plurality of product names, and extracting the plurality of features from the data comprises applying a mapping from each of the plurality of product names to a standardized product name from a predetermined nomenclature.
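The name-standardization mapping described above can be illustrated as a simple lookup. The nomenclature below is invented for the example; the disclosure's actual master defect data is not specified here.

```python
# Hypothetical nomenclature: standard name -> known raw variants.
STANDARD_DEFECTS = {
    "broken stitch": ["broken stitching"],
    "untrimmed thread": ["loose thread", "thread ends"],
}

def standardize(raw_name):
    """Map a raw defect name to one or more standardized names,
    falling back to the (normalized) raw name when no mapping exists."""
    raw = raw_name.strip().lower()
    for standard, variants in STANDARD_DEFECTS.items():
        if raw == standard or raw in variants:
            return [standard]
    return [raw]

print(standardize("Loose Thread"))
print(standardize("stain"))
```

The same pattern applies to product names, mapping each raw product name to a standardized product name from a predetermined nomenclature.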
[0008] In some embodiments, the method further comprises: anonymizing the historical inspection data of the factory. In some embodiments, the data further comprise performance history of the factory. In some embodiments, the data further comprise geographic information of the factory. In some embodiments, the data further comprise product data of the factory. In some embodiments, the data further comprise brand data of inspected products of the factory. In some embodiments, the data span a predetermined time window.
[0009] In some embodiments, providing the plurality of features to the defect prediction model comprises sending the plurality of features to a remote defect prediction server, and obtaining from the defect prediction model an indication of a plurality of defects comprises receiving an indication of a plurality of defects from the defect prediction server. In some embodiments, extracting the plurality of features comprises applying a dimensionality reduction algorithm. In some embodiments, the indication of a plurality of defects likely to occur comprises a list of a plurality of defects likely to occur at the factory. In some embodiments, the list comprises a defect name, defect rate, and defect description for each of the plurality of defects. In some embodiments, the list comprises a list of a plurality of defects likely to occur in a particular purchase order, product, product style, product line, or product category. In some embodiments, obtaining the indication of the plurality of defects further comprises providing the indication to a user. In some embodiments, providing the indication to a user comprises sending the indication to a mobile or web application. In some embodiments, said sending is performed via a wide area network.
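The dimensionality reduction step could be realized with principal component analysis, one common choice (the disclosure does not name a specific algorithm). A minimal sketch via singular value decomposition, with made-up feature values:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature rows onto the top-k principal components.
    PCA is one possible dimensionality reduction; others would do."""
    Xc = X - X.mean(axis=0)                       # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T                          # scores in reduced space

# Four inspections described by three (illustrative) numeric features.
features = np.array([[1.0, 2.0, 3.0],
                     [2.0, 4.1, 6.2],
                     [3.0, 6.0, 9.1],
                     [0.5, 1.2, 1.4]])
reduced = pca_reduce(features, 2)
print(reduced.shape)
```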
[0010] In some embodiments, the trained classifier comprises a support vector machine. In some embodiments, obtaining the indication from the defect prediction model comprises applying a gradient boosting algorithm.
[0011] In some embodiments, the method further comprises: measuring performance of the defect prediction model by comparing the indication of a plurality of defects to a ground truth indication of a plurality of defects; optimizing parameters of the defect prediction model according to the performance. In some embodiments, optimizing the parameters of the defect prediction model comprises modifying hyperparameters of a trained machine learning model. In some embodiments, optimizing the parameters of the defect prediction model comprises replacing a first machine learning algorithm with a second machine learning algorithm, the second machine learning algorithm comprising hyperparameters configured to improve the performance of the defect prediction model.
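The measure-then-optimize loop above can be sketched with precision-at-k as the performance metric and a sweep over one hypothetical hyperparameter (a score threshold). Both the metric and the parameter are assumptions for illustration; the disclosure does not fix either.

```python
def precision_at_k(predicted, actual, k):
    """Fraction of the top-k predicted defects that actually occurred."""
    hits = sum(1 for d in predicted[:k] if d in actual)
    return hits / k if k else 0.0

def best_threshold(scores, actual, candidates):
    """Hypothetical parameter sweep: keep the score threshold whose
    resulting defect list best matches the ground truth."""
    def predict(t):
        ranked = sorted(scores.items(), key=lambda kv: -kv[1])
        return [d for d, s in ranked if s >= t]
    return max(candidates, key=lambda t: precision_at_k(predict(t), actual, 3))

# Illustrative model scores and ground-truth defects from a later inspection.
scores = {"stain": 0.9, "broken stitch": 0.7, "open seam": 0.4, "fabric hole": 0.2}
actual = {"stain", "broken stitch", "open seam"}
print(best_threshold(scores, actual, [0.1, 0.3, 0.5]))
```

Swapping in a different algorithm (the "second machine learning algorithm" of the text) fits the same loop: evaluate each candidate against the ground truth and keep whichever performs best.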
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0012] Fig. 1 is a schematic view of an exemplary system for defect prediction according to embodiments of the present disclosure.

[0013] Fig. 2 illustrates a process for defect prediction according to embodiments of the present disclosure.

[0014] Figs. 3A-B illustrate a framework for defect prediction according to embodiments of the present disclosure.

[0015] Fig. 4 illustrates a framework for defect prediction according to embodiments of the present disclosure.

[0016] Fig. 5 illustrates a process for training a defect prediction system according to embodiments of the present disclosure.

[0017] Fig. 6 illustrates an exemplary process for feature extraction according to embodiments of the present disclosure.

[0018] Fig. 7 illustrates an exemplary process for feature extraction according to embodiments of the present disclosure.

[0019] Fig. 8 depicts a computing node according to embodiments of the present disclosure.

DETAILED DESCRIPTION

[0020] Quality control in factories is commonly addressed by reducing the number of defects found at the factory. For some defects, this requires addressing the underlying cause of the defect, while other defects, such as some systemic and recurring defects, are addressed by reactive corrective actions, without taking into account the broader trends in a factory's performance that may have led to the defect. For example, after a quality inspection at the factory fails, defective products may be removed from the production cycle; however, the causes of the defects remain unknown, and little information is obtained regarding potential problems that may appear in the future during the planning and production stages of manufacturing.
[0021] Furthermore, when investigating the performance, quality control, or defects of a factory, a brand or retailer is typically limited by the data at their disposal. Often, brands and retailers only have access to factory performance data obtained by their internal teams, and are not privy to similar data from other factories, brands, or retailers. Even within a factory, due to various circumstances (e.g., non-digital recording of information, manual information gathering processes, or siloed data), the data obtained by self-inspection programs and third-party inspections and quality control interventions may be unavailable when trying to investigate the performance or defects of the factory.
[0022] To address these and other shortcomings, the present disclosure provides a framework for predicting defects that are likely to occur at a factory, such as a textile or apparel factory. In embodiments of the present disclosure, machine learning methods are used to predict defects at a factory before they occur, whereby the underlying causes of the defects may be investigated and addressed, improving the overall quality of the factory. Knowing which defects are likely to occur in advance will enable a factory, brand, retailer, or their business partners to shift from a reactive quality control approach, whereby quality issues are dealt with after they are found, to a proactive approach, whereby corrective actions may be taken before defects occur or before an inspection is to take place.
[0023] For example, for a predicted defect at a factory, a root cause analysis may be conducted. Such an analysis may include analyzing the frequency of a given defect occurring at the factory or similar factories within specific product categories during the last several months. In addition, historical records and corresponding corrective and preventive actions may be reviewed, and additional information may be obtained from the factory for further reference.
[0024] Additionally, the present disclosure provides for obtaining and analyzing data across multiple factories, brands, retailers, and inspection services in training a defect prediction model to accurately predict defects likely to occur at a particular factory. In embodiments of the present disclosure, data from quality control or intervention activities, performed by a wide variety of services or personnel at a wide variety of factories, brands, or retailers, may be input into a defect prediction model in order to train the model to predict defects likely to occur in a particular location, and may be used as input data into the defect prediction model to obtain an indication of defects likely to occur.
[0025] Using data from multiple factories, brands, retailers, and inspection services allows for a robust defect prediction model to be generated, whereby a large amount of data and analytics may be leveraged to provide accurate defect prediction for a factory that otherwise has comparatively little data with which to proactively address quality issues.

[0026] It will be appreciated that a defect prediction system has many applications. A user may obtain an account with a service, input their data to the service, and obtain defect prediction results from the service. The service may be accessed via a mobile or web application. Obtaining data from multiple users allows for leveraging larger amounts of data in order to provide more robust predictions, increasing user collaboration, and facilitating proactive quality assurance strategies.
[0027] In some embodiments, inspectors may use the defect prediction system prior to an inspection to obtain a visualization of the most likely defects they are going to find. In some embodiments, a mobile app is provided that displays the necessary steps and procedures for an inspector, thereby facilitating completion of an assigned inspection. One procedure in an inspection is the workmanship procedure, where an inspector checks according to a given workflow to ensure the quality of all products. In many cases, most defects are found during this procedure. The outputs of the defect prediction model may be provided as part of the workmanship section on such a mobile application. This allows visualization of the most likely defects for the inspector to find.
[0028] In some embodiments, defect prediction may be used by factories, brands, or retailers to implement preventative actions during the production planning stage of manufacture. Knowing which defects are likely to occur will enable factories to provide solutions and implement actions for mitigating the effect of the defects or preventing the defects from occurring. Brands or retailers may also use the defect predictions in order to ensure that preventative actions are being implemented as part of the production planning.
[0029] In many cases, during the production planning stage of manufacture, a brand/retailer raises questions related to the production plan and potential defects or issues that may arise at the factory. Previous inspection performance of the factory may be used to provide preventive actions for the incoming production. At the time when the factory submits responses to a given question, it can use the insights regarding defects likely to occur at the factory for a specific product category in order to proactively suggest necessary actions to correct and prevent these issues. Those steps can help both the brand/retailer and the factory to reduce potential risk in the later production stage.
[0030] In embodiments of the present disclosure, historical inspection data of a factory is input into a defect prediction model, and a list of the top k defects most likely to occur at the factory is obtained. The inspection data may include information regarding the factory and/or specific product lines or product categories within the factory. The inspection data may include information regarding observed defects at the factory, including defect names and types, the number of defects observed in total, and the distribution of defects among the inspected products. The obtained list of defects may include defects that will likely be observed in a subsequent inspection of the factory, product line, or product category within the factory. The list may include predictions as to the types of defects found, the total number of types of defects found, and the distribution of each defect among a factory's products, and will be useful in planning future actions to be taken at a factory.
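The top-k selection described above reduces to ranking the model's per-defect likelihoods and truncating the list. A minimal sketch, with invented defect names and scores:

```python
def top_k_defects(scores, k):
    """Return the k defects with the highest predicted likelihood,
    as (name, rate) pairs sorted best-first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

# Illustrative per-defect likelihoods from a prediction model.
predicted = {"stain": 0.31, "broken stitch": 0.24,
             "open seam": 0.12, "shading": 0.05}
print(top_k_defects(predicted, 2))
```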
[0031] As used herein, the term defect refers to any flaw, shortcoming, or imperfection in the production cycle of a factory. In other words, a defect refers to an observable, undesirable deviation from a predetermined production quality standard. A defect may be found on a variety of levels of a production cycle, e.g., in the factory as a whole, in a particular product category, product line, product, or production method of the factory. A defect may be present in the various features of a product or product line, or during various phases of the production or inspection cycle, e.g., the design, workmanship, packaging, manufacturing, or documentation. A defect may be quantifiable within a range of discrete or continuous values, or may be measured as a binary value (e.g., whether the defect is present or not). A defect may be found in a variety of ways, e.g., by inspectors during an inspection, by dedicated internal quality control teams at a factory, or by the personnel responsible for the production phase during which the defects were found. Finding defects, and preventing them from occurring, is a necessary component of quality control in a manufacturing process.
[0032] In embodiments of the present disclosure, data related to a factory are received. The data may comprise historical inspection data of the factory, indications of defects in one or more product produced in the factory, other attributes of defects at the factory, and/or other attributes of the factory. In some embodiments, features are extracted from the data. In some embodiments, the data are preprocessed. In some embodiments, defect names are mapped to terms in a corresponding nomenclature. In some embodiments, the features are provided to a defect prediction model. In some embodiments, the defect prediction model comprises a machine learning model (e.g., a neural network or collaborative filter). In some embodiments, an indication of a plurality of defects likely to occur in one or more product of the factory is obtained from the prediction model. In some embodiments, the obtained indication comprises a list of defects that are likely to occur at the factory.
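The receive/preprocess/extract/predict flow above can be sketched as a short pipeline. Every function body here is a deliberately simplistic stand-in (frequency ranking instead of a trained model, for instance); only the stage ordering reflects the text.

```python
def preprocess(records):
    """Filter raw inspection records; here, drop rows with no defect
    name (one simplistic preprocessing choice among many)."""
    return [r for r in records if r.get("defect")]

def extract_features(records):
    """Count occurrences of each defect name as a trivial feature set."""
    counts = {}
    for r in records:
        counts[r["defect"]] = counts.get(r["defect"], 0) + 1
    return counts

def predict(features, k=2):
    """Stand-in for the trained model: rank past defects by frequency."""
    ranked = sorted(features.items(), key=lambda kv: -kv[1])
    return [d for d, _ in ranked][:k]

# Illustrative raw inspection records.
inspections = [
    {"defect": "stain"}, {"defect": "stain"},
    {"defect": "open seam"}, {"defect": None},
]
print(predict(extract_features(preprocess(inspections))))
```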
[0033] Referring now to Fig. 1, a schematic view of an exemplary system for
defect
prediction according to embodiments of the present disclosure is shown. System
100
comprises defect collection server 106, defect prediction model 108, defect
prediction server
118, and inspection quality/compliance platform 102. In some embodiments,
there are three
phases of operation of system 100: a training phase, a prediction phase, and
an updating
phase.
[0034] In the training phase, defect collection server 106 generates an
initial dataset by
collecting historical inspection data 104. Historical inspection data 104 may
be input into
defect collection server 106 at one time via batch insertion. Historical
inspection data 104 is
then combined with brand data 110, factory data 112, master product data 114,
and master
defect data 116, forming the initial training dataset. A number of relevant
features from the
historical inspection data and the other inputted data may then be extracted.
A number of
machine learning models are trained on the initial training dataset, and the
performance of
each model is evaluated. The performance of the machine learning models is
compared, and
the model with the most desirable performance is chosen as the defect
prediction model 108
and deployed onto defect prediction server 118. In some embodiments, multiple
models are
deployed to defect prediction server 118, in which case during the predictive
phase, a
consensus result is obtained from the multiple models. Similarly, the top
results from the
multiple models may be combined to provide an ensemble result. Application
programming
interfaces (APIs) may be built to allow web or mobile applications to interact
with defect
prediction server 118 and defect collection server 106 by providing data and
querying the
prediction server to obtain defect predictions. In some embodiments, the
defect prediction
server comprises a remote server.
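The model-selection and multi-model consensus steps described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; the function names, the "highest metric wins" selection rule, and the vote-count consensus scheme are assumptions.

```python
from collections import Counter

def select_best_model(candidates, evaluate):
    """Score each trained candidate model and keep the one with the most
    desirable (here: highest) evaluation metric, as in the training phase."""
    best_name, best_model = max(candidates.items(), key=lambda kv: evaluate(kv[1]))
    return best_name, best_model

def consensus_top_defects(per_model_predictions, k):
    """Combine the top-k ranked defect lists of several deployed models by
    vote count, keeping the k most agreed-upon defects (one simple
    consensus/ensemble scheme)."""
    votes = Counter()
    for ranked_defects in per_model_predictions:
        votes.update(ranked_defects[:k])
    return [defect for defect, _ in votes.most_common(k)]
```

In practice the evaluation callback would score a model on a held-out validation split of the initial training dataset.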
[0035] In the prediction phase, inspection quality/compliance platform 102,
which may be
adapted to integrate with a web or mobile application (e.g., via APIs), may be
used to query
and provide data to defect prediction server 118, and obtain defect prediction
results from
defect prediction server 118. In some embodiments, the defect prediction
results comprise a
list of the k most likely defects to occur at a factory or product line.
[0036] In the updating phase, inspection quality/compliance platform 102 may
be used to
provide new data to defect collection server 106. In some embodiments, new
inspection data
are regularly input into defect collection server 106 as inspections take
place at factories.
Features may be extracted from the new data and input into defect prediction
model 108,
resulting in updated defect predictions for a particular factory or product
line. In some
embodiments, new data of a particular factory or product are used to update
the defect
predictions for that factory or product. In some embodiments, new data of a
particular factory
or product are used to update the defect predictions of a different factory or
product.
[0037] In some embodiments, the defect prediction model may be tested against
new data and
updated to improve performance. In some embodiments, new data are provided in
the form of
new inspection data and/or customer feedback on previous predicted results.
Customer
feedback may include ground-truth reports of defects comprising indications of
the accuracy
of prior predictions, such as which predictions made by the prediction model
were incorrect,
as well as corrected results for the predictions. In some embodiments, new
data may be
collected with brand data, factory data, master product data, and master
defect data to form a
new dataset. It will be appreciated that the new dataset may be structured
similarly to the
dataset described above. In some embodiments, the existing training dataset
may be added to
the new dataset. In some embodiments, the performance of the defect prediction
model is
measured against the new dataset. In some embodiments, if the performance of
the defect
prediction model is below a certain threshold, the defect prediction model is
updated. The
threshold may be chosen heuristically, or may be adaptively calculated during
training. In
some embodiments, updating the defect prediction model comprises modifying the
various
features extracted from input data. In some embodiments, updating the defect
prediction
model comprises modifying the parameters of a machine learning model in the
defect
prediction model. In some embodiments, a new machine learning model may be
chosen to
perform defect prediction. It will be appreciated that the methods of re-
training the prediction
model may be similar to those used in training the defect prediction system,
as described
above. The process of re-training the prediction model may be repeated a
number of times
until the performance of the model on the new dataset reaches an acceptable
threshold. The
updated defect prediction model is then deployed onto the defect prediction
server, and the
existing training dataset may be updated to include the new data.
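The threshold-driven update loop of this paragraph can be sketched as below. The sketch is illustrative; the function names, callback interfaces, and the round limit are assumptions, not part of the disclosure.

```python
def maybe_update_model(model, retrain, evaluate, new_dataset, threshold, max_rounds=10):
    """If the current model's performance on the new dataset is below the
    threshold, re-train repeatedly until the threshold is reached (or a
    round limit is hit), then return the possibly updated model and score."""
    score = evaluate(model, new_dataset)
    for _ in range(max_rounds):
        if score >= threshold:
            break
        model = retrain(model, new_dataset)
        score = evaluate(model, new_dataset)
    return model, score
```

The threshold itself may be a fixed heuristic value or adaptively calculated during training, as noted above.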
[0038] Referring now to Fig. 2, a process for defect prediction according to
embodiments of
the present disclosure is shown. In some embodiments, input data 201 are
provided to defect
prediction system 202, and defect prediction results 206 are obtained. In some
embodiments,
input data 201 comprise an identification of a given factory, product
category, and/or client,
brand, or retailer. In some embodiments, input data 201 comprise various data
(e.g.,
inspection data, factory data) that may be used for defect prediction. In some
embodiments,
defect prediction system 202 comprises a remote defect prediction server. In
some
embodiments, defect prediction system 202 comprises a trained classifier. In
some
embodiments, defect prediction system 202 comprises a collaborative filter. In
some
embodiments, defect prediction system 202 employs a machine learning model to
predict
defects likely to occur at a factory. In some embodiments, defect prediction
system 202
receives input data 201 and performs data processing step 203. In some
embodiments, data
processing step 203 comprises mapping terms used in the input data to terms in
a standardized
nomenclature. In some embodiments, all available relevant data are collected
and processed
at 203. In some embodiments, feature extraction step 204 is performed by
defect prediction
system 202 to extract various features. In some embodiments, feature
extraction step 204 is
performed on data that has been processed at step 203. In some embodiments, a
feature
vector is output. In some embodiments, the features extracted at 204 are
provided to a defect
prediction model at 205. In some embodiments, the defect prediction model
comprises a
trained machine learning model. In some embodiments, the defect prediction
model outputs
prediction results 206. In some embodiments, prediction results 206 comprise a
list of defects
likely to occur at a factory. In some embodiments, the list of defects is
limited to providing
the k most likely defects to occur.
[0039] In some embodiments, the defect prediction model comprises a trained
classifier. In some
embodiments, the trained classifier is a deep neural network. In some
embodiments, the
defect prediction model applies a collaborative filtering method to input
data. In some
embodiments, the collaborative filtering method uses a neighborhood model or a
latent factor
model. According to the present disclosure, other suitable techniques for the
prediction model
include factorization machines, neural factorization machines, field-aware
neural factorization
machines, deep factorization machines, and deep cross networks.
[0040] In some embodiments, the trained classifier is a random decision
forest. However, it
will be appreciated that a variety of other classifiers are suitable for use
according to the
present disclosure, including linear classifiers, support vector machines
(SVM), gradient
boosting classifiers, or neural networks such as convolutional neural networks
(CNN) or
recurrent neural networks (RNN).
[0041] Suitable artificial neural networks include but are not limited to a
feedforward neural
network, a radial basis function network, a self-organizing map, learning
vector quantization,
a recurrent neural network, a Hopfield network, a Boltzmann machine, an echo
state network,
long short term memory, a bi-directional recurrent neural network, a
hierarchical recurrent
neural network, a stochastic neural network, a modular neural network, an
associative neural
network, a deep neural network, a deep belief network, a convolutional neural
network, a
convolutional deep belief network, a large memory storage and retrieval neural
network, a
deep Boltzmann machine, a deep stacking network, a tensor deep stacking
network, a spike
and slab restricted Boltzmann machine, a compound hierarchical-deep model, a
deep coding
network, a multilayer kernel machine, or a deep Q-network.
[0042] Various metrics may be used to measure the performance of learning
models. In some
embodiments, the metrics used to measure the performance include precision@k
and
recall@k. However, it will be appreciated that other metrics may also be
suitable for use,
such as precision, recall, AUC, and F-1 score.
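The precision@k and recall@k metrics named above can be computed as follows. This is an illustrative sketch; the list-based interface (a ranked list of predicted defects and a list of defects actually found) is an assumption.

```python
def precision_at_k(predicted, actual, k):
    """precision@k: fraction of the k top-ranked predicted defects that
    actually occurred in the inspection."""
    actual_set = set(actual)
    return sum(1 for d in predicted[:k] if d in actual_set) / k

def recall_at_k(predicted, actual, k):
    """recall@k: fraction of the defects that actually occurred that were
    recovered among the k top-ranked predictions."""
    top_k = set(predicted[:k])
    return sum(1 for d in actual if d in top_k) / len(actual)
```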
[0043] In embodiments of the present disclosure, data may be obtained in a
variety of
formats. Data may be structured or unstructured, and may comprise information
stored in a
plurality of media. Data may be inputted manually into a computer, or may be
obtained
automatically from a file via a computer. It will be appreciated that a
variety of methods are
known for obtaining data via a computer, including, but not limited to,
parsing written
documents or text files using optical character recognition, text parsing
techniques (e.g.,
finding key/value pairs using regular expressions), and/or natural language
processing,
scraping web pages, and obtaining values for various measurements from a
database (e.g., a
relational database), XML file, CSV file, or JSON object.
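As one illustration of the text-parsing route mentioned above (finding key/value pairs using regular expressions), the hypothetical helper below pulls labeled fields out of a plain-text report; the report format and key style are assumptions.

```python
import re

def parse_key_values(text):
    """Extract key/value pairs such as 'Defect rate: 3.2%' from free text
    with a multiline regular expression, one simple ingestion route for
    written documents or text files."""
    pattern = re.compile(r"^\s*([A-Za-z][A-Za-z ]*?)\s*:\s*(.+?)\s*$", re.MULTILINE)
    return {key.lower(): value for key, value in pattern.findall(text)}
```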
[0044] In some embodiments, factory or inspection data may be obtained
directly from a data
management system. In some embodiments, the data management system is
configured to
store information related to factories and/or inspections. The data management
system may
collect and store various types of information related to factories and
inspections, such as
information pertaining to purchase orders, inspection bookings, assignments,
reports,
corrective and preventive action (CAPA), inspection results, and other data
obtained during
inspections. It will be appreciated that a large set of data may be available,
and in some
embodiments, only a subset of the available data is used for input into a
prediction model.
[0045] As used herein, an inspection booking refers to a request for a future
inspection to take
place at a proposed date. The inspection booking may be initiated by a vendor,
brand, or
retailer, and may contain information of a purchase order corresponding to the
future
inspection. As used herein, an assignment refers to a confirmed inspection
booking. The
assignment may contain a confirmation of the proposed date of the inspection
booking, as
well as an identification of an assigned inspector and information related to
the booking.
[0046] Data may be obtained via a data pipeline to collect data from various
sources of
factory and inspection data. A data pipeline may be implemented via an
Application
Programming Interface (API) with permission to access and obtain desired data
and calculate
various features of the data. The API may be internally facing, e.g., it may
provide access to
internal databases containing factory or inspection data, or externally
facing, e.g., it may
provide access to factory or inspection data from external brands, retailers,
or factories. In
some embodiments, data are provided by entities wishing to obtain a prediction
result from a
prediction model. The data provided may be input into the model in order to
obtain a
prediction result, and may also be stored to train and test various prediction
models.
[0047] In embodiments of the present disclosure, data may be sent and received
via mobile or
web applications. A user may have an account with a service that is adapted to
send data via
a mobile or web application and receive results from a prediction server. The
data may be
sent manually or automatically. The data may be input to the server
automatically after a
triggering event, such as an inspection, or may be input automatically at
regular intervals
(e.g., every month, every 180 days). Similarly, information may be sent to a
user via the
mobile or web application. The information may comprise prediction results
from a
prediction server. The information may be sent to a user upon request, or it
may be sent
automatically. The information may be sent automatically after a triggering
event, such as a
change in existing prediction results or a reconfiguration of a prediction
model in the
prediction system, or it may be sent automatically at a regular interval. It
will be appreciated
that a variety of other methods and data transfer schemes may also be used for
sending and
receiving information via an application.
[0048] The mobile application may be implemented on a smartphone, tablet, or
other mobile
device, and may run on a variety of operating systems, e.g., iOS, Android, or
Windows. In
various embodiments, defect prediction results are sent to the mobile or web
application via a
wide area network.
[0049] According to the present disclosure, data may be obtained at a variety
of levels. Data
may be taken of a specific purchase order of a brand or retailer, product,
product line, style of
product, product category, division within a factory, or factory. Data may
also be obtained for
multiple products, product lines, categories, or divisions within a factory,
and may be
obtained for a number of products, product lines, product categories, or
divisions across
multiple factories. It will be appreciated that while certain examples are
described in terms of
Page 16 of 52
CA 3053894 2019-09-03

data related to a factory or product line, it will be appreciated that this is
meant to encompass
specific product, purchase order, style, or other classification. Likewise,
while certain
examples are described in terms of results related to a factory or product
line, it will be
appreciated that this is meant to encompass a specific product, purchase order,
style, or other
classification. Similarly, while various examples described herein refer to
data of a factory, it
will be appreciated that the present disclosure is applicable to brands,
retailers, or other
business entities involved in the manufacture or production of products.
[0050] As used herein, a product or product line's style refers to a
distinctive appearance of
an item based a corresponding design. A style may have a unique identification
(ID) within a
particular brand, retailer, or factory. Style IDs may be used as an
identifying feature by which
other measurements may be aggregated in order to extract meaningful features
related to
inspection results and defect prediction.
[0051] In some embodiments, obtained data are anonymized so that identifying
information
of the factory, brand, or retailer is not available to an ordinary user.
[0052] The obtained data may also be aggregated and statistical analysis may
be performed
on the data. According to embodiments of the present disclosure, data may be
aggregated and
analyzed in a variety of ways, including, but not limited to, adding the
values for a given
measurement over a given time window (e.g., 7 days, 14 days, 180 days or a
year), obtaining
the maximum and minimum values, mean, median, and mode for a distribution of
values for a
given measurement over a given time window, and obtaining measures of the
prevalence of
certain values or value ranges among the data. For any feature or measurement
of the data,
one can also measure the variance, standard deviation, skewness, kurtosis,
hyperskewness,
hypertailedness, and various percentile values (e.g., 5%, 10%, 25%, 50%, 75%,
90%, 95%,
99%) of the distribution of the feature or measurement over a given time
window.
[0053] The data may also be filtered prior to aggregating or performing
statistical or
aggregated analyses. Data may be grouped by certain characteristics, and
statistical analysis
may be performed on the subset of data bearing the characteristics. For
example, the above
metrics can be calculated for data related only to a particular inspection
type, or to inspections
of above a minimum sample size.
[0054] Aggregation and statistical analysis may also be performed on data
resulting from
prior aggregation or statistical analysis. For example, the statistical values
of a given
measurement over a given time period may be measured over a number of
consecutive time
windows, and the resulting values may be analyzed to obtain values regarding
their variation
over time. For example, the average inspection fail rate of a factory may be
calculated for
various consecutive 7-day windows, and the change in the average fail rate may
be measured
over the 7-day windows.
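The windowed aggregation just described, e.g., the change in a factory's average fail rate across consecutive 7-day windows, can be sketched as follows. The record format (a day index paired with a pass/fail flag) is an assumption for illustration.

```python
from statistics import mean

def windowed_fail_rates(inspections, window_days=7):
    """Bucket (day, passed) inspection records into consecutive fixed-length
    windows and compute each window's average fail rate."""
    buckets = {}
    for day, passed in inspections:
        buckets.setdefault(day // window_days, []).append(0 if passed else 1)
    return [mean(window) for _, window in sorted(buckets.items())]

def fail_rate_changes(rates):
    """Change in average fail rate between consecutive windows."""
    return [later - earlier for earlier, later in zip(rates, rates[1:])]
```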
[0055] In embodiments of the present disclosure, historical inspection data
includes
information related to the results of past inspections (e.g., whether an
inspection was passed
or not, information related to defects found during the inspection), as well
as information
obtained over the course of the inspection (e.g., a general profile and
performance report of
the factory). Examples of suitable data for predicting defects that are likely
to occur at a
factory include: data obtained from previous inspections at the same factory,
data obtained
from inspections at other factories, data obtained from inspections at other
factories with
similar products or product lines to the subjects of the future inspections,
data obtained from
the factory across multiple inspections, data regarding future inspection
bookings (e.g., the
geographic location, time, entity performing the inspection, and/or the type
of inspection),
data related to the business operations of the factory, data related to
product quality of the
factory, general information regarding the factory, data related to the
sustainability of the
factory or other similar factories, and/or data related to the performance of
the factory or other
similar factories. The data may comprise information obtained from customer
reviews on
products or product lines similar to those produced by the factory, and/or
customer reviews on
products or product lines originating at the factory. It will be appreciated
that for some
metrics, a factory may be divided into various divisions, with different
metrics obtained for
each division.
[0056] Examples of data related to defect prediction include: the number of
orders placed at
the factory, the quantity of the orders, the quality of the orders, the
monetary value of the
orders, general information regarding the orders, the description of each
product at the
factory (e.g., the product's stock keeping unit (SKU), size, style, color,
quantity, and
packaging method), the financial performance of the factory, the number of
inspected items at
the factory during an inspection, the number of inspected items at the factory
during
inspections of procedures such as workmanship, packaging, and measurement,
information
regarding the acceptable quality limit (AQL) of processes at the factory
(e.g., the sampling
number used to test quality), the inspection results of past inspections at
the factory, the
inspection results of past inspections for a particular product/product line,
the inspection
results at other factories with similar products, the inspection results of
past inspections at
business partners of the factory, the values for various metrics collected
over the course of
inspections, the geographic location of the factory, the factory's size, the
factory's working
conditions and hours of operation, and aggregations and statistical metrics of
the
aforementioned data.
[0057] Historical inspection data may also include specific information
regarding defects
found during an inspection. This may include the number of defects found, the
number of
defective units, the names of the defects, the types of defects, the
categories of defects, the
rates of the defects among the tested merchandise, the severity of the
defects, and the
distribution of the defect types and/or their severity among the tested
products. In some
embodiments, the defect category corresponds to an inspection procedure during
which the
defect was found, e.g., workmanship, packaging, or measurement. In some
embodiments,
defects are classified by product lines, product category, or levels
(minor/major/critical).
[0058] Historical inspection data may comprise a list of all of the defects
found at a factory
during an inspection. The list may refer to defects using defect names as
given by a particular
factory, or it may use defect names corresponding to names in a standardized
nomenclature.
Average defect rates may then be calculated for particular defects or
factories over a given
time window. Inspection data may also comprise listings of all of the
categories and product
lines of a factory, as well as all of the possible defects that may be found
for the products in
the factory.
[0059] Information regarding the factory, e.g., the factory location, the
factory profile, and/or
product information related to the products inspected, e.g., the product name,
the product line,
the product category, may be obtained from inspection data. An exemplary
factory profile
includes factory head count, factory business area, factory address, and/or
factory contact. A
measure of overall factory performance may also be obtained by estimating a
defect rate of
different defects and an overall inspection failure rate during a given time
window.
[0060] In embodiments of the present disclosure, for each defect found, a
variety of metrics
corresponding to the defect may be obtained. For example, one may obtain the
sample size
measured when finding the defect, the type of inspection that was performed
(e.g., internal
inspection, 3rd party inspection), the total available quantity of the inspected product, product line, or product category, the number of different styles of the product, and the number of defective items measured. In various embodiments, inspection
types include
self-inspection, DUPRO inspection (DUring PROduction inspection), FRI
inspection (Final
Random Inspection), Pre-Production inspection, 1st inline production
inspection, 2nd inline
production inspection, and/or re-inspection. For a particular defect, one may
obtain an
average value of the rate of occurrence of the defect during a particular time
window.
[0061] It will be appreciated that data may be collected over a variety of time windows, e.g.,
time windows e.g.,
the last 7, 14, 30, 60, or 90 days, or a particular 7, 14, 30, 60, or 90 day
window. Data may be
collected from a number of factories, divisions within factories, brands,
retailers, product
categories, product lines, and products. Data may be collected on a variety of
scales, for
example, on the scale of a particular factory or group of factories, divisions
within factories,
and product categories, product lines, or products either within a factory or
across multiple
factories. In some embodiments, inspection data and corresponding defect data
are
timestamped.
[0062] It will be appreciated that a large number of features may be extracted
by a variety of
methods, such as manual feature extraction, whereby features with a
significant correlation to
the target variable (e.g., the defects likely to occur) are calculated or
extracted from the
obtained data. A feature may be extracted directly from the data, or may
require processing
and/or further calculation to be formatted in such a way that the desired
metric may be
extracted. For example, given the results of various inspections at a factory
over the last year,
one may wish to calculate the percentage of failed inspections over the time
period. In some
embodiments, extracting features results in a feature vector, which may be
preprocessed by
applying dimensionality reduction algorithms (such as principal component
analysis and
linear discriminant analysis) or inputting the feature vector into a neural
network, thereby
reducing the vector's size and improving the performance of the overall
system.
[0063] In embodiments of the present disclosure, neural networks may be used
for defect
prediction. Defect prediction using neural networks may be formulated as
follows:
Assume that for a given factory and product category, $n$ attributes $\{x_1, x_2, \ldots, x_n\}$ may be extracted, and $D = \{d_1, d_2, \ldots, d_M\}$ is a list of all possible defects that may be found during an inspection, where $M$ is the total number of defects. Given a feature vector for a factory and product category, $x = \{x_1, x_2, \ldots, x_n\}$, a function $F(x)$ may be determined for estimating a defect-rate vector $\hat{y}(x)$:

$$\hat{y}(x) = F(x) = [\hat{y}_1, \ldots, \hat{y}_M]$$

Equation 1,

where $\hat{y}_i$ is the predicted defect rate of the $i$th defect to be found during the next inspection at that factory, where $i \in \{1, 2, \ldots, M\}$ and $\sum_{i=1}^{M} \hat{y}_i = 1$. It will be appreciated that after calculating the vector $\hat{y}(x)$, the top $K$ defects likely to occur at a factory may be easily extracted from the vector $\hat{y}(x)$ by sorting all of the elements in the set $\{\hat{y}_i,\ i = 1, 2, \ldots, M\}$ and selecting the top $K$ indices.
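The top-K extraction just described reduces to sorting the predicted defect-rate vector. An illustrative sketch (names and list interface assumed):

```python
def top_k_defects(defect_rates, defect_names, k):
    """Sort the predicted defect-rate vector and return the K defects with
    the highest predicted rates of occurring at the next inspection."""
    ranked = sorted(range(len(defect_rates)), key=lambda i: defect_rates[i], reverse=True)
    return [defect_names[i] for i in ranked[:k]]
```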
[0064] In embodiments of the present disclosure, while one factory may have
multiple
inspections for the same product or product category, every inspection is
associated with a
unique factory. A list of the actual defect rates of each defect, $\{s_1, s_2, \ldots, s_M\}$, occurring at the factory may be defined for a given product category and a particular inspection. Additionally, the terms $y_{ij}^k$ and $\hat{y}_{ij}^k$ may be defined as follows:

$$y_{ij}^k = \begin{cases} 1, & s_i^k > s_j^k \\ 0, & \text{otherwise} \end{cases}$$

Equation 2

$$\hat{y}_{ij}^k = \frac{1}{1 + e^{-(\hat{y}_i^k - \hat{y}_j^k)}}$$

Equation 3

where $s_i^k$ and $\hat{y}_i^k$ are the actual defect rate and the predicted defect rate, respectively, of the $i$th defect during the $k$th inspection in the training dataset. It will be appreciated that a variety of recommendation methods may be used to learn the function $F(x)$, such as deep and wide neural networks, factorization machines, and neural factorization machines.
[0065] To train the neural network, a variety of loss functions may be used.
In some
embodiments, a pointwise approach to a learning-to-rank problem is taken,
whereby the loss
is defined as:
$$\mathrm{Loss} = \frac{1}{N M (M-1)} \sum_{k=1}^{N} \sum_{i \neq j} CE\left(y_{ij}^k, \hat{y}_{ij}^k\right) + \frac{\lambda}{2} \|w\|^2$$

Equation 4

where $N$ is defined as the number of inspections used in the dataset, $M$ is the total number of defects, as described previously, $w$ is a weight vector, $\lambda$ is a regularization constant used in the training process, and $CE$ is the cross-entropy error function between $y$ and $\hat{y}$, given by:

$$CE\left(y_{ij}^k, \hat{y}_{ij}^k\right) = -y_{ij}^k \log \hat{y}_{ij}^k - \left(1 - y_{ij}^k\right) \log\left(1 - \hat{y}_{ij}^k\right).$$

Equation 5
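A pairwise cross-entropy loss of this form can be computed directly. The sketch below is illustrative only: the normalization over ordered defect pairs and the optional L2 term are assumptions, and the plain-list interface is for demonstration.

```python
import math

def pairwise_ce_loss(actual_rates, predicted_rates, lam=0.0, weights=None):
    """Pairwise cross-entropy loss over all ordered defect pairs (i, j), i != j,
    for N inspections: the label is 1 when defect i occurred at a higher rate
    than defect j in inspection k, and the model's pairwise probability is a
    sigmoid of the predicted-rate difference."""
    n = len(actual_rates)          # N inspections
    m = len(actual_rates[0])       # M defects
    total = 0.0
    for s, y_hat in zip(actual_rates, predicted_rates):
        for i in range(m):
            for j in range(m):
                if i == j:
                    continue
                y = 1.0 if s[i] > s[j] else 0.0
                p = 1.0 / (1.0 + math.exp(-(y_hat[i] - y_hat[j])))
                total += -y * math.log(p) - (1.0 - y) * math.log(1.0 - p)
    loss = total / (n * m * (m - 1))
    if weights is not None:        # optional L2 regularization term
        loss += lam / 2.0 * sum(w * w for w in weights)
    return loss
```

Note that predictions that rank the defects more confidently in the correct order yield a lower loss.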
[0066] Feature extraction for input into the neural network may be performed
by extracting
attributes from historical inspection data (such as those described in Table
1) over a given
time window, and using the extracted attributes as inputs into the neural
networks. The
features may be converted to various forms. In some embodiments, features that
may be
represented as categorical variables are converted using one-hot encoding into
a one-hot
vector. Features, such as defect descriptions, that are written in English or
another language
may be processed to be transformed into a vector. In some embodiments, all numerals and stopwords are removed from a description, and then, using a bag-of-words method, each defect description may be transformed into a high-dimensional vector, where each element of the vector is the number of appearances of a particular word in the defect description. An example of a suitable method is
described in Wallach, Topic Modeling: Beyond Bag-of-words
(https://doi.org/10.1145/1143844.1143967).
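A minimal bag-of-words encoding of a defect description might look like the following. The tokenizer, the stopword list, and the fixed vocabulary are illustrative assumptions; a production system would use a vocabulary derived from the training corpus.

```python
import re
from collections import Counter

# Illustrative stopword list (assumed, not from the disclosure).
STOPWORDS = {"the", "a", "an", "of", "on", "in", "at", "and", "or", "is"}

def bag_of_words(description, vocabulary):
    """Turn a defect description into a count vector over a fixed vocabulary,
    after dropping numerals and stopwords: each element is the number of
    appearances of that vocabulary word in the description."""
    tokens = re.findall(r"[a-z]+", description.lower())  # drops numerals
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [counts[word] for word in vocabulary]
```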
[0067] By combining the various textual, categorical, and other features
together over a given
time window, a unique vector of dimension L may be obtained. In order to
predict the K most
likely defects to occur in the next inspection, a vector of dimension L may be
obtained for
each of the M defects, and the M vectors may be concatenated, forming an M x L
matrix.
This matrix may be referred to as the "feature image" of the factory and
product category, and
may be used as the input data to a neural network.
[0068] A pair of factory and product category corresponds to a list of M
feature vectors of L
dimensions. These vectors may be used to predict the probability of occurrence
of all M
defects. These M vectors can be concatenated into a two-dimensional matrix of size M x L,
which can be considered as an M x L image or a "feature image" of the pair of
the factory and
the product category.
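Assembling the M x L feature image can be sketched with NumPy. The split of each row into shared factory features plus per-defect features is an illustrative assumption about how the L-dimensional vectors are built.

```python
import numpy as np

def feature_image(factory_features, per_defect_features):
    """Build the M x L 'feature image' for a (factory, product category)
    pair: each row concatenates the shared factory features with one
    defect's own features, and the M rows are stacked into a matrix."""
    rows = [np.concatenate([factory_features, defect]) for defect in per_defect_features]
    return np.stack(rows)  # shape (M, L)
```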
[0069] After the feature extraction process, various deep learning methods may
be applied to
learn a suitable model for defect prediction. In some embodiments, a deep and
wide neural
network (DWN2) may be used. Using a DWN2, given an input vector $x = \{x_1, x_2, \ldots, x_n\}$,
all categorical variables may be transformed into corresponding embedding
vectors, from
which a concatenated vector may be obtained. The concatenated vector may be
passed
through several hidden layers with various activation functions. In some
embodiments, the
hidden layers are fully connected. In some embodiments, stochastic gradient
descent may be
used to learn the model parameters, although it will be appreciated that a
variety of
optimization methods may be used depending on the loss function used in the
network.
[0070] The input of the defect prediction model is the feature vector of
dimension L. For a given factory and product category, the probability of occurrence of each
defect (among M
defects) may be computed, and these likelihood values may be sorted to extract
the most
likely defects.
[0071] Referring now to Fig. 3A-B, a framework for defect prediction according
to
embodiments of the present disclosure is shown. Framework 300 comprises deep
neural
network 304. Input data 302 comprises features extracted from historical
inspection data, as
well as factory and product information, although it will be appreciated that
a variety of
features and combinations of types of features may be used to generate the
input data. Input data 302 is sent to neural network 304, which outputs vector 306, corresponding to the predicted defect rates $\{\hat{y}_1, \ldots, \hat{y}_M\}$ for a factory or product category. In some embodiments, a sigmoid activation function is used in the neural network in order to ensure that

$$\sum_{i=1}^{M} \hat{y}_i = 1.$$

Equation 6
[0072] Defect prediction may be transformed into a recommendation problem,
whereby
defects are matched to a particular factory and/or product line where they are
likely to be
found. In embodiments of the present disclosure, recommendation algorithms,
such as
collaborative filtering (CF), may be used to predict defects likely to occur
in a factory.
Various methods of applying collaborative filtering techniques to the input
data may be used
to generate defect prediction results, e.g., memory-based approaches such as
neighborhood
based CF, item-based/user-based top-N recommendations, model based approaches,
context
aware CF, hybrid approaches, and latent factor based models.
[0073] In embodiments of the present disclosure, collaborative filtering may
be used to
predict defects by using various neighborhood models. In a factory-oriented
neighborhood
model, the rates of various defects may be estimated based on known defect
rates of many
factory inspections over a given time window. In a defect-oriented
neighborhood model, the
rates of various defects may be estimated based on known defect rates at the
same factory for
similar defects and/or products. In a neighborhood model, one may choose a
function to
measure the similarity between two items. It will be appreciated that a
variety of similarity
measures may be used according to the present disclosure, such as Euclidean
distance,
Manhattan distance, Pearson correlation, and vector cosine. By calculating a
similarity
measure between each pair of defects, a defect rate rFi may be calculated for
each defect i in
each factory F, which denotes the estimated rate of occurrence of that defect
at the next
inspection of the factory. The defect rate rFi may represent a weighted
average of calculated
defect rates for neighboring defects.
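A defect-oriented neighborhood model of this kind might be sketched as follows. The defect-rate table, the choice of vector cosine as the similarity measure, and the weighting scheme are illustrative assumptions; `None` marks an unknown rate.

```python
import math

# Hypothetical defect-rate table: rows are factories, columns are defects
# d0..d3 observed over a time window; None marks an unknown rate.
rates = {
    "F1": [0.10, 0.20, 0.05, 0.30],
    "F2": [0.12, 0.18, None, 0.28],
    "F3": [0.40, 0.35, 0.20, 0.10],
}

def cosine(u, v):
    """Vector cosine over the entries known in both vectors."""
    pairs = [(a, b) for a, b in zip(u, v) if a is not None and b is not None]
    num = sum(a * b for a, b in pairs)
    den = (math.sqrt(sum(a * a for a, _ in pairs))
           * math.sqrt(sum(b * b for _, b in pairs)))
    return num / den if den else 0.0

def column(j):
    return [rates[f][j] for f in rates]

def predict(factory, j):
    """Estimate defect j's rate at `factory` as a similarity-weighted
    average of that factory's known rates for neighboring defects."""
    target = column(j)
    num = den = 0.0
    for k in range(len(rates[factory])):
        rk = rates[factory][k]
        if k == j or rk is None:
            continue
        s = cosine(target, column(k))
        num += s * rk
        den += abs(s)
    return num / den if den else 0.0

r_F2_d2 = predict("F2", 2)  # estimated rate of defect d2 at factory F2
```

The same skeleton supports a factory-oriented model by computing similarities between factory rows instead of defect columns.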
[0074] In embodiments of the present disclosure, collaborative filtering based
on latent factor
models may be used to predict defects likely to occur at a factory. In this
model, a factory F
is associated with a factory-factor vector xF, and a defect i is associated with a defect-factor
vector yi. The predicted defect rate rFi, representing the predicted rate of defect i at factory
F, may be calculated as an inner product of the two latent factor vectors, xF and yi:

rFi = xF^T yi

Equation 7
During the training process for the learning model, parameter estimation may be achieved by
solving the optimization problem

min_{x*, y*}  Σ_{(F,i) : rFi is known}  (rFi - xF^T yi)^2 + λ(||xF||^2 + ||yi||^2)

Equation 8
[0075] In Equation 8, λ is a regularization parameter. This optimization
problem may be
calculated by using stochastic gradient descent to obtain the most suitable
parameters of the
model.
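A stochastic gradient descent loop for this optimization might look like the following sketch. The toy defect rates, latent dimension, learning rate, and regularization value are assumed; constant factors from the gradient are absorbed into the learning rate, as is conventional.

```python
import random

random.seed(1)

# Known defect rates r[(factory, defect)] from historical inspections (toy values).
r = {("F1", 0): 0.10, ("F1", 1): 0.30, ("F2", 0): 0.20, ("F2", 1): 0.25}

K, lam, lr = 2, 0.01, 0.05  # latent dimension, regularization λ, learning rate
x = {f: [random.uniform(0, 0.5) for _ in range(K)] for f in ("F1", "F2")}  # factory factors
y = {d: [random.uniform(0, 0.5) for _ in range(K)] for d in (0, 1)}        # defect factors

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Stochastic gradient descent: for each sampled known rFi, step both factor
# vectors against the squared error plus the regularization penalty.
for _ in range(2000):
    (F, i), rFi = random.choice(list(r.items()))
    e = rFi - dot(x[F], y[i])
    for k in range(K):
        xk, yk = x[F][k], y[i][k]
        x[F][k] += lr * (e * yk - lam * xk)
        y[i][k] += lr * (e * xk - lam * yk)

pred = dot(x["F1"], y[1])  # estimated rate of defect 1 at factory F1
```

After training, the inner product of any factory-factor and defect-factor pair yields a predicted rate, including for (factory, defect) pairs with no observed history.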
[0076] Referring now to Fig. 4, a framework for defect prediction according to
embodiments
of the present disclosure is shown. Framework 400 uses a collaborative
filtering method for
defect prediction. Using historical inspection data over a given time window,
factory defect
table 410 may be generated, indicating the defect rate for each defect at each
factory under
consideration. In some embodiments, a defect rate may take on a value in the
range [0, 1.0],
or NA if the defect rate of a defect at a particular factory is unknown. In
some embodiments,
table 410 is combined with additional information 420, which may include
factory
information, product information, inspection information, brand information,
and/or defect
information. Factory defect table 410 and/or additional information 420 may be
input into
collaborative filtering model 430. Collaborative filtering model 430 may be
deployed as the
defect prediction model on the defect prediction server described above.
Collaborative
filtering model 430 may output estimated defect rate vector 440 for each
factory, indicating a
predicted defect rate for each defect measured in table 410. The defect rates
indicated in
vector 440 may correspond to a list of defects likely to be found in the next
inspection at the
factory. In some embodiments, vector 440 indicates the defects likely to occur
or be found in
the next inspection of a factory for a particular brand/retailer and/or
product category.
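One possible way to assemble a factory defect table like table 410 from raw inspection records is sketched below; the record format and defect names are assumed, and "NA" marks a defect never measured at a given factory.

```python
# Toy inspection records over a time window: (factory, defect, defect_rate).
inspections = [
    ("F1", "broken stitch", 0.12),
    ("F1", "broken stitch", 0.08),
    ("F1", "stain", 0.02),
    ("F2", "stain", 0.05),
]

factories = sorted({f for f, _, _ in inspections})
defects = sorted({d for _, d, _ in inspections})

# Average rate per (factory, defect); "NA" where that defect was never measured.
table = {}
for f in factories:
    for d in defects:
        vals = [rate for ff, dd, rate in inspections if ff == f and dd == d]
        table[(f, d)] = sum(vals) / len(vals) if vals else "NA"
```

This table, optionally joined with factory, product, inspection, brand, and defect information, is what the collaborative filtering model consumes.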
[0077] Referring now to Fig. 5, a process for training a defect prediction
system according to
embodiments of the present disclosure is shown. The steps of process 500 may
be performed
to train a defect prediction model. In some embodiments, the model is deployed
on a
prediction server. The steps of process 500 may be performed locally to the
factory site, may
be performed by a remote server, e.g., a cloud server, or may be shared among
a local
computation device and a remote server. At 501, an initial training dataset is
created. In
some embodiments, the training dataset may comprise historical inspection data
of a large
number of factories. In some embodiments, the training dataset comprises
historical
inspection data obtained over particular time windows (e.g., 3 months, 6
months, 9 months).
In some embodiments, the initial training dataset comprises information
regarding defects
found during historical inspections. It will be appreciated that the data may
include the
various features described above. The data may then be preprocessed at 503. In
some
embodiments, preprocessing the data comprises mapping terminology used in the
data to a
standardized nomenclature. Relevant features may then be extracted from the
data at 505.
The relevant features may include features related to historical inspections
and observed
defects, as discussed above. At 507, a number of machine learning models
(e.g., collaborative
filtering, deep neural networks) may be trained on the training dataset, and
the performance of
each model is evaluated, using the methods described above (e.g., measuring
the precision@k
and recall@k). The hyperparameters of each model may be configured to optimize
the
model's performance. The most useful features for performing the prediction
may be
selected. The model with the most desired performance is chosen at 509. At
511, the chosen
model is deployed onto a prediction server, where it may be used to provide
defect prediction
results for new input data, such as data received from a web or mobile application.
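The precision@k and recall@k measures used to evaluate each candidate model can be computed as in this sketch; the defect names and ranking are hypothetical.

```python
def precision_at_k(predicted, actual, k):
    """Fraction of the top-k predicted defects that actually occurred."""
    top = predicted[:k]
    return sum(1 for d in top if d in actual) / k

def recall_at_k(predicted, actual, k):
    """Fraction of the defects that occurred which appear in the top-k."""
    top = predicted[:k]
    return sum(1 for d in top if d in actual) / len(actual)

# Model ranks defects most-likely first; a later inspection finds `actual`.
predicted = ["broken stitch", "stain", "loose thread", "hole"]
actual = {"stain", "hole"}

p3 = precision_at_k(predicted, actual, 3)  # 1 hit among top 3
r3 = recall_at_k(predicted, actual, 3)     # 1 of 2 actual defects recovered
```

Comparing these scores across models (and across values of k) supports the model-selection step at 509.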
[0078] In some embodiments, the initial training dataset may be divided into a
training
dataset, a testing dataset, and a validation dataset. In some embodiments, the
initial training
dataset is divided into a training dataset and a testing dataset. In some embodiments,
cross-validation techniques are used to estimate the performance of each defect
validation techniques are used to estimate the performance of each defect
prediction model.
Performance results may be validated by subjecting the trained defect
prediction model to
new inspection data.
[0079] In some embodiments, defect names, product names, and any other factory-
specific
terms that may appear in obtained data may be mapped to one or more terms in a
predefined
nomenclature. Given that many factories and inspection services use different
names to
categorize and label defects and products, when combining and comparing data
from multiple
sources, it may be necessary to map variant names to one or more predefined
terms or names.
Mapping terminology used by a number of factories to a particular nomenclature
also
prevents redundancies in the obtained data, whereby two defect types are
listed as separate
types of defects when they are in fact the same. Even within a factory,
different terms may be
used to refer to the same products or defects, as the brands, retailers, or
inspection services
used by the factory may change over time. Furthermore, standardizing the
obtained data to a
standardized nomenclature allows for new business partners of factories and
retailers to better
understand and evaluate the performance of the factory or retailer without
having to
understand the particular terminology used to measure their performance. Thus,
the present
disclosure provides for processing the obtained data to combine equivalent
terminology and to
map terminology used across multiple data sources to a predetermined
nomenclature.
[0080] In some embodiments, mapping terms to a nomenclature comprises
assembling a list
of possible terms that may be mapped to. In some embodiments, various
descriptors may be
associated with each term. For example, when mapping defects to a
nomenclature, one may
create a master list of master defects, wherein each defect is associated with
various defect
data, e.g., a master defect category, master defect name, and master defect
description. The
entries into each data type may vary based on the product being described.
Using the
mapping, any defect may be associated with one or more master defects. It will
be appreciated
that the nomenclature may be updated or expanded as new types of data are
created or
measured. It will also be appreciated that a similar process may be used to
map product
names or other data that varies from source to source to a standardized
nomenclature.
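A minimal sketch of such a mapping follows; the master defect IDs, descriptors, and variant terms are hypothetical examples, not a prescribed schema.

```python
# Hypothetical master defect list: each master defect carries a master
# defect category, name, and description, as described above.
master_defects = {
    "MD-001": {"category": "stitching", "name": "broken stitch",
               "description": "Stitch line is broken or skipped."},
    "MD-002": {"category": "fabric", "name": "stain",
               "description": "Visible dirt, oil, or discoloration."},
}

# Variant terms used by different brands/factories, mapped to master IDs.
term_map = {
    "skipped stitch": "MD-001",
    "broken seam stitch": "MD-001",
    "dirty mark": "MD-002",
    "oil stain": "MD-002",
}

def to_master(raw_name):
    """Map a source-specific defect name to its master defect record."""
    mid = term_map.get(raw_name.lower())
    return (mid, master_defects[mid]) if mid else (None, None)

mid, record = to_master("Oil Stain")
```

Because "dirty mark" and "oil stain" resolve to the same master defect, data from different brands can be combined without counting one defect type twice.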
[0081] Referring now to Fig. 6, an exemplary process for feature extraction is
illustrated
according to embodiments of the present disclosure. In the example of Fig. 6,
defect data are
obtained from two brands, Brand A and Brand B. For each brand, each defect is
mapped to
one or more master defects in a master defect list.
[0082] Referring now to Fig. 7, an exemplary process for feature extraction is
illustrated
according to embodiments of the present disclosure. In the example of Fig. 7,
a list of master
product lines, master product categories, and master product names may be
defined. In some
embodiments, the nomenclature is hierarchical, whereby certain terms are
associated with a
particular parent term, which itself may be associated with its own parent
term. For example,
certain master product names may be associated with certain master product
categories, and
certain master product categories may be associated with certain master
product lines. In
some embodiments, mapping a term to a standardized term in a nomenclature may
comprise
selecting the value of a first category (e.g., a master product line), and then
selecting the value
of a second category from among the available possibilities associated with
the first category
(e.g., the master product categories associated with the master product line).
Similarly, the
value of a third category may be selected from among available possibilities
associated with
the second category (e.g., a master product name may be selected from
available master
product names associated with the selected master product category). In some
embodiments,
terms are mapped directly to the most specific standardized term, thereby
determining the
value of the parent terms.
[0083] Individual brands or retailers may have different definitions of
product lines, product
categories, and product items. In order to improve the performance of the
defect prediction
model, these definitions may be standardized in various embodiments by
constructing a
universal set of product lines, product categories, and product items. For
example, a general
list of different master product lines may be defined (e.g., footwear,
apparel, hardgoods, etc.),
which can cover all possible cases. Each master product line is split into different product
categories, and each of these product categories is divided into multiple
product items. In
this way, it is guaranteed that each product item belongs to a unique product
line and a unique
product category. Once established, the master product can be used to map the
corresponding
product line, product category, and product item from a given brand or
retailer. At the time
that feature vectors of given a factory and product category are computed, the
master product
line and the master product category are used.
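The hierarchy described above might be represented as nested dictionaries, so that resolving an item determines its unique parent category and line; the product lines, categories, and items here are illustrative.

```python
# Hypothetical master hierarchy: product line -> product categories -> product items.
hierarchy = {
    "apparel": {"tops": ["t-shirt", "polo"], "bottoms": ["jeans", "shorts"]},
    "footwear": {"shoes": ["sneaker", "boot"]},
}

def parents_of(item):
    """Resolve an item's unique master category and master line.

    Each item appears under exactly one category, so mapping a term to
    the most specific level fixes the values of its parent terms."""
    for line, categories in hierarchy.items():
        for category, items in categories.items():
            if item in items:
                return line, category
    return None, None

line, category = parents_of("jeans")
```

Mapping a brand's product name to a master product item therefore also yields the master product category and master product line used in the feature vectors.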
[0084] In some embodiments, any defect found in a product may be assigned to a
master
product line, master product category, master product name, master defect
name, master
defect category, and master defect description. The mapped data may then be
used to train
the prediction model and obtain prediction results.
[0085] Table 1 lists a number of features that may be extracted from
inspection data using the
methods described above. The master product and master defect features are
denoted by an
asterisk.
Factory ID
Factory Location (e.g., city, country)
The sample size of the inspection
The inspection type
The total available quantity in a product category
The number of styles in the inspection
The number of items in the inspection
Brand ID
Product Category
Product Line
Product Name
Master Product Category (*)
Master Product Line (*)
Defect Level (e.g., critical, major, minor)
Defect Category
Defect Description
Defect Name
Master Defect Category (*)
Master Defect Name (*)
The average value of the defect rate of each defect occurring in all inspections at a factory during the last 7 days from the evaluation date
The average value of the defect rate of each defect occurring in all inspections at a factory during the last 14 days from the evaluation date
The average value of the defect rate of each defect occurring in all inspections at a factory during the last 30 days from the evaluation date
The average value of the defect rate of each defect occurring in all inspections at a factory during the last 60 days from the evaluation date
The average value of the defect rate of each defect occurring in all inspections at a factory during the last 90 days from the evaluation date
Table 1
[0086] According to embodiments of the present disclosure, the defect
prediction model
provides an indication of a plurality of defects likely to occur in one or
more products. In
some embodiments, the indication comprises a list of defects likely to occur
at the factory. In
some embodiments, the list includes the top K defects most likely to occur at
the factory. It
will be appreciated that the defects most likely to occur at the factory may
be understood to be
the defects most likely to be found at the next inspection. It will also be
appreciated that the
list of defects may be specific to a product, product line, style, product
category, division
within a factory, or factory, and each individual defect may include an
indication as to which
specific level of granularity it applies to. In some embodiments, the received
indication is
specific to a purchase order of a specific brand or retailer. The list may
also include the name
of each defect. In some embodiments, defect names used in the standard
nomenclature are
mapped back to the names used by the specific factory/brand/retailer receiving
the report.
The value of K may be chosen by a user, or may be predetermined. In some
embodiments, all
defects with a probability above a certain threshold are received from the
defect prediction
model. The threshold may be chosen in a variety of ways, e.g., chosen by the
user,
predetermined by the defect prediction system, or learned adaptively during
training. In some
embodiments, the defect likelihood score of a defect for a factory and a
product category can
be considered as the predicted probability of the defect at the factory with
the product
category. For instance, a score of 0.5 means the defect has a 50% chance of occurring at the
factory for that product category.
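Both reporting modes described above, the top-K defects and the above-threshold defects, can be sketched as follows; the defect names and likelihood scores are assumed toy values.

```python
# Predicted defect likelihood scores for one factory/product category (toy values).
scores = {"broken stitch": 0.72, "stain": 0.55, "loose thread": 0.31, "hole": 0.08}

def top_k(scores, k):
    """The k defects with the highest predicted likelihood."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def above_threshold(scores, t):
    """All defects whose predicted probability exceeds the threshold t."""
    return [d for d, s in scores.items() if s > t]

report_topk = top_k(scores, 2)              # the two most likely defects
report_thresh = above_threshold(scores, 0.5)  # e.g., a user-chosen threshold
```

In practice K or the threshold may be user-chosen, predetermined, or learned during training, as noted above.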
[0087] The information provided for each defect in the report may comprise a
number of
different values. In some embodiments, the report indicates whether the defect
is likely to
occur. The likelihood of the defect occurring may be compared to a threshold,
in the manner
described above. In some embodiments, the report indicates the likelihood of
the defect
occurring. In some embodiments, the report comprises an indication of the
severity of the
defect in a product. In some embodiments, the report comprises an indication
of the
percentage of products likely to have the defect. In some embodiments, the
report may
include the number of different defects expected to be found within a
particular product, the
number of total defects expected to be found within the available products,
and/or the
distribution of defects and/or their severity among the available products. In
some
embodiments, a description of the defect is provided. This may guide an
inspector in
identifying and measuring the particular defect.
[0088] Referring now to Fig. 8, a schematic of an example of a computing node
is shown.
Computing node 10 is only one example of a suitable computing node and is not
intended to
suggest any limitation as to the scope of use or functionality of embodiments
described
herein. Regardless, computing node 10 is capable of being implemented and/or
performing
any of the functionality set forth hereinabove.
[0089] In computing node 10 there is a computer system/server 12, which is
operational with
numerous other general purpose or special purpose computing system
environments or
configurations. Examples of well-known computing systems, environments, and/or

configurations that may be suitable for use with computer system/server 12
include, but are
not limited to, personal computer systems, server computer systems, thin
clients, thick clients,
handheld or laptop devices, multiprocessor systems, microprocessor-based
systems, set top
boxes, programmable consumer electronics, network PCs, minicomputer systems,
mainframe
computer systems, and distributed cloud computing environments that include
any of the
above systems or devices, and the like.
[0090] Computer system/server 12 may be described in the general context of
computer
system-executable instructions, such as program modules, being executed by a
computer
system. Generally, program modules may include routines, programs, objects,
components,
logic, data structures, and so on that perform particular tasks or implement
particular abstract
data types. Computer system/server 12 may be practiced in distributed cloud
computing
environments where tasks are performed by remote processing devices that are
linked through
a communications network. In a distributed cloud computing environment,
program modules
may be located in both local and remote computer system storage media
including memory
storage devices.
[0091] As shown in Fig. 8, computer system/server 12 in computing node 10 is
shown in the
form of a general-purpose computing device. The components of computer
system/server 12
may include, but are not limited to, one or more processors or processing
units 16, a system
memory 28, and a bus 18 that couples various system components including
system memory
28 to processor 16.
[0092] Bus 18 represents one or more of any of several types of bus
structures, including a
memory bus or memory controller, a peripheral bus, an accelerated graphics
port, and a
processor or local bus using any of a variety of bus architectures. By way of
example, and not
limitation, such architectures include Industry Standard Architecture (ISA)
bus, Micro
Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics
Standards
Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus,
Peripheral
Component Interconnect Express (PCIe), and Advanced Microcontroller Bus
Architecture
(AMBA).
[0093] Computer system/server 12 typically includes a variety of computer
system readable
media. Such media may be any available media that is accessible by computer
system/server
12, and it includes both volatile and non-volatile media, removable and non-
removable media.
[0094] System memory 28 can include computer system readable media in the form
of
volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.

Computer system/server 12 may further include other removable/non-removable,
volatile/non-volatile computer system storage media. By way of example only,
storage
system 34 can be provided for reading from and writing to a non-removable, non-
volatile
magnetic media (not shown and typically called a "hard drive"). Although not
shown, a
magnetic disk drive for reading from and writing to a removable, non-volatile
magnetic disk
(e.g., a "floppy disk"), and an optical disk drive for reading from or writing
to a removable,
non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can
be
provided. In such instances, each can be connected to bus 18 by one or more
data media
interfaces. As will be further depicted and described below, memory 28 may
include at least
one program product having a set (e.g., at least one) of program modules that
are configured
to carry out the functions of embodiments of the disclosure.
[0095] Program/utility 40, having a set (at least one) of program modules 42,
may be stored in
memory 28 by way of example, and not limitation, as well as an operating
system, one or
more application programs, other program modules, and program data. Each of
the operating
system, one or more application programs, other program modules, and program
data or some
combination thereof, may include an implementation of a networking
environment. Program
modules 42 generally carry out the functions and/or methodologies of
embodiments as
described herein.
[0096] Computer system/server 12 may also communicate with one or more
external devices
14 such as a keyboard, a pointing device, a display 24, etc.; one or more
devices that enable a
user to interact with computer system/server 12; and/or any devices (e.g.,
network card,
modem, etc.) that enable computer system/server 12 to communicate with one or
more other
computing devices. Such communication can occur via Input/Output (I/O)
interfaces 22. Still
yet, computer system/server 12 can communicate with one or more networks such
as a local
area network (LAN), a general wide area network (WAN), and/or a public network
(e.g., the
Internet) via network adapter 20. As depicted, network adapter 20 communicates
with the
other components of computer system/server 12 via bus 18. It should be
understood that
although not shown, other hardware and/or software components could be used in
conjunction
with computer system/server 12. Examples include, but are not limited to: microcode, device
microcode, device
drivers, redundant processing units, external disk drive arrays, RAID systems,
tape drives, and
data archival storage systems, etc.
[0097] The present disclosure may be embodied as a system, a method, and/or a
computer
program product. The computer program product may include a computer readable
storage
medium (or media) having computer readable program instructions thereon for
causing a
processor to carry out aspects of the present disclosure.
[0098] The computer readable storage medium can be a tangible device that can
retain and
store instructions for use by an instruction execution device. The computer
readable storage
medium may be, for example, but is not limited to, an electronic storage
device, a magnetic
storage device, an optical storage device, an electromagnetic storage device,
a semiconductor
storage device, or any suitable combination of the foregoing. A non-exhaustive
list of more
specific examples of the computer readable storage medium includes the
following: a portable
computer diskette, a hard disk, a random access memory (RAM), a read-only
memory
(ROM), an erasable programmable read-only memory (EPROM or Flash memory), a
static
random access memory (SRAM), a portable compact disc read-only memory (CD-
ROM), a
digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically
encoded device
such as punch-cards or raised structures in a groove having instructions
recorded thereon, and
any suitable combination of the foregoing. A computer readable storage medium,
as used
herein, is not to be construed as being transitory signals per se, such as
radio waves or other
freely propagating electromagnetic waves, electromagnetic waves propagating
through a
waveguide or other transmission media (e.g., light pulses passing through a
fiber-optic cable),
or electrical signals transmitted through a wire.
[0099] Computer readable program instructions described herein can be
downloaded to
respective computing/processing devices from a computer readable storage
medium or to an
external computer or external storage device via a network, for example, the
Internet, a local
area network, a wide area network and/or a wireless network. The network may
comprise
copper transmission cables, optical transmission fibers, wireless
transmission, routers,
firewalls, switches, gateway computers and/or edge servers. A network adapter
card or
network interface in each computing/processing device receives computer
readable program
instructions from the network and forwards the computer readable program
instructions for
storage in a computer readable storage medium within the respective
computing/processing
device.
[0100] Computer readable program instructions for carrying out operations of
the present
disclosure may be assembler instructions, instruction-set-architecture (ISA)
instructions,
machine instructions, machine dependent instructions, microcode, firmware
instructions,
state-setting data, or either source code or object code written in any
combination of one or
more programming languages, including an object oriented programming language
such as
Smalltalk, C++ or the like, and conventional procedural programming languages,
such as the
"C" programming language or similar programming languages. The computer
readable
program instructions may execute entirely on the user's computer, partly on
the user's
computer, as a stand-alone software package, partly on the user's computer and
partly on a
remote computer or entirely on the remote computer or server. In the latter
scenario, the
remote computer may be connected to the user's computer through any type of
network,
including a local area network (LAN) or a wide area network (WAN), or the
connection may
be made to an external computer (for example, through the Internet using an
Internet Service
Provider). In some embodiments, electronic circuitry including, for example,
programmable
logic circuitry, field-programmable gate arrays (FPGA), or programmable logic
arrays (PLA)
may execute the computer readable program instructions by utilizing state
information of the
computer readable program instructions to personalize the electronic
circuitry, in order to
perform aspects of the present disclosure.
[0101] Aspects of the present disclosure are described herein with reference
to flowchart
illustrations and/or block diagrams of methods, apparatus (systems), and
computer program
products according to embodiments of the disclosure. It will be understood
that each block of
the flowchart illustrations and/or block diagrams, and combinations of blocks
in the flowchart
illustrations and/or block diagrams, can be implemented by computer readable
program
instructions.
[0102] These computer readable program instructions may be provided to a
processor of a
general purpose computer, special purpose computer, or other programmable data
processing
apparatus to produce a machine, such that the instructions, which execute via
the processor of
the computer or other programmable data processing apparatus, create means for

implementing the functions/acts specified in the flowchart and/or block
diagram block or
blocks. These computer readable program instructions may also be stored in a
computer
readable storage medium that can direct a computer, a programmable data
processing
apparatus, and/or other devices to function in a particular manner, such that
the computer
readable storage medium having instructions stored therein comprises an
article of
manufacture including instructions which implement aspects of the function/act
specified in
the flowchart and/or block diagram block or blocks.
[0103] The computer readable program instructions may also be loaded onto a
computer,
other programmable data processing apparatus, or other device to cause a
series of operational
steps to be performed on the computer, other programmable apparatus or other
device to
produce a computer implemented process, such that the instructions which
execute on the
computer, other programmable apparatus, or other device implement the
functions/acts
specified in the flowchart and/or block diagram block or blocks.
[0104] The flowchart and block diagrams in the Figures illustrate the
architecture,
functionality, and operation of possible implementations of systems, methods,
and computer
program products according to various embodiments of the present disclosure.
In this regard,
each block in the flowchart or block diagrams may represent a module, segment,
or portion of
instructions, which comprises one or more executable instructions for
implementing the
specified logical function(s). In some alternative implementations, the
functions noted in the
block may occur out of the order noted in the figures. For example, two blocks
shown in
succession may, in fact, be executed substantially concurrently, or the blocks
may sometimes
be executed in the reverse order, depending upon the functionality involved.
It will also be
noted that each block of the block diagrams and/or flowchart illustration, and
combinations of
blocks in the block diagrams and/or flowchart illustration, can be implemented
by special
purpose hardware-based systems that perform the specified functions or acts or
carry out
combinations of special purpose hardware and computer instructions.
[0105] The descriptions of the various embodiments of the present disclosure
have been
presented for purposes of illustration, but are not intended to be exhaustive
or limited to the
embodiments disclosed. Many modifications and variations will be apparent to
those of
ordinary skill in the art without departing from the scope and spirit of the
described
embodiments. The terminology used herein was chosen to best explain the
principles of the
embodiments, the practical application or technical improvement over
technologies found in
the marketplace, or to enable others of ordinary skill in the art to
understand the embodiments
disclosed herein.