Patent 3050952 Summary

(12) Patent Application: (11) CA 3050952
(54) English Title: INSPECTION RISK ESTIMATION USING HISTORICAL INSPECTION DATA
(54) French Title: ESTIMATION DU RISQUE D'INSPECTION AU MOYEN DE DONNEES HISTORIQUES D'INSPECTION
Status: Report sent
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 40/08 (2012.01)
  • G06N 3/02 (2006.01)
(72) Inventors :
  • NGUYEN, BINH THANH (Viet Nam)
  • NGUYEN, VIET CUONG THANH (Viet Nam)
(73) Owners :
  • INSPECTORIO INC. (United States of America)
(71) Applicants :
  • INSPECTORIO INC. (United States of America)
(74) Agent: BORDEN LADNER GERVAIS LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2019-07-31
(41) Open to Public Inspection: 2019-10-11
Examination requested: 2019-07-31
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
62/864,950 United States of America 2019-06-21

Abstracts

English Abstract


Inspection risk estimation using historical inspection data is provided. In various embodiments, attributes of a future inspection of a factory and historical data related to the future inspection are received. A plurality of features are extracted from the attributes of the future inspection and the historical data. The plurality of features are provided to a trained classifier. A risk score indicative of a probability of failure of the future inspection is obtained from the trained classifier.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS:
1. A system comprising:
a computing node comprising a computer readable storage medium having program
instructions embodied therewith, the program instructions executable by a
processor of
the computing node to cause the processor to perform a method comprising:
receiving attributes of a future inspection of a factory;
receiving historical data related to the future inspection;
extracting a plurality of features from the attributes of the future
inspection and
the historical data;
providing the plurality of features to a trained classifier;
obtaining from the trained classifier a risk score indicative of a probability of failure of the future inspection.
2. The system of Claim 1, the method further comprising pre-processing the
historical
data.
3. The system of Claim 2, wherein pre-processing the data comprises
aggregating the
historical data.
4. The system of Claim 3, wherein pre-processing the data further comprises
filtering the
data.
5. The system of Claim 1, wherein the data further comprise performance
history of the
factory.
6. The system of Claim 1, wherein the data further comprise geographic
information of
the factory.
7. The system of Claim 1, wherein the data further comprise ground truth
risk scores.

8. The system of Claim 1, wherein the data further comprise product data of
the factory.
9. The system of Claim 1, wherein the historical data span a predetermined
time window.
10. The system of Claim 1, wherein
providing the plurality of features to the trained classifier comprises
sending the
plurality of features to a remote risk prediction server, and
obtaining from the trained classifier a risk score comprises receiving a risk
score from
the risk prediction server.
11. The system of Claim 1, wherein extracting the plurality of features
comprises
removing features with a low correlation to a target variable.
12. The system of Claim 1, wherein extracting the plurality of features
comprises applying
a dimensionality reduction algorithm.
13. The system of Claim 1, wherein extracting the plurality of features
from the historical
data comprises applying an artificial neural network.
14. The system of Claim 13, wherein applying the artificial neural network
comprises
receiving a first feature vector as input, and outputting a second feature
vector, the second
feature vector of a smaller dimensionality than the first feature vector.
15. The system of Claim 1, the method further comprising:
providing the risk score to a user.
16. The system of Claim 15, wherein providing the risk score to the user
comprises
sending the risk score to a mobile or web application.
17. The system of Claim 16, wherein said sending is performed via a wide
area network.
18. The system of Claim 1, wherein the trained classifier comprises an
artificial neural
network.

19. The system of Claim 1, wherein the trained classifier comprises a
support vector
machine.
20. The system of Claim 1, wherein obtaining the risk score comprises
applying a gradient
boosting algorithm.
21. The system of Claim 1, wherein the risk score is related to the
probability by a linear
mapping.
22. The system of Claim 1, wherein the method further comprises:
measuring performance of the trained classifier by comparing the risk score to
a ground truth risk score;
optimizing parameters of the trained classifier according to the performance.
23. The system of Claim 22, wherein optimizing the parameters of the
trained classifier
comprises modifying hyperparameters of a trained machine learning model.
24. The system of Claim 23, wherein optimizing the parameters of the
trained classifier
comprises replacing a first machine learning algorithm with a second machine
learning
algorithm, the second machine learning algorithm comprising hyperparameters
configured to
improve the performance of the trained classifier.
25. A method comprising:
receiving attributes of a future inspection of a factory;
receiving historical data related to the future inspection;
extracting a plurality of features from the attributes of the future
inspection and the
historical data;
providing the plurality of features to a trained classifier;

obtaining from the trained classifier a risk score indicative of a probability of failure of the future inspection.
26. The method of Claim 25, further comprising pre-processing the
historical data.
27. The method of Claim 26, wherein pre-processing the data comprises
aggregating the
historical data.
28. The method of Claim 27, wherein pre-processing the data further
comprises filtering
the data.
29. The method of Claim 25, wherein the data further comprise performance
history of the
factory.
30. The method of Claim 25, wherein the data further comprise geographic
information of
the factory.
31. The method of Claim 25, wherein the data further comprise ground truth
risk scores.
32. The method of Claim 25, wherein the data further comprise product data
of the
factory.
33. The method of Claim 25, wherein the historical data span a
predetermined time
window.
34. The method of Claim 25, wherein
providing the plurality of features to the trained classifier comprises
sending the
plurality of features to a remote risk prediction server, and
obtaining from the trained classifier a risk score comprises receiving a risk
score from
the risk prediction server.
35. The method of Claim 25, wherein extracting the plurality of features
comprises
removing features with a low correlation to a target variable.

36. The method of Claim 25, wherein extracting the plurality of features
comprises
applying a dimensionality reduction algorithm.
37. The method of Claim 25, wherein extracting the plurality of features
from the
historical data comprises applying an artificial neural network.
38. The method of Claim 37, wherein applying the artificial neural network
comprises
receiving a first feature vector as input, and outputting a second feature
vector, the second
feature vector of a smaller dimensionality than the first feature vector.
39. The method of Claim 25, further comprising:
providing the risk score to a user.
40. The method of Claim 39, wherein providing the risk score to the user
comprises
sending the risk score to a mobile or web application.
41. The method of Claim 40, wherein said sending is performed via a wide
area network.
42. The method of Claim 25, wherein the trained classifier comprises an
artificial neural
network.
43. The method of Claim 25, wherein the trained classifier comprises a
support vector
machine.
44. The method of Claim 25, wherein obtaining the risk score comprises
applying a
gradient boosting algorithm.
45. The method of Claim 25, wherein the risk score is related to the probability by a linear mapping.
46. The method of Claim 25, further comprising:
measuring performance of the trained classifier by comparing the risk score to
a ground truth risk score;

optimizing parameters of the trained classifier according to the performance.
47. The method of Claim 46, wherein optimizing the parameters of the
trained classifier
comprises modifying hyperparameters of a trained machine learning model.
48. The method of Claim 47, wherein optimizing the parameters of the
trained classifier
comprises replacing a first machine learning algorithm with a second machine
learning
algorithm, the second machine learning algorithm comprising hyperparameters
configured to
improve the performance of the trained classifier.
49. A computer program product for inspection risk estimation, the computer
program
product comprising a computer readable storage medium having program
instructions
embodied therewith, the program instructions executable by a processor to
cause the
processor to perform a method comprising:
receiving attributes of a future inspection of a factory;
receiving historical data related to the future inspection;
extracting a plurality of features from the attributes of the future
inspection and the
historical data;
providing the plurality of features to a trained classifier;
obtaining from the trained classifier a risk score indicative of a probability of failure of the future inspection.
50. The computer program product of Claim 49, the method further comprising
pre-
processing the historical data.
51. The computer program product of Claim 50, wherein pre-processing the
data
comprises aggregating the historical data.

52. The computer program product of Claim 51, wherein pre-processing the
data further
comprises filtering the data.
53. The computer program product of Claim 49, wherein the data further
comprise
performance history of the factory.
54. The computer program product of Claim 49, wherein the data further
comprise
geographic information of the factory.
55. The computer program product of Claim 49, wherein the data further
comprise ground
truth risk scores.
56. The computer program product of Claim 49, wherein the data further
comprise
product data of the factory.
57. The computer program product of Claim 49, wherein the historical data
span a
predetermined time window.
58. The computer program product of Claim 49, wherein
providing the plurality of features to the trained classifier comprises
sending the
plurality of features to a remote risk prediction server, and
obtaining from the trained classifier a risk score comprises receiving a risk
score from
the risk prediction server.
59. The computer program product of Claim 49, wherein extracting the
plurality of
features comprises removing features with a low correlation to a target
variable.
60. The computer program product of Claim 49, wherein extracting the
plurality of
features comprises applying a dimensionality reduction algorithm.
61. The computer program product of Claim 49, wherein extracting the
plurality of
features from the historical data comprises applying an artificial neural
network.

62. The computer program product of Claim 61, wherein applying the
artificial neural
network comprises receiving a first feature vector as input, and outputting a
second feature
vector, the second feature vector of a smaller dimensionality than the first
feature vector.
63. The computer program product of Claim 49, the method further
comprising:
providing the risk score to a user.
64. The computer program product of Claim 63, wherein providing the risk
score to the
user comprises sending the risk score to a mobile or web application.
65. The computer program product of Claim 64, wherein said sending is
performed via a
wide area network.
66. The computer program product of Claim 49, wherein the trained
classifier comprises
an artificial neural network.
67. The computer program product of Claim 49, wherein the trained
classifier comprises a
support vector machine.
68. The computer program product of Claim 49, wherein obtaining the risk
score
comprises applying a gradient boosting algorithm.
69. The computer program product of Claim 49, wherein the risk score is
related to the
probability by a linear mapping.
70. The computer program product of Claim 49, wherein the method further
comprises:
measuring performance of the trained classifier by comparing the risk score to a ground truth risk score;
optimizing parameters of the trained classifier according to the performance.
71. The computer program product of Claim 70, wherein optimizing the
parameters of the
trained classifier comprises modifying hyperparameters of a trained machine
learning model.

72. The computer program product of Claim 71, wherein optimizing the parameters of the trained classifier comprises replacing a first machine learning algorithm with a second machine learning algorithm, the second machine learning algorithm comprising hyperparameters configured to improve the performance of the trained classifier.

Description

Note: Descriptions are shown in the official language in which they were submitted.


INSPECTION RISK ESTIMATION USING HISTORICAL INSPECTION DATA
BACKGROUND
[0001] Embodiments of the present disclosure relate to inspection risk
estimation, and more
specifically, to inspection risk estimation using historical inspection data.
BRIEF SUMMARY
[0002] According to embodiments of the present disclosure, methods of and
computer
program products for inspection risk estimation are provided. In various
embodiments,
attributes of a future inspection of a factory and historical data related to
the future inspection
are received. A plurality of features are extracted from the attributes of the
future inspection
and the historical data. The plurality of features are provided to a trained
classifier. A risk
score indicative of a probability of failure of the future inspection is
obtained from the trained
classifier.
[0003] In various embodiments, the historical data are pre-processed. In various embodiments, pre-processing the data comprises aggregating the historical data. In various embodiments, pre-processing the data further comprises filtering the data.
[0004] In various embodiments, the data further comprise performance history
of the factory.
In various embodiments, the data further comprise geographic information of
the factory. In
various embodiments, the data further comprise ground truth risk scores. In
various
embodiments, the data further comprise product data of the factory. In various
embodiments,
the data span a predetermined time window.
Page 1 of 40
CA 3050952 2019-07-31

[0006] In various embodiments, providing the plurality of features to the
trained classifier
comprises sending the plurality of features to a remote risk prediction
server, and
obtaining from the trained classifier a risk score comprises receiving a risk
score from the risk
prediction server.
[0007] In various embodiments, extracting the plurality of features comprises
removing
features with a low correlation to a target variable. In various embodiments,
extracting the
plurality of features comprises applying a dimensionality reduction algorithm.
In various
embodiments, extracting a plurality of features from the data comprises
applying an artificial
neural network. In various embodiments, applying the artificial neural network
comprises
receiving a first feature vector as input, and outputting a second feature
vector, the second
feature vector having a lower dimensionality than the first feature vector.
[0008] In various embodiments, the risk score is provided to a user. In
various embodiments,
providing the risk score to the user comprises sending the risk score to a
mobile or web
application. In various embodiments, said sending is performed via a wide area
network.
[0009] In various embodiments, the trained classifier comprises an artificial
neural network.
In various embodiments, the trained classifier comprises a support vector
machine. In various
embodiments, obtaining from the trained classifier a risk score comprises
applying a gradient
boosting algorithm.
[0010] In various embodiments, the risk score is related to the probability by
a linear
mapping.
[0011] In various embodiments, the performance of the trained classifier is
measured by
comparing the risk score to a ground truth risk score, and parameters of the
trained classifier
are optimized according to the performance. In various embodiments, optimizing
the

parameters of the trained classifier comprises modifying hyperparameters of a
trained
machine learning model. In various embodiments, optimizing the parameters of
the trained
classifier comprises replacing a first machine learning algorithm with a
second machine
learning algorithm, the second machine learning algorithm comprising
hyperparameters
configured to improve the performance of the trained classifier.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0012] Fig. 1 is a schematic view of an exemplary system for inspection risk
estimation
according to embodiments of the present disclosure.
[0013] Fig. 2 illustrates a process for inspection risk estimation according
to embodiments of
the present disclosure.
[0014] Fig. 3 illustrates a process for training an inspection risk estimation
system according
to embodiments of the present disclosure.
[0015] Fig. 4 illustrates a process for updating an inspection risk estimation
system according
to embodiments of the present disclosure.
[0016] Fig. 5 illustrates a process for training an inspection risk estimation
system according
to embodiments of the present disclosure.
[0017] Fig. 6 illustrates a process for training an inspection risk estimation
system according
to embodiments of the present disclosure.
[0018] Fig. 7 illustrates a process for training an inspection risk estimation
system according
to embodiments of the present disclosure.
[0019] Fig. 8 depicts a computing node according to embodiments of the present
disclosure.

DETAILED DESCRIPTION
[0020] Inspections commonly occur in factories in order to ensure quality
control and
adherence to protocol. Estimating the risk of failing a particular inspection
in advance of the
inspection date allows factories and their business partners to implement a dynamic
quality control program based on the estimated risk.
[0021] The present disclosure provides a framework for estimating the risk of
failure of an
inspection, prior to the inspection date, using historical inspection data and
machine learning
methods.
[0022] In embodiments of the present disclosure, inspection risk estimation is
performed by
obtaining data related to an inspection, extracting a plurality of features
from the data,
providing the features to a trained classifier, and obtaining from the trained classifier a risk score indicative of the probability that the inspection will pass or fail. In some
embodiments, a feature vector is generated and inputted into the trained
classifier, which in
some embodiments comprises a machine learning model.
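The flow described in this paragraph can be sketched as follows. This is an illustrative outline only; the class, the function names, and the particular feature choices are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Inspection:
    # Hypothetical attributes of a future inspection booking.
    factory_id: str
    inspection_type: str
    proposed_date: str

def extract_features(inspection, history):
    # Combine booking attributes with metrics aggregated from
    # historical inspection records into a single feature vector.
    n = len(history)
    fail_rate = sum(1 for h in history if h["failed"]) / n if n else 0.0
    return [n, fail_rate]

def estimate_risk(classifier, inspection, history):
    # Provide the features to a trained classifier and map the
    # returned probability of failure to a [0, 100] risk score.
    features = extract_features(inspection, history)
    return classifier.predict_proba([features])[0][1] * 100.0
```

Any object exposing a `predict_proba` method (a trained machine learning model, for example) could serve as the classifier here.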
[0023] In embodiments of the present disclosure, data may be obtained in a
variety of
formats. Data may be structured or unstructured, and may comprise information
stored in a
plurality of media. Data may be inputted manually into a computer, or may be
obtained
automatically from a file via a computer. It will be appreciated that a
variety of methods are
known for obtaining data via a computer, including, but not limited to: parsing written documents or text files using optical character recognition, text-parsing techniques (e.g., finding key/value pairs using regular expressions), or natural language processing; scraping web pages; and obtaining values for various measurements from a database (e.g., a relational database), an XML file, a CSV file, or a JSON object.
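As a concrete illustration of these acquisition techniques (with made-up data), key/value pairs, CSV rows, and JSON objects might be obtained as follows:

```python
import csv
import io
import json
import re

# Hypothetical examples of the data-acquisition techniques listed above.

# Key/value pairs extracted from free text with a regular expression.
report_text = "factory: ACME-01\nresult: failed\nsample_size: 80"
pairs = dict(re.findall(r"(\w+):\s*(\S+)", report_text))

# Rows read from a CSV file (here, an in-memory string).
csv_text = "factory,result\nACME-01,failed\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# A record parsed from a JSON object.
record = json.loads('{"factory": "ACME-01", "result": "failed"}')
```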

[0024] In some embodiments, factory or inspection data may be obtained
directly from an
inspection management system, or other system comprising a database. In some
embodiments, the inspection management system is configured to store
information related to
factories and/or inspections. The inspection management system may collect and
store
various types of information related to factories and inspections, such as
information
pertaining to purchase orders, inspection bookings, assignments, reports,
corrective and
preventive action (CAPA), inspection results, and other data obtained during
inspections. It
will be appreciated that a large set of data may be available, and in some
embodiments, only a
subset of the available data is used for input into a prediction model. The
subset of data may
contain a sufficient number of attributes to successfully predict inspection
results.
[0024] As used herein, an inspection booking refers to a request for a future
inspection to take
place at a proposed date. The inspection booking may be initiated by a vendor,
brand, or
retailer, and may contain information of a purchase order corresponding to the
future
inspection. As used herein, an assignment refers to a confirmed inspection
booking. The
assignment may contain a confirmation of the proposed date of the inspection
booking, as
well as an identification of an assigned inspector and information related to
the booking.
[0025] Data may be obtained via a data pipeline that collects data from
various sources of
factory and inspection data. A data pipeline may be implemented via an
Application
Programming Interface (API) with permission to access and obtain desired data
and calculate
various features of the data. The API may be internally facing, e.g., it may
provide access to
internal databases containing factory or inspection data, or externally
facing, e.g., it may
provide access to factory or inspection data from external brands, retailers,
or factories. In
some embodiments, data are provided by entities wishing to obtain a prediction
result from a

prediction model. The data provided may be input into the model in order to
obtain a
prediction result, and may also be stored to train and test various prediction
models.
[0026] The factory and inspection data may also be aggregated and statistical
analysis may be
performed on the data. According to embodiments of the present disclosure,
data may be
aggregated and analyzed in a variety of ways, including, but not limited to,
adding the values
for a given measurement over a given time window (e.g., 7 days, 14 days, 30
days, 60 days,
90 days, 180 days, or a year), obtaining the maximum and minimum values, mean,
median,
and mode for a distribution of values for a given measurement over a given
time window, and
obtaining measures of the prevalence of certain values or value ranges among
the data. For
any feature or measurement of the data, one can also measure the variance,
standard
deviation, skewness, kurtosis, hyperskewness, hypertailedness, and various
percentile values
(e.g., 5%, 10%, 25%, 50%, 75%, 90%, 95%, 99%) of the distribution of the
feature or
measurement over a given time window.
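A minimal sketch of this windowed aggregation, assuming a simple in-memory list of dated measurements (the data and window length are invented for illustration):

```python
from datetime import date, timedelta
from statistics import mean, median, pstdev

# Hypothetical defect counts from past inspections, as (date, value) pairs.
observations = [
    (date(2019, 5, 1), 4),
    (date(2019, 5, 10), 7),
    (date(2019, 6, 5), 2),
    (date(2019, 7, 20), 5),
]

def window_stats(obs, end, days):
    # Aggregate a measurement over the time window [end - days, end].
    start = end - timedelta(days=days)
    values = [v for d, v in obs if start <= d <= end]
    return {
        "sum": sum(values),
        "min": min(values),
        "max": max(values),
        "mean": mean(values),
        "median": median(values),
        "std": pstdev(values),
    }

stats_90d = window_stats(observations, date(2019, 7, 31), 90)
```

The same helper could be called with 7-, 30-, or 180-day windows, and extended with the percentile and higher-moment statistics mentioned above.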
[0027] The data may also be filtered prior to aggregation or statistical analysis. Data may be aggregated by certain characteristics, and
statistical
analysis may be performed on the subset of data bearing the characteristics.
For example, the
above metrics can be calculated for data related only to inspections that
passed or failed,
related to during production (DUPRO) inspections, or to inspections above a minimum sample
size.
[0028] Aggregation and statistical analysis may also be performed on data
resulting from
prior aggregation or statistical analysis. For example, the statistical values
of a given
measurement over a given time period may be measured over a number of
consecutive time
windows, and the resulting values may be analyzed to obtain values regarding
their variation

over time. For example, the average inspection fail rate of a factory may be
calculated for
various consecutive 7-day windows, and the change in the average fail rate may
be measured
over the 7-day windows.
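The second-order aggregation in this example can be sketched as follows, with invented pass/fail outcomes standing in for real inspection results:

```python
# Hypothetical pass/fail outcomes (1 = failed) for three consecutive
# 7-day windows at a single factory.
windows = [[0, 1, 0], [1, 1, 0], [1, 1, 1]]

def fail_rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# First-order aggregation: the average fail rate within each window.
rates = [fail_rate(w) for w in windows]

# Second-order aggregation: the change in fail rate between windows.
deltas = [later - earlier for earlier, later in zip(rates, rates[1:])]
```

Here `deltas` captures how the fail rate trends over time, which can itself be used as a feature.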
[0029] In embodiments of the present disclosure, inspection data include
information
correlated with the results of the inspection (e.g., whether the inspection
was passed or not).
Examples of suitable data for predicting the outcome of an inspection include:
data obtained
from previous inspections at the same factory at which the future inspection
is to take place,
data obtained from inspections at other factories, data obtained from
inspections at other
factories with similar products or product lines to the subjects of the future
inspections, data
obtained from the factory across multiple inspections, attributes of future
inspection bookings
(e.g., the geographic location, time, entity performing the inspection, and/or
the type of
inspection), data related to the business operations of the factory, data
related to product
quality of the factory, general information regarding the factory, data
related to the
sustainability of the factory or other similar factories, and/or data related
to the performance
of the factory or other similar factories. The data may comprise the results
of past inspections
(e.g., whether the inspection was passed or not). The data may comprise
information obtained
from customer reviews on products or product lines similar to those produced
by the factory,
and/or customer reviews on products or product lines originating at the
factory. It will be
appreciated that for some metrics, a factory may be divided into various
divisions within the
factory, with different metrics obtained for each division.
[0030] Examples of data related to future inspection include: the number of
orders placed at
the factory, the quantity of the orders, the quality of the orders, the
monetary value of the
orders, general information regarding the orders, the description of each
product at the

factory (e.g., the product's stock keeping unit (SKU), size, style, color,
quantity, and
packaging method), the financial performance of the factory, the number of
inspected items at
the factory, the number of inspected items at the factory during inspections
of procedures such
as workmanship, packaging, and measurement, information regarding the
acceptable quality
limit (AQL) of processes at the factory (e.g., the sampling number used to
test quality), the
inspection results of past inspections at the factory, the inspection results
of past inspections
for the product/product line, the inspection results at other factories with
similar products, the
inspection results of past inspections at business partners of the factory,
the values for various
metrics collected over the course of inspections, the geographic location of
the factory, the
factory's size, the factory's working conditions and hours of operation, the
time and date of
the inspection, the inspection agency, the individual agents performing the
inspection, and
aggregations and statistical metrics of the aforementioned data.
[0031] As used herein, a product or product line's style refers to a
distinctive appearance of
an item based a corresponding design. A style may have a unique identification
(ID) within a
particular brand, retailer, or factory. Style IDs may be used as an
identifying feature by which
other measurements may be aggregated in order to extract meaningful features
related to
inspection results and risk calculation.
[0032] It will be appreciated that a large number of features may be extracted
by a variety of
methods, such as manual feature extraction, whereby features with a
significant correlation to
the target variable (e.g., the results of the future inspection) are
calculated or extracted from
the obtained data. A feature may be extracted directly from the data, or may
require
processing and/or further calculation to be formatted in such a way that the
desired metric
may be extracted. For example, given the results of various inspections at a
factory over the

last year, one may wish to calculate the percentage of failed inspections over
the time period.
In some embodiments, extracting features results in a feature vector, which
may be
preprocessed by applying dimensionality reduction algorithms (such as
principal component
analysis and linear discriminant analysis) or inputting the feature vector
into a neural network,
thereby reducing the vector's size and improving the performance of the
overall system.
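The manual feature-extraction example given above (percentage of failed inspections over the last year) might look like the following; the records and helper name are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical inspection records for one factory.
inspections = [
    {"date": date(2019, 1, 15), "failed": True},
    {"date": date(2019, 4, 2), "failed": False},
    {"date": date(2018, 6, 1), "failed": True},  # falls outside the window
]

def failed_percentage(records, as_of, days=365):
    # Manual feature extraction: percentage of inspections within the
    # time window that failed.
    start = as_of - timedelta(days=days)
    recent = [r for r in records if start <= r["date"] <= as_of]
    if not recent:
        return 0.0
    return 100.0 * sum(r["failed"] for r in recent) / len(recent)

feature = failed_percentage(inspections, as_of=date(2019, 7, 31))
```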
[0033] In some embodiments, the trained classifier is a random decision
forest. However, it
will be appreciated that a variety of other classifiers are suitable for use
according to the
present disclosure, including linear classifiers, support vector machines
(SVM), gradient
boosting classifiers, or neural networks such as convolutional neural networks
(CNN) or
recurrent neural networks (RNN).
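A random decision forest of the kind mentioned above could be trained as in the following sketch. The use of scikit-learn is an assumption (the disclosure names no library), and the feature vectors and labels are invented:

```python
from sklearn.ensemble import RandomForestClassifier

# Each row is a feature vector (e.g., historical fail rate, number of
# past inspections); each label is 1 if that inspection failed.
X = [[0.9, 12], [0.1, 3], [0.8, 10], [0.2, 4]]
y = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Estimated probability of failure for a new inspection's features.
p_fail = clf.predict_proba([[0.85, 11]])[0][1]
```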
[0034] Suitable artificial neural networks include but are not limited to a
feedforward neural
network, a radial basis function network, a self-organizing map, learning
vector quantization,
a recurrent neural network, a Hopfield network, a Boltzmann machine, an echo
state network,
long short term memory, a bi-directional recurrent neural network, a
hierarchical recurrent
neural network, a stochastic neural network, a modular neural network, an
associative neural
network, a deep neural network, a deep belief network, a convolutional neural
network, a
convolutional deep belief network, a large memory storage and retrieval neural
network, a
deep Boltzmann machine, a deep stacking network, a tensor deep stacking
network, a spike
and slab restricted Boltzmann machine, a compound hierarchical-deep model, a
deep coding
network, a multilayer kernel machine, or a deep Q-network.
[0035] In some embodiments, an estimated risk score comprises a value in a
specified range,
e.g., a value in the range [0,100]. For example, a future inspection at a
factory with perfect
performance that has never failed an inspection may achieve a score of 0,
indicating that it is

almost certain to pass, while a future inspection at a factory with poor
performance that has
failed every inspection may achieve a score of 100, indicating that it will
almost certainly fail.
In some embodiments, the estimated risk score may be compared against a
threshold value,
and a binary value may be generated, indicating whether the inspection is
likely to pass or not
(e.g., 0 if the score is below the threshold, and 1 otherwise). The threshold
may be chosen
heuristically, or may be adaptively calculated during the training of the
machine learning
model. In some embodiments, determining the risk score is transformed into a
binary
classification problem.
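The thresholding described above can be sketched as follows. This is a minimal illustration; the default threshold of 50 is an assumption for demonstration, not a value specified in the disclosure.

```python
def risk_to_binary(score, threshold=50.0):
    """Convert an estimated risk score in [0, 100] to a binary value:
    0 if the score is below the threshold (likely to pass) and 1
    otherwise (likely to fail), following the convention above.

    The default threshold of 50 is illustrative only; as noted in the
    text, it may be chosen heuristically or learned during training.
    """
    return 0 if score < threshold else 1
```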
[0036] The performance of machine learning models according to embodiments of
the present
disclosure may be tested against new data, and the machine learning model may
be updated in
order to improve its performance. In some embodiments, updating the machine
learning
model comprises modifying hyperparameters of the model. In some embodiments,
updating
the machine learning model comprises using a different machine learning method
than the one
currently used in the model, and modifying the hyperparameters of the
different machine
learning method in order to achieve a desired performance.
[0037] In embodiments of the present disclosure, historical inspection data
from a number of
inspections during a given time window are used in estimating the risk of
failing a particular
inspection. It will be appreciated that a variety of time windows may be used,
e.g., three
months, six months, nine months, or a year. In some embodiments, the
estimation may be
updated at a regular frequency, e.g., every week, every two weeks, or every
month. Obtaining
updated risk estimations of inspections will assist retailers and
manufacturers in reducing their
potential risk when anticipating an inspection.
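Restricting historical data to a trailing time window can be sketched as below. The `(date, passed)` record layout and the six-month default are illustrative assumptions, not structures defined in the disclosure.

```python
from datetime import timedelta

def inspections_in_window(inspections, as_of, window_days=180):
    """Keep only inspections whose date falls in the trailing window
    (here roughly six months) ending at `as_of`.

    `inspections` is assumed to be a list of (date, passed) pairs;
    the record layout and window length are illustrative.
    """
    start = as_of - timedelta(days=window_days)
    return [(d, passed) for d, passed in inspections if start <= d <= as_of]
```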

[0038] In some embodiments, the predicted risk results are converted to a
binary output
indicating whether the inspection is likely to pass or fail.
[0039] In embodiments of the present disclosure, a machine learning model
comprising a
classifier is trained by assembling a training dataset comprising historical
data of inspections
during a variety of time windows, and corresponding performance results for
these
inspections over their respective time windows. In some embodiments, the
inspection data
further comprise data related to the factories in which the inspection took
place, such as data
related to previous inspections at the factory, the performance of the
factory, or general
information related to the factory, as discussed above. In some embodiments,
inspections are
assigned a label indicating whether they are likely to pass or to fail. An
initial training dataset
is generated from the collected data, to which machine learning techniques may
be applied to
generate an optimal model for predicting inspection risk. It will be
appreciated that inspection
risk prediction may be transformed into a binary classification problem, where
a given
inspection is classified as being likely to either pass or fail.
[0040] In some embodiments, training the machine learning model comprises
extracting
features from the initial training dataset. In some embodiments, the selected
features to be
extracted have a high correlation to a target variable. In some embodiments,
the number of
features is reduced in order to reduce the calculation cost in training and
deploying the risk
estimation model. In some embodiments, a number of machine learning methods
and
classification approaches are tested on the training dataset, and a model with
the most desired
performance is chosen for deployment in the risk estimation model. It will be
appreciated that
a variety of machine learning algorithms may be used for risk assessment,
including logistic
regression models, random forest, support vector machines (SVM), deep neural
networks, or
boosting methods (e.g., gradient boosting, Catboost). The hyperparameters of
each model
may be learned to achieve a desired performance. For example, in some
embodiments, the
Institute of Data Science of Technologies (iDST) framework may be used for
hyperparameter
tuning. It will be appreciated that the performance of a machine learning
model may be
measured by different metrics. In some embodiments, the metrics used to
measure the
machine learning model's performance comprise accuracy, precision, recall,
AUC, and/or F1
score.
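Selecting features by their correlation with the target variable, as described above, can be sketched in the following way. The 0.3 cutoff is an illustrative assumption; the disclosure does not specify a particular threshold.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features, target, min_abs_corr=0.3):
    """Keep only features whose absolute correlation with the target
    label meets a cutoff. `features` maps a feature name to its values
    across inspections; the 0.3 cutoff is an illustrative assumption.
    """
    return [name for name, values in features.items()
            if abs(pearson(values, target)) >= min_abs_corr]
```

Dropping weakly correlated features reduces the calculation cost of training and deployment, as the text notes.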
[0041] In embodiments of the present disclosure, the hyperparameters for
various machine
learning risk estimation models are learned, and the performance of each model
is measured.
In some embodiments, the metrics used to measure the machine learning model's
performance comprise accuracy, precision, recall, AUC, and/or F1 score. In
some
embodiments, the initial dataset is divided into three subsets: a training
dataset, a validation
dataset, and a testing dataset.
[0042] In some embodiments, 60% of the initial dataset is used for the
training dataset, 20%
is used for the validation dataset, and the remaining 20% is used for the
testing dataset. In
some embodiments, cross validation techniques are used to estimate the
performance of each
risk estimation model. Performance results may be validated by subjecting the
selected risk
prediction model to new inspection data.
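The 60/20/20 division described above can be sketched as a simple shuffled split. The fixed seed is an illustrative choice for reproducibility.

```python
import random

def split_dataset(examples, seed=0):
    """Shuffle and split examples into 60% training, 20% validation,
    and 20% testing subsets, matching the proportions given above."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```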
[0043] It will be appreciated that predicting the risk of failing an
inspection is useful in
achieving dynamic, risk-based quality control. For example, given the risk of
a particular
inspection, a specific inspection workflow or template may be automatically
generated based
on the requirements of either the factory or a business partner of the
factory. The calculated
risk may be applied to the critical path or time and action plan of a style or
purchase order in
order to modify the number of inspections required. Based on the calculated
level of risk of a
particular inspection, an inspection team may assess whether they should waive
or confirm an
inspection booking. Estimated risk may also be leveraged to make
determinations as to the
nature of inspections. For example, an inspection with a high risk of failure
might be performed by an internal, independent team, while a low-risk
inspection might be performed by the personnel responsible for the factory's
performance themselves.
[0044] Referring now to Fig. 1, a schematic view of an exemplary system for
inspection risk
estimation according to embodiments of the present disclosure is shown.
Inspection booking
ID 102 is provided, and relevant features 104 are extracted from inspection
database 112
comprising historical inspection data. The extracted features may be
represented by a feature
vector. The feature vector may be pre-processed prior to being input into
inspection risk
prediction server 106. An estimated prediction result 108 is obtained. In some
embodiments,
pre-processing the feature vector comprises applying a dimensionality
reduction technique to
the vector, such as principal component analysis or linear discriminant
analysis. The
estimated prediction result may comprise a binary value indicating whether the
inspection is
likely to pass or fail. In some embodiments, the estimated prediction result
comprises a value
in a specified range, e.g., a value in the range [0,100]. Relevant features
104 may be obtained
from a factory, from inspection database 112, or from any combination of
sources. The
relevant features may comprise data related to inspections at a factory in
which the future
inspection is to take place, data related to the performance of the factory,
data related to the
factory in general, data relating to a product being inspected, or data
related to the inspection
booking, as discussed above. The relevant features may also be specific to the
type of product
the inspection will be conducted for, or the specific product line of the
product. In some
embodiments, estimated prediction result 108 is sent to mobile or web
application 110, where
it may be used for further analysis or decision making. The mobile application
may be
implemented on a smartphone, tablet, or other mobile device, and may run on a
variety of
operating systems, e.g., iOS, Android, or Windows. In various embodiments,
estimated
prediction result 108 is sent to mobile or web application 110 via a wide area
network.
[0045] Referring now to Fig. 2, a process for inspection risk estimation
according to
embodiments of the present disclosure is shown. Inspection booking 201 is
input into
inspection risk prediction system 202 to obtain predicted inspection result
206. In some
embodiments, inspection risk prediction system 202 employs a machine learning
model to
estimate the risk of failure associated with an inspection. In some
embodiments, inspection
risk prediction system 202 is deployed on a server. In some embodiments, the
server is a
remote server. In some embodiments, inspection risk estimation process 200
comprises
performing data processing step 203 to collect and process data related to
inspection booking
201. Data processing may comprise various forms of aggregating the data,
obtaining
statistical metrics of the data, and formatting the data in such a way that
features can be
extracted from them. In some embodiments, the data are obtained from a variety
of sources.
In some embodiments, process 200 comprises performing feature extraction step
204 on the
collected data to extract various features. In some embodiments, feature
extraction step 204 is
performed on data that has been processed at step 203. In some embodiments, a
feature
vector is output. In some embodiments, the features extracted at 204 are input
into a trained
classifier at 205. In some embodiments, the classifier comprises a trained
machine learning
model. In some embodiments, the classifier outputs prediction results 206. In
some
embodiments, steps 203, 204, and 205 are performed by inspection risk
prediction system
202. The steps of process 200 may be performed locally to the inspection site,
may be
performed by a remote server, e.g., a cloud server, or may be shared among a
local
computation device and a remote server. In some embodiments, prediction
results 206
comprise a binary value indicating whether or not the inspection is likely to
fail.
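The three-step flow of process 200 can be sketched as a simple composition. The three callables below are stand-ins for the components described in the text, not named APIs from the disclosure.

```python
def predict_inspection(booking, process_data, extract_features, classifier):
    """Data processing, then feature extraction, then classification,
    in the order shown in Fig. 2. The three callables are stand-ins
    for the components described in the text."""
    processed = process_data(booking)       # step 203: collect and process data
    features = extract_features(processed)  # step 204: build a feature vector
    return classifier(features)             # step 205: trained classifier -> result 206
```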
[0046] Referring now to Fig. 3, a process for training an inspection risk
estimation system
according to embodiments of the present disclosure is shown. The steps of
process 300 may
be performed to train an inspection risk estimation model. In some
embodiments, the model
is deployed on a prediction server. The steps of process 300 may be performed
locally to the
factory site, may be performed by a remote server, e.g., a cloud server, or
may be shared
among a local computation device and a remote server. At 302, an initial
training dataset is
created. In some embodiments, the training dataset may comprise data of a
large number of
past inspections from a number of factories, as well as the results of the
inspections (e.g., pass
or fail). The dataset may comprise data related to the factory at which the
inspection took
place and/or the product or product line for which the inspection took place,
and may
comprise various values corresponding to various measurements made over the
course of
previous inspections. In some embodiments, inspection data and corresponding
inspection
results are timestamped. In some embodiments, the data obtained may be
aggregated over a
given length of time or number of inspections. In some embodiments, the data
obtained is
collected only from inspections during a given time window. In some
embodiments, a list of
factories and inspection results may be obtained, with inspection results as
labels for the
inspection data.
[0047] At 304, the inspection risk prediction is formulated as a binary
classification problem
wherein a given inspection is classified as either predicted to pass or
predicted to fail. In
some embodiments, a label of 1 is assigned to an inspection if it is predicted
to pass, and a
label of 0 is assigned if the inspection is predicted to fail.
[0048] Useful features are then extracted from the initial training dataset.
The extracted
features may correspond to different time windows, e.g., three months, six
months, nine
months, or a year. The importance of each feature in estimating a final risk
result for an
inspection is calculated. In some embodiments, the importance of each feature
is calculated
by measuring the feature's correlation with the target label (e.g., the
inspection result). At
306, a number of machine learning models are trained on the training dataset,
and the
performance of each model is evaluated. It will be appreciated that acceptable
machine
learning models include a Catboost classifier, a neural network (e.g., a
neural network with 4
fully-connected hidden layers and a ReLU activation function), a decision
tree, extreme
boosting machines, random forest classifier, SVM, and logistic regression, in
addition to those
described above. The hyperparameters of each model may be tuned so as to
optimize the
performance of the model. In some embodiments, the metrics used to measure the
machine
learning model's performance comprise accuracy, precision, recall, AUC, or F1
score. The
most useful features for performing the desired estimation are selected. At
308, the
performance of the machine learning models are compared. The model with the
most desired
performance is chosen at 310. In some embodiments, a final list of features
used in the
prediction calculation is outputted. At 312, the chosen model is deployed onto
a prediction
server.
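Comparing candidate models on a held-out metric and choosing the best, as in steps 306 through 310, can be sketched as below. The model names are illustrative; any metric named in the text (accuracy, precision, recall, AUC, F1) could be substituted.

```python
def f1_score(y_true, y_pred):
    """F1 score for binary labels (1 = pass, 0 = fail), one of the
    metrics named above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_model(predictions, y_true, metric=f1_score):
    """Select the model whose held-out predictions score highest on
    the chosen metric. `predictions` maps a model name to its
    predictions; the names used are illustrative."""
    return max(predictions, key=lambda name: metric(y_true, predictions[name]))
```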
[0049] Referring now to Fig. 4, a process for updating an inspection risk
estimation system
according to embodiments of the present disclosure is shown. In some
embodiments of
process 400, an existing inspection risk prediction model is updated. In some
embodiments,
updating the prediction model comprises inputting new data and modifying the
parameters of
the learning system accordingly to improve the performance of the system. In
some
embodiments, a new machine learning model may be chosen to perform the
estimation. The
inspection risk prediction model may be updated at regular intervals, e.g.,
monthly,
bimonthly, or quarterly, or may be updated when a certain amount of new data
are
accumulated. It will be appreciated that an updated risk estimation system
provides for more
accurate risk estimation compared to existing methods.
[0050] In some embodiments, new data and inspection results 420 for a number
of
inspections are collected from inspection management platform 410 and used to
generate a
new dataset with labels corresponding to the data for each inspection.
Inspection
management platform 410 may comprise a database containing inspection data and
inspection
results for a number of past inspections. New data and inspection results 420
may comprise
customer feedback regarding prior predictions, and may include ground truth
risk scores
comprising indications of the accuracy of prior predictions, such as which
predictions made
by the prediction model were incorrect, as well as corrected results for the
predictions. It will
be appreciated that the new dataset may be structured in a similar way to the
initial dataset
described above. In some embodiments, the new dataset is combined with an
existing
training dataset 430 to create a new training dataset 440. In some
embodiments, the
performance of the latest version of the trained risk prediction model 499,
comprising
inspection risk predictor 450, is measured on the new training dataset. In
some embodiments,
if the performance of the latest version of the trained risk prediction model
499 and predictor
450 is under a certain threshold, feature re-engineering step 460 and/or
applying new machine
learning models 480 may be performed at 470 to retrain the prediction model.
The threshold
may be chosen heuristically, or may be adaptively calculated during training.
[0051] It will be appreciated that the methods of re-training the prediction
model at 470 may
be similar to those used in training the inspection risk estimation system, as
described above.
The process of re-training the prediction model may be repeated a number of
times until the
performance of the model on the new training dataset reaches an acceptable
threshold. In
some embodiments, the latest version of the trained risk prediction model 499
is updated at
490 with the new model trained at 470. The updated risk prediction model may
then be
deployed on prediction server 495. Existing training dataset 430 may also be
updated to
reflect the newly obtained data.
[0052] Referring now to Figs. 5-7, various processes for training inspection
risk estimation
systems according to embodiments of the present disclosure are shown. In
various
embodiments of the present disclosure, generating a trained risk estimation
system comprises
four primary steps: data collection, feature extraction, model training, and
risk prediction. In
some embodiments, data collection comprises creating an initial training
dataset using the
methods described above. In some embodiments, feature extraction comprises
extracting a
number of useful features from the initial training dataset. The features
extracted may be a
subset of a larger number of features that may be extracted from the initial
training dataset. In
some embodiments, the importance of each feature to the risk prediction
calculation is
measured. In some embodiments, the features with the least relevance to the
prediction
calculation are not used in the risk prediction model. In some embodiments, a
fixed number
of features are extracted. In some embodiments, determining the relevance of a
feature to the
prediction calculation comprises measuring the correlation of the feature with
the risk
prediction results. In some embodiments, a dimensionality reduction technique
(e.g.,
principal component analysis or linear discriminant analysis) may be applied
to the extracted
features. In some embodiments, the feature extraction step comprises manual
feature
extraction. Model training comprises measuring the performance of a number of
machine
learning models on the extracted features. The model with the most desired
performance may
be selected to perform risk prediction.
[0053] Referring now to Fig. 5, a process for training an inspection risk
estimation system
according to embodiments of the present disclosure is shown. In some
embodiments, manual
feature extraction 502 is performed on an initial training dataset 501
comprising data related
to an inspection booking. Features may be extracted based on inspection data
during a
specific time window (e.g., one year). In some embodiments, a feature vector
corresponding
to each inspection's data is generated from the feature extraction step. In
some
embodiments, a label is assigned to each feature vector. In some embodiments,
the labels are
obtained from the initial training dataset 501. In some embodiments, the label
is a binary
value indicating whether the inspection passed or failed. In some embodiments,
the risk
estimation of an inspection is transformed into a binary classification
problem, wherein an
inspection can be classified as passing or failing. Various machine learning
models (e.g.,
support vector machine, decision tree, random forest, or neural networks) and
boosting
methods (e.g., Catboost or XGBoost) may be tested at 503 on the initial
training dataset.
[0054] In training the various machine learning models and boosting methods,
the initial
training dataset may be divided into a training dataset and a testing dataset.
For example,
80% of the initial training dataset may be used to create a training dataset,
and the remaining
20% may be used to form a testing dataset. In some embodiments, the initial
training dataset
may be divided into a training dataset, a testing dataset, and a validation
dataset. In some
embodiments, the hyper-parameters of the machine learning models and boosting
methods are
tuned to achieve the most desired performance. The model with the most desired
performance may then be selected to provide risk estimation on input
inspection data. In
some embodiments, the selected model is deployed onto a prediction server to
provide
future risk predictions.
[0055] In some embodiments of the present disclosure, a feature vector is
calculated from
inspection data. The feature vector is input into a risk prediction model and
a predicted
failure probability is outputted. The probability may be compared with a given
threshold to
determine whether the inspection should be classified as passing or not. In
some
embodiments, an inspection is considered likely to pass if the predicted
probability is greater
than or equal to the threshold. In some embodiments, a risk score is obtained
based on the
calculated probability. In some embodiments, the risk score comprises a value
in a
predetermined range, e.g., [0, 100]. In some embodiments, testing the risk
prediction model
comprises comparing the predicted inspection results with known data.
[0056] In some embodiments, a risk score R is obtained based on the calculated
probability p
using the following procedure:
[0057] A range [A, B] defining the upper and lower bounds of the risk score is
chosen. For
example, one may consider the risk score R to be within the range [0, 100],
where R = 0
represents a lowest possible risk of an inspection (e.g., the inspection is
almost certain to
pass), and R = 100 represents a highest possible risk of an inspection (e.g.,
the inspection is
almost certain to fail). Given that the predicted probability p is within the
unit interval [0, 1],
one can determine a mapping F to assign a predicted probability to a
corresponding risk score
R:
F: [0, 1] → [A, B]
Equation 1
[0058] For a given p,
F(p) = R
Equation 2
[0059] F is chosen such that F(0) = A and F(1) = B. For example, a linear
mapping may be
used:
F(p) = A × (1 − p) + B × p
Equation 3
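A minimal sketch of such a linear mapping, satisfying F(0) = A and F(1) = B with the example range [0, 100] as defaults:

```python
def risk_score(p, a=0.0, b=100.0):
    """Linear mapping F from a predicted probability p in [0, 1] to a
    risk score in [A, B], chosen so that F(0) = A and F(1) = B."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return a * (1 - p) + b * p
```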
[0060] Referring now to Fig. 6, a process for training an inspection risk
estimation system
according to embodiments of the present disclosure is shown. In some
embodiments, features
are obtained from inspection data 601 using manual feature extraction 602. It
will be
appreciated that feature extraction may result in a large number of extracted
features for each
inspection, and thus, large feature vectors. The number of features extracted
may number in
the hundreds. Reducing the dimensionality of the feature vectors may result in
more efficient
training, deployment, and operation of the prediction model. In some
embodiments, the
dimensionality of a feature vector is reduced at 603 by calculating the
correlation of each
feature to the target variable, and only keeping those features with high
correlation to the
target variable. In some embodiments, the dimensionality of a feature vector
is reduced at
603 by applying a dimensionality reduction algorithm to the vector, such as
principal
component analysis (PCA) or linear discriminant analysis (LDA). In some
embodiments, the
features computed in the resulting smaller-dimension vectors for a number of
inspections are
input into various machine learning and/or gradient boosting models at 604,
and the model
with the most desired performance is selected, as described above.
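Dimensionality reduction along the lines of PCA can be sketched in pure Python using power iteration to find the leading principal component. This is a minimal stand-in for a library implementation, shown only to illustrate the projection step at 603.

```python
def top_component(vectors, iters=100):
    """Leading principal component of the sample covariance via power
    iteration; a minimal pure-Python stand-in for library PCA."""
    n, d = len(vectors), len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(d)]
    centered = [[v[j] - means[j] for j in range(d)] for v in vectors]
    # Sample covariance matrix of the centered data.
    cov = [[sum(row[a] * row[b] for row in centered) / n
            for b in range(d)] for a in range(d)]
    w = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * w[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w, means

def project(vector, w, means):
    """Reduce a feature vector to one dimension along the component."""
    return sum((vector[j] - means[j]) * w[j] for j in range(len(w)))
```

In practice a library routine would retain several components rather than one; the single-component case is shown for brevity.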
[0061] Referring now to Fig. 7, a process for training an inspection risk
estimation system
according to embodiments of the present disclosure is shown. In some
embodiments, features
are obtained from inspection data 701 using manual feature extraction 702. In
some
embodiments, the feature extraction step results in a feature vector. In some
embodiments,
the feature vector is input into a neural network at 703. In some embodiments,
the neural
network comprises a deep neural network. In some embodiments, the neural
network
comprises an input layer, a number of fully-connected hidden layers, and an
output layer with
a predetermined activation function. In some embodiments, the activation
function comprises
a ReLU or sigmoid activation function, although it will be appreciated that a
variety of
activation functions may be suitable. The output of the neural network may be
considered as
a new feature vector, and may be input into various machine learning models at
704 using
similar steps to those described above. In some embodiments, the new feature
vector is of
smaller dimensionality than the input feature vector.
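The forward pass through fully-connected ReLU layers at 703 can be sketched as follows. The weights would come from training; any values used with this sketch are illustrative.

```python
def relu(x):
    """Element-wise rectified linear activation."""
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    """One fully-connected layer: y = Wx + b."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def nn_features(x, layers):
    """Pass a feature vector through fully-connected hidden layers
    with ReLU activations and treat the final activations as a new,
    smaller feature vector, as in the Fig. 7 flow."""
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x
```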
[0062] Table 1 lists a number of features that may be extracted from
inspection data using the
methods described above. In various exemplary embodiments, gradient boosting
on decision
trees is applied, for example using catboost. These features may have high
correlation with
the target variable. Note that features marked with an asterisk (*) may be
computed right
after an inspection booking is confirmed and becomes an assignment.
Total available product quantity in the inspection booking
The standard deviation of available quantities among production items
The weekday of the expected inspection date (e.g., Monday, Tuesday...)
The number of unique measurement sizes among all production items of the
inspection (*)
Factory identification
The number of unique measurement units in the inspection (*)
The ratio between total available quantity and total order quantity in the
inspection
Brand Office Identification
Total order quantity in the inspection
The average fail rate of the factory under consideration during the last three
months
Total available quantity for solid packages
Style Difficulty Coverage: the number of styles present in the current
inspection booking which
can be frequently found in failed inspections (e.g., for all similar
inspections completed in the
last 6 months, the number of styles that are found in at least 50% of the
failed inspections that
are also present in the current inspection booking are counted)
The standard deviation of the number of defects in packaging procedure among
all inspections
managed by the brand office under consideration during the last 3 months (*)
The standard deviation of the total defects among all inspections managed by
the brand office
under consideration during the last 3 months
The difference, in number of days, between the date that an inspection booking
was initiated and
the date that it was confirmed to become an assignment by a brand or retailer.
In some
embodiments, it is the difference, in number of days, between the date that an
inspection
booking was initiated and the date that a brand or retailer contacted the
factory to confirm the
final inspection date. (*)
The average number of defects in packaging procedure among all inspections
managed by the
brand office under consideration during the last 3 months
Table 1
[0063] It will be appreciated that a variety of additional features and
statistical measures may
be used in accordance with the present disclosure.
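Two of the Table 1 features can be sketched as below. The dict-based record layout (`factory`, `date`, `passed` keys) is an illustrative assumption about how inspection records might be stored, not a structure defined in the disclosure.

```python
from datetime import timedelta

def factory_fail_rate(inspections, factory_id, as_of, window_days=90):
    """Average fail rate of the factory under consideration during
    roughly the last three months, one of the Table 1 features.
    `inspections` is assumed to be a list of dicts with 'factory',
    'date', and 'passed' keys; the record layout is illustrative."""
    start = as_of - timedelta(days=window_days)
    relevant = [i for i in inspections
                if i["factory"] == factory_id and start <= i["date"] <= as_of]
    if not relevant:
        return 0.0
    return sum(1 for i in relevant if not i["passed"]) / len(relevant)

def availability_ratio(total_available, total_order):
    """Ratio between total available quantity and total order quantity
    in the inspection."""
    return total_available / total_order if total_order else 0.0
```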
[0064] Referring now to Fig. 8, a schematic of an example of a computing node
is shown.
Computing node 10 is only one example of a suitable computing node and is not
intended to
suggest any limitation as to the scope of use or functionality of embodiments
described
herein. Regardless, computing node 10 is capable of being implemented and/or
performing
any of the functionality set forth hereinabove.
[0065] In computing node 10 there is a computer system/server 12, which is
operational with
numerous other general purpose or special purpose computing system
environments or
configurations. Examples of well-known computing systems, environments, and/or

configurations that may be suitable for use with computer system/server 12
include, but are
not limited to, personal computer systems, server computer systems, thin
clients, thick clients,
handheld or laptop devices, multiprocessor systems, microprocessor-based
systems, set top
boxes, programmable consumer electronics, network PCs, minicomputer systems,
mainframe
computer systems, and distributed cloud computing environments that include
any of the
above systems or devices, and the like.
[0066] Computer system/server 12 may be described in the general context of
computer
system-executable instructions, such as program modules, being executed by a
computer
system. Generally, program modules may include routines, programs, objects,
components,
logic, data structures, and so on that perform particular tasks or implement
particular abstract
data types. Computer system/server 12 may be practiced in distributed cloud
computing
environments where tasks are performed by remote processing devices that are
linked through
a communications network. In a distributed cloud computing environment,
program modules
may be located in both local and remote computer system storage media
including memory
storage devices.
[0067] As shown in Fig. 8, computer system/server 12 in computing node 10 is
shown in the
form of a general-purpose computing device. The components of computer
system/server 12
may include, but are not limited to, one or more processors or processing
units 16, a system
memory 28, and a bus 18 that couples various system components including
system memory
28 to processor 16.
[0068] Bus 18 represents one or more of any of several types of bus
structures, including a
memory bus or memory controller, a peripheral bus, an accelerated graphics
port, and a
processor or local bus using any of a variety of bus architectures. By way of
example, and not
limitation, such architectures include Industry Standard Architecture (ISA)
bus, Micro
Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics
Standards
Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus,
Peripheral
Component Interconnect Express (PCIe), and Advanced Microcontroller Bus
Architecture
(AMBA).
[0069] Computer system/server 12 typically includes a variety of computer
system readable
media. Such media may be any available media that is accessible by computer
system/server
12, and it includes both volatile and non-volatile media, removable and non-
removable media.
[0070] System memory 28 can include computer system readable media in the form
of
volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.

Computer system/server 12 may further include other removable/non-removable,
volatile/non-volatile computer system storage media. By way of example only,
storage
system 34 can be provided for reading from and writing to a non-removable, non-
volatile
magnetic media (not shown and typically called a "hard drive"). Although not
shown, a
magnetic disk drive for reading from and writing to a removable, non-volatile
magnetic disk
(e.g., a "floppy disk"), and an optical disk drive for reading from or writing
to a removable,
non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can
be
provided. In such instances, each can be connected to bus 18 by one or more
data media
interfaces. As will be further depicted and described below, memory 28 may
include at least
one program product having a set (e.g., at least one) of program modules that
are configured
to carry out the functions of embodiments of the disclosure.
[0071] Program/utility 40, having a set (at least one) of program modules 42,
may be stored in
memory 28 by way of example, and not limitation, as well as an operating
system, one or
more application programs, other program modules, and program data. Each of
the operating
system, one or more application programs, other program modules, and program
data or some
combination thereof, may include an implementation of a networking
environment. Program
modules 42 generally carry out the functions and/or methodologies of
embodiments as
described herein.
[0072] Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
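As an illustrative aside (not part of the specification), the communication path described above — one computing device reaching another over a network via its network adapter — can be sketched with standard sockets. Here the loopback interface stands in for a LAN/WAN:

```python
import socket
import threading

# One "computing device": a server listening on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(b"hello from server")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# A second "computing device": a client connecting over the network.
client = socket.create_connection(("127.0.0.1", port))
chunks = []
while True:
    data = client.recv(1024)
    if not data:          # server closed the connection
        break
    chunks.append(data)
message = b"".join(chunks)
client.close()
t.join()
server.close()
print(message.decode())  # -> hello from server
```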
[0073] The present disclosure may be embodied as a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
[0074] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0075] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0076] Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[0077] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0078] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0079] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0080] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
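As an illustrative aside (not part of the specification), the point above — that two flowchart "blocks" shown in succession may in fact execute substantially concurrently — can be sketched with a thread pool. The two block functions here are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Two hypothetical flowchart "blocks" drawn in succession.
def block_a():
    return "A done"

def block_b():
    return "B done"

# Submit both blocks at once: they may run substantially concurrently
# rather than in the order drawn in the figure.
with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(block_a)
    future_b = pool.submit(block_b)
    # Collecting results in a fixed order keeps the output deterministic
    # even though execution order is not.
    results = [future_a.result(), future_b.result()]

print(results)  # -> ['A done', 'B done']
```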
[0081] The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Administrative Status
Title Date
Forecasted Issue Date Unavailable
(22) Filed 2019-07-31
Examination Requested 2019-07-31
(41) Open to Public Inspection 2019-10-11

Abandonment History

Abandonment Date Reason Reinstatement Date
2020-08-31 R86(2) - Failure to Respond 2021-08-11

Maintenance Fee

Last Payment of $100.00 was received on 2023-07-21


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2024-07-31 $100.00
Next Payment if standard fee 2024-07-31 $277.00

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Advance an application for a patent out of its routine order $500.00 2019-07-31
Request for Examination $800.00 2019-07-31
Application Fee $400.00 2019-07-31
Extension of Time 2020-03-06 $200.00 2020-03-06
Maintenance Fee - Application - New Act 2 2021-08-02 $100.00 2021-07-09
Reinstatement - failure to respond to examiners report 2021-08-31 $204.00 2021-08-11
Maintenance Fee - Application - New Act 3 2022-08-02 $100.00 2022-07-22
Maintenance Fee - Application - New Act 4 2023-07-31 $100.00 2023-07-21
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
INSPECTORIO INC.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Special Order - Applicant Revoked 2020-04-20 2 185
Extension of Time 2020-03-06 1 32
Office Letter 2020-05-27 1 197
Acknowledgement of Extension of Time 2020-05-27 2 209
Special Order - Applicant Revoked 2020-05-27 1 189
Amendment 2023-03-28 24 767
Reinstatement / Amendment 2021-08-11 7 272
Change to the Method of Correspondence 2021-08-11 7 272
Examiner Requisition 2022-02-23 4 207
Amendment 2022-06-07 27 1,008
Claims 2022-06-07 9 391
Examiner Requisition 2022-12-01 3 166
Claims 2023-03-28 8 379
Abstract 2019-07-31 1 12
Description 2019-07-31 31 1,338
Claims 2019-07-31 9 273
Drawings 2019-07-31 8 132
Representative Drawing 2019-09-03 1 9
Cover Page 2019-09-03 2 39
Acknowledgement of Grant of Special Order 2019-10-11 1 49
Examiner Requisition 2019-11-06 3 171
Examiner Requisition 2024-03-28 3 148