Patent 3052846 Summary

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 3052846
(54) English Title: CHARACTER RECOGNITION METHOD, DEVICE, ELECTRONIC DEVICE AND STORAGE MEDIUM
(54) French Title: PROCÉDÉ ET DISPOSITIF DE RECONNAISSANCE DE CARACTÈRES, DISPOSITIF ÉLECTRONIQUE ET SUPPORT DE STOCKAGE
Status: Examination Requested
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06V 30/19 (2022.01)
  • G06V 30/18 (2022.01)
(72) Inventors :
  • YANG, HU (China)
  • ZHANG, BO (China)
  • HAO, XUEWU (China)
  • YANG, KAIMING (China)
(73) Owners :
  • 10353744 CANADA LTD. (Canada)
(71) Applicants :
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2019-08-23
(41) Open to Public Inspection: 2020-02-23
Examination requested: 2022-09-16
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
201810967423.9 China 2018-08-23

Abstracts

English Abstract


Embodiments of the invention provide a character recognition method, a device, an electronic device and a storage medium, which relate to the technical field of image processing. The method comprises: acquiring, based on a unique identifier of a client, a character image corresponding to the unique identifier and a standard result of the character image; extracting features of the character image and generating a sample set based on the features of the character image and the standard result; and pushing the sample set to the client based on the unique identifier, so that the client recognizes a character image to be recognized through the sample set. The technical solution of the embodiments of the invention can avoid the problem of a low recognition rate caused by the different payment environments of different clients.


Claims

Note: Claims are shown in the official language in which they were submitted.


What is claimed is:
1. A character recognition method, comprising:
obtaining, based on a unique identifier of a client, a character image corresponding to the unique identifier and a standard result of the character image;
extracting features of the character image, and generating a sample set based on the features of the character image and the standard result; and
pushing the sample set to the client based on the unique identifier, so that the client identifies a character image to be recognized through the sample set.
2. The character recognition method according to Claim 1, wherein the obtaining a character image corresponding to the unique identifier based on the unique identifier of the client comprises:
obtaining a character recognition rate of the client based on the unique identifier of the client;
determining whether the character recognition rate is less than a predetermined threshold; and
if the character recognition rate is less than the predetermined threshold, obtaining the corresponding character image based on the unique identifier of the client.
3. The character recognition method according to Claim 2, wherein the character recognition method further comprises:
acquiring a recognition result of the character image corresponding to the unique identifier based on the unique identifier of the client; and
determining the character recognition rate of the client based on the recognition result and the standard result.
4. The character recognition method according to Claim 1, wherein extracting features of the character image and generating a sample set based on the features of the character image and the standard result comprises:
extracting a feature of each character in the character image by a feature extraction model;
determining a target character corresponding to each of the characters in the standard result;
using the target character as a label of the feature of the respective character; and
generating the sample set based on the feature of each character in the character image and the label of the feature.
5. The character recognition method according to Claim 1, wherein the character recognition method further comprises:
receiving the character image sent by the client, a recognition result of the character image, and a standard result;
storing the character image in a first storage area; and
storing the recognition result of the character image and the standard result in a second storage area.
6. The character recognition method according to Claim 5, wherein the first storage area is an image storage unit of a target server, and the second storage area is a relational data storage unit of the target server.
7. A character recognition method, comprising:
receiving a sample set pushed by a target server through a unique identifier of a client, where the sample set is a feature set generated based on a character image sent by the client and a standard result of the character image;
obtaining a character image to be recognized, and extracting a feature vector of each character in the character image to be recognized;
matching the feature vectors of the individual characters with the feature vectors in the sample set; and
identifying characters in the character image to be recognized based on the matching result.
8. The character recognition method according to Claim 7, wherein the character recognition method further comprises:
receiving a standard result of the character image to be recognized input by a user; and
sending the character image to be recognized, the recognition result of the character image to be recognized, and the standard result to the target server.
9. A character recognition apparatus, comprising:
an obtaining unit, configured to acquire, according to a unique identifier of a client, a character image corresponding to the unique identifier and a standard result of the character image;
a sample generating unit, configured to extract a feature of the character image, and generate a sample set based on the feature of the character image and the standard result; and
a sample pushing unit, configured to push the sample set to the client based on the unique identifier, so that the client identifies a character image to be recognized through the sample set.
10. A character recognition apparatus, comprising:
a sample receiving unit, configured to receive a sample set pushed by a target server through a unique identifier of a client, where the sample set is a feature set generated based on a character image sent by the client and a standard result of the character image;
a feature extraction unit, configured to acquire a character image to be recognized, and extract a feature vector of each character in the character image to be recognized;
a matching unit, configured to match the feature vector of each character with the feature vectors in the sample set; and
an identifying unit, configured to identify characters in the character image to be recognized based on the matching result.
11. Electronic equipment, comprising:
a processor; and
a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, implementing the character recognition method of any one of Claims 1 to 8.
12. A computer readable storage medium having stored thereon a computer program, the computer program, when executed by a processor, implementing the character recognition method according to any one of Claims 1 to 8.

Description

Note: Descriptions are shown in the official language in which they were submitted.


Character recognition method, device, electronic device and storage medium
Technical Field
[0001] The present invention relates to the field of image processing
technologies, and in
particular, to a character recognition method, a character recognition
apparatus, an electronic
equipment, and a computer readable storage medium.
Background Technology
[0002] With the development of Internet technology, payment methods are also evolving, and traditional payment methods need to be adapted so that people can pay more conveniently.
[0003] Currently, in one technical solution, in order to be compatible with the original payment client, a Windows application program interface is called on the original payment client to capture a part of the payment interface, such as the payment amount area, and OCR (Optical Character Recognition) technology is used to identify the content of the payment amount area. In this technical solution, because the original cash register software, operating system version, display resolution and the like differ between payment clients, it is difficult to achieve a good recognition rate by using the same sample library for all clients.
[0004] It is to be understood that the information disclosed in the Background
section above is
only used to enhance the understanding of the background of the invention, and
thus may include
information that does not constitute prior art known to those of ordinary
skill in the art.
Summary of the Invention
[0005] An object of embodiments of the present invention is to provide a character recognition method, a character recognition apparatus, electronic equipment, and a computer readable storage medium, thereby at least partially overcoming one or more of the problems due to the limitations and defects of the related art.
[0006] According to a first aspect of the present invention, a character recognition method includes: acquiring, based on a unique identifier of a client, a character image corresponding to the unique identifier and a standard result of the character image; extracting features of the character image, and generating a sample set based on the features of the character image and the standard result; and pushing the sample set to the client based on the unique identifier, so that the client recognizes a character image to be recognized through the sample set.
[0007] In some embodiments of the present invention, based on the foregoing solution, the acquiring of the character image corresponding to the unique identifier based on the unique identifier of the client includes: acquiring a character recognition rate of the client based on the unique identifier of the client; determining whether the character recognition rate is less than a predetermined threshold; and, if the character recognition rate is less than the predetermined threshold, acquiring the corresponding character image based on the unique identifier of the client.
[0008] In some embodiments of the present invention, based on the foregoing solution, the character recognition method further includes: acquiring a recognition result of the character image corresponding to the unique identifier based on the unique identifier of the client; and determining the character recognition rate of the client based on the recognition result and the standard result.
[0009] In some embodiments of the present invention, based on the foregoing solution, extracting features of the character image and generating a sample set based on the features of the character image and the standard result includes: extracting a feature of each character in the character image by a feature extraction model; determining a target character corresponding to the respective character in the standard result; using the target character as a label of the feature of the respective character; and generating the sample set based on the feature of each character in the character image and the label of the feature.
[0010] In some embodiments of the present invention, based on the foregoing solution, the character recognition method further includes: receiving the character image sent by the client, the recognition result of the character image, and a standard result; storing the character image in a first storage area; and storing the recognition result of the character image and the standard result in a second storage area.
[0011] In some embodiments of the present invention, based on the foregoing
aspect, the first
storage area is an image storage unit of a target server, and the second
storage area is a relational
data storage unit of the target server.
[0012] According to a second aspect of the embodiments of the present invention, there is provided another character recognition method, comprising: receiving a sample set pushed by a target server through a unique identifier of a client, the sample set being a feature set generated based on a character image sent by the client and a standard result of the character image; acquiring a character image to be recognized, and extracting a feature vector of each character in the character image to be recognized; matching the feature vectors of the individual characters with the feature vectors in the sample set; and identifying the characters in the character image to be recognized based on the matching result.
[0013] In some embodiments of the present invention, based on the foregoing solution, the character recognition method further includes: receiving a standard result of the character image to be recognized input by a user; and sending the character image to be recognized, the recognition result of the character image to be recognized, and the standard result to the target server.
[0014] According to a third aspect of the embodiments of the present invention, there is provided a character recognition apparatus, comprising: an obtaining unit, configured to acquire, based on a unique identifier of a client, a character image corresponding to the unique identifier and a standard result of the character image; a sample generating unit, configured to extract a feature of the character image and generate a sample set based on the feature of the character image and the standard result; and a sample pushing unit, configured to push the sample set to the client based on the unique identifier, so that the client identifies a character image to be recognized through the sample set.
[0015] According to a fourth aspect of the present invention, a character recognition apparatus is provided, including: a sample receiving unit, configured to receive a sample set pushed by a target server through a unique identifier of a client, where the sample set is a feature set generated based on a character image sent by the client and a standard result of the character image; a feature extracting unit, configured to acquire a character image to be recognized and extract a feature vector of each character in the character image to be recognized; a matching unit, configured to match the feature vector of each character with the feature vectors in the sample set; and an identifying unit, configured to identify the characters in the character image to be recognized based on the matching result.
[0016] According to a fifth aspect of the embodiments of the present invention, there is provided electronic equipment comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, implementing the character recognition method described in the first aspect above.
[0017] According to a sixth aspect of the embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program, the computer program, when executed by a processor, implementing the character recognition method described in the first aspect above.
[0018] In the technical solution provided by some embodiments of the present invention, on the one hand, a sample set is generated for each client based on the features of that client's character images and the corresponding standard results; since a dedicated sample set is generated for each client, the problem of a low recognition rate caused by the different payment environments of different clients can be avoided. On the other hand, because there is no need to upgrade the original payment system, the deployment cost can be reduced.
[0019] The above general description and the following detailed description
are intended to be
illustrative and not restrictive.
Brief Description
[0020] The figures herein are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention, and together with the description serve to explain the principles of the invention. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without any creative work. In the drawings:
[0021] Figure 1 shows a flow diagram of a character recognition method in
accordance with
some embodiments of the present invention;
[0022] Figure 2 shows a schematic diagram of an application of a character
recognition method
in accordance with some embodiments of the present invention;
[0023] Figure 3 shows a schematic diagram of setting a screenshot area in
accordance with
some embodiments of the present invention;
[0024] Figure 4 shows a flow diagram of automatically uploading a captured
image, in
accordance with some embodiments of the present invention;
[0025] Figure 5 illustrates a flow diagram of an automated training sample in
accordance with
some embodiments of the present invention;
[0026] Figure 6 shows a schematic diagram of a feature extraction model in
accordance with
some embodiments of the present invention;
[0027] Figure 7 shows a flow chart of a character recognition method according
to further
embodiments of the present invention;
[0028] Figure 8 shows a schematic block diagram of a character recognition
apparatus in
accordance with some embodiments of the present invention;
[0029] Figure 9 shows a schematic block diagram of a character recognition
apparatus
according to further embodiments of the present invention;
[0030] Figure 10 shows a block diagram of a computer system suitable for use
in implementing
an electronic equipment in accordance with an embodiment of the present
invention.
Description of the Preferred Examples
[0031] Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be embodied in a variety of forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey its concepts to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and repeated description thereof will be omitted.
[0032] Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are set forth. However, one skilled in the art will appreciate that the technical solution of the present invention may be practiced without one or more of the specific details, or other methods, components, apparatus, steps, etc. may be employed. In other instances, well-known methods, apparatus, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[0033] The block diagrams shown in the figures are merely functional entities
and do not
necessarily have to correspond to physically separate entities. That is, these
functional entities
may be implemented in software form, or in one or more hardware modules or
integrated circuits,
or in different networks and/or processor devices and/or microcontroller
devices.
[0034] The flowcharts shown in the figures are merely illustrative, and not
all of the contents
and operations/steps are necessarily included, and are not necessarily
performed in the order
described. For example, some operations/steps may be decomposed, and some
operations/steps
may be combined or partially merged, so the actual execution order may vary
depending on the
actual situation.
[0035] Figure 1 shows a flow diagram of a character recognition method in accordance with some embodiments of the present invention. In the exemplary embodiments of the present invention, the character recognition method is described as applied to the server end of a cash register system by way of example; however, it should be understood that the character recognition method can also be applied to the server side of a license plate recognition system or of other suitable character recognition systems, and the present invention is not particularly limited in this respect.
[0036] Referring to Figure 1, in step S110, a character image corresponding to
the unique
identifier and a standard result of the character image are acquired based on
a unique identifier of
the client.
[0037] In an exemplary embodiment, the client may be a cash register system of a shopping mall or a supermarket, such as a cash register computer, and the unique identifier of the client is a serial number assigned to each client on the server side, by which the client can be uniquely identified. The standard result of the character image is the standard result provided by the client: if the client recognizes the character image correctly, the recognition result is taken as the standard result of the character image; if the client recognizes the character image incorrectly, the standard result of the character image is input by the user.
[0038] The server side stores the standard result of the character image and
the character image
sent by the client in the storage area based on the unique identifier of the
client, for example,
storing the standard result of the character image and the character image
sent by the client with
the unique identifier of the client as the primary key.
[0039] If the character image is small, the character image and its standard result may be stored directly in the database; if the character image is large, the character image may be stored on a cloud server, and the storage path of the character image on the cloud server, together with the standard result of the character image, is stored in the database. In an exemplary embodiment, the character image corresponding to the unique identifier and the standard result of the character image are obtained from the server end based on the unique identifier of the client; for example, when the character image is stored on the cloud server, the storage path of the character image and the standard result of the character image may be obtained from the database according to the unique identifier of the client, and the corresponding character image is then acquired from the cloud server based on the acquired storage path.
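As a concrete illustration of the storage scheme in paragraphs [0038]-[0039] (not part of the original disclosure), the following Python sketch assumes a simple relational table keyed by the client's unique identifier and a hypothetical cloud download helper; the table layout and all names are illustrative.

```python
import sqlite3

# Hypothetical schema: one row per stored character image. Small images are
# kept inline as a BLOB; large images are kept only as a path into the cloud
# object store, as described in [0039].
SCHEMA = """
CREATE TABLE IF NOT EXISTS character_images (
    client_id  TEXT,   -- unique identifier of the client
    image_blob BLOB,   -- inline image bytes, NULL when the image is in the cloud
    cloud_path TEXT,   -- storage path on the cloud server, NULL when inline
    standard   TEXT    -- standard result of the character image
)
"""

def fetch_samples(conn, client_id, download_from_cloud):
    """Return (image_bytes, standard_result) pairs for one client.

    `download_from_cloud` is an assumed callable that fetches bytes by path;
    the patent does not name a specific cloud storage API."""
    rows = conn.execute(
        "SELECT image_blob, cloud_path, standard FROM character_images "
        "WHERE client_id = ?",
        (client_id,),
    ).fetchall()
    samples = []
    for blob, path, standard in rows:
        image = blob if blob is not None else download_from_cloud(path)
        samples.append((image, standard))
    return samples
```

A caller would create the table once with conn.execute(SCHEMA) on an open sqlite3 connection and pass in its own cloud download function.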
[0040] In step S120, a feature of the character image is extracted, and a
sample set is generated
based on the feature of the character image and the standard result.
[0041] Take as an example a character image of a payment amount. The character image may first be normalized, and the normalized character image is then segmented according to its pixel distribution to obtain a plurality of single characters, that is, the numeric characters 0-9. Pixel features are extracted from each segmented single character to obtain the feature vector of each single character.
[0042] Further, the character corresponding to each segmented single character in the standard result of the character image is used as the label of that single character, and the sample set is formed from the feature vector of each single character and its corresponding label. For example, if the feature vector extracted for a single character is x, the label of that single character is y, and n is the number of characters in the character image, the sample set may be {(x1, y1), (x2, y2), ..., (xn, yn)}. The sample set is generated automatically from the features of the character image and the standard result, which improves the efficiency of generating the sample set and improves data processing efficiency.
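The following is a minimal sketch of the sample-set construction described in paragraphs [0041]-[0042], assuming equal-width segmentation and raw pixel features; the actual segmentation by pixel distribution and the normalization details are not specified here, and all function names are illustrative.

```python
import numpy as np

def segment_characters(image, n_chars):
    """Split a normalized grayscale image (H x W numpy array) into n_chars
    equal-width slices. A real system would segment according to the pixel
    distribution as described in [0041]; equal slicing is a simplification."""
    return np.array_split(image, n_chars, axis=1)

def build_sample_set(image, standard_result):
    """Return the sample set {(x1, y1), ..., (xn, yn)} for one character image.

    `standard_result` is the ground-truth string, e.g. "128.50"; each of its
    characters labels the corresponding segmented glyph."""
    glyphs = segment_characters(image, len(standard_result))
    sample_set = []
    for glyph, label in zip(glyphs, standard_result):
        feature = (glyph.astype(np.float32) / 255.0).ravel()  # raw pixel feature vector
        sample_set.append((feature, label))
    return sample_set
```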
[0043] In step S130, the sample set is pushed to the client based on the
unique identifier, so that
the client identifies the character image to be recognized through the sample
set.
[0044] In some embodiments, the client's network address, such as an IP
address, is obtained
based on the unique identifier of the client, and the sample set generated in
step S120 is pushed
to the client based on the acquired network address. By generating a
corresponding sample set
for each client, it is possible to avoid the problem that the recognition rate
is low due to different
payment environments of different clients.
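A hedged sketch of the push in step S130 / paragraph [0044] follows, assuming the (feature vector, label) sample set from the earlier sketch, an in-memory registry mapping each unique identifier to a network address, and HTTP transport via the requests library; the patent does not prescribe a transport or endpoint, so the URL path is hypothetical.

```python
import requests  # assumed HTTP client; the transport is not specified in the patent

def push_sample_set(client_registry, client_id, sample_set):
    """Look up the client's network address by its unique identifier and push
    the sample set to it. `client_registry` maps client_id -> base URL and
    stands in for the server-side address lookup described in [0044]."""
    base_url = client_registry[client_id]            # e.g. "http://10.0.0.12:8080"
    payload = {
        "client_id": client_id,
        # feature vectors serialized as plain lists for transport
        "samples": [{"feature": f.tolist(), "label": y} for f, y in sample_set],
    }
    resp = requests.post(f"{base_url}/sample-set", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.status_code
```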
[0045] In addition, the client matches the features extracted from the character image against the sample set. Since the sample set is generated based on the client's own character images and standard results, the character recognition rate of the client can be improved.
[0046] Figure 2 shows a schematic diagram of an application of a character
recognition method
in accordance with some embodiments of the present invention.
[0047] Referring to Figure 2, in step S21, the clients 210 and 212 upload their unique identifiers, the captured character images, the recognition results of the character images, and the standard results to the background server 220.
[0048] In step S22, the server 220 stores the unique identifier of the client, the recognition result of the character image, and the standard result in the relational database 230, and stores the unique identifier of the client and the captured character image on the cloud server 240.
[0049] In step S23, according to the unique identifier of the client, the recognition results and standard results of the character images sent by that client are obtained from the relational database 230, and the character recognition rate of the client is calculated from the recognition results and the standard results.
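The patent does not give a formula for the character recognition rate computed in step S23; one natural reading, sketched below, is the fraction of a client's images whose recognition result equals the standard result (the threshold used in the comment is likewise illustrative).

```python
def character_recognition_rate(records):
    """records: iterable of (recognition_result, standard_result) string pairs
    for one client; returns the fraction of exact matches."""
    records = list(records)
    if not records:
        return 1.0  # no evidence of errors yet
    correct = sum(1 for recognized, standard in records if recognized == standard)
    return correct / len(records)

# Example: trigger retraining when the rate drops below an illustrative threshold.
# if character_recognition_rate(rows) < 0.95:
#     request_sample_training(client_id)   # hypothetical helper, see step S24
```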
[0050] In step S24, it is determined whether the character recognition rate of the client is lower than a predetermined recognition rate. If it is lower than the predetermined recognition rate, the character images sent by that client are downloaded from the cloud server 240 according to the unique identifier of the client and are sent to the sample training server 250.
[0051] In step S25, the feature vector of each character is extracted from the character images corresponding to the unique identifier by a feature extraction model, such as a neural network model, and each character in the standard result of a character image is used as the label of the corresponding extracted feature vector. A new sample set is generated based on the feature vectors of the characters and their labels.
[0052] In step S26, the new sample set generated by the sample training server
250 is pushed to
the background server 220 based on the unique identifier of the client.
[0053] In step S27, the new sample set generated by the sample training server
250 is pushed to
the corresponding client based on the unique identifier of the client.
[0054] Figure 3 shows a schematic diagram of setting a screenshot area in accordance with some embodiments of the present invention. Referring to Figure 3, after the payment interface of the cash register software is opened, the area of the receivable amount is automatically identified by an OCR method. When the area of the receivable amount is recognized, the area is highlighted on the payment interface, and after the user's confirmation message for the area is received, the area is used as the screenshot area. Since the payment interface does not change after each startup of the cash register software, the screenshot area only needs to be set once.
[0055] Figure 4 illustrates a flow diagram of automatically uploading a
captured image, in
accordance with some embodiments of the present invention.
[0056] Referring to Figure 4, a screenshot area is configured at the client 210, and a character image of the screenshot area is captured when the user pays. The character image is recognized using the sample set pushed by the sample training server: for example, the feature vector of each character in the character image is extracted, the extracted feature vectors are matched against the feature vectors in the sample set, and the content of each character is determined according to the matching result. After the character image of the screenshot area has been recognized, the captured character image, the recognition result of the character image, and the standard result, which is input by the user when a recognition error occurs, are transmitted to the background server 220.
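The client-side loop of Figure 4 can be summarized with the following sketch, which reuses the illustrative segment_characters and recognize_image helpers from the other sketches in this description; the callables for uploading and for collecting a user correction are assumptions, not part of the patent.

```python
def recognize_and_report(image, sample_set, n_chars, upload, ask_user):
    """Sketch of the client-side flow in Figure 4 / paragraph [0056].

    image      : captured screenshot-area image (grayscale numpy array)
    sample_set : list of (feature_vector, label) pairs pushed by the server
    n_chars    : expected number of characters in the amount area
    upload     : callable(image, recognition, standard), posts to server 220
    ask_user   : callable(recognition) -> correction string, or None if correct
    """
    glyphs = segment_characters(image, n_chars)            # see earlier sketch
    features = [(g.astype("float32") / 255.0).ravel() for g in glyphs]
    recognition = recognize_image(features, sample_set)    # see matching sketch below
    correction = ask_user(recognition)                     # None when confirmed correct
    standard = correction if correction is not None else recognition
    upload(image, recognition, standard)                   # stored as in [0057]
    return recognition
```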
[0057] At the background server 220, the unique identifier of the client, the recognition result of the character image, and the standard result are stored in the relational database 230, and the unique identifier of the client and the captured character image are stored on the cloud server 240. After the storage is completed, the stored result is returned to the client 210.
[0058] Figure 5 shows a flow diagram of an automated training sample in
accordance with
some embodiments of the present invention.
[0059] Referring to Figure 5, before sample training is performed, the character recognition rate of the client is obtained from the relational database 230 according to the unique identifier of the client, and an image acquisition request is issued, using the unique identifier, for the clients whose character recognition rate is low. In response to the image acquisition request, the cloud server 240 searches for the character images corresponding to the unique identifiers of the clients whose recognition rate is below the predetermined recognition rate, and returns the found character images to the sample training server 250.
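A compact sketch of the automated training loop of Figure 5 follows, under the assumption that the objects below are thin adapters for the relational database 230, the cloud server 240, the sample training server 250 and the background server 220; none of these method names come from the patent, and the threshold is illustrative.

```python
RECOGNITION_RATE_THRESHOLD = 0.95  # illustrative; the patent only says "predetermined"

def retrain_low_rate_clients(db, cloud, trainer, background_server):
    """Find clients whose recognition rate is low, fetch their character images
    and standard results, train a new per-client sample set, and push it back
    to the client through the background server."""
    for client_id in db.list_client_ids():
        if db.get_recognition_rate(client_id) >= RECOGNITION_RATE_THRESHOLD:
            continue
        images = cloud.download_images(client_id)        # character images
        standards = db.get_standard_results(client_id)   # ground-truth strings
        sample_set = trainer.train_sample_set(images, standards)
        background_server.push_to_client(client_id, sample_set)
```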
[0060] On the sample training server 250, the feature vector of each character is extracted from the character images corresponding to the unique identifier by a feature extraction model, such as a neural network model, and each character in the standard result of a character image is taken as the label of the corresponding extracted feature vector. A new sample set is generated based on the feature vectors of the characters and their labels. The newly generated sample set is then pushed to the client 210 via the background server 220, and the client 210 uses the sample set for image recognition.
[0061] Figure 6 shows a schematic diagram of a feature extraction model in
accordance with
some embodiments of the present invention.
[0062] Referring to Figure 6, the feature extraction model is a Convolutional Neural Network (CNN) model. The CNN model may include an input layer, a convolutional layer C1, a sampling layer S2, a convolutional layer C3, a sampling layer S4, and an output layer.
[0063] In Figure 6, the character image corresponding to the unique identifier is downloaded from the cloud server according to the unique identifier of the client, and the downloaded image is subjected to binarization and image cutting to obtain single-character images, which are input into the CNN model for sample training.
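A small sketch of the preprocessing in paragraph [0063], assuming a fixed binarization threshold and the equal-width split from the earlier segmentation sketch; a real implementation would choose both from the pixel distribution, and all values here are illustrative.

```python
import numpy as np

def binarize(image, threshold=128):
    """Binarize a grayscale uint8 image; the threshold value is illustrative."""
    return (image >= threshold).astype(np.uint8)

def cut_single_characters(image, n_chars, size=28):
    """Binarize, split into single characters, and pad or crop each glyph to a
    size x size array so it fits the 28*28 input layer described in [0064]."""
    glyphs = np.array_split(binarize(image), n_chars, axis=1)
    out = []
    for g in glyphs:
        canvas = np.zeros((size, size), dtype=np.uint8)
        h, w = min(g.shape[0], size), min(g.shape[1], size)
        canvas[:h, :w] = g[:h, :w]
        out.append(canvas)
    return out
```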
[0064] Referring to Figure 6, a single-character image, that is, a 28*28 feature map, is fed to the input layer. The 28*28 feature map is convolved by the convolutional layer C1 to form six 24*24 feature maps; the six 24*24 feature maps are down-sampled by the sampling layer S2 to form six 12*12 feature maps; the six 12*12 feature maps are convolved by the convolutional layer C3 to generate twelve 8*8 feature maps; the twelve 8*8 feature maps are then processed by the sampling layer S4 to generate twelve 4*4 feature maps; finally, twelve 1*1 feature maps are generated through the output layer, and the feature vector of the input single-character image is generated based on the twelve 1*1 feature maps.
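The layer shapes in paragraph [0064] can be reproduced with the following PyTorch-style sketch; the 5*5 convolution kernels, 2*2 sub-sampling windows and 4*4 output stage are inferred from the stated feature-map sizes and are not given explicitly in the patent.

```python
import torch
import torch.nn as nn

class CharFeatureExtractor(nn.Module):
    """Sketch of the CNN of Figure 6: 28*28 input -> six 24*24 -> six 12*12
    -> twelve 8*8 -> twelve 4*4 -> twelve 1*1 feature maps."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(1, 6, kernel_size=5)     # 1x28x28 -> 6x24x24
        self.s2 = nn.AvgPool2d(kernel_size=2)        # 6x24x24 -> 6x12x12
        self.c3 = nn.Conv2d(6, 12, kernel_size=5)    # 6x12x12 -> 12x8x8
        self.s4 = nn.AvgPool2d(kernel_size=2)        # 12x8x8  -> 12x4x4
        self.out = nn.Conv2d(12, 12, kernel_size=4)  # 12x4x4  -> 12x1x1

    def forward(self, x):
        x = torch.relu(self.c1(x))
        x = self.s2(x)
        x = torch.relu(self.c3(x))
        x = self.s4(x)
        x = self.out(x)
        return x.flatten(start_dim=1)  # one 12-dimensional feature vector per image

# Usage: CharFeatureExtractor()(torch.rand(1, 1, 28, 28)) has shape (1, 12)
```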
[0065] The CNN model adopts the techniques of local connection and weight sharing, which can significantly reduce the number of parameters used in image processing and improve image processing efficiency. With local connections, each neuron is connected only to a local area of the previous layer, which reduces the number of parameters that need to be processed; the spatial size of the connected local area is called the receptive field of the neuron. With weight sharing, the neurons of each channel of the current layer use the same weights and bias in the depth direction, which further reduces the number of parameters. For example, suppose that with local connections each neuron corresponds to 100 parameters and there are 1,000,000 neurons in total; if the 100 parameters are shared by all 1,000,000 neurons, the number of parameters becomes 100. The use of local connections and weight sharing in the CNN model reduces the number of parameters, greatly reduces training complexity and reduces the risk of overfitting.
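The parameter arithmetic in paragraph [0065] can be made explicit; the neuron and weight counts below are the illustrative figures from that paragraph, not values prescribed by the patent.

```python
neurons = 1_000_000        # neurons in the layer (illustrative figure from [0065])
weights_per_neuron = 100   # local receptive-field weights per neuron

local_connection_only = neurons * weights_per_neuron  # 100,000,000 parameters
with_weight_sharing = weights_per_neuron              # all neurons reuse the same 100

print(local_connection_only, with_weight_sharing)     # 100000000 100
```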
[0066] It should be noted that, although the feature extraction model is described in the exemplary embodiment by taking the CNN model as an example, the present invention is not limited thereto. For example, the feature extraction model may also be a support vector machine model, a template matching model, or the like, which is also within the scope of the invention.
[0067] Figure 7 is a flow chart showing a character recognition method applied
to a cash
register system of a client such as a shopping mall or a supermarket,
according to further
embodiments of the present invention.
[0068] Referring to Figure 7, in step S710, a sample set pushed by a target server through the unique identifier of the client is received, the sample set being a feature set generated based on a character image sent by the client and a standard result of the character image.
[0069] The target server may be the sample training server 250 or the background server 220 described above. The sample set is a set of feature vectors of the characters of the character images together with the labels of those feature vectors, a label being the character in the standard result of the character image that corresponds to a single character of the character image.
[0070] In step S720, the character image to be recognized is acquired, and the
feature vector of
each character in the character image to be recognized is extracted.
[0071] In an exemplary embodiment, the character image is first normalized, and the normalized character image is segmented according to its pixel distribution to obtain a plurality of single characters, that is, the numeric characters 0 to 9. Pixel features are extracted from each segmented single character to yield the feature vector of each single character.
[0072] In step S730, the feature vectors of the respective characters are
matched with the
feature vectors in the sample set.
[0073] In an example embodiment, the distance between the feature vector of each character and the feature vectors in the sample set may be calculated, and the feature vector in the sample set that is closest to the feature vector of the character in the character image is taken as the matched feature vector. The distance between feature vectors may be a Hamming distance, a Euclidean distance, or a cosine distance, but the distance in the exemplary embodiment of the present invention is not limited thereto; for example, the distance may also be a Mahalanobis distance, a Manhattan distance, or the like.
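A minimal sketch of the matching in steps S730-S740 follows, assuming the (feature vector, label) sample set built earlier and Euclidean distance, which is one of the distances the paragraph above allows.

```python
import numpy as np

def recognize_character(feature, sample_set):
    """Return the label of the sample-set entry closest to `feature`.

    `sample_set` is a list of (feature_vector, label) pairs as in [0042];
    Euclidean distance is used here, although Hamming, cosine, Mahalanobis or
    Manhattan distances are equally admissible per [0073]."""
    distances = [np.linalg.norm(feature - f) for f, _ in sample_set]
    return sample_set[int(np.argmin(distances))][1]

def recognize_image(char_features, sample_set):
    """Recognize a whole character image from its per-character feature vectors."""
    return "".join(recognize_character(f, sample_set) for f in char_features)
```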
[0074] In step S740, characters in the character image to be recognized are
identified based on
the matching result.
[0075] After the feature vector in the sample set that matches a character in the character image is obtained, the character in the character image to be recognized is determined based on the label of that feature vector. For example, if the matched feature vector is xi and the label of that feature vector is yi, the character in the character image to be recognized is yi.
[0076] Further, in some embodiments, when the character image to be recognized has been recognized incorrectly, a standard result of the character image to be recognized input by the user may be received, and the character image to be recognized, the recognition result of the character image to be recognized, and the standard result are sent to the target server.
[0077] Further, in still other embodiments of the present invention, a character recognition apparatus is also provided. Referring to Figure 8, the character recognition apparatus 800 may include an obtaining unit 810, a sample generating unit 820, and a sample pushing unit 830. The obtaining unit 810 is configured to acquire a character image corresponding to the unique identifier and a standard result of the character image based on the unique identifier of the client; the sample generating unit 820 is configured to extract a feature of the character image and generate a sample set based on the feature of the character image and the standard result; and the sample pushing unit 830 is configured to push the sample set to the client based on the unique identifier, so that the client identifies the character image to be recognized through the sample set.
[0078] In some embodiments of the present invention, based on the foregoing solution, the obtaining unit 810 includes: a character recognition rate obtaining unit, configured to acquire a character recognition rate of the client based on the unique identifier of the client; a determining unit, configured to determine whether the character recognition rate is less than a predetermined threshold; and an image obtaining unit, configured to acquire the corresponding character image based on the unique identifier of the client if the character recognition rate is less than the predetermined threshold.
[0079] In some embodiments of the present invention, the character recognition apparatus 800 further includes: a recognition result acquisition unit, configured to acquire a recognition result of the character image corresponding to the unique identifier based on the unique identifier of the client; and a recognition rate determining unit, configured to determine the character recognition rate of the client based on the recognition result and the standard result.
[0080] In some embodiments of the present invention, based on the foregoing scheme, the sample generation unit 820 includes: an extraction unit, configured to extract the feature of each character in the character image by a feature extraction model; a character determination unit, configured to determine the target character corresponding to the respective character in the standard result; a label generating unit, configured to use the target character as the label of the feature of the respective character; and a sample set generating unit, configured to generate the sample set based on the feature of each character in the character image and the label of the feature.
[0081] In some embodiments of the present invention, the character recognition apparatus 800 further includes: a receiving unit, configured to receive the character image sent by the client, the recognition result of the character image, and a standard result; a first storage unit, configured to store the character image in the first storage area; and a second storage unit, configured to store the recognition result of the character image and the standard result in the second storage area.
[0082] In some embodiments of the present invention, based on the foregoing aspect, the first storage area is an image storage unit of a target server, and the second storage area is a relational data storage unit of the target server.
[0083] Since the respective functional modules of the character recognition
apparatus 800 of
the exemplary embodiment of the present invention correspond to the steps of
the exemplary
embodiment of the character recognition method illustrated in Figure 1
described above, details
are not described herein again.
[0084] Further, in still other embodiments of the present invention, a character recognition apparatus is also provided. Referring to Figure 9, the character recognition apparatus may include a sample receiving unit 910, a feature extracting unit 920, a matching unit 930, and an identifying unit 940. The sample receiving unit 910 is configured to receive a sample set pushed by the target server through the unique identifier of the client, where the sample set is a feature set generated based on a character image sent by the client and a standard result of the character image; the feature extracting unit 920 is configured to acquire a character image to be recognized and extract a feature vector of each character in the character image to be recognized; the matching unit 930 is configured to match the feature vector of each character with the feature vectors in the sample set; and the identifying unit 940 is configured to identify the characters in the character image to be recognized based on the matching result.
[0085] In some embodiments of the present invention, the character recognition apparatus 900 further includes: a standard result receiving unit, configured to receive a standard result of the character image to be recognized input by a user; and a sending unit, configured to send the character image to be recognized, the recognition result of the character image to be recognized, and the standard result to the target server.
[0086] Since the respective functional modules of the character recognition
apparatus 900 of
the exemplary embodiment of the present invention correspond to the steps of
the exemplary
embodiment of the character recognition method of Figure 7, the details are
not described herein.
[0087] In an exemplary embodiment of the present invention, there is also
provided an
electronic equipment capable of implementing the above method.
[0088] Referring now to Figure 10, a block diagram of a computer system 1000
suitable for use
in implementing an electronic equipment in accordance with an embodiment of
the present
invention is shown. The computer system 1000 of the electronic equipment shown
in Figure 10
is only an example, and should not impose any limitation on the function and
scope of use of the
embodiments of the present invention.
[0089] As shown in Figure 10, the computer system 1000 includes a central processing unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a read only memory (ROM) 1002 or a program loaded from a storage portion 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores the various programs and data required for system operation. The CPU 1001, the ROM 1002 and the RAM 1003 are connected to each other through a bus 1004. An input/output (I/O) interface 1005 is also coupled to the bus 1004.
[0090] The following components are connected to the I/O interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; the storage portion 1008; and a communication portion 1009 including a network interface card such as a LAN card, a modem, or the like. The communication portion 1009 performs communication processing via a network such as the Internet. A driver 1010 is also coupled to the I/O interface 1005 as needed. A removable medium 1011, such as a disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like, is mounted on the driver 1010 as required, so that a computer program read from it can be installed into the storage portion 1008 as required.
[0091] In particular, the processes described above with reference to the
flowcharts may be
implemented as a computer software program, in accordance with an embodiment
of the present
invention. For example, an embodiment of the invention includes a computer
program product
comprising a computer program carried on a computer readable medium, the
computer program
comprising program code for executing the method illustrated in the flowchart.
In such an
embodiment, the computer program can be downloaded and installed from the
network via the
communication portion 1009, and/or installed from the removable medium 1011.
When the
computer program is executed by the central processing unit (CPU) 1001, the
above-described
functions defined in the system of the present application are executed.
[0092] It should be noted that the computer readable medium illustrated by the
present
invention may be a computer readable signal medium or a computer readable
storage medium or
any combination of the two. The computer readable storage medium can be, for
example, but not
limited to, an electronic, magnetic, optical, electromagnetic, infrared, or
semiconductor system,
apparatus, or device, or any combination of the above. More specific examples
of computer
readable storage media may include, but are not limited to, electrical
connections having one or
more wires, portable computer disks, hard disks, random access memory (RAM),
read only
memory (ROM), erasable programmable read only memory (EPROM or flash memory),
optical
fiber, portable compact disk read only memory (CD-ROM), optical storage
apparatus, magnetic
storage apparatus, or any suitable combination of the foregoing. In the
present invention, a
computer readable storage medium may be any tangible medium that can contain
or store a
program, which can be used by or in connection with an instruction execution
system, apparatus
or device. In the present invention, a computer readable signal medium may
include a data signal
that is propagated in the baseband or as part of a carrier, in which computer
readable program
code is carried. Such propagated data signals can take a variety of forms
including, but not
limited to, electromagnetic signals, optical signals, or any suitable
combination of the foregoing.
The computer readable signal medium can also be any computer readable medium
other than a
computer readable storage medium, which can transmit, propagate, or transport
a program for
use by or in connection with the instruction execution system, apparatus, or
device. Program
code embodied on a computer readable medium can be transmitted by any suitable
medium,
including but not limited to wireless, wire, fiber optic cable, RF, etc., or
any suitable combination
of the foregoing.
[0093] The flowchart and block diagram in the drawings illustrate the possible
architecture,
functions and operations of the systems, methods and computer program products
according to
various embodiments of the present invention. In this regard, each box in a
flowchart or block
diagram may represent a module, program segment, or part of a code that
contains one or more
executable instructions for implementing a specified logical function. It
should also be noted that
in some alternative implementations, the functions noted in the blocks may
also occur in a
different order than that illustrated in the drawings. For example, two
successively represented
blocks may in fact be executed substantially in parallel, and they may
sometimes be executed in
the reverse order, depending upon the functionality involved. It is also noted
that each block of
the block diagrams or flowcharts, and combinations of blocks in the block
diagrams or
flowcharts, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
[0094] The units involved in the embodiments of the present invention may be
implemented by
software, or may be implemented by hardware, and the described unit may also
be disposed in
the processor. The names of these units do not in any way constitute a
limitation on the unit
itself.
[0095] In another aspect, the present application further provides a computer readable medium, which may be included in the electronic equipment described in the above embodiments, or may exist separately without being assembled into the electronic equipment. The computer readable medium carries one or more programs that, when executed by the electronic equipment, enable the electronic equipment to implement the character recognition method described in the above embodiments.
[0096] For example, the electronic equipment may implement the steps shown in Figure 1: acquiring a character image corresponding to the unique identifier and a standard result of the character image based on a unique identifier of the client; extracting features of the character image, and generating a sample set based on the features of the character image and the standard result; and pushing the sample set to the client based on the unique identifier, so that the client recognizes a character image to be recognized through the sample set.
[0097] It should be noted that although several modules or units of apparatus
or apparatus for
action execution are mentioned in the detailed description above, such
division is not mandatory.
In fact, the features and functions of the two or more modules or units
described above may be
embodied in one module or unit in accordance with the embodiments of the
invention.
Conversely, the features and functions of one of the modules or units
described above may be
further divided into multiple modules or units.
[0098] Through the description of the above embodiments, those skilled in the art can easily understand that the example embodiments described herein may be implemented by software, or may be implemented by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a mobile hard disk, etc.) or on a network, and which includes a number of instructions to cause a computing apparatus (which may be a personal computer, a server, a touch terminal, a network apparatus, or the like) to perform a method in accordance with an embodiment of the present invention.
[0099] Those skilled in the art will readily appreciate other embodiments of the present invention upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present invention that follow the general principles of the present invention and include common general knowledge or conventional technical means in the art that are not disclosed in the present invention. The specification and examples are to be considered as illustrative only.
[0100] It should be understood that the present invention is not limited to the precise structure that has been described above and illustrated in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

Title Date
Forecasted Issue Date Unavailable
(22) Filed 2019-08-23
(41) Open to Public Inspection 2020-02-23
Examination Requested 2022-09-16

Abandonment History

Abandonment Date Reason Reinstatement Date
2023-06-27 R86(2) - Failure to Respond 2023-07-05

Maintenance Fee

Last Payment of $210.51 was received on 2023-12-15


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2025-08-25 $100.00
Next Payment if standard fee 2025-08-25 $277.00

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2019-08-23
Registration of a document - section 124 $100.00 2019-09-16
Maintenance Fee - Application - New Act 2 2021-08-23 $100.00 2021-06-25
Maintenance Fee - Application - New Act 3 2022-08-23 $100.00 2022-06-22
Request for Examination 2024-08-23 $814.37 2022-09-16
Advance an application for a patent out of its routine order 2022-11-08 $508.98 2022-11-08
Maintenance Fee - Application - New Act 4 2023-08-23 $100.00 2023-06-14
Reinstatement - failure to respond to examiners report 2024-06-27 $210.51 2023-07-05
Maintenance Fee - Application - New Act 5 2024-08-23 $210.51 2023-12-15
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.


Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Representative Drawing 2020-01-24 1 14
Cover Page 2020-01-24 2 51
Special Order / Amendment / Prosecution Correspondence 2022-11-08 33 1,240
Request for Examination 2022-09-16 9 320
Prosecution Correspondence 2022-12-23 4 151
Acknowledgement of Grant of Special Order 2022-11-08 1 177
Claims 2022-11-08 16 812
Examiner Requisition 2023-02-24 3 175
Abstract 2019-08-23 1 20
Description 2019-08-23 17 934
Claims 2019-08-23 3 120
Drawings 2019-08-23 10 173
Amendment 2023-12-22 34 1,245
Claims 2023-12-22 13 681
Reinstatement / Amendment 2023-07-05 35 1,204
Claims 2023-07-05 13 620
Special Order - Applicant Revoked 2023-08-15 2 186
Examiner Requisition 2023-09-01 4 212