Patent 3164550 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 3164550
(54) English Title: IMAGE INFORMATION PROCESSING METHOD FOR USE IN Q&A SYSTEM, DEVICE AND ELECTRONIC EQUIPMENT
(54) French Title: METHODE DE TRAITEMENT DES RENSEIGNEMENTS D'IMAGE A UTILISER DANS UN SYSTEME DE QUESTIONS ET REPONSES, DISPOSITIF ET MATERIEL ELECTRONIQUE
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06V 30/19 (2022.01)
  • G06F 40/35 (2020.01)
  • G06V 30/24 (2022.01)
(72) Inventors :
  • CHU, ZHE (China)
  • CHEN, CHAO (China)
(73) Owners :
  • 10353744 CANADA LTD.
(71) Applicants :
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2022-06-21
(41) Open to Public Inspection: 2022-12-21
Examination requested: 2022-06-21
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
202110687218.9 (China) 2021-06-21

Abstracts

English Abstract


The present application discloses an image information processing method for
use in a
questioning and answering (hereinafter referred to as "Q&A") system, and
corresponding device
and electronic equipment, of which the method comprises: receiving a
consultation request sent
by a user, wherein the consultation request includes an image to be processed;
recognizing target
text data contained in the image to be processed according to a preset
recognizing rule;
determining a preset classification to which the target text data corresponds
as a target
classification according to an edit distance between the preprocessed target
text data and a preset
keyword to which each preset classification corresponds; and obtaining an
answering statement
to which the target classification corresponds from an answering statement
library and returning
the answering statement to the user.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
What is claimed is:
1. An image information processing method for use in a Q&A system,
characterized in that the
method comprises:
receiving a consultation request sent by a user, wherein the consultation
request includes an
image to be processed;
recognizing target text data contained in the image to be processed according
to a preset
recognizing rule;
determining a preset classification to which the target text data corresponds
as a target
classification according to an edit distance between the preprocessed target
text data and a preset
keyword to which each preset classification corresponds; and
obtaining an answering statement to which the target classification
corresponds from an
answering statement library and returning the answering statement to the user.
2. The image information processing method for use in a Q&A system according
to Claim 1,
characterized in that the step of determining a preset classification to which
the target text data
corresponds as a target classification according to an edit distance between
the preprocessed
target text data and a preset keyword to which each preset classification
corresponds includes:
generating an edit distance between the target text data and the preset
keyword according to the
preprocessed target text data and the preset keyword to which each preset
classification
corresponds;
generating a weighted edit distance between the target text data and each
preset keyword
according to the edit distance between the target text data and the preset
keyword and a preset
weight to which the preset keyword corresponds;
determining a weight distance between the target text data and each preset
classification
according to the weighted edit distance between the target text data and each
preset keyword and
the preset keyword to which each preset classification corresponds; and
determining the preset classification with the smallest weight distance to the
target text data as a
target classification.
3. The image information processing method for use in a Q&A system according
to Claim 2,
characterized in that the preset classifications include other types, and the
method comprises:
determining that the preset classification to which the target text data
corresponds is the other
type when the weight distance between the target text data and each preset
classification is greater
than a preset threshold.
4. The image information processing method for use in a Q&A system according
to Claim 2,
characterized in that the target text data includes at least two text
information samples, and before
the step of determining a preset classification to which the target text data
corresponds as a target
classification according to an edit distance between the preprocessed target
text data and a preset
keyword to which each preset classification corresponds, the method further
comprises:
eliminating from the target text data any text information sample whose text
length is smaller
than a preset length threshold, and generating the preprocessed target text
data.
5. The image information processing method for use in a Q&A system according
to Claim 4,
characterized in that the step of generating an edit distance between the
target text data and the
preset keyword according to the preprocessed target text data and the preset
keyword to which
each preset classification corresponds includes:
generating an edit distance between each text information sample and the
preset keyword
according to a preset edit distance algorithm;
eliminating from all edit distances any edit distance that exceeds a preset
distance threshold; and
generating an edit distance between the target text data and the preset
keyword according to the
edit distances after elimination.
6. The image information processing method for use in a Q&A system according
to Claim 4,
characterized in that the preset recognizing rule includes a preset text
detection algorithm and a
preset text recognition algorithm, and that the step of recognizing target
text data contained in
the image to be processed according to a preset recognizing rule includes:
employing the preset text detection algorithm to recognize a text region
contained in the image
to be processed;
employing the preset text recognition algorithm to recognize the text
information samples
contained in the text region; and
determining the target text data contained in the image to be recognized
according to the text
information samples contained in the text region.
7. The image information processing method for use in a Q&A system according
to Claim 6,
characterized in that the step of employing the preset text detection
algorithm to recognize a text
region contained in the image to be processed includes:
employing a CTPN text detection algorithm to recognize a text region contained
in the image to
be processed; and that
the step of employing the preset text recognition algorithm to recognize the
text information
samples contained in the text region includes:
employing a CRNN+CTC neural network model to recognize the text information
samples
contained in the text region.
8. The image information processing method for use in a Q&A system according
to any one of
Claims 1 to 7, characterized in that the preset keyword to which each preset
classification
corresponds is prestored in a keyword library stored in a preset document,
that the answering
statement library is prestored in the preset document, and the method
comprises:
receiving a rule updating request; and
updating the preset document according to categories to be updated and/or
answering statements
to be updated as included in the rule updating request.
9. An image information processing device for use in a Q&A system,
characterized in that the
device comprises:
a receiving module, for receiving a consultation request sent by a user,
wherein the consultation
request includes an image to be processed;
a recognizing module, for recognizing target text data contained in the image
to be processed
according to a preset recognizing rule;
a judging module, for determining a preset classification to which the target
text data corresponds
as a target classification according to an edit distance between the
preprocessed target text data
and a preset keyword to which each preset classification corresponds; and
an answering module, for obtaining an answering statement to which the target
classification
corresponds from an answering statement library and returning the answering
statement to the
user.
10. An electronic equipment, characterized in that the electronic equipment
comprises:
one or more processor(s); and
a memory, associated with the one or more processor(s) and storing a program
instruction that
executes the following operations when it is read and executed by the one or
more processor(s):
receiving a consultation request sent by a user, wherein the consultation
request includes an
image to be processed;
recognizing target text data contained in the image to be processed according
to a preset
recognizing rule;
determining a preset classification to which the target text data corresponds
as a target
classification according to an edit distance between the preprocessed target
text data and a preset
keyword to which each preset classification corresponds; and
obtaining an answering statement to which the target classification
corresponds from an
answering statement library and returning the answering statement to the user.

Description

Note: Descriptions are shown in the official language in which they were submitted.


IMAGE INFORMATION PROCESSING METHOD FOR USE IN Q&A SYSTEM,
DEVICE AND ELECTRONIC EQUIPMENT
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the field of information processing
technology, and more
particularly to an image information processing method for use in a Q&A
system, and
corresponding device and electronic equipment.
Description of Related Art
[0002] In the traditional service industry, human customer service is a
labor-intensive post involving highly repetitive work throughout the
working day. Accordingly, in order to reduce labor costs and enhance
efficiency, more and more enterprises have introduced automatic Q&A
systems that automatically respond to questions raised by users with
corresponding answering statements, alleviating the pressure on human
customer service to a certain degree and enhancing the accuracy,
standardization, and stability of enterprise services.
[0003] However, users often input non-text information such as pictures to
the automatic Q&A system. If such common information carriers cannot be
recognized, considerable inconvenience is caused to users. There is
therefore an urgent need for an information processing method capable of
responding to image information input by a user, so as to address the
above technical problems that have long been pending in the state of the
art.

SUMMARY OF THE INVENTION
[0004] To deal with prior-art deficiencies, it is a main objective of the
present invention to
provide an image information processing method for use in a Q&A system, and
corresponding device and electronic equipment, so as to solve the
aforementioned
technical problems prevalent in the state of the art.
[0005] To achieve the above objective, according to one aspect, the present
invention provides
an image information processing method for use in a Q&A system, the method
comprises:
[0006] receiving a consultation request sent by a user, wherein the
consultation request includes
an image to be processed;
[0007] recognizing target text data contained in the image to be processed
according to a preset
recognizing rule;
[0008] determining a preset classification to which the target text data
corresponds as a target
classification according to an edit distance between the preprocessed target
text data and
a preset keyword to which each preset classification corresponds; and
[0009] obtaining an answering statement to which the target classification
corresponds from an
answering statement library and returning the answering statement to the user.
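The four steps recited above can be illustrated with a minimal sketch. The recognizer, classifier, keyword table, and answer library below are hypothetical stand-ins for illustration only, not the patented implementation; the OCR and edit-distance algorithms are detailed later in the description.

```python
# Minimal sketch of the claimed four-step pipeline; all helpers and data
# (recognize_text, classify, ANSWER_LIBRARY) are invented placeholders.

ANSWER_LIBRARY = {"refund": "Refunds are processed within 7 business days."}

def recognize_text(image_bytes: bytes) -> str:
    # Stand-in for the preset recognizing rule (text detection + recognition).
    return "refund request"

def classify(text: str) -> str:
    # Stand-in for the edit-distance classification step.
    return "refund" if "refund" in text else "other"

def handle_consultation(image_bytes: bytes) -> str:
    """Receive an image, recognize its text, classify it, and answer."""
    text = recognize_text(image_bytes)
    category = classify(text)
    return ANSWER_LIBRARY.get(category,
                              "Sorry, please describe your question in text.")
```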
[0010] In some embodiments, the step of determining a preset classification to
which the target
text data corresponds as a target classification according to an edit distance
between the
preprocessed target text data and a preset keyword to which each preset
classification
corresponds includes:
[0011] generating an edit distance between the target text data and the preset
keyword according
to the preprocessed target text data and the preset keyword to which each
preset
classification corresponds;
[0012] generating a weighted edit distance between the target text data and
each preset keyword
according to the edit distance between the target text data and the preset
keyword and a
preset weight to which the preset keyword corresponds;
[0013] determining a weight distance between the target text data and each
preset classification
according to the weighted edit distance between the target text data and each
preset
keyword and the preset keyword to which each preset classification
corresponds; and
[0014] determining the preset classification with the smallest weight distance
to the target text
data as a target classification.
[0015] In some embodiments, the preset classifications include other types,
and the method
comprises:
[0016] determining that the preset classification to which the target text
data corresponds is the
other type when the weight distance between the target text data and each
preset
classification is greater than a preset threshold.
[0017] In some embodiments, the target text data includes at least two text
information samples,
and, before the step of determining a preset classification to which the
target text data
corresponds as a target classification according to an edit distance between
the
preprocessed target text data and a preset keyword to which each preset
classification
corresponds, the method further comprises:
[0018] eliminating from the target text data any text information sample whose
text length is
smaller than a preset length threshold, and generating the preprocessed target
text data.
[0019] In some embodiments, the step of generating an edit distance between
the target text data
and the preset keyword according to the preprocessed target text data and the
preset
keyword to which each preset classification corresponds includes:
[0020] generating an edit distance between each text information sample and
the preset keyword
according to a preset edit distance algorithm;
[0021] eliminating from all edit distances any edit distance that exceeds a
preset distance
threshold; and
[0022] generating an edit distance between the target text data and the preset
keyword according
to the edit distances after elimination.
[0023] In some embodiments, the preset recognizing rule includes a preset text
detection
algorithm and a preset text recognition algorithm, and the step of recognizing
target text
data contained in the image to be processed according to a preset recognizing
rule
includes:
[0024] employing the preset text detection algorithm to recognize a text
region contained in the
image to be processed;
[0025] employing the preset text recognition algorithm to recognize the text
information samples
contained in the text region; and
[0026] determining the target text data contained in the image to be
recognized according to the
text information samples contained in the text region.
[0027] In some embodiments, the step of employing the preset text detection
algorithm to
recognize a text region contained in the image to be processed includes:
[0028] employing a CTPN text detection algorithm to recognize a text region
contained in the
image to be processed; and
[0029] the step of employing the preset text recognition algorithm to
recognize the text
information samples contained in the text region includes:
[0030] employing a CRNN+CTC neural network model to recognize the text
information
samples contained in the text region.
[0031] In some embodiments, the preset keyword to which each preset
classification corresponds
is prestored in a keyword library stored in a preset document, the answering
statement
library is prestored in the preset document, and the method comprises:
[0032] receiving a rule updating request; and
[0033] updating the preset document according to categories to be updated
and/or answering
statements to be updated as included in the rule updating request.
[0034] According to the second aspect, the present application provides an
image information
processing device for use in a Q&A system, the device comprises:
[0035] a receiving module, for receiving a consultation request sent by a
user, wherein the
consultation request includes an image to be processed;
[0036] a recognizing module, for recognizing target text data contained in the
image to be
processed according to a preset recognizing rule;
[0037] a judging module, for determining a preset classification to which the
target text data
corresponds as a target classification according to an edit distance between
the
preprocessed target text data and a preset keyword to which each preset
classification
corresponds; and
[0038] an answering module, for obtaining an answering statement to which the
target
classification corresponds from an answering statement library and returning
the
answering statement to the user.
[0039] According to the third aspect, the present application provides an
electronic equipment
that comprises:
[0040] one or more processor(s); and
[0041] a memory, associated with the one or more processor(s) and storing a
program instruction
that executes the following operations when it is read and executed by the one
or more
processor(s):
[0042] receiving a consultation request sent by a user, wherein the
consultation request includes
an image to be processed;
[0043] recognizing target text data contained in the image to be processed
according to a preset
recognizing rule;
[0044] determining a preset classification to which the target text data
corresponds as a target
classification according to an edit distance between the preprocessed target
text data and
a preset keyword to which each preset classification corresponds; and
[0045] obtaining an answering statement to which the target classification
corresponds from an
answering statement library and returning the answering statement to the user.
[0046] The present invention achieves the following advantageous effects.
[0047] The present application provides an image information processing method
for use in a
Q&A system, and the method comprises receiving a consultation request sent by
a user,
wherein the consultation request includes an image to be processed;
recognizing target
text data contained in the image to be processed according to a preset
recognizing rule;
determining a preset classification to which the target text data corresponds
as a target
classification according to an edit distance between the preprocessed target
text data and
a preset keyword to which each preset classification corresponds; and
obtaining an
answering statement to which the target classification corresponds from an
answering
statement library and returning the answering statement to the user. By
recognizing text
information contained in the image, and by determining a preset classification
to which
the image corresponds according to an edit distance between the text
information and a
keyword, an answering statement of the corresponding preset classification
can be returned to the user, thereby realizing automatic response to image
information input by the user and enhancing user convenience, efficiency,
and the accuracy of the answers returned to the user.
[0048] As further disclosed by the present application, the preset keyword to
which each preset
classification corresponds is prestored in a keyword library stored in a
preset document,
the answering statement library is prestored in the preset document, and the
method
comprises: receiving a rule updating request; and updating the preset document
according
to keywords to be updated and/or answering statements to be updated as
included in the
rule updating request. When responses or keywords must be changed because
of a change in activities or a sales promotion, the answering statement
library or the keyword library in the preset document can be updated
directly, and the Q&A system can reload the corresponding preset document
under hot start, which enhances the convenience and timeliness of changing
the rules of the answering statement library or the keyword library.
[0049] It is not necessary for all products of the present invention to
possess all the above effects.
BRIEF DESCRIPTION OF THE DRAWINGS
[0050] In order to more clearly describe the technical solutions in the
embodiments of the present
invention, drawings required for the illustration of the embodiments will be
briefly
introduced below. Apparently, the drawings described below are merely directed
to some
embodiments of the present invention, and it is possible for persons
ordinarily skilled in
the art to acquire other drawings without spending creative effort in the
process based on
these drawings.
[0051] Fig. 1 is a flowchart illustrating answering provided by the
embodiments of the present
application;
[0052] Fig. 2 is a view schematically illustrating a text region provided by
the embodiments of
the present application;
[0053] Fig. 3 is a flowchart illustrating the method provided by the
embodiments of the present
application;
[0054] Fig. 4 is a view illustrating the structure of the device provided by
the embodiments of
the present application; and
[0055] Fig. 5 is a view illustrating the structure of the electronic equipment
provided by the
embodiments of the present application.
DETAILED DESCRIPTION OF THE INVENTION
[0056] In order to make more lucid and clear the objectives, technical
solutions and advantages
of the present invention, the technical solutions in the embodiments of the
present
invention will be clearly and comprehensively described below with reference
to the
accompanying drawings in the embodiments of the present invention. Apparently,
the
embodiments as described are merely partial embodiments, rather than the
entire
embodiments, of the present invention. All other embodiments obtainable by
persons
ordinarily skilled in the art based on the embodiments in the present
invention without
spending any creative effort shall all be covered by the protection scope of
the present
invention.
[0057] Precisely as recited in the Description of Related Art, the prior-art
automatic Q&A system
cannot recognize pictures input by users, so it is difficult to
correspondingly respond to
the pictures. To solve this technical problem, the present application
provides an image
information processing method for use in a Q&A system enabling extraction of
text
information contained in a picture, classification of the picture according to
the text
information, and return of a correspondingly classified answering statement to
the user,
so that automatic reply to picture information is realized.
[0058] Embodiment 1
[0059] Specifically, as shown in Fig. 1, a process of applying the image
information processing
method for use in a Q&A system as disclosed by this embodiment of the present
application to respond to an image sent from a user includes the following.
[0060] S10 - receiving a consultation request sent by a user, wherein the
consultation request
includes an image to be processed.
[0061] Specifically, in addition to pictures, the consultation request can
further include such
information as text information and speech data. The image to be processed can
be a
screenshot of a mobile terminal or any other image.
[0062] S20 - employing a CTPN text detection algorithm to recognize a text
region contained in
the image to be processed.
[0063] The CTPN text detection algorithm can recognize text regions contained
in the image to
be processed. As shown in Fig. 2, the text region indicates a region of a text
frame that
contains words in the picture.
[0064] CTPN is a word recognizing network model. To enhance recognition speed,
the present
application employs ShuffleNet v2 to serve as the network structure of a
convolutional
neural network model (CNN) contained in CTPN that extracts features.
ShuffleNet can
greatly reduce the computational cost of the model while maintaining
accuracy, and its basic unit is an improvement over a residual unit.
[0065] S30 - employing a CRNN+CTC text recognition algorithm to recognize text
information
samples contained in each text region.
[0066] The CRNN+CTC text recognition algorithm includes a CRNN network model
and a CTC
algorithm. The CRNN network model includes a convolutional neural network
model
(CNN) and a bidirectional long short-term memory network model (LSTM), of
which the
long short-term memory network model (LSTM) here is a variant of the RNN
model.
Preferably, it is possible to use a dense convolutional network model
(DenseNet) to serve
as the convolutional neural network model (CNN).
[0067] The process of employing the text recognition algorithm for recognition
includes:
extracting an image convolution feature of the image to be processed through
the
convolutional neural network CNN model; extracting a sequence feature of the
image
convolution feature through the bidirectional long short-term memory network
LSTM
model; and employing the CTC algorithm to transform to the final recognition
result
according to the extracted sequence feature and through such operations as
duplicate
removal and integration. The CTC algorithm is a loss function capable of
solving the problem that characters cannot be aligned.
[0068] Based on the above algorithms, an embodiment of the present application
provides an
end-to-end text recognition algorithm in which character cutting is not
required. Based
on the CRNN+CTC algorithm, it is possible in the embodiments of the present
application
to recognize each text region, and to obtain a text list, namely target text
data, which
contains plural text information samples.
[0069] S40 - preprocessing the target text data, and eliminating therefrom any
text information
sample whose text length is smaller than a preset threshold.
[0070] By eliminating any text information sample whose text length is smaller
than a preset
threshold, it is possible to enhance the subsequent efficiency in calculating
the edit
distance, and to reduce the interference to the accuracy of subsequent
classification by
noise data in the target text data.
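The preprocessing of S40 amounts to a simple length filter; a minimal sketch, in which the threshold value is illustrative rather than taken from the application:

```python
# S40 sketch: drop recognized text samples shorter than a preset length
# threshold to reduce OCR noise before edit-distance matching.
MIN_LENGTH = 2  # illustrative preset length threshold

def preprocess(samples: list[str], min_length: int = MIN_LENGTH) -> list[str]:
    return [s for s in samples if len(s) >= min_length]
```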
[0071] S50 - generating a weight distance between any text information sample
retained after
preprocessing and each preset classification.
[0072] Specifically, the above process of generating the weight distance
includes the following.
[0073] S51 - generating an edit distance between the text information sample
and each preset
keyword of each preset classification according to a preset edit distance
algorithm.
[0074] It is possible to determine the preset keyword to which each preset
classification
corresponds based on the keyword library stored in the preset document.
[0075] Preset classifications can be classified in advance according to such
classifying rules as
business requirement. The business personnel can collect screenshots received
by the
Q&A system in advance, then determine preset classifications to which these
screenshots
respectively correspond, thereafter screen out representative texts that
appear many times in screenshots under the same preset classification to
serve as the
preset
keywords to which the preset classifications correspond, and sort to obtain
the
corresponding keyword library according to the preset classifications and the
preset
keywords and store the corresponding keyword library in the preset document.
[0076] Based on the above classifying rule, the automatic Q&A system can
realize automatic
response to screenshots captured by users. When it is judged that, in most
scenarios, pictures sent by target users of the automatic Q&A system are
not screenshots, it is also
possible for the business personnel to collect pictures of corresponding
categories and to
classify corresponding preset classifications and keywords according to the
pictures.
[0077] S52 - generating a weighted edit distance between the text information
sample and each
preset keyword of each preset classification according to the edit distance
between the
text information sample and each preset keyword of each classification and a
preset
weight to which each preset keyword corresponds under the corresponding preset
classification.
[0078] Before the weighted edit distance is generated, it is possible to
eliminate any edit distance
that exceeds a preset distance threshold from all edit distances to which all
preset
keywords under the preset classification correspond based on the edit distance
between
the text information sample and each preset keyword of the preset
classification.
[0079] Specifically, after any edit distance that exceeds the preset distance
threshold has been
eliminated, the weighted edit distance between the text information sample and
the preset
keyword can be expressed as:
[0080] weighted edit distance = preset weight * edit distance.
[0081] S53 - determining a weight distance between the text information sample
and each preset
classification according to the weighted edit distance between the text
information sample
and each preset keyword of each preset classification.
[0082] Specifically, taking for example a certain preset classification that
includes three preset
keywords, namely keyword 1, keyword 2, and keyword 3, the weight distance
between
the text information sample and the preset classification can be expressed as:
[0083] weight distance = weighted edit distance between text information
sample and keyword
1 + weighted edit distance between text information sample and keyword 2 +
weighted
edit distance between text information sample and keyword 3.
[0084] S54 - determining the preset classification with the smallest weight
distance to the target
text data as a target classification.
[0085] Specifically, when weight distances of the target text data to all
preset classifications are
all greater than the corresponding preset threshold, it can be determined that
the target
text data pertains to any other type in the preset classifications, i.e., it
is impossible to
determine the business requirement to which the corresponding image to be
processed
corresponds. When the preset classifications and the preset keywords are
stipulated
according to screenshots, images attributed to other types may be pictures
other than
screenshots.
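Steps S53 and S54, together with the other-type fallback of [0085], amount to summing the surviving weighted edit distances per classification and taking the classification with the smallest sum. A minimal sketch under assumed data shapes (all names are illustrative, not the patent's API):

```python
from typing import Dict, List, Optional

def classify(weighted: Dict[str, List[float]],
             thresholds: Dict[str, float]) -> Optional[str]:
    """Pick the preset classification with the smallest weight distance.

    `weighted` maps each classification to the surviving weighted edit
    distances of its keywords (per [0081]-[0083]); `thresholds` holds each
    classification's preset threshold.  Returns None to signal the 'other'
    type of [0085] when every weight distance exceeds its threshold.
    """
    # weight distance per classification: sum of its keywords' weighted distances
    weight_dist = {cls: sum(ds) for cls, ds in weighted.items()}
    if all(weight_dist[c] > thresholds[c] for c in weight_dist):
        return None  # 'other' type: business requirement cannot be determined
    return min(weight_dist, key=weight_dist.get)
```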
[0086] The images determined to be of the other type can be stored in a preset database, so
that the business personnel can periodically review them and set up corresponding
preset categories and preset keywords, thereby enhancing the effectiveness of
responses to users' questions.
[0087] S55 - obtaining an answering statement to which the target classification corresponds from

an answering statement library and returning the answering statement to the
user.
[0088] Specifically, the answering statement library can be prestored in the
preset document, in
which are stored answering statements to which each preset category
corresponds.
[0089] Due to business adjustments, the business personnel may regularly need to update the
answering statement library and the keyword library; the updating process includes
the following.
[0090] S60 - receiving an updating request sent by the business personnel.
[0091] The updating request can include categories to be updated and/or
answering statements
to be updated. The categories to be updated can include addition, deletion,
and
modification of keywords of a certain preset category, or addition and
deletion of the
certain preset category. The answering statements to be updated can include
addition of
answering statements to which a certain preset category corresponds, deletion
of
answering statements to which a certain preset category corresponds, and
modification of
answering statements to which a certain preset category corresponds.
[0092] S61 - updating the preset document according to the categories to be
updated and/or
answering statements to be updated as included in the rule updating request.
[0093] The Q&A system can hot-reload the corresponding preset document, which enhances the
convenience and timeliness of changing the rules of the answering statement library
and the keyword library.
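The update flow of S60/S61 and the hot reload of [0093] can be sketched as follows. The patent only says the keyword and answering statement libraries live in a preset document; the JSON layout, class name, and mtime-based change detection below are all assumptions for illustration:

```python
import json
import os

class RuleStore:
    """Minimal sketch of a hot-reloadable preset document ([0093]).

    The document is assumed to be a JSON file mapping each preset category
    to its keywords and answering statement; this layout is illustrative.
    """

    def __init__(self, path: str):
        self.path = path
        self._mtime = 0.0
        self.rules = {}
        self.reload()

    def reload(self) -> None:
        # Re-read the preset document edited by business personnel (S61)
        self._mtime = os.path.getmtime(self.path)
        with open(self.path, encoding="utf-8") as f:
            self.rules = json.load(f)

    def get(self) -> dict:
        # Hot reload: pick up rule changes without restarting the Q&A system
        if os.path.getmtime(self.path) != self._mtime:
            self.reload()
        return self.rules
```

In practice a file watcher or an explicit reload triggered by the updating request of S60 could replace the mtime polling shown here.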
[0094] Embodiment 2
[0095] Corresponding to the above embodiment, the present application provides
an image

information processing method for use in a Q&A system, as shown in Fig. 3, the
method
comprises the following steps.
[0096] 3100 - receiving a consultation request sent by a user, wherein the
consultation request
includes an image to be processed.
[0097] 3200 - recognizing target text data contained in the image to be
processed according to a
preset recognizing rule.
[0098] Preferably, the preset recognizing rule includes a preset text
detection algorithm and a
preset text recognition algorithm, and the step of recognizing target text
data contained in
the image to be processed according to a preset recognizing rule includes:
[0099] 3211 - employing the preset text detection algorithm to recognize a
text region contained
in the image to be processed;
[0100] 3212 - employing the preset text recognition algorithm to recognize the
text information
samples contained in the text region; and
[0101] 3213 - determining the target text data contained in the image to be processed
according to the text information samples contained in the text region.
[0102] Preferably, the step of employing the preset text detection algorithm
to recognize a text
region contained in the image to be processed includes:
[0103] 3214 - employing a CTPN text detection algorithm to recognize a text
region contained
in the image to be processed; and
[0104] the step of employing the preset text recognition algorithm to
recognize the text
information samples contained in the text region includes:
[0105] 3215 - employing a CRNN+CTC neural network model to recognize the text
information
samples contained in the text region.
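Steps 3211 through 3215 describe a two-stage OCR pipeline: a CTPN model locates text regions and a CRNN+CTC model reads each region. The orchestration can be sketched with the actual models abstracted behind callables; the interfaces below are assumptions for illustration, not the patent's or any library's API:

```python
from typing import Callable, List, Tuple

# Assumed region format: (x, y, width, height) of a detected text region
Box = Tuple[int, int, int, int]

def recognize_text(image: object,
                   detect: Callable[[object], List[Box]],
                   recognize: Callable[[object, Box], str]) -> List[str]:
    """Run detection (e.g. a CTPN model) then recognition (e.g. a CRNN+CTC
    model) over every detected region, collecting the text samples."""
    samples = []
    for box in detect(image):          # steps 3211/3214: locate text regions
        text = recognize(image, box)   # steps 3212/3215: read each region
        if text:
            samples.append(text)
    return samples                     # step 3213: the target text data
```

Injecting the detector and recognizer as callables keeps the pipeline testable and lets either model be swapped without changing the orchestration.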
[0106] 3300 - determining a preset classification to which the target text
data corresponds as a

target classification according to an edit distance between the preprocessed
target text
data and a preset keyword to which each preset classification corresponds.
[0107] Preferably, the step of determining a preset classification to which
the target text data
corresponds as a target classification according to an edit distance between
the
preprocessed target text data and a preset keyword to which each preset
classification
corresponds includes:
[0108] 3311 - generating an edit distance between the target text data and the
preset keyword
according to the preprocessed target text data and the preset keyword to which
each preset
classification corresponds;
[0109] 3312 - generating a weighted edit distance between the target text data
and each preset
keyword according to the edit distance between the target text data and the
preset keyword
and a preset weight to which the preset keyword corresponds;
[0110] 3313 - determining a weight distance between the target text data and
each preset
classification according to the weighted edit distance between the target text
data and
each preset keyword and the preset keyword to which each preset classification
corresponds; and
[0111] 3314 - determining the preset classification with the smallest weight
distance to the target
text data as a target classification.
[0112] Preferably, the preset classifications include other types, and the
method comprises:
[0113] 3315 - determining that the preset classification to which the target
text data corresponds
is the other type when the weight distance between the target text data and
each preset
classification is greater than a preset threshold.
[0114] Preferably, the target text data includes at least two text information
samples, and, before
the step of determining a preset classification to which the target text data
corresponds as
a target classification according to an edit distance between the preprocessed
target text
data and a preset keyword to which each preset classification corresponds, the
method

further comprises:
[0115] 3316 - eliminating from the target text data any text information
sample whose text length
is smaller than a preset length threshold, and generating the preprocessed
target text data.
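The preprocessing of step 3316, dropping text information samples shorter than a preset length threshold, is a simple filter; a sketch with illustrative names:

```python
from typing import List

def preprocess(samples: List[str], min_length: int) -> List[str]:
    """Eliminate samples whose text length falls below the preset length
    threshold (step 3316); very short fragments carry little signal for
    classification."""
    return [s for s in samples if len(s) >= min_length]
```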
[0116] Preferably, the step of generating an edit distance between the target
text data and the
preset keyword according to the preprocessed target text data and the preset
keyword to
which each preset classification corresponds includes:
[0117] 3318 - generating an edit distance between each text information sample
and the preset
keyword according to a preset edit distance algorithm;
[0118] 3319 - eliminating from all edit distances any edit distance that
exceeds a preset distance
threshold; and
[0119] 3320 - generating an edit distance between the target text data and the
preset keyword
according to the edit distances after elimination.
[0120] 3400 - obtaining an answering statement to which the target
classification corresponds
from an answering statement library and returning the answering statement to
the user.
[0121] Preferably, the preset keyword to which each preset classification
corresponds is
prestored in a keyword library stored in a preset document, the answering
statement
library is prestored in the preset document, and the method comprises:
[0122] 3500 - receiving a rule updating request; and
[0123] 3510 - updating the preset document according to categories to be
updated and/or
answering statements to be updated as included in the rule updating request.
[0124] Embodiment 3
[0125] Corresponding to Embodiment 1 and Embodiment 2, as shown in Fig. 4, the
present
application provides an image information processing device for use in a Q&A
system,
the device comprises:

[0126] a receiving module 410, for receiving a consultation request sent by a
user, wherein the
consultation request includes an image to be processed;
[0127] a recognizing module 420, for recognizing target text data contained in
the image to be
processed according to a preset recognizing rule;
[0128] a judging module 430, for determining a preset classification to which
the target text data
corresponds as a target classification according to an edit distance between
the
preprocessed target text data and a preset keyword to which each preset
classification
corresponds; and
[0129] an answering module 440, for obtaining an answering statement to which
the target
classification corresponds from an answering statement library and returning
the
answering statement to the user.
[0130] Preferably, the judging module 430 can be further employed for
generating an edit
distance between the target text data and the preset keyword according to the
preprocessed target text data and the preset keyword to which each preset
classification
corresponds; generating a weighted edit distance between the target text data
and each
preset keyword according to the edit distance between the target text data and
the preset
keyword and a preset weight to which the preset keyword corresponds;
determining a
weight distance between the target text data and each preset classification
according to
the weighted edit distance between the target text data and each preset
keyword and the
preset keyword to which each preset classification corresponds; and
determining the
preset classification with the smallest weight distance to the target text
data as a target
classification.
[0131] Preferably, the preset classifications include other types, and the
judging module 430 can
be further employed for determining that the preset classification to which
the target text
data corresponds is the other type when the weight distance between the target
text data
and each preset classification is greater than a preset threshold.

[0132] Preferably, the target text data includes at least two text information
samples, and the
judging module 430 can be further employed for eliminating from the target
text data any
text information sample whose text length is smaller than a preset length
threshold, and
generating the preprocessed target text data.
[0133] Preferably, the judging module 430 can be further employed for
generating an edit
distance between each text information sample and the preset keyword according
to a
preset edit distance algorithm; eliminating from all edit distances any edit
distance that
exceeds a preset distance threshold; and generating an edit distance between
the target
text data and the preset keyword according to the edit distances after
elimination.
[0134] Preferably, the recognizing module 420 can be further used for employing the preset
text detection algorithm to recognize a text region contained in the image to
be processed;
employing the preset text recognition algorithm to recognize the text
information samples
contained in the text region; and determining the target text data contained in the
image to be processed according to the text information samples contained in the
text region.
[0135] Preferably, the recognizing module 420 can be further used for
employing a CTPN text
detection algorithm to recognize a text region contained in the image to be
processed; and
employing a CRNN+CTC neural network model to recognize the text information
samples contained in the text region.
[0136] Preferably, the receiving module 410 can be further employed for
receiving a rule
updating request; and the device further comprises an updating module for
updating the
preset document according to categories to be updated and/or answering
statements to be
updated as included in the rule updating request.
[0137] Embodiment 4

[0138] Corresponding to all the foregoing embodiments, this embodiment of the
present
application provides electronic equipment that comprises:
[0139] one or more processor(s); and a memory, associated with the one or more processor(s)
and storing program instructions that, when read and executed by the one or more
processor(s), perform the following operations:
[0140] receiving a consultation request sent by a user, wherein the
consultation request includes
an image to be processed;
[0141] recognizing target text data contained in the image to be processed
according to a preset
recognizing rule;
[0142] determining a preset classification to which the target text data
corresponds as a target
classification according to an edit distance between the preprocessed target
text data and
a preset keyword to which each preset classification corresponds; and
[0143] obtaining an answering statement to which the target classification
corresponds from an
answering statement library and returning the answering statement to the user.
[0144] Fig. 5 exemplarily illustrates the framework of the electronic
equipment that can
specifically include a processor 1510, a video display adapter 1511, a
magnetic disk
driver 1512, an input/output interface 1513, a network interface 1514, and a
memory 1520.
The processor 1510, the video display adapter 1511, the magnetic disk driver
1512, the
input/output interface 1513, the network interface 1514, and the memory 1520
can be
communicably connected with one another via a communication bus 1530.
[0145] The processor 1510 can be embodied as a general CPU (Central Processing
Unit), a
microprocessor, an ASIC (Application Specific Integrated Circuit), or one or
more
integrated circuit(s) for executing relevant program(s) to realize the
technical solutions
provided by the present application.
[0146] The memory 1520 can be embodied in such a form as a ROM (Read Only Memory), a
RAM (Random Access Memory), a static storage device, or a dynamic storage device.

The memory 1520 can store an operating system 1521 for controlling the running
of the
electronic equipment 1500, and a basic input/output system (BIOS) 1522 for
controlling
lower-level operations of the electronic equipment 1500. In addition, the
memory 1520
can also store a web browser 1523, a data storage administration system 1524,
and an
icon font processing system 1525, etc. The icon font processing system 1525
can be an
application program that specifically realizes the aforementioned various step
operations
in the embodiments of the present application. In summary, when the technical solutions
provided by the present application are realized via software or firmware, the
relevant program codes are stored in the memory 1520, and invoked and executed
by the
processor 1510. The input/output interface 1513 is employed to connect with an
input/output module to realize input and output of information. The
input/output module
can be equipped in the device as a component part (not shown in the drawings),
and can
also be externally connected with the device to provide corresponding
functions. The
input means can include a keyboard, a mouse, a touch screen, a microphone, and
various
sensors etc., and the output means can include a display screen, a
loudspeaker, a vibrator,
an indicator light etc.
[0147] The network interface 1514 is employed to connect to a communication
module (not
shown in the drawings) to realize intercommunication between the current
device and
other devices. The communication module can realize communication in a wired
mode
(via USB, network cable, for example) or in a wireless mode (via mobile
network, Wi-Fi,
Bluetooth, etc.).
[0148] The bus 1530 includes a pathway for transmitting information between the various
components of the device (such as the processor 1510, the video display adapter 1511,
the magnetic disk driver 1512, the input/output interface 1513, the network interface
1514, and the memory 1520).
[0149] Additionally, the electronic equipment 1500 may further obtain information on specific
collection conditions from a virtual resource object collection condition information
database 1541 for condition judgment, and so on.
[0150] It should be noted that, although only the processor 1510, the video display adapter
1511, the magnetic disk driver 1512, the input/output interface 1513, the network
interface 1514, the memory 1520, and the bus 1530 are illustrated for the
aforementioned equipment, in specific implementation the equipment may further include
other components required for normal operation. In addition, as persons skilled in
the art can understand, the aforementioned equipment may also include only the
components necessary for realizing the solutions of the present application, rather
than all the components illustrated.
[0151] As can be seen from the description of the aforementioned embodiments, persons
skilled in the art will clearly understand that the present application can be
realized through software plus a general hardware platform. Based on such
understanding, the technical solutions of the present application, or the
contributions made thereby over the state of the art, can essentially be embodied in
the form of a software product. Such a computer software product can be stored in a
storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes
plural instructions enabling computer equipment (such as a personal computer, a cloud
server, or a network device) to execute the methods described in the various
embodiments, or in some sections of the embodiments, of the present application.
[0152] The various embodiments in this Description are described progressively; identical or
similar sections among the various embodiments can be inferred from one another, and
each embodiment stresses what differs from the others. In particular, since the
system or device embodiment is essentially similar to the method embodiment, its
description is relatively brief, and the relevant sections can be inferred from the
corresponding sections of the method embodiment. The system or device embodiment
described above is merely exemplary in nature: units described as separate parts may
or may not be physically separate, and parts displayed as units may or may not be
physical units; that is, they can be located at a single site or distributed over a
plurality of network units. Some or all of the modules can be selected to realize the
objectives of the embodied solutions according to practical requirements, which
persons ordinarily skilled in the art can understand and implement without creative
effort. What the above describes is directed merely to preferred embodiments of the
present invention and is not meant to restrict the present invention; any
modification, equivalent substitution, or improvement made within the spirit and
scope of the present invention shall be covered by its protection scope.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status


Event History

Description Date
Examiner's Report 2024-08-06
Amendment Received - Response to Examiner's Requisition 2024-05-29
Amendment Received - Voluntary Amendment 2024-05-10
Examiner's Report 2024-01-11
Inactive: Q2 failed 2024-01-02
Amendment Received - Voluntary Amendment 2023-12-05
Amendment Received - Response to Examiner's Requisition 2023-12-05
Examiner's Report 2023-08-08
Inactive: Report - No QC 2023-08-04
Amendment Received - Voluntary Amendment 2023-06-28
Amendment Received - Response to Examiner's Requisition 2023-06-28
Examiner's Report 2023-02-28
Inactive: Report - QC passed 2023-02-24
Inactive: Cover page published 2023-02-07
Advanced Examination Determined Compliant - paragraph 84(1)(a) of the Patent Rules 2023-01-11
Letter sent 2023-01-11
Application Published (Open to Public Inspection) 2022-12-21
Inactive: Office letter 2022-11-17
Letter sent 2022-07-21
Filing Requirements Determined Compliant 2022-07-21
Inactive: First IPC assigned 2022-07-15
Inactive: IPC assigned 2022-07-15
Inactive: IPC assigned 2022-07-15
Inactive: IPC assigned 2022-07-15
Filing Requirements Determined Compliant 2022-07-14
Letter sent 2022-07-14
Priority Claim Requirements Determined Compliant 2022-07-13
Letter Sent 2022-07-13
Request for Priority Received 2022-07-13
Application Received - Regular National 2022-06-21
Request for Examination Requirements Determined Compliant 2022-06-21
Amendment Received - Voluntary Amendment 2022-06-21
Amendment Received - Voluntary Amendment 2022-06-21
Inactive: Advanced examination (SO) fee processed 2022-06-21
Inactive: Advanced examination (SO) 2022-06-21
Inactive: Pre-classification 2022-06-21
All Requirements for Examination Determined Compliant 2022-06-21
Inactive: QC images - Scanning 2022-06-21

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2023-12-15

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
Request for examination - standard 2026-06-22 2022-06-21
Advanced Examination 2022-06-21 2022-06-21
Application fee - standard 2022-06-21 2022-06-21
MF (application, 2nd anniv.) - standard 02 2024-06-21 2023-12-15
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
CHAO CHEN
ZHE CHU
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Claims 2024-05-09 27 1,481
Claims 2023-06-27 57 3,117
Claims 2023-12-04 56 2,986
Drawings 2023-12-04 4 244
Description 2022-06-20 22 942
Claims 2022-06-20 4 177
Abstract 2022-06-20 1 22
Drawings 2022-06-20 4 313
Representative drawing 2023-02-06 1 34
Claims 2022-06-20 54 2,958
Examiner requisition 2024-08-05 4 194
Examiner requisition 2024-01-10 3 176
Amendment / response to report 2024-05-09 87 3,465
Courtesy - Acknowledgement of Request for Examination 2022-07-12 1 424
Courtesy - Filing certificate 2022-07-20 1 568
Courtesy - Filing certificate 2022-07-13 1 568
Amendment / response to report 2023-06-27 123 4,896
Examiner requisition 2023-08-07 5 235
Amendment / response to report 2023-12-04 121 4,955
New application 2022-06-20 8 250
Amendment / response to report 2022-06-20 55 2,157
Courtesy - Office Letter 2022-11-16 2 227
Courtesy - Advanced Examination Request - Compliant (SO) 2023-01-10 1 162
Examiner requisition 2023-02-27 4 198