Patent Summary 3064116

(12) Patent Application: (11) CA 3064116
(54) French Title: SYSTEMES ET PROCEDES POUR EXECUTER UNE CONVERSATION INTERACTIVE AUTOMATISEE AVEC UN UTILISATEUR
(54) English Title: SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERACTIVE CONVERSATION WITH A USER
Status: Report sent
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 40/20 (2020.01)
  • G06F 40/30 (2020.01)
  • G06F 40/40 (2020.01)
  • G06N 3/02 (2006.01)
  • G06Q 40/02 (2012.01)
(72) Inventors:
  • CHARTON, ERIC (Canada)
  • BONNELL, MATTHEW (Canada)
  • MARCEAU, LOUIS (Canada)
  • GUYMONT, JONATHAN (Canada)
(73) Owners:
  • BANQUE NATIONALE DU CANADA (Canada)
(71) Applicants:
  • BANQUE NATIONALE DU CANADA (Canada)
(74) Agent: ROBIC AGENCE PI S.E.C./ROBIC IP AGENCY LP
(74) Co-agent:
(45) Issued:
(22) Filed: 2019-12-06
(41) Open to Public Inspection: 2020-06-07
Examination requested: 2019-12-06
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No.    Country/Territory    Date
3026936            Canada               2018-12-07

Abstracts

English Abstract

A dialogue system is a computer system that converses with a human via a user interface. In some embodiments, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the dialogue system. The number of responses (e.g. in the form of alternative questions) may be progressively increased during the interaction with a user, and this may have the effect of increasing the overall robustness of the dialogue system.

Claims

Note: The claims are shown in the official language in which they were submitted.


CLAIMS:

1. A computer-implemented method for performing automated interactive conversation with a user, the method comprising:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. deriving from the natural language input a possible question the user might be asking;
ii. conveying the possible question to the user through the user interface;
iii. processing user input provided at the user interface indicating that the possible question is incorrect;
iv. deriving a series of alternate questions that the user might be asking;
v. presenting the series of alternate questions to the user through the user interface.

2. The computer-implemented method of claim 1, wherein the instructions are further configured for: extracting at least one keyword from the natural language input and deriving the series of alternate questions based on the at least one keyword.

3. The computer-implemented method of claim 2, wherein deriving the possible question the user might be asking comprises determining a user intent from the natural language input.

4. The computer-implemented method of claim 3, wherein deriving the possible question the user might be asking is performed without the at least one keyword.

5. The computer-implemented method of claim 3 or claim 4, wherein an algorithm for extracting the at least one keyword and/or an algorithm for determining the user intent is modified based on an indication from the user of which one of the alternate questions is a correct question.

6. The computer-implemented method of any one of claims 1 to 5, wherein the instructions are further configured for: receiving an indication, from the user, that a particular question of the series of alternate questions is correct, and presenting to the user through the user interface an answer to the particular question.

7. The computer-implemented method of claim 6, further comprising generating the answer to the particular question using user-specific financial information.

8. The computer-implemented method of claim 7, wherein the particular question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.

9. The computer-implemented method according to claim 3, 4 or 5, wherein the user intents associated with the possible question and with the alternate questions are each associated with a confidence score, and wherein in step ii, the possible question first formulated is derived from the user intent having the highest confidence score, and wherein in step iv, the series of alternate questions is derived from the user intents having the next highest confidence scores.

10. The computer-implemented method according to claim 5, wherein the algorithm for extracting the at least one keyword and/or the algorithm for determining the user intent is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.

11. The computer-implemented method according to claim 8, wherein the answer is modulated based on the financial transactions previously performed by the user and/or accounts held by the user, whereby different users are provided different answers in response to the possible question or alternate questions, once confirmed.

12. The computer-implemented method according to any one of claims 1 to 11, wherein steps iii to v are repeated until the user provides an indication, via the user interface, that one of the alternate questions corresponds to the original question.

13. The computer-implemented method according to any one of claims 1 to 12, wherein in step iv the number of alternate questions is progressively increased during interaction with the user.

14. A computer-implemented method for performing automated interactive conversation with a user, the method comprising:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting keywords from the natural language input;
ii. determining from the keywords a user intent;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining the answer associated with the confirmed question;
vii. presenting the answer to the user through the user interface.

15. The computer-implemented method according to claim 14, comprising a step of determining a confidence score associated with the user intent, and wherein step iv of conveying the possible question is performed when the user intent associated therewith has a confidence score above a predetermined threshold.

16. A system for performing automated interactive conversation with a user, the system comprising:
a. a user interface configured to receive a natural language input from the user;
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. derive from the natural language input a possible question the user might be asking;
ii. convey the possible question to the user through the user interface;
iii. process user input at the user interface indicating whether the possible question is incorrect;
iv. derive a series of alternate questions that the user might be asking;
v. present the series of alternate questions to the user through the user interface.

17. The system of claim 16, wherein the data processing unit comprises a keyword extractor for extracting keywords from the natural language input and an intent classifier for determining from the keywords and/or natural language input a user intent and a confidence score associated with the user intent, wherein the data processing unit is configured to derive the series of alternate questions based on the at least one keyword.

18. The system according to claim 17, wherein the data processing unit comprises:
a response generator for performing steps i. to v., the response generator comprising:
a question identifier module deriving the possible question and alternate questions from the keywords and/or the user intent;
an answer identifier module processing user input at the user interface indicating whether the possible question or one of the alternate questions is correct; and
an answer generator module presenting the answer associated to one of the alternate questions, once confirmed as correct by the user.

19. The system of claim 17, wherein the data processing unit is configured to derive the possible question the user might be asking without using the at least one keyword.

20. The system of any one of claims 16 to 19, wherein the data processing unit is configured to modify an algorithm for extracting the at least one keyword and/or modify an algorithm for determining the user intent based on an indication from the user of which one of the alternate questions is a correct question.

21. The system of claim 20, wherein the algorithm is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.

22. The system of claim 18, wherein the answer generator module is configured to generate the answer to the possible or alternate question using user-specific financial information.

23. The system of claim 22, wherein the possible or alternate question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.

Description

Note: The descriptions are shown in the official language in which they were submitted.

SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERACTIVE CONVERSATION WITH A USER

FIELD

[1] The following relates to a computer-implemented dialogue system for conversing with a human.

BACKGROUND

[2] A dialogue system is a computer system that converses with a human via a user interface. Many dialogue systems utilize a computer program to conduct the conversation using auditory and/or textual methods. A common name for such a computer program is a "chatbot". A chatbot may be implemented using a natural language processing system.

[3] An organization may use a dialogue system to help support and scale its customer relations efforts. A dialogue system may be used to provide a wide variety of information to many different users. For example, the dialogue system may be used to perform automated interactive conversation with users in order to provide answers to questions posed by the users. Questions originating from different users may be very different in nature, and the questions may be received and answered at any time of day or night. The users of the dialogue system may be customers or potential customers of the organization.

[4] Current dialogue systems have a technical problem in that they are often not robust. "Robustness" refers to the ability of the dialogue system to satisfactorily answer a question posed by a human user. Some current dialogue systems may provide less than 50% correct/satisfactory answers. If the dialogue system returns an incorrect or unsatisfactory answer too often, then the dialogue system will not be adopted by human users. Also, the organization's reputation may be negatively impacted.

SUMMARY

[5] One way to try to increase the robustness of a dialogue system is to invest significant resources in the writing of questions and answers, and/or to invest significant resources in enriching the ability of the dialogue system to recognize intentions. Technical implementations often focus less on linguistics and more on improvements to algorithms/models. Achieving satisfactory results may be expensive. In some cases, the results are not even satisfactory because of a technical challenge: the combinatory complexity of human language is boundless, and so it is difficult in technical implementation to predict the natural language a user could use to ask a particular question. The system may be highly based on recursion, and manual dialogue tree manufacture may not be a viable solution.

[6] Instead, in some embodiments disclosed herein, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the system. An interactive process is introduced to help facilitate the exchange between the user and the dialogue system to try to increase the level of robustness of the dialogue system.

[7] In some embodiments, the number of responses in the form of questions may be progressively increased during an interaction with a user. This may have the effect of increasing the overall robustness of the dialogue system. For example, the responses that are progressively increased may be questions that the system determines the user may be asking.

[8] Another problem with dialogue systems is that a response formulated by a dialogue system is not customized based on the user. For example, if two different users ask the exact same question, e.g. "What is the monthly fee for your savings account?", the answer would be the same. However, one user may actually be entitled to a preferable monthly fee compared to another user, e.g. based on the volume of monthly financial transactions associated with the user's bank accounts or based on the number of accounts held by the user.

[9] In some embodiments, a technical solution is provided in which a response returned by a dialogue system is generated based on financial information specific to the user. The response may be in reply to a user's finance-related question or finance-related action. The response may be a question or an answer or an action.

[10] According to a possible embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. deriving from the natural language input a possible question the user might be asking;
ii. conveying the possible question to the user through the user interface;
iii. processing user input provided at the user interface indicating that the possible question is incorrect;
iv. deriving a series of alternate questions that the user might be asking;
v. presenting the series of alternate questions to the user through the user interface.

[11] According to another embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting keywords from the natural language input;
ii. determining from the keywords a user intent;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining the answer associated with the confirmed question; and
vii. presenting the answer to the user through the user interface.

[12] According to another embodiment, a system for performing automated interactive conversation with a user is provided. The system comprises:
a. a user interface configured to receive a natural language input from the user;
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. derive from the natural language input a possible question the user might be asking;
ii. convey the possible question to the user through the user interface;
iii. process user input at the user interface indicating whether the possible question is incorrect;
iv. derive a series of alternate questions that the user might be asking;
v. present the series of alternate questions to the user through the user interface.

BRIEF DESCRIPTION OF THE DRAWINGS

[13] Embodiments will be described, by way of example only, with reference to the accompanying figures wherein:

[14] FIG. 1 is a block diagram of a computer implemented system for performing automated interactive conversation with a user, according to one embodiment;

[15] FIG. 1A is a block diagram of a portion of the computer implemented system relating to keyword extraction and intent classification, according to one embodiment;

[16] FIG. 1B is a block diagram of a portion of the computer implemented system relating to the response generator for generating a first response, according to an embodiment;

[17] FIG. 1C is a block diagram of a portion of the computer implemented system relating to the response generator for generating alternate responses, according to a possible embodiment;

[18] FIG. 1D illustrates a flowchart of the exchanges occurring between the front end and back end of the system, according to a possible embodiment;

[19] FIG. 2 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to one embodiment;

[20] FIGs. 3 and 4 illustrate example message exchanges on a user interface;

[21] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment;

[22] FIG. 6 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to another embodiment;

[23] FIG. 7 illustrates a flowchart of a computer-implemented method for interacting with a user, according to another embodiment; and

[24] FIG. 8 illustrates an exemplary flowchart of a portion of the computer-implemented method wherein the user's profile is taken into consideration.

DETAILED DESCRIPTION

[25] For illustrative purposes, specific embodiments and examples will be explained in greater detail below in conjunction with the figures.

[26] FIG. 1 is a block diagram of a computer implemented system 102 for performing automated interactive conversation with a user, according to one embodiment. The system 102 implements a dialogue system, also commonly referred to as a "chatbot".

[27] The system 102 includes a user interface 104 for receiving a natural language input originating from the user, and for providing a response to the user. The attributes of the user interface 104 are implementation specific and depend on how the user is interacting with the system 102. Two examples of a user interface 104 are illustrated in FIG. 1. In one example, the user interface 104 interfaces with a telephone handset belonging to the user. The telephone handset includes a transmitter through which the user speaks and a receiver through which the user hears the response. A speech recognition module 106 is included as part of the system 102 in order to convert from speech to text. As another example, the user interface 104 may interface with a graphical user interface (GUI) on a computing device, such as on the user's mobile device. The user may use a keyboard or touchscreen to provide a text input, and the response would be presented as text on the display screen hosting the GUI. The user interface 104 is the component of the system 102 that interfaces with users and is meant to refer to the components of the interface that belong to the system 102, rather than to the user device. Throughout the description, a "response" is in reply to a communication sent to or captured through the user interface 104, while an "answer" is the resolution to the original question asked by the user. In the context of the present application, the "answer" is the information the user is looking for. A "response" is thus more generic than an "answer".

[28] The system 102 further includes a data processing unit 110, which may implement a natural language processing system. The data processing unit 110 includes a keyword extractor 112, an intent classifier 114, a response generator 116, and a learning component 118.

[29] Still referring to FIG. 1, and also to FIG. 1A, the keyword extractor 112 receives a natural language input originating from the user. The input received at the keyword extractor 112 is a string of text. In general, the string of text includes multiple words, although in some cases it could be that the string of text is only a single word. The string of text may convey a question asked by the user, or a user instruction, or a user's response to a question that was asked by the system 102. The keyword extractor 112 attempts to extract words and/or phrases from the string of text. If any keywords are extracted, the extracted keywords are stored in a memory, e.g. memory 122.

[30] In some embodiments, the keyword extractor 112 may recognize properties that indicate a particular word in the string of text may be a keyword, such as the use of a date, capital letter, brand name, recognized phrase, etc. Examples of keyword extraction algorithms that may be implemented by the keyword extractor 112 are described in:

(1) Jean-Louis, L., Gagnon, M., and Charton, E., "A knowledge-base oriented approach for automatic keyword extraction", Computación y Sistemas, 2013, vol. 17, no. 2, pp. 187-196; and

(2) Bechet, F., and Charton, E., "Unsupervised knowledge acquisition for extracting named entities from speech", in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 5338-5341, March 2010.

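The disclosure contains no source code for the keyword extractor 112. Purely as an illustrative sketch of the surface cues listed in paragraph [30] (dates, capital letters, brand names, recognized phrases), a minimal extractor might look as follows; the function name, the KNOWN_ENTITIES lexicon and the regular expressions are assumptions for illustration, not part of the patent, which instead cites the knowledge-base-oriented approaches above.

    import re

    # Illustrative domain lexicon; the patent instead cites knowledge-base
    # oriented extraction (references (1) and (2) above).
    KNOWN_ENTITIES = {"mastercard", "visa", "cashback", "savings account", "mortgage"}

    def extract_keywords(text: str) -> list[str]:
        """Extract candidate keywords using the surface cues named in [30]:
        known brand/domain phrases, dates, and mid-sentence capitalized words."""
        lowered = text.lower()
        keywords = [entity for entity in KNOWN_ENTITIES if entity in lowered]
        # Dates such as 2019-12-06 are treated as keyword candidates.
        keywords += re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
        # A capitalized word that is not sentence-initial may be a proper noun.
        for match in re.finditer(r"(?<!^)\b[A-Z][a-z]+\b", text):
            keywords.append(match.group().lower())
        return sorted(set(keywords))

    print(extract_keywords("What is the rate on your cashback Mastercard?"))
    # -> ['cashback', 'mastercard']
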
[31] In some embodiments, a keyword extraction algorithm is used involving named entity recognition based on knowledge representation of the semantic domain covered by the dialog system application. For example, the semantic domain can relate to banking in general or to more specific domains such as insurance, loans, investment, trading, etc.

[32] The intent classifier 114 also receives the natural language input originating from the user in the form of a string of text and analyses the string of text to determine the intent of the user. In some embodiments, the words in the string of text are compared to a library of intents and entity values. For example, if the user asked the question "What is the rate on your cashback Mastercard?", then the intent classifier 114 may match the word "rate" to an intent "get rate" that is stored in a library of intents. The intent classifier 114 may determine that the entity value relating to that intent is "cashback" by the presence of the word "cashback". In such a scenario, the intent classifier 114 therefore determines that the user is asking for a cashback rate. The presence of the word "Mastercard" may cause the intent classifier 114 to determine that the cashback rate requested by the user is the cashback rate for the MastercardTM brand credit card. The intent classifier 114 may associate a confidence value with the determined intent. The confidence value will be referred to as a "confidence score", and it quantifies how confident the intent classifier 114 is regarding the correctness of its determined intent. For example, the intent determined by the intent classifier 114 may be "get cashback rate for MastercardTM brand credit card". However, this intent is not necessarily correct, e.g. there is some ambiguity from the string of text as to whether the rate requested is the cashback rate or another type of rate instead (e.g. the interest rate for the MastercardTM brand credit card). Therefore, the confidence score may not be 100%, but may instead have a lower value, e.g. 75%.

[33] The intent classifier 114 may be implemented with a neural network. The neural network receives as input a text string from the user interface 104, and outputs a plurality of intents, each associated with a confidence value or confidence score. For example, the first intent "get cashback rate for MastercardTM brand credit card" can be associated with a confidence score of 75%, a second intent "get interest rate for MastercardTM brand credit card" can be associated with a confidence score of 60%, and so on. The confidence score is thus an estimation of the likelihood or probability that the neural network has correctly interpreted the user intent from the text string.

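As a rough sketch of the interface just described, and not of the actual network disclosed, an intent classifier that returns every candidate intent ranked by confidence score could be typed as follows; the ScoredIntent name and the hard-coded scores (which mirror the 75%/60% example above) are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ScoredIntent:
        intent: str        # e.g. "get cashback rate for Mastercard credit card"
        confidence: float  # estimated probability the intent was understood

    def classify_intent(text: str) -> list[ScoredIntent]:
        """Stand-in for the intent classifier 114: return candidate intents
        ranked by confidence score. A real implementation would run the text
        string through the trained neural network; the scores below simply
        mirror the example in the description."""
        candidates = [
            ScoredIntent("get cashback rate for Mastercard credit card", 0.75),
            ScoredIntent("get interest rate for Mastercard credit card", 0.60),
        ]
        return sorted(candidates, key=lambda c: c.confidence, reverse=True)
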
[34] One example of an algorithm that may be implemented by the intent classifier 114 is described in: Serban, I. V., Sordoni, A., Bengio, Y., Courville, A. C., and Pineau, J., "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models", in Association for the Advancement of Artificial Intelligence (AAAI), Vol. 16, pp. 3776-3784, February 2016.

[35] In other embodiments, the intent classifier 114 instead works by simply looking for matches between words in the natural language input and words in prewritten questions that are stored in memory 122.

[36] Referring to FIGs. 1, 1B and 1D, the response generator 116 receives one or more intents from the intent classifier 114, determines the question that the user is possibly asking based on the intent(s), and returns the possible question to the user for verification. The keyword extractor 112 and the intent classifier 114 can form part of a question analyser module 158, as illustrated in FIG. 1D. In possible embodiments, a single intent is provided as an input to the response generator 116, corresponding to the intent having the highest confidence score. In some embodiments, if the intent has a confidence score above a predetermined confidence threshold, the response generator 116 may respond with a reformulated question to obtain validation from the user, the reformulated question being an equivalent of what the question identifier 160 has determined as the initial/original question. In other embodiments, the response generator 116 may respond with a reformulated question regardless of the value of the confidence score. The answer to the possible question may also be returned at this step of the process, along with the reformulated question for validation.

[37] Still referring to FIGs. 1, 1B and 1D, a first response generator module 116a is called or executed, which comprises a question identifier module 160, an answer identifier module 162 and an answer generator module 164. More specifically, the question identifier 160 processes the intent to determine a question that most likely matches the question conveyed by the text string. The reformulated question is sent to the user interface, to validate whether it has been correctly determined. The answer provided by the user through the user interface 104 is analysed by the question identifier 160, which may also require involvement of the intent classifier 114. If the user has confirmed correctness of the reformulated question, the first response generator 116a then identifies, formulates and returns the answer, via the answer identifier 162 and the answer generator module 164. The question identifier module 160 can thus generate a single question or alternate questions, and can analyse the response from the user indicating whether the question identifier 160 has correctly determined the original question.

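To make the control flow concrete, here is a minimal sketch of the dispatch performed by the first response generator 116a, reusing the ScoredIntent type from the sketch above. The threshold value, the to_question() rendering and the fallback wording are assumptions; the patent does not fix any of them.

    CONFIDENCE_THRESHOLD = 0.7  # illustrative; the disclosure leaves the value open

    def to_question(intent: str) -> str:
        # Trivial surface rendering; the question identifier module 160
        # would perform real question generation.
        return "What is the " + intent.removeprefix("get ") + "?"

    def first_response(intents: list[ScoredIntent]) -> str:
        """Sketch of the first response generator 116a: when the top intent
        clears the threshold, return one reformulated question for validation;
        otherwise fall back to a list of alternate questions (116b)."""
        top = intents[0]
        if top.confidence >= CONFIDENCE_THRESHOLD:
            return ("I think I understand your question. Can you verify for me "
                    "that your question is: " + to_question(top.intent))
        alternates = [to_question(c.intent) for c in intents]
        return "Did you mean one of the following? " + " / ".join(alternates)
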
[38] In some embodiments, answers to the verified questions may be stored in memory and simply retrieved using a mapping between the verified question and the answer. In other embodiments, the answer identifier 162 may need to send a request over a network to obtain the answer. For example, if the verified question is "What is the cashback rate for the MastercardTM brand credit card?", then the answer identifier 162 may query a database storing the cashback rate in order to obtain the cashback rate, and then formulate and send the response to the user, e.g. "The cashback rate for our Mastercard is 1%".

[39] Still referring to FIGs. 1 and 1D, and also to FIG. 1C, if the user has indicated that the reformulated question is incorrect, alternate questions are generated by the alternate response generator 116b, which can comprise or interact with the same modules 158 (including 114), 160, 162, 164 previously described. In this case, the extracted keywords, if any, are fed to the intent classifier 114, and the question identifier module 160 generates a list of alternate questions based on the intents having the highest confidence scores. The list of alternate questions is then sent to the user interface 104 for confirmation by the user. The question identifier module 160 analyses the user's response, which can be a selection of one of the alternate questions, or an indication that none of the alternate questions matched the original question. An answer is provided by the answer generator module 164, once the original question has been validated. Advantageously, the question identifier 160 forces the user to confirm or select the correct question from the list before providing an answer, which not only increases the success rate of providing satisfactory/useful answers, but also allows the learning component 118 to continuously improve the process of identifying the initial question.

[40] The learning component 118 thus adapts the keyword extractor 112 and/or intent classifier 114 based on the answers provided by the user, as discussed in more detail later. For example, the learning component 118 may readjust the weights and biases applied by the neural network for determining intents based on text string inputs and/or extracted keywords.

[41] Operation of the intent classifier 114, response generators 116a or 116b, and learning component 118 will be explained in more detail below in relation to FIG. 2.

[42] The system 102 further includes a memory 122 for storing information used by the data processing unit 110. For example, the memory 122 may store a library of intents, the extracted keywords from the keyword extractor 112, responses or partial responses preprogrammed for use by the response generator 116, etc. The memory 122 can comprise a combination of RAM and ROM and can be part of a single server or distributed across several servers and/or databases, either locally or remotely on cloud-based servers.

[43] The data processing unit 110 and its components (e.g. the keyword extractor 112, intent classifier 114, response generator 116, and learning component 118) may be implemented by one or more processors that execute instructions (software) stored in memory. The memory in which the instructions are stored may be memory 122 or another memory not illustrated. The instructions, when executed, cause the data processing unit 110 and its components to perform the operations described herein, e.g. extracting keywords from the user input, classifying intent, computing a confidence score, formulating the response to send to the user, updating one or more algorithms based on input from the user, etc. In some embodiments, the one or more processors consist of a central processing unit (CPU).

[44] Alternatively, some or all of the data processing unit 110 and its components may be implemented using dedicated circuitry, such as an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or a programmed field programmable gate array (FPGA) for performing the operations of the data processing unit 110 and its components.

[45] In some embodiments, in order to try to increase the robustness of the dialogue system, an interactive process is used in which the number of responses may be progressively increased during an interaction with a user. Example embodiments are provided below.

[46] FIG. 2 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, e.g. in order to provide an answer to a question from the user, according to one embodiment.

[47] In step 202, a natural language input originating from a user is received via the user interface 104. The natural language input is a string of text that conveys a question.

[48] In step 204, the keyword extractor 112 attempts to extract keywords from the natural language input. If one or more keywords are extracted, then they are stored in memory 122.

[49] In step 206, the intent classifier 114 determines an intent from the natural language input. The intent classifier 114 also determines the confidence score for its determined intent. In step 207, if the confidence score is below a threshold, then the method proceeds to step 221. Otherwise, if the confidence score is above the threshold, it indicates that the system 102 is confident enough in its determined intent to return a single question for verification, and the method proceeds to step 208.

[50] In step 208, the response generator 116a returns the question for verification by the user, via the question identifier module 160 described above.

[51] In step 209, the data processing unit 110 determines whether the question returned in step 208 was verified as correct by the user. Step 209 may include receiving a natural language input from the user, and the intent classifier 114 determining from the intent of the natural language input whether or not the user has verified the correctness of the question, via the question identifier module 160.

[52] For example, the original natural language input received in step 202 may ask "What is the rate on your cashback MasterCard?" The intent classifier 114 may determine with high enough confidence that the intent is "get cashback rate for MasterCard", and so in step 208 the response generator 116, via the question identifier module 160, returns "I think I understand your question. Can you verify for me that your question is: What is the cashback rate for the MasterCard credit card?" The user replies "Yes". The input "Yes" is determined in step 209, via the question identifier module 160, to be verifying that the question is correct.

[53] If the question is verified as correct, then in step 210 the response generator 116 returns the answer to the question, via the answer identifier 162 and answer generator 164, and the method ends. Optionally, the user's answer confirming correctness of the response may be used by the intent classifier 114 and/or the question identifier 160 to increase their effectiveness for future questions that may be similar. However, if the question is not verified as correct, then the method proceeds to step 211.

[54] When a question is derived by the intent classifier 114 from the natural language input, and the confidence score is above the threshold, then the derived question is referred to as a "likely question". A "likely question" is a question that the system 102 determines was likely conveyed by the natural language input. In step 208, it is the likely question that is returned. However, it is only a "likely" question because it is not necessarily the actual question that was asked, e.g. if the intent determined by the intent classifier 114 does not correctly reflect the user's intent.

[55] If step 211 is reached, it means that the initial question presented to the user for verification in steps 208 and 209 is not verified as correct. In step 211, the data processing unit 110 determines whether one or more keywords were recognized and extracted by the keyword extractor 112 in step 204. If no keywords were recognized, then the method proceeds from step 211 to step 230. Step 230 is explained later. Otherwise, if one or more keywords were recognized and extracted, then the method proceeds from step 211 to step 212.

[56] In step 212, the intent classifier 114 identifies n alternative intents based on the keywords extracted in step 204, where n is a natural number. n may vary depending upon how many alternative intents can be determined, and n may also be capped. For example, if only one alternative intent is determined by the intent classifier 114, then n is limited to n = 1. As another example, if five alternative intents are determined by the intent classifier 114, then n may be capped at four, e.g. only the top four alternative intents are identified.

[57] An alternative intent is identified by the intent classifier 114 as follows: the keywords are processed, but instead of identifying the most likely intent (identified in step 206), a different intent is identified that is determined to be less likely, e.g. has a lower confidence score. For example, the user's question may be "What is the rate on your cashback Mastercard?" The intent classifier 114 determines two possible intents: (1) the user is requesting the cashback rate for the MastercardTM brand credit card, and the confidence score of this determined intent is 75%; or (2) the user is requesting the interest rate for the MastercardTM brand credit card, and the confidence score of this determined intent is 65%. The intent identified in step 206 is the one with the higher confidence score, which in this example is the cashback rate. The (n = 1) alternative intent identified in step 212 is the one with the lower confidence score, which in this example is the interest rate. Each alternative intent corresponds to an alternative question the user might be asking, which is derived from the natural language input conveying the question that was received at step 202.

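Step 212 can be pictured as a next-best selection over the ranked intents, as in this small sketch (again reusing the illustrative ScoredIntent type; the cap of four simply mirrors the example in paragraph [56]):

    MAX_ALTERNATES = 4  # cap on n, mirroring the example in paragraph [56]

    def alternative_intents(intents: list[ScoredIntent]) -> list[ScoredIntent]:
        """Sketch of step 212: skip the top-scoring intent (already rejected by
        the user in step 209) and keep up to MAX_ALTERNATES next-best intents."""
        ranked = sorted(intents, key=lambda c: c.confidence, reverse=True)
        return ranked[1:1 + MAX_ALTERNATES]
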
[58] The n alternative intents correspond to n alternative questions, and in step 214 the n alternative questions are generated by the question identifier module 160 and returned to the user via the user interface 104.

[59] In step 215, it is determined by the question identifier 160 whether one of the n alternative questions is identified as correct by the user. Step 215 may be performed by determining the intent of an input received from the user after the n alternative questions are presented to the user. For example, if the user responds "First question", then the question identifier 160 determines that the first question of the n alternative questions is the correct question.

[60] If one of the n alternative questions is identified as correct, then in step 216 the response generator 116 returns, via the answer identifier 162 and the answer generator 164, the corresponding answer and the method ends. If none of the n alternative questions is identified as correct, then the method proceeds to step 230. Step 230 is described later. The intent classifier and question identifier module can also be readjusted at this point, for example by providing the user's answer to a neural network, for modifying weights and biases of layers of the neural network.

[61] Returning to step 207, if the confidence score of the intent determined by the intent classifier 114 in step 207 is below a threshold, then the method proceeds to step 221. If step 221 is reached, it means that an intent has been determined from the natural language input, but the intent classifier 114 is not particularly confident that the determined intent is correct.

[62] Therefore, in step 221, the data processing unit 110 determines whether one or more keywords were recognized and extracted by the keyword extractor 112 in step 204. If no keywords were recognized, then the method proceeds from step 221 to step 230. Step 230 is explained later. Otherwise, if one or more keywords were recognized and extracted, then the method proceeds from step 221 to step 222. In step 222, the intent classifier 114 identifies the k most likely intents, where k is a natural number greater than or equal to one. k does not need to have any relation to n, but in some embodiments k = n or k = n + 1. k may vary depending upon how many intents can be determined, and k may also be capped. The k intents returned may be the k intents having the highest confidence scores. For example, the user's question may be "What is the big deal about your Mastercard?" The intent classifier 114 determines two possible intents: (1) the user is requesting a summary of the features of the MastercardTM brand credit card, and the confidence score of this determined intent is 45%; or (2) the user is requesting information on promotional offers for signing up for the MastercardTM brand credit card, and the confidence score of this determined intent is 35%. Neither intent has a high enough confidence score to proceed to step 208, but in step 222 both intents (k = 2) are identified. Each intent corresponds to an alternative question the user might be asking, which is derived from the natural language input conveying the question that was received at step 202.

[63] The k intents correspond to k questions, generated by the question identifier module 160, and in step 224 the k questions are returned to the user via the user interface 104.

[64] In step 225, it is determined, via the question identifier module 160, whether one of the k questions is identified as correct by the user. Step 225 may be performed by determining the intent of an input received from the user after the k questions are presented to the user. If one of the k questions is identified as correct, then in step 226 the response generator 116 returns, via the answer identifier module 162 and the answer generator module 164, the corresponding answer and the method ends. If none of the k questions is identified as correct, then the method proceeds to step 230. Again, the user's question selection can be fed back to the intent classifier 114 and/or the question identifier 160 to improve the accuracy of the validation/reformulated questions.

[65] If step 230 is reached in the method of FIG. 2, it means that the system 102 is not able to determine the question the user is asking. In step 230, the response generator 116 sends a reply to the user indicating this, e.g. "Sorry, I do not understand your question. Please try to rephrase your question".

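Pulling the pieces together, the overall flow of FIG. 2 (steps 202 to 230) can be condensed into the following sketch, which reuses the illustrative helpers from the earlier sketches; the ask_user() front-end stub is an assumption standing in for the user interface 104.

    def ask_user(question: str) -> bool:
        # Assumed front-end stub: presents the question through the user
        # interface 104 and reports whether the user confirmed it.
        return input(question + " (yes/no) ").strip().lower() == "yes"

    def converse(text: str) -> str:
        """Condensed control flow of FIG. 2, steps 202-230."""
        keywords = extract_keywords(text)                          # step 204
        intents = classify_intent(text)                            # step 206
        if intents[0].confidence >= CONFIDENCE_THRESHOLD:          # step 207
            if ask_user(to_question(intents[0].intent)):           # steps 208-209
                return answer_for(to_question(intents[0].intent))  # step 210
            # Initial question rejected: offer n alternates if keywords exist.
            candidates = alternative_intents(intents) if keywords else []  # 211-212
        else:
            # Low confidence: offer the k most likely intents if keywords exist.
            candidates = intents[:MAX_ALTERNATES] if keywords else []      # 221-222
        for cand in candidates:                                    # steps 214/224
            if ask_user(to_question(cand.intent)):                 # steps 215/225
                return answer_for(to_question(cand.intent))        # steps 216/226
        return ("Sorry, I do not understand your question. "
                "Please try to rephrase your question.")           # step 230
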
[66] FIG. 1D thus summarizes the back and forth validation process described in relation to FIG. 2, occurring between the front end 120 and the back end 140 of the system 102. Questions and responses are sent to and captured via the user interface 104, and the back end modules 158, 116a and 116b process the user's responses and generate the reformulated question(s). As explained previously, the question entered by the user is first analysed by the question analyser 158, and intents, keywords and/or text strings are fed to either one of the first response generator 116a and the alternate response generator 116b, for instance depending on the confidence score of the intent. In some embodiments, when the confidence score is above a given threshold, a single validation question is returned to the user interface. In other embodiments, the validation question with the highest confidence score is returned to the user interface. In other embodiments, several alternate questions are proposed, one of which can be selected by the user via the user interface if it correctly reflects the original question asked.

[67] FIG. 3 illustrates an example message exchange on a user interface 104, according to one embodiment. The message exchange corresponds to steps 202, 204, 206, 207, 208, 209, 211, 212, 214, 215, and 216 of FIG. 2. The number of responses (in the form of questions) is progressively increased during the interaction with the user. In particular, initially only one question is presented for verification at 382. However, upon receiving user feedback indicating that the initial question is incorrect, n = 3 alternative questions are provided at 384. The user indicates that the first one of the three alternative questions is correct at 386, and the answer corresponding to that question is returned at 388.

[68] FIG. 4 illustrates an example message exchange on a user interface 104, according to another embodiment. The message exchange corresponds to steps 202, 204, 206, 207, 221, 222, 224, 225, and 226 of FIG. 2. The confidence score relating to the most likely intent does not exceed the threshold, and so the k = 3 most likely responses are returned at 392. The user indicates that the first one of the three questions is correct at 396, and the answer corresponding to that question is returned at 398.

[69] Returning to FIG. 2, optionally, in step 234, the learning component 118 updates the keyword extractor 112 and/or the intent classifier 114 to reflect the user's response that indicates which question is the correct question. For example, the learning component 118 may receive the output of the "Yes" branch of step 215 and/or step 225, which indicates the correct question, and the learning component 118 may use this indication to update or train the intent classifier 114 and/or the keyword extractor 112. Two examples follow.

[70] One example: The user initially asks the question "What is the big deal about your Mastercard?" The system does not determine an intent with a high enough confidence score and so three questions are returned to the user, as shown at 392 of FIG. 4. The user replies that the first question is correct, i.e. the correct question is "What are the features of the Mastercard credit card?". The learning component 118 then updates the keyword extractor 112 and/or intent classifier 114 to add the vocabulary "big deal" and to indicate that "big deal" is a synonym of "features". Then, if in the future a user asks a question including "big deal", e.g. "What is the big deal regarding your savings account?", the intent classifier 114 will more confidently determine that the user intent is that the user wants to learn about the features of the savings account.

[71] Another example: The user initially asks the question "What is the rate on your cashback Mastercard?" The system initially returns the incorrect question, as shown at 382 of FIG. 3, and so three alternative questions are returned to the user, as shown at 384 of FIG. 3. The user replies at 386 that the first question is correct, i.e. the correct question is "What is the interest rate on the Mastercard credit card?". The learning component 118 then updates the intent classifier 114 to increase the confidence score of the entity value "interest rate" when "rate" is used in the user's question. Then, if in the future a user asks a similar question, e.g. "What is the rate on your Visa card?", the intent classifier will more confidently determine that the user intent is that the user wants to know the interest rate for the VisaTM brand credit card.

[72] An example of a learning algorithm that may be implemented by the learning component 118 is: Schatzmann, J., Weilhammer, K., Stuttle, M., and Young, S. (2006), "A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies", The Knowledge Engineering Review, 21(2), 97-126.

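The disclosure describes this update abstractly (adjusting the weights and biases of the intent classifier's neural network). One way to picture such a step, assuming a PyTorch classifier, which the patent does not specify, is a single supervised gradient step that treats the user's confirmed question as the label:

    import torch
    import torch.nn.functional as F

    def reinforce_classifier(model: torch.nn.Module,
                             optimizer: torch.optim.Optimizer,
                             text_features: torch.Tensor,
                             confirmed_intent_idx: int) -> None:
        """Sketch of the learning component 118: treat the question the user
        confirmed (the "Yes" branch of step 215/225) as a supervised label and
        nudge the classifier's weights and biases toward it."""
        optimizer.zero_grad()
        logits = model(text_features)  # shape (1, num_intents)
        loss = F.cross_entropy(logits, torch.tensor([confirmed_intent_idx]))
        loss.backward()
        optimizer.step()
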
[73] In alternative embodiments, steps 208 and 209 of FIG. 2 may be modified to instead just return an answer to the determined question, and ask for validation that the returned answer is correct, in which case step 210 is not needed. For example, box 382 in FIG. 3 may instead be: "The cashback rate on the MasterCard credit card is 1%. Did that answer your question?". If the user answers "Yes" then the method ends, whereas if the user answers "No, that did not answer my question", then the method proceeds to step 211.

[74] In some embodiments, the original question asked in the natural language input received at step 202 may actually consist of more than one question, in which case the system 102 may extract and process each question separately, or the intent classifier 114 may try to determine an overall intent. For example, if the natural language input from the user in step 202 is "Does your bank offer multiple credit cards? What is the rate of each one?", then the intent classifier 114 may determine that the intent is that the user wants a comparison of the rate of each of the bank's credit cards.

[75] In some embodiments, the natural language input received at step 202 may not be a question, but may instead be a request or an instruction to perform an action. For example, the input may be a request for information. The reply may then be a question that confirms whether particular information is being requested. For example, the natural language input received in step 202 may be "Provide me with the rate on your cashback MasterCard", and the initial question returned in step 208 may be "Please confirm that you are asking: What is the cashback rate on the MasterCard credit card?" Similarly, the alternative questions in steps 214 and 224 may ask whether particular information is being requested.

[76] In some embodiments, the natural language input received at step 202 may be an instruction to perform an action, e.g. "open a new account", in which case the question(s) returned may relate to clarification or confirmation before proceeding, e.g. in step 214 "Do you mean any one of the following actions: (1) Open a new savings account?; or (2) open a new chequing account?; or (3) open a new student account?".

[77] In some embodiments, when a user asks a question or requests an action, the response returned by the system 102 may be formulated based on information specific to the user. In some embodiments, the response may be in reply to a user's finance-related question or finance-related action. The response may be a function of the user's financial information, e.g. the user's prior financial transactions. The response may be a question or an answer or an action.

[78] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment. In step 452, the data processing unit 110 receives, in text form, a natural language input originating from a user via the user interface 104. The natural language input conveys a finance-related question or a finance-related action to be performed. As an example, the user may be asking "What is the monthly fee for your savings account?" (a finance-related question), or the user may be instructing "Please open a new savings account" (a finance-related action).

[79] In step 454, an intent is determined from the natural language input, possibly using keywords extracted from the natural language input. In step 456, the response generator 116 formulates a response (e.g. a question, an answer, or an action) based on the intent. However, the response formulated by the response generator 116 is based on user-specific financial information, as explained below.

[80] Stored in memory 122 is the identity of the user. The system 102 knows and stores the identity of the user because the identity of the user has been previously provided to the system 102. As one example, the user may have previously provided their bank card number to the system 102, which is used to uniquely identify the user. As another example, the system 102 may be part of an online banking platform, and the user is signed into their online banking, such that the system 102 is aware of the identity of the user.

[81] Stored in a data structure, e.g. a database, is user-specific financial information. User-specific financial information is financial information that is specific or unique to the user. A non-exhaustive list of user-specific financial information includes any one, some, or all of the following: prior financial transactions performed by the user, e.g. a stored record of previous financial transactions; and/or quantity, quality, or type of financial transactions performed by the user; and/or user account balances; and/or number or type of accounts held by a user (examples of accounts include banking accounts, mortgage accounts, investment accounts, etc.); and/or credit information for the user; and/or information relating to banking products utilized by the user, e.g. whether the user has a mortgage, a credit card, investments, etc. The data structure may be stored in memory 122 or at another location, e.g. a database connected to the data processing unit 110 via a network.

[82] There are multiple candidate responses that may be returned to the user, which are selected or weighted based on the user-specific financial information. Some examples are provided below.

[83] Example: The natural language input originating from the user in step 452 conveys the following finance-related question: "What is the monthly fee for your savings account?" The intent determined in step 454 is that the user is requesting the monthly fee for a savings account. The response generator 116 determines the following, e.g. by querying a database: the standard monthly fee is $10 per month, but the fee is reduced to $5 per month if the user has a mortgage account or an investment account with the bank, and the fee is reduced to $0 per month if the user has both a mortgage account and an investment account with the bank. Therefore, there are three candidate responses: $10, $5, or $0. The response generator 116 uses the user identification stored in memory 122 to query a database that lists the accounts held by the user. The accounts held by the user include a mortgage account, but not an investment account, and so the response returned to the user in step 456 is that the monthly fee is $5, or the response may be a question, e.g. "We can offer you a savings account for a monthly fee of only $5, are you interested?".

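The fee schedule in this example is easy to express as a lookup over the user's accounts. The following sketch encodes only the numbers given above; the function name and the account-set representation are illustrative assumptions.

    def monthly_savings_fee(accounts: set[str]) -> int:
        """Fee schedule from the example in paragraph [83]: $10 standard,
        $5 with a mortgage or an investment account, $0 with both."""
        has_mortgage = "mortgage" in accounts
        has_investment = "investment" in accounts
        if has_mortgage and has_investment:
            return 0
        if has_mortgage or has_investment:
            return 5
        return 10

    # The user above holds a mortgage account but no investment account:
    assert monthly_savings_fee({"mortgage", "chequing"}) == 5
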
[84] Another example: The natural language input originating from the user
in step
452 conveys the following finance-related question: "What is this month's fee
for my savings
account?" The intent determined in step 454 is that the user wants to know
this month's fee for
the user's savings account. The fee is a function of the number of financial
transactions
performed by the user involving the user's savings account, e.g. $1 fee for
every transfer into or
out of the savings account in the month. The response generator 116 uses the
user identification
stored in memory 122 to query a database that lists the number of transactions
that month. The
19
CA 3064116 2019-12-06

Robic Ref. No. 19958-0008
database returns a value indicating that there were three transfers since the
beginning of the
month, and so the response returned to the user in step 456 is that the fee
will be $3.
[85] Another example: The natural language input originating from the user
in step
452 conveys the following finance-related action: "transfer $100 from my
savings account to my
chequing account". The intent determined in step 454 is that $100 is to be
transferred from the
user's savings account to the user's chequing account. The response generator
116 determines
that the user has two savings accounts ("A" and "B"), and so there are two
candidate responses:
either transfer the $100 from the user's savings account A or transfer the
$100 from the user's
savings account B. The response generator 116 uses the user identification
stored in memory 122
to query the account balances for savings accounts A and B and determines that
savings account
B has no money in it. In response, the response generator 116 performs the
transfer from savings
account A, perhaps after sending a question to the user confirming that the
money is to be
transferred from savings account A.
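A sketch of this disambiguation step, under the assumption drawn from the example that an account with an insufficient balance is ruled out as the source (names are hypothetical):

    def choose_source_account(balances, amount):
        # balances: mapping of candidate savings accounts to their balances.
        # A candidate is ruled out when it cannot cover the transfer; if
        # exactly one candidate remains, the ambiguity is resolved.
        candidates = [name for name, balance in balances.items() if balance >= amount]
        return candidates[0] if len(candidates) == 1 else None  # None -> ask the user

    # Savings account B has no money in it, so account A is selected:
    assert choose_source_account({"A": 500.00, "B": 0.00}, 100) == "A"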
[86] In some embodiments, the method of FIG. 2 may be modified to
incorporate
generating a response based on user-specific financial information. For
example, the answer
returned in steps 210, 216, and/or 226 may be based on user-specific financial information.
In a variation of FIG. 2, answers (instead of questions) may be returned in
steps 208/209,
214/215, and 224/225 (in which case steps 210, 216 and 226 are not needed).
The initial answer
returned in step 208/209 may be formulated based on financial information
specific to the user. If
in step 209 the user found the answer to be unsatisfactory (e.g. incorrect),
then the alternative
intents or answers (e.g. of step 214/215) may or may not be based on the
user's financial
information.
[87] An example: The natural language input originating from the user
conveys the
following finance-related question: "What is the rate of your savings
account?" The intent
determined is that the user is requesting the interest rate of a savings
account, and the confidence
score is high enough to immediately supply an answer to the question. The
standard interest rate
for a savings account is 1% but can be offered at 1.5% if the user has a
mortgage account with
the bank. The response generator 116 uses the user identification stored in
memory 122 to query
a database that lists the accounts held by the user. The accounts held by the
user include a
mortgage account, and so the response returned to the user is that the
interest rate is 1.5%. It is
then determined that the user is not satisfied with the answer, e.g. the user
actually wanted to
know the fee for the savings account. n alternative intents are therefore
identified, and n
corresponding alternative answers are returned to the user. However, the n
corresponding
alternative answers are not formulated based on user-specific financial
information because the
system 102 is now not as confident about whether the alternative answers even
reflect the
question actually asked by the user. This is because the confidence scores associated with the alternative intents are lower than the confidence score associated with the intent initially determined.
[88] In some embodiments, the response may only be formulated based on user-

specific financial information if the confidence score of the intent
associated with the response is
above a particular threshold. For example, if the intent has a confidence score of 90% or above, the corresponding response is modified based on the user-specific financial information; otherwise, the corresponding response is not modified based on the user-specific financial information.
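For illustration, a sketch of this gating rule with the 90% example threshold (the helper and its behaviour are hypothetical):

    def personalize(response, financial_info):
        # Hypothetical helper: tailor the response to the user's financial information.
        return f"{response} (based on your {len(financial_info)} accounts)"

    def maybe_personalize(response, confidence, financial_info, threshold=0.90):
        # Only tailor the response when the intent behind it was determined
        # with a confidence score at or above the threshold.
        if confidence >= threshold:
            return personalize(response, financial_info)
        return response

    print(maybe_personalize("The rate is 1.5%.", 0.95, ["mortgage", "savings"]))
    print(maybe_personalize("The rate is 1.5%.", 0.60, ["mortgage", "savings"]))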
[89] FIG. 6 illustrates a flowchart of a computer-implemented method for
performing
automated interactive conversation with a user, according to one embodiment.
The automated
interactive conversation may be performed in order to provide an answer to a
question from the
user.
[90] In step 502, a user interface 104 is provided at which the user can
provide a
natural language input. The natural language input is processed by the data
processing unit 110.
The data processing unit 110 comprises at least one processor executing
instructions. The
instructions are configured to cause the data processing unit 110 to perform
the remaining steps
of FIG. 6.
[91] In step 504, the data processing unit 110 derives, from the natural
language input,
a possible question the user might be asking. An example of step 504 is
described earlier in
relation to steps 202 to 208 of FIG. 2.
[92] In step 506, the data processing unit 110 conveys the possible
question to the user
through the user interface 104 for verification by the user. An example of
step 506 is described
earlier in relation to steps 208 and 209 of FIG. 2.
[93] In step 508, the data processing unit 110 processes user input at the
user interface
104 indicating that the possible question is incorrect (e.g. the "No" branch
of step 209 of FIG. 2).
[94] In step 510, the data processing unit 110 derives a series of
alternate questions
that the user might be asking (e.g. step 212 of FIG. 2). In some embodiments,
step 510 may only
be performed if at least one keyword was recognized and extracted from the
natural language
input. In some embodiments, at least one keyword is recognized and extracted
from the natural
language input, and step 510 includes deriving the series of alternate
questions based on the at
least one keyword.
[95] In step 512, the data processing unit 110 presents the series of alternate questions to the user through the user interface 104.
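The flow of steps 504 to 512 can be summarized in code; the following is a structural sketch only, with stub functions standing in for the intent and keyword machinery of the described system:

    def derive_possible_question(text):
        # Stub for step 504: a real system would determine a user intent here.
        return "Are you asking about savings account fees?"

    def derive_alternate_questions(text):
        # Stub for step 510: a real system would derive alternates, e.g. from keywords.
        return ["What is the interest rate?", "How do I open an account?"]

    def run_conversation(text, confirm, choose):
        # confirm and choose are callables standing in for user interface 104.
        possible = derive_possible_question(text)      # step 504
        if confirm(possible):                          # steps 506/508
            return possible
        alternates = derive_alternate_questions(text)  # step 510
        return choose(alternates)                      # step 512

    # Simulated session: the first guess is rejected and an alternate is chosen.
    result = run_conversation("fees?", confirm=lambda q: False, choose=lambda alts: alts[0])
    print(result)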
[96] In some embodiments, deriving the possible question the user might be
asking in
step 504 includes determining a user intent from the natural language input.
In some
embodiments, deriving the possible question the user might be asking is
performed without an
extracted keyword.
[97] In some embodiments, an algorithm for extracting at least one keyword
and/or an
algorithm for determining user intent is modified based on an indication from
the user of which
one of the alternate questions is a correct question.
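As purely illustrative pseudocode (the description does not specify the modification mechanism; all names are hypothetical), the indication could be accumulated as labelled training data:

    # Hypothetical feedback store: (natural language input, confirmed intent) pairs.
    feedback_examples = []

    def record_feedback(natural_language_input, confirmed_intent):
        # The user's selection of a correct alternate question yields a labelled
        # example that could later be used to adjust the keyword-extraction or
        # intent-determination algorithms.
        feedback_examples.append((natural_language_input, confirmed_intent))

    record_feedback("what do you charge for savings", "savings_monthly_fee")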
[98] In some embodiments, the method further includes receiving an
indication, from
the user, that a particular question of the series of alternate questions is
correct, and presenting to
the user through the user interface an answer to the particular question. In
some embodiments,
the method includes generating the answer to the particular question using
user-specific financial
information. In some embodiments, the particular question is a finance-related
question, and the
user-specific financial information relates to financial transactions
previously performed by the
user and/or accounts held by the user.
[99] FIG. 7 illustrates a flowchart of a computer-implemented method for
interacting
with a user, according to one embodiment, wherein the question from the user
relates to finance
and wherein responses returned by the system to the user interface are based
on user-specific
information. FIG. 8 provides an example of possible sub-steps that can be performed when the response to a user's question depends on the user's profile.
[100] In step 552 of FIG. 7, a user interface 104 is provided at which the user can provide a natural language input conveying a finance-related question or a finance-related action
to be performed. The natural language input is processed by the data
processing unit 110. The
data processing unit 110 comprises at least one processor executing
instructions. The instructions
are configured to cause the data processing unit 110 to perform the remaining
steps of FIG. 7.
[101] In step 554, the data processing unit 110 derives, from the natural language input, a possible finance-related question or a possible finance-related action. In step 556, the data processing unit 110 obtains a series of candidate responses, for example using a query lookup table, each of which is a response to the possible finance-related question or the possible finance-related action. In step 558, the data processing unit 110 selects one of the candidate responses on the basis of user-specific financial information.
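A compact sketch of steps 556 and 558, assuming a hypothetical lookup table keyed by intent (the fee tiers reuse the earlier savings-account example):

    # Hypothetical query lookup table: intent -> ordered (condition, response) pairs.
    CANDIDATES = {
        "savings_monthly_fee": [
            (lambda info: "mortgage" in info and "investment" in info, "$0 per month"),
            (lambda info: "mortgage" in info or "investment" in info, "$5 per month"),
            (lambda info: True, "$10 per month"),  # standard fee
        ],
    }

    def select_response(intent, user_financial_info):
        # Step 556: obtain the series of candidate responses for the intent.
        # Step 558: select the first candidate whose condition is satisfied
        # by the user-specific financial information.
        for condition, response in CANDIDATES[intent]:
            if condition(user_financial_info):
                return response

    print(select_response("savings_monthly_fee", {"mortgage"}))  # -> $5 per month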
[102] In step 560, the data processing unit 110 presents the selected
candidate response
to the user through the user interface 104. Other examples of finance-related questions and responses are provided earlier in the description of FIG. 5 and related embodiments.
[103] In some embodiments, the candidate responses are a series of answers,
each
answer corresponding to a respective possible finance-related question. In other embodiments, the candidate responses are a series of actions, each action corresponding to a respective possible finance-related action instructed by the user. In other embodiments, the candidate responses are a series of questions, each question corresponding to a possible finance-related question being asked. In some embodiments, the user-specific financial information
relates to financial
transactions previously performed by the user and/or accounts held by the
user. In some
embodiments, the user-specific financial information is retrieved using an
identifier of the user
stored in memory.
[104] For instance, as per the example presented in FIG. 8, the confirmed or validated question can be: "What is the interest rate on the MasterCard™?". In the example of FIG. 8, the
answer relates to the interest rate of a credit card, as determined by
querying a lookup table, as in
step 802. For some questions, once confirmed, the answer will be independent of the user's profile, and therefore the answer will be the same for all users. This scenario is reflected by the
left side of the flowchart, where the answer, corresponding in the example to
a 4% credit card
interest rate, is the same for all users. In other cases, the answer can vary
depending on the
user's profile, such as based on the user's financial profile, as per step
804. For example, a user
with three or more bank accounts can be offered a lower interest rate than
users having a single
bank account. An ID database can thus be queried, as per step 806, to uniquely
identify the user
with a user ID. Once the user has been uniquely identified, financial
databases can be in turn
queried to determine the personal or financial profile associated with the
unique ID (step 808).
The financial profile can include for example the number of accounts, cards,
financial services,
amount per account and loans associated with the user ID. In the example, for users having more than three accounts linked to their unique identifier, the system can be configured such that a lower interest rate is offered to these users. The rules for modulating the
answers based on
financial information parameters can be stored for instance in a lookup table
for answers
associated with financial information, as in step 810. In the example, the
user has four different
accounts, and thus has access to a reduced interest rate of 3%, instead of 4%.
The answer is
returned to the answer generator module 164 at step 812.
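A sketch of the modulation rule of this example (the 4% and 3% rates and the account-count condition are those of FIG. 8; the function name is hypothetical):

    BASE_CREDIT_CARD_RATE = 0.04  # 4%, as looked up in step 802

    def credit_card_rate(num_accounts):
        # Steps 806 to 810: users with more than three accounts linked to
        # their unique identifier are offered a reduced rate.
        if num_accounts > 3:
            return 0.03  # reduced rate from the financial-information lookup table
        return BASE_CREDIT_CARD_RATE

    assert credit_card_rate(4) == 0.03  # the user of the example has four accounts
    assert credit_card_rate(1) == 0.04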
[105] The predominant modus operandi of dialogue systems consists of determining the question asked by a user and returning an answer corresponding to the determined question.
Despite recent advances and developments in the field, the success rate of
dialogue systems is
still surprisingly low, wherein the success rate corresponds to the ability to
correctly determine
the question asked by a user, and to return a satisfactory/accurate answer
following a dialogue
session with the user. The proposed system and method improve on existing
dialogue systems
and methods by prompting users to confirm the system's "understanding" of
their question, such
as by reformulating the possible question. In cases where the level of
confidence in the
determined question is low, or if the initial attempt at determining the
original question has
failed, the system generates several candidate questions (alternate questions)
to be validated by
the user, instead of simply providing what would most likely be an inaccurate
answer, as a
typical chatbot would. The proposed method and system have proved to significantly increase the success rate in providing accurate answers to users' questions.
[106] A prototype of the proposed automated interactive conversation/dialogue system has been tested and compared with an existing commercially available system. In the experiment, the same 100 predetermined questions with known answers were asked of both the proposed system 102 and the existing commercially available system. With respect to the proposed system 102, and with reference to FIG. 3, dialogue box 382 is considered a first attempt by the proposed system 102, and dialogue box 384 is considered a second attempt by the proposed system 102. A correct response at the first attempt was assigned a score of 1, and a correct response at the second attempt was assigned a score of 0.5; the dialogue session of FIG. 3 would thus be attributed a score of 0.5. The existing commercially available system, on the other hand, provides an answer to the question in its first attempt, and if the first attempt is incorrect, it provides one or more alternative answers. This method of responding with a first answer and then subsequently with other alternative answers is common amongst existing chatbot dialogue systems. The same scoring was applied: if the commercially available system provided a good answer at the first attempt, a score of 1 was assigned, and if a correct answer was provided at the second attempt, a score of 0.5 was assigned. The assigned scores for all 100 questions were summed and the result divided by 100, for each system. The results are shown in the table below:
System tested Accuracy score
Proposed system and method 77.2%
Existing commercially available system 51.3%
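The scoring reduces to a simple average; a sketch (the per-question score distribution below is illustrative, not the experimental data):

    def accuracy_score(per_question_scores):
        # 1 for a correct first attempt, 0.5 for a correct second attempt,
        # 0 otherwise; the accuracy score is the sum divided by the number
        # of questions (100 in the experiment).
        return sum(per_question_scores) / len(per_question_scores)

    # e.g. 70 first-attempt successes, 10 second-attempt successes, 20 misses:
    scores = [1.0] * 70 + [0.5] * 10 + [0.0] * 20
    print(f"{accuracy_score(scores):.1%}")  # -> 75.0%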
[107] As can be appreciated, a relative increase of approximately 50% in the accuracy of the dialogue process (from 51.3% to 77.2%) is achieved with the proposed system and method.
[108] Although the foregoing has been described with reference to certain
specific
embodiments, various modifications thereof will be apparent to those skilled
in the art without
departing from the scope of the claims appended hereto.
[109] Moreover, any module, component, or device exemplified herein that executes
instructions may include or otherwise have access to a non-transitory
computer/processor
readable storage medium or media for storage of information, such as
computer/processor
readable instructions, data structures, program modules, and/or other data. A
non-exhaustive list
of examples of non-transitory computer/processor readable storage media
includes magnetic
cassettes, magnetic tape, magnetic disk storage or other magnetic storage
devices, optical disks
such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile,
removable and non-
removable media implemented in any method or technology, random access memory
(RAM),
read-only memory (ROM), electrically erasable programmable read-only memory
(EEPROM),
flash memory or other memory technology. Any such non-transitory
computer/processor storage
media may be part of a device or accessible or connectable thereto. Any
application or module
herein described may be implemented using computer/processor
readable/executable instructions
that may be stored or otherwise held by such non-transitory computer/processor
readable storage
media.
Representative Drawing
A single figure which represents a drawing illustrating the invention.
Administrative Statuses

For a better understanding of the status of the application/patent presented on this page, the Disclaimer section and the descriptions of Patent, Administrative Statuses, Maintenance Fees, and Payment History should be consulted.

Title Date
Forecasted Issue Date Unavailable
(22) Filed 2019-12-06
Examination Requested 2019-12-06
(41) Open to Public Inspection 2020-06-07

Abandonment History

There is no abandonment history

Maintenance Fees

The last payment of $100.00 was received on 2023-11-07


Upcoming maintenance fee amounts

Description Date Amount
Next payment if small entity fee 2024-12-06 $100.00
Next payment if standard fee 2024-12-06 $277.00

Notice: If full payment has not been received by the date indicated, a further fee may be payable, which may be one of the following:

  • a reinstatement fee;
  • a late payment fee; or
  • an additional fee to reverse a deemed expiry.

Patent fees are adjusted on the 1st of January of every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Due Date Amount Paid Paid Date
Filing of a patent application 2019-12-06 $400.00 2019-12-06
Request for examination 2023-12-06 $800.00 2019-12-06
Registration of documents $100.00 2020-02-18
Maintenance Fee - Application - New Act 2 2021-12-06 $100.00 2021-11-25
Maintenance Fee - Application - New Act 3 2022-12-06 $100.00 2022-11-15
Maintenance Fee - Application - New Act 4 2023-12-06 $100.00 2023-11-07
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
BANQUE NATIONALE DU CANADA
Past Owners on Record
None
Past owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

List of published and unpublished patent-specific documents on the Canadian Patents Database (CPD).



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
New Application 2019-12-06 3 85
Abstract 2019-12-06 1 13
Description 2019-12-06 26 1,268
Claims 2019-12-06 5 177
Drawings 2019-12-06 10 171
Representative Drawing 2020-05-05 1 6
Cover Page 2020-05-05 2 38
Examiner Requisition 2021-02-09 4 199
Amendment 2021-05-25 15 598
Claims 2021-05-25 4 161
Examiner Requisition 2022-05-18 5 195
Amendment 2022-09-16 19 792
Claims 2022-09-16 4 223
Description 2022-09-16 26 1,788
Amendment 2023-12-07 25 1,130
Claims 2023-12-07 5 249
Description 2023-12-07 28 2,108
Examiner Requisition 2024-05-23 4 183
Examiner Requisition 2023-08-11 3 171