(12) Patent Application: (11) CA 3171020
(54) French Title: SYSTEMES ET PROCEDES POUR EXECUTER UNE CONVERSATION INTERACTIVE AUTOMATISEE AVEC UN UTILISATEUR
(54) English Title: SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERACTIVE CONVERSATION WITH A USER
Status: Allowed
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 40/40 (2020.01)
  • G06F 40/35 (2020.01)
  • G06N 3/02 (2006.01)
(72) Inventors:
  • CHARTON, ERIC (Canada)
  • BONNELL, MATTHEW (Canada)
  • MARCEAU, LOUIS (Canada)
  • GUYMONT, JONATHAN (Canada)
(73) Owners:
  • BANQUE NATIONALE DU CANADA
(71) Applicants:
  • BANQUE NATIONALE DU CANADA (Canada)
(74) Agent: ROBIC AGENCE PI S.E.C./ROBIC IP AGENCY LP
(74) Associate Agent:
(45) Issued:
(22) Filing Date: 2019-12-06
(41) Open to Public Inspection: 2020-06-07
Examination Requested: 2022-08-23
Availability of Licence: N/A
Dedicated to the Public: N/A
(25) Language of Filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No.   Country/Territory   Date
3026936           (Canada)            2018-12-07

Abstracts

English Abstract


A dialogue system is a computer system that converses with a human via a user interface. In some embodiments, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the dialogue system. The number of responses (e.g. in the form of alternative questions) may be progressively increased during the interaction with a user, and this may have the effect of increasing the overall robustness of the dialogue system.

Claims

Note: The claims are shown in the official language in which they were submitted.


Claims
1. A computer-implemented method for performing automated interactive conversation with a user, the method comprising:
a. providing a user interface at which the user can provide a natural language input; and
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting at least one keyword from the natural language input;
ii. determining from the at least one keyword a user intent associated with the natural language input;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining an answer associated with the confirmed question; and
vii. presenting the answer to the user through the user interface.
2. The computer-implemented method of claim 1, wherein the natural language input is a spoken utterance, further comprising the step of converting the spoken utterance to a text string.
3. The computer-implemented method of claim 1, wherein the natural language input is a text string.
4. The computer-implemented method of claim 2 or 3, wherein extracting the at least one keyword comprises recognizing in the text string at least one property indicative of at least one word of the text string corresponding to at least one keyword.
5. The computer-implemented method of any one of claims 2 to 4, wherein extracting the at least one keyword comprises recognizing named entities in the text string.
6. The computer-implemented method of any one of claims 2 to 5, wherein extracting the at least one keyword comprises matching at least one word of the text string to a corresponding entry in a library of intents, and wherein determining the user intent comprises retrieving an intent in the corresponding entry in the library of intents.

Date Recue/Date Received 2022-08-23
7. The computer-implemented method of claim 2 or 3, wherein extracting the at least one keyword and determining the user intent comprise using a neural network algorithm, the neural network algorithm taking as input the text string and giving as output a set of intents.
8. The computer-implemented method of claim 7, wherein weights and biases of the neural network algorithm are adjusted based on a confirmation that the possible question has been correctly determined.
9. The computer-implemented method of claim 2 or 3, wherein extracting the at least one keyword and determining the user intent comprise matching at least one word of the text string to a corresponding entry in a list of prewritten questions, and wherein deriving the possible question comprises retrieving a prewritten question in the corresponding entry in the list of prewritten questions.
10. The computer-implemented method of any one of claims 1 to 9, comprising a step of determining a confidence score associated with the user intent, and wherein conveying the possible question is performed when the user intent associated therewith has the confidence score above a predetermined threshold.
11. The computer-implemented method of any one of claims 1 to 10, wherein the user input indicating that the possible question is correct is a natural language user input, and wherein processing the user input comprises the step of determining a confirmation intent from the natural language user input.
12. The computer-implemented method of any one of claims 1 to 11, wherein the natural language input corresponds to more than one individual questions, further comprising segmenting the natural language input into the more than one individual questions and performing step b once for each of the individual questions.
13. The computer-implemented method of any one of claims 1 to 11, wherein the natural language input corresponds to more than one individual questions, wherein determining the user intent comprises determining an overall intent.

14. The computer-implemented method of any one of claims 1 to 13, further comprising determining an identity of the user.
15. The computer-implemented method of claim 14, further comprising generating the answer to the confirmed question using user-specific financial information.
16. The computer-implemented method of claim 15, wherein the confirmed question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user.
17. The computer-implemented method according to claim 16, wherein the answer is modulated based on the financial transactions previously performed by the user, whereby different users are provided different answers in response to the confirmed question.
18. A system for performing automated interactive conversation with a user, the system comprising a memory and:
a. a user interface at which the user can provide a natural language input; and
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. extract at least one keyword from the natural language input;
ii. determine from the at least one keyword a user intent associated with the natural language input;
iii. derive from the user intent a possible question the user might be asking;
iv. convey the possible question to the user through the user interface;
v. process user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determine the answer associated with the confirmed question; and
vii. present the answer to the user through the user interface.
19. The system of claim 18, wherein the natural language input is a spoken utterance, further comprising a speech recognition module configured to convert the spoken utterance into a text string.
20. The system of claim 18, wherein the natural language input is a text string.

21. The system of claim 19 or 20, wherein the data processing unit comprises a keyword extractor configured for extracting the at least one keyword from the text string.
22. The system of claim 21, wherein the keyword extractor is configured to recognize in the text string at least one property indicative of at least one word of the text string corresponding to at least one keyword.
23. The system of claim 21 or 22, wherein the keyword extractor comprises a named entity recognizer configured to recognize named entities in the text string.
24. The system of any one of claims 21 to 23, wherein the data processing unit further comprises an intent classifier configured for determining the user intent from the at least one keyword.
25. The system of claim 24, wherein the intent classifier is further configured to determine a confidence score associated with the user intent, and wherein the data processing unit is further configured to convey the possible question when the user intent associated therewith has the confidence score above a predetermined threshold.
26. The system of claim 24 or 25, wherein the data processing unit further comprises a question analyzer module, the question analyzer module comprising the keyword extractor and the intent classifier.
27. The system of claim 26, wherein a library of intents is stored in the memory, and wherein the question analyzer module is configured to extract the at least one keyword matching at least one word of the text string to a corresponding entry in the library of intents and to determine the user intent by retrieving an intent in the corresponding entry in the library of intents.
28. The system of claim 26, wherein the question analyzer module comprises a neural network algorithm, the neural network algorithm taking as input the text string and giving as output a set of intents.
29. The system of claim 28, wherein the data processing unit further comprises a learning component configured to adjust weights and biases of the neural network algorithm based on a confirmation that the possible question has been correctly determined.

30. The system of any one of claims 24 to 29, wherein the data processing unit further comprises a response generator configured to determine the possible question from the user intent and to determine the answer associated with the confirmed question.
31. The system of claim 30, wherein the response generator comprises a question identifier module configured to determine the possible question from the user intent.
32. The system of claim 31, wherein the question identifier module is further configured to process the user input indicating that the possible question is correct.
33. The system of claim 32, wherein the question identifier module is further configured to send the user input to the intent classifier for determining a confirmation intent from the user input.
34. The system of any one of claims 30 to 33, wherein the response generator comprises an answer identifier module configured to determine the answer associated with the confirmed question.
35. The system of claim 34, wherein a mapping between verified questions and corresponding answers is stored in the memory, and wherein the answer identifier module is configured to retrieve the confirmed question in the mapping and produce the corresponding answer from the mapping.
36. The system of claim 34, wherein the answer identifier module is configured to retrieve the answer from a database.
37. The system of claim 34, wherein the answer identifier module is configured to retrieve the answer over a network.
38. The system of any one of claims 30 to 37, wherein the response generator comprises an answer generator module configured to generate a natural language text string corresponding to the determined answer.

Description

Note: The descriptions are shown in the official language in which they were submitted.


SYSTEMS AND METHODS FOR PERFORMING AUTOMATED INTERACTIVE CONVERSATION WITH A USER

FIELD

[1] The following relates to a computer-implemented dialogue system for conversing with a human.

BACKGROUND

[2] A dialogue system is a computer system that converses with a human via a user interface. Many dialogue systems utilize a computer program to conduct the conversation using auditory and/or textual methods. A common name for such a computer program is a 'chatbot'. A chatbot may be implemented using a natural language processing system.

[3] An organization may use a dialogue system to help support and scale their customer relation efforts. A dialogue system may be used to provide a wide variety of information to many different users. For example, the dialogue system may be used to perform automated interactive conversation with users in order to provide answers to questions posed by the users. Questions originating from different users may be very different in nature, and the questions may be received and answered at any time of day or night. The users of the dialogue system may be customers or potential customers of the organization.

[4] Current dialogue systems have a technical problem in that they are often not robust. 'Robustness' refers to the ability of the dialogue system to satisfactorily answer a question posed by a human user. Some current dialogue systems may provide less than 50% correct/satisfactory answers. If the dialogue system returns an incorrect or unsatisfactory answer too often, then the dialogue system will not be adopted by human users. Also, the organization's reputation may be negatively impacted.

SUMMARY

[5] One way to try to increase the robustness of a dialogue system is to invest significant resources in the writing of questions and answers, and/or to invest significant resources in enriching the ability of the dialogue system to recognize intentions. Technical implementations often focus less on linguistics and more on improvements to algorithms/models. Achieving satisfactory results may be expensive. In some cases, the results are not even satisfactory because of a technical challenge: the combinatory complexity of human language is boundless, and so it is difficult in technical implementation to predict the natural language a user could use to ask a particular question. The system may be highly based on recursion, and manual dialogue tree manufacture may not be a viable solution.

[6] Instead, in some embodiments disclosed herein, a dialogue system is provided that may increase the probability of finding a satisfactory response with relatively little iteration of dialogue between a user and the system. An interactive process is introduced to help facilitate the exchange between the user and the dialogue system to try to increase the level of robustness of the dialogue system.

[7] In some embodiments, the number of responses in the form of questions may be progressively increased during an interaction with a user. This may have the effect of increasing the overall robustness of the dialogue system. For example, the responses that are progressively increased may be questions that the system determines the user may be asking.

[8] Another problem with dialogue systems is that a response formulated by a dialogue system is not customized based on the user. For example, if two different users ask the exact same question, e.g. "What is the monthly fee for your savings account", the answer would be the same. However, one user may actually be entitled to a preferable monthly fee compared to another user, e.g. based on the volume of monthly financial transactions associated with the user's bank accounts or based on the number of accounts held by the user.

[9] In some embodiments, a technical solution is provided in which a response returned by a dialogue system is generated based on financial information specific to the user.

The response may be in reply to a user's finance-related question or finance-related action. The response may be a question or an answer or an action.
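The personalization idea described in paragraphs [8] and [9] above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, profile fields, fee, and rebate rule are all invented for the example.

```python
# Hypothetical sketch: the same confirmed question yields different answers
# depending on the user's financial profile. All thresholds are illustrative.

def monthly_fee_answer(user_profile: dict) -> str:
    """Return a fee answer modulated by the user's account activity."""
    base_fee = 4.95
    # Assumed rule: high transaction volume or multiple accounts earns a rebate.
    if user_profile["monthly_transactions"] > 30 or user_profile["num_accounts"] >= 3:
        fee = 0.0
    else:
        fee = base_fee
    return f"Your monthly fee for the savings account is ${fee:.2f}."

# Two users asking the exact same question receive different answers.
print(monthly_fee_answer({"monthly_transactions": 45, "num_accounts": 1}))
print(monthly_fee_answer({"monthly_transactions": 5, "num_accounts": 1}))
```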
[10] According to a possible embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. deriving from the natural language input a reformulated question the user might be asking;
ii. conveying the reformulated question to the user through the user interface;
iii. processing user input provided at the user interface indicating that the reformulated question is incorrect;
iv. deriving a series of alternate questions that the user might be asking;
v. presenting the series of alternate questions to the user through the user interface.
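The control flow of steps i to v above can be sketched as a single interactive round. The functions passed in are hypothetical stand-ins for the system's NLP components; only the loop structure is the point of the example.

```python
# Sketch of one reformulate-then-alternates round (steps i-v).
# derive_question, derive_alternates, and ask_user are assumed callables,
# not components defined in the source document.

def dialogue_round(nl_input, derive_question, derive_alternates, ask_user):
    """Propose a reformulated question; if the user rejects it, present a
    series of alternate questions until one is confirmed or none remain."""
    reformulated = derive_question(nl_input)           # step i
    if ask_user(f"Did you mean: {reformulated}?"):     # steps ii-iii
        return reformulated
    for alt in derive_alternates(nl_input):            # steps iv-v
        if ask_user(f"Did you mean: {alt}?"):
            return alt
    return None  # no question confirmed in this round
```

A caller would supply the real question-derivation logic and a user-interface callback in place of the stand-ins.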
[11] In some embodiments, the instructions are further configured for: extracting at least one keyword from the natural language input and deriving the series of alternate questions based on the at least one keyword.

[12] In some embodiments, deriving the reformulated question the user might be asking comprises determining a user intent from the natural language input.

[13] In some embodiments, deriving the reformulated question the user might be asking is performed without the at least one keyword.

[14] In some embodiments, an algorithm for extracting the at least one keyword and/or an algorithm for determining the user intent is modified based on an indication from the user of which one of the alternate questions is a correct question.

[15] In some embodiments, the instructions are further configured for: receiving an indication, from the user, that a particular question of the series of alternate questions is correct, and presenting to the user through the user interface an answer to the particular question.

[16] In some embodiments, the method further comprises generating the answer to the particular question using user-specific financial information.

[17] In some embodiments, the particular question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.

[18] In some embodiments, the user intents associated with the reformulated question and with the alternate questions are each associated with a confidence score, and wherein in step ii the reformulated question first formulated is derived from the user intent having the highest confidence score, and wherein in step iv the series of alternate questions are derived from user intents having the next highest confidence scores.

[19] In some embodiments, the algorithm for extracting the at least one keyword and/or an algorithm for determining the user intent is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.

[20] In some embodiments, the answer is modulated based on the financial transactions previously performed by the user and/or accounts held by the user, whereby different users are provided different answers in response to the reformulated question or alternate questions, once confirmed.

[21] In some embodiments, steps iii to v are repeated until the user provides an indication, via the user interface, that one of the alternate questions corresponds to the original question.

[22] In some embodiments, in step iv the number of alternate questions is progressively increased during interaction with the user.
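One way to realize the progressive increase of paragraph [22] is to serve candidate questions, ranked by confidence score, in batches that grow on each rejection. The batch sizes and growth factor below are purely illustrative assumptions.

```python
# Sketch of progressively increasing the number of alternate questions.
# ranked_questions is assumed to be ordered by descending confidence score.

def alternate_batches(ranked_questions, start=1, growth=2):
    """Yield successively larger batches of alternate questions: one on the
    first pass, then twice as many on each subsequent pass, and so on."""
    i, size = 0, start
    while i < len(ranked_questions):
        yield ranked_questions[i:i + size]
        i += size
        size *= growth

batches = list(alternate_batches(["q1", "q2", "q3", "q4", "q5", "q6", "q7"]))
# Yields [['q1'], ['q2', 'q3'], ['q4', 'q5', 'q6', 'q7']]: each rejection
# triggers a larger batch of candidates.
```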

[23] According to another possible embodiment, a computer-implemented method for performing automated interactive conversation with a user is provided. The method comprises:
a. providing a user interface at which the user can provide a natural language input;
b. processing the natural language input with a data processing unit comprising at least one processor executing instructions, the instructions configured for:
i. extracting keywords from the natural language input;
ii. determining from the keywords a user intent;
iii. deriving from the user intent a possible question the user might be asking;
iv. conveying the possible question to the user through the user interface;
v. processing user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determining an answer associated with the confirmed question; and
vii. presenting the answer to the user through the user interface.

[24] According to yet another possible embodiment, a system for performing automated interactive conversation with a user is provided. The system comprises:
a. a user interface configured to receive a natural language input from the user;
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. derive from the natural language input a reformulated question the user might be asking;
ii. convey the reformulated question to the user through the user interface;
iii. process user input at the user interface indicating whether the reformulated question is incorrect;
iv. derive a series of alternate questions that the user might be asking;
v. present the series of alternate questions to the user through the user interface.

[25] In some embodiments, the data processing unit comprises a keyword extractor for extracting keywords from the natural language input and an intent classifier for determining from the keywords and/or natural language input a user intent and a confidence score associated with the user intent, wherein the data processing unit is configured to derive the series of alternate questions based on the at least one keyword.

[26] In some embodiments, the data processing unit comprises: a response generator for performing steps i. to v., the response generator comprising: a question identifier module deriving the reformulated question and alternate questions from the keywords and/or the user intent; an answer identifier module processing user input at the user interface indicating whether the reformulated question or one of the alternate questions is correct; and an answer generator module presenting the answer associated with one of the alternate questions, once confirmed as correct by the user.
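The composition of modules named in paragraph [26] might be wired together as below. The class and its interfaces are assumptions invented for illustration; the patent does not prescribe this structure.

```python
# Hypothetical wiring of the three modules of [26] into a response generator:
# a question identifier, an answer identifier, and an answer generator.

class ResponseGenerator:
    def __init__(self, question_identifier, answer_identifier, answer_generator):
        # Each module is an assumed callable standing in for a real component.
        self.question_identifier = question_identifier
        self.answer_identifier = answer_identifier
        self.answer_generator = answer_generator

    def respond(self, keywords, intent, user_confirmation):
        # Derive a candidate question from the keywords and/or user intent.
        question = self.question_identifier(keywords, intent)
        # Only produce an answer once the user has confirmed the question.
        if self.answer_identifier(question, user_confirmation):
            return self.answer_generator(question)
        return None
```

A concrete system would replace the three callables with the actual question-derivation, confirmation-processing, and answer-generation components.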
[27] In some embodiments, the data processing unit is configured to derive the reformulated question the user might be asking without using the at least one keyword.

[28] In some embodiments, the data processing unit is configured to modify an algorithm for extracting the at least one keyword and/or modify an algorithm for determining the user intent based on an indication from the user of which one of the alternate questions is a correct question.

[29] In some embodiments, the algorithm is a neural network algorithm, and wherein weights and biases of the neural network algorithm are adjusted based on the indication that one of the alternate questions is the correct question.

[30] In some embodiments, the answer generator module is configured to generate the answer to the reformulated or alternate question using user-specific financial information.

[31] In some embodiments, the reformulated or alternate question is a finance-related question, and the user-specific financial information relates to financial transactions previously performed by the user and/or accounts held by the user.

[32] According to a further possible embodiment, a system for performing automated interactive conversation with a user is provided. The system comprises a memory and:
a. a user interface at which the user can provide a natural language input; and
b. a data processing unit to process the natural language input, the data processing unit configured to:
i. extract at least one keyword from the natural language input;
ii. determine from the at least one keyword a user intent associated with the natural language input;
iii. derive from the user intent a possible question the user might be asking;
iv. convey the possible question to the user through the user interface;
v. process user input provided at the user interface indicating that the possible question is correct, thereby confirming that the possible question has been correctly determined;
vi. determine the answer associated with the confirmed question; and
vii. present the answer to the user through the user interface.
BRIEF DESCRIPTION OF THE DRAWINGS

[33] Embodiments will be described, by way of example only, with reference to the accompanying figures wherein:

[34] FIG. 1 is a block diagram of a computer implemented system for performing automated interactive conversation with a user, according to one embodiment;

[35] FIG. 1A is a block diagram of a portion of the computer implemented system relating to keyword extraction and intent classification, according to one embodiment;

[36] FIG. 1B is a block diagram of a portion of the computer implemented system relating to the response generator for generating a first response, according to an embodiment;

[37] FIG. 1C is a block diagram of a portion of the computer implemented system relating to the response generator for generating alternate questions, according to a possible embodiment;

[38] FIG. 1D illustrates a flowchart of the exchanges occurring between the front end and back end of the system, according to a possible embodiment;

[39] FIG. 2 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to one embodiment;

[40] FIGs. 3 and 4 illustrate example message exchanges on a user interface;

[41] FIG. 5 illustrates a flowchart of a computer-implemented method for interacting with a user, according to one embodiment;

[42] FIG. 6 illustrates a flowchart of a computer-implemented method for performing automated interactive conversation with a user, according to another embodiment; and

[43] FIG. 7 illustrates a flowchart of a computer-implemented method for interacting with a user, according to another embodiment.

[44] FIG. 8 illustrates an exemplary flowchart of a portion of the computer-implemented method wherein the user's profile is taken into consideration.
DETAILED DESCRIPTION

[45] For illustrative purposes, specific embodiments and examples will be explained in greater detail below in conjunction with the figures.

[46] FIG. 1 is a block diagram of a computer implemented system 102 for performing automated interactive conversation with a user, according to one embodiment. The system 102 implements a dialogue system, also commonly referred to as a "chatbot".

[47] The system 102 includes a user interface 104 for receiving a natural language input originating from the user, and for providing a response to the user. The attributes of the user interface 104 are implementation specific and depend on how the user is interacting with the system 102. Two examples of a user interface 104 are illustrated in FIG. 1. In one example, the user interface 104 interfaces with a telephone handset belonging to the user. The telephone handset includes a transmitter through which the user speaks and a receiver through which the user hears the response. A speech recognition module 106 is included as part of the system 102 in order to convert from speech to text. As another example, the user interface 104 may interface with a graphical user interface (GUI) on a computing device, such as on the user's mobile device. The user may use a keyboard or touchscreen to provide a text input, and the response would be presented as text on the display screen hosting the GUI. The user interface 104 is the component of the system 102 that interfaces with users and is meant to refer to the components of the interface that belong to the system 102, rather than to the user device. Throughout the description, a "response" is in reply to a communication sent to or captured through the user interface 104, while an "answer" is the resolution to the original question asked by the user. In the context of the present application, the "answer" is the information the user is looking for. A "response" is thus more generic than the "answer".
[48] The system 102 further includes a data processing unit 110, which may implement a natural language processing system. The data processing unit 110 includes a keyword extractor 112, an intent classifier 114, a response generator 116, and a learning component 118.

[49] Still referring to FIG. 1, and also to FIG. 1A, the keyword extractor 112 receives a natural language input originating from the user. The input received at the keyword extractor 112 is a string of text. In general, the string of text includes multiple words, although in some cases it could be that the string of text is only a single word. The string of text may convey a question asked by the user, or a user instruction, or a user's response to a question that was asked by the system 102. The keyword extractor 112 attempts to extract words and/or phrases from the string of text. If any keywords are extracted, the extracted keywords are stored in a memory, e.g. memory 122.
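A toy version of the extraction behaviour described here and in paragraph [50] might look as follows. The lexicon and the surface-property heuristics (capitalization, known phrases, date patterns) are invented for illustration and do not reflect the algorithms of the cited references.

```python
# Toy keyword extractor: pull out words whose surface properties
# (known phrase, capital letter, date format) hint they may be keywords.
import re

# Assumed lexicon of recognized phrases; a real system would use a much
# larger, domain-specific knowledge base.
KNOWN_PHRASES = {"cashback", "savings account", "interest rate"}

def extract_keywords(text: str) -> list[str]:
    keywords = []
    for phrase in KNOWN_PHRASES:
        if phrase in text.lower():          # recognized phrases
            keywords.append(phrase)
    # Capitalized tokens as candidate brand names. Note: this crude rule
    # also catches sentence-initial words; a real extractor would filter.
    keywords += re.findall(r"\b[A-Z][a-z]+\b", text)
    keywords += re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)  # dates
    return keywords
```

For example, `extract_keywords("What is the rate on your cashback Mastercard?")` would include "cashback" (known phrase) and "Mastercard" (capitalized token) among its results.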
[50] In some embodiments, the keyword extractor 112 may recognize properties that indicate a particular word in the string of text may be a keyword, such as the use of a date, capital letter, brand name, recognized phrase, etc. Examples of keyword extraction algorithms that may be implemented by the keyword extractor 112 are described in:

(1) Jean-Louis, L., Gagnon, M., and Charton, E., "A knowledge-base oriented approach for automatic keyword extraction", Computación y Sistemas, 2013, vol. 17, no. 2, pp. 187-196; and

(2) Béchet, F., and Charton, E., "Unsupervised knowledge acquisition for extracting named entities from speech", in Acoustics, Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 5338-5341, March 2010.

[51] In some embodiments, a keyword extraction algorithm is used involving named entity recognition based on knowledge representation of the semantic domain covered by the dialogue system application. For example, the semantic domain can relate to banking in general or to more specific domains such as insurance, loans, investment, trading, etc.
[52] The intent classifier 114 also receives the natural language input
originating from
the user in the form of a string of text and analyzes the string of text to
determine the intent of
the user. In some embodiments, the words in the string of text are compared to
a library of
intents and entity values. For example, if the user asked the question "What
is the rate on your
cashback Mastercard?", then the intent classifier 114 may match the word
"rate" to an intent "get
rate" that is stored in a library of intents. The intent classifier 114 may
determine that the entity
value relating to that intent is "cashback" by the presence of the word
"cashback". In such a
scenario, the intent classifier 114 therefore determines that the user is
asking for a cashback rate.
The presence of the word "Mastercard" may cause the intent classifier 114 to determine that the cashback rate requested by the user is the cashback rate for the Mastercard™ brand credit card.
The intent classifier 114 may associate a confidence value with the determined
intent. The
confidence value will be referred to as a "confidence score", and it
quantifies how confident the
intent classifier 114 is regarding the correctness of its determined intent.
For example, the intent
determined by the intent classifier 114 may be "get cashback rate for Mastercard™ brand credit
card". However, this intent is not necessarily correct, e.g. there is some
ambiguity from the string
of text as to whether the rate requested is cashback rate or another type of
rate instead (e.g.
interest rate for the Mastercard brand credit card). Therefore, the confidence
score may not be
100%, but may instead have a lower value, e.g. 75%.
[53] The intent classifier 114 may be implemented with a neural network.
The neural
network receives as input a text string from the user interface 104, and outputs
a plurality of
intents, each associated with a confidence value or confidence score. For
example, the first intent
"get cashback rate for Mastercard™ brand credit card" can be associated with a confidence score of 75%, a second intent "get interest rate for Mastercard™ brand credit card" can be associated with a confidence score of 60%, and so on. The confidence score is thus an
estimation of the
likelihood or probability that the neural network has correctly interpreted
the user intent from the
text string.
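The ranked output described above can be illustrated with a toy sketch. The intent library and the keyword-overlap scoring below are illustrative stand-ins for the neural network, kept only to show the output shape: a list of intents ranked by confidence score.

```python
# Illustrative intent library (intent name -> trigger words); a hypothetical
# stand-in for the neural network described above.
INTENT_LIBRARY = {
    "get cashback rate": {"rate", "cashback"},
    "get interest rate": {"rate", "interest"},
    "get card features": {"features", "card"},
}

def classify_intents(text: str):
    """Return (intent, confidence) pairs sorted by descending confidence,
    where confidence is the fraction of trigger words found in the text."""
    words = {w.strip("?.,!").lower() for w in text.split()}
    ranked = [
        (intent, len(words & triggers) / len(triggers))
        for intent, triggers in INTENT_LIBRARY.items()
        if words & triggers
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

For the question "What is the rate on your cashback Mastercard?", this sketch ranks "get cashback rate" first (both trigger words present) and "get interest rate" second.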
[54] One example of an algorithm that may be implemented by the intent
classifier 114
is described in: Serban, I. V., Sordoni, A., Bengio, Y., Courville, A. C., and
Pineau, J., "Building
End-To-End Dialogue Systems Using Generative Hierarchical Neural Network
Models", in
Association for the Advancement of Artificial Intelligence (AAAI), Vol. 16,
pp. 3776-3784,
February 2016.
[55] In other embodiments, the intent classifier 114 instead works by
simply looking
for matches between words in the natural language input and words in
prewritten questions that
are stored in memory 122.
[56] Referring to FIGs. 1, 1B and 1D, the response generator 116 receives
one or
more intents from the intent classifier 114, determines the question that the
user is possibly
asking based on the intent(s), and returns the possible question to the user
for verification. The
keyword extractor 112 and the intent classifier 114 can form part of a
question analyzer module
158, as illustrated in FIG. 1D. In possible embodiments, a single intent is
provided as an input to
the response generator 116, corresponding to the intent having the highest
confidence score. In
some embodiments, if the intent has a confidence score above a predetermined
confidence
threshold, the response generator 116 may respond with a reformulated question
to obtain
validation from the user, the reformulated question being an equivalent of
what the question
identifier 160 has determined as the initial/original question. In other
embodiments, the response
generator 116 may respond with a reformulated question regardless of the value
of the
confidence score. The answer to the possible question may also be returned at
this step of the
process, along with the reformulated question for validation. Still referring
to FIGs. 1, 1B and 1D, a first response generator module 116a is called or executed, which
comprises a
question identifier module 160, an answer identifier module 162 and an answer
generator module
164. More specifically, the question identifier 160 processes the intent to
determine a question
that most likely matches the question conveyed by the text string. The
reformulated question is
sent to the user interface, to validate whether it has been correctly
determined. The answer
provided by the user through the user interface 104 is analyzed by the
question identifier 160,
which may also require involvement of the intent classifier 114. If the user
has confirmed
correctness of the reformulated question, the first response generator 116a
then identifies,
formulates and returns the answer, via the answer identifier 162 and the
answer generator
module 164. The question identifier module 160 can thus generate a single or
alternate questions,
and can analyze the response from the user indicating whether the question
identifier 160 has
correctly determined the original question.
[57] In some embodiments, answers to the verified questions may be stored
in memory
and simply retrieved using a mapping between the verified question and the
answer. In other
embodiments, the answer identifier 162 may need to send a request over a
network to obtain the
answer. For example, if the verified question is "what is the cashback rate
for Mastercard™
brand credit card", then the answer identifier 162 may query a database
storing the cashback rate
in order to obtain the cashback rate, and then formulate and send the response
to the user, e.g.
"The cashback rate for our Mastercard is 1%".
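The two retrieval paths just described (a stored mapping versus a query sent over a network) can be sketched as follows; the ANSWER_CACHE mapping and the query_database callable are hypothetical names:

```python
# Hypothetical mapping from verified questions to preprogrammed answers.
ANSWER_CACHE = {
    "what is the cashback rate for the mastercard credit card?":
        "The cashback rate for our Mastercard is 1%.",
}

def identify_answer(verified_question: str, query_database) -> str:
    """Return the stored answer when one is mapped to the verified question;
    otherwise fall back to an external lookup (e.g. a rate database)."""
    key = verified_question.strip().lower()
    if key in ANSWER_CACHE:
        return ANSWER_CACHE[key]
    return query_database(verified_question)
```

The database callable is only invoked on a cache miss, mirroring the answer identifier 162 querying an external source when no preprogrammed answer exists.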
[58] Still referring to FIGs.1 and 1D, and also to FIG. 1C, if the user has
indicated that
the reformulated question is incorrect, alternate questions are generated by
the alternate response
generator 116b, which can comprise or interact with the same modules 158
(including 114), 160,
162, 164 previously described. In this case, the extracted keywords, if any,
are fed to the intent
classifier 114, and the question identifier module 160 generates a list of
alternate questions based
on the intents having the highest confidence scores. The list of alternate
questions is then sent to
the user interface 104 for confirmation by the user. The question identifier
module 160 analyzes
the user's response, which can be a selection of one of the alternate
questions, or an indication
that none of the alternate questions matched his original question. An answer
is provided by the
answer generator module 164, once the original question has been validated.
Advantageously,
the question identifier 160 forces the user to confirm or select the correct
question from the list,
before providing an answer, which not only increases the success rate of
providing
satisfactory/useful answers, but also allows the learning component 118 to
continuously improve
the process of identifying the initial question.
[59] The learning component 118 thus adapts the keyword extractor 112
and/or intent
classifier 114 based on the answers provided by the user, as discussed in more
detail later. For
example, the learning component 118 may readjust the weights and biases
applied by the neural
network for determining intents based on text string inputs and/or extracted
keywords.
[60] Operation of the intent classifier 114, response generators 116a or
116b, and
learning component 118 will be explained in more detail below in relation to
FIG. 2.
[61] The system 102 further includes a memory 122 for storing information
used by
the data processing unit 110. For example, the memory 122 may store a library
of intents, the
extracted keywords from the keyword extractor 112, responses or partial
responses
preprogrammed for use by the response generator 116, etc. The memory 122 can
comprise a
combination of RAM and ROM and can be part of a single server or distributed
across several
servers and/or databases, either locally or remotely on cloud-based servers.
[62] The data processing unit 110 and its components (e.g. the keyword
extractor 112,
intent classifier 114, response generator 116, and learning component 118) may
be implemented
by one or more processors that execute instructions (software) stored in
memory. The memory in
which the instructions are stored may be memory 122 or another memory not
illustrated. The
instructions, when executed, cause the data processing unit 110 and its
components to perform
the operations described herein, e.g. extracting keywords from the user input,
classifying intent,
computing a confidence score, formulating the response to send to the user,
updating one or
more algorithms based on input from the user, etc. In some embodiments, the
one or more
processors consist of a central processing unit (CPU).
[63] Alternatively, some or all of the data processing unit 110 and its
components may
be implemented using dedicated circuitry, such as an application specific
integrated circuit
(ASIC), a graphics processing unit (GPU), or a programmed field programmable
gate array
(FPGA) for performing the operations of the data processing unit 110 and its
components.
[64] In some embodiments, in order to try to increase the robustness of the
dialogue
system, an interactive process is used in which the number of responses may be
progressively
increased during an interaction with a user. Example embodiments are provided
below.
[65] FIG. 2 illustrates a flowchart of a computer-implemented method for
performing
automated interactive conversation with a user, e.g. in order to provide an
answer to a question
from the user, according to one embodiment.
[66] In step 202, a natural language input originating from a user is
received via the
user interface 104. The natural language input is a string of text that
conveys a question.
[67] In step 204, the keyword extractor 112 attempts to extract keywords
from the
natural language input. If one or more keywords are extracted, then they are
stored in memory
122.
[68] In step 206 the intent classifier 114 determines an intent from the
natural language
input. The intent classifier 114 also determines the confidence score for its
determined intent. In
step 207, if the confidence score is below a threshold, then the method
proceeds to step 221.
Otherwise, if the confidence score is above the threshold, it indicates that
the system 102 is
confident enough in its determined intent to return a single question for
verification, and the
method proceeds to step 208.
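The branching of steps 206, 207, 208 and 221 can be sketched as a small dispatcher. The 0.7 threshold is an assumed value, since the description leaves the threshold unspecified:

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed value; the threshold is left open above

def dispatch(ranked_intents, keywords):
    """Choose the next step of FIG. 2 from the top confidence score.

    ranked_intents: (intent, confidence) pairs, best first.
    keywords: keywords extracted in step 204 (possibly empty).
    """
    top_intent, top_score = ranked_intents[0]
    if top_score >= CONFIDENCE_THRESHOLD:
        return ("verify_single_question", top_intent)       # step 208
    if not keywords:
        return ("reply_not_understood", None)               # step 230
    return ("return_k_questions", ranked_intents)           # steps 222-224
```

A 75% intent thus yields a single question for verification, while two low-confidence intents with extracted keywords yield a list of questions.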
[69] In step 208, the response generator 116a returns the question for
verification by
the user, via the question identifier module 160 described above.
[70] In step 209, the data processing unit 110 determines whether the
question returned
in step 208 was verified as correct by the user. Step 209 may include
receiving a natural
language input from the user, and the intent classifier 114 determining from
the intent of the
natural language input whether or not the user has verified the correctness of
the question, via the
question identifier module 160.
[71] For example, the original natural language input received in step 202
may ask
"What is the rate on your cashback MasterCard?" The intent classifier 114 may
determine with
high enough confidence that the intent is "get cashback rate for MasterCard",
and so in step 208
the response generator 116, via the question identifier module 160, returns "I
think I understand
your question. Can you verify for me that your question is: What is the
cashback rate for the
MasterCard credit card?" The user replies "Yes". The input "Yes" is determined
in step 209 to
be verifying that the question is correct, via the question identifier module
160.
[72] If the question is verified as correct, then in step 210 the response
generator 116
returns the answer to the question, via the answer identifier 162 and answer
generator 164, and
the method ends. Optionally, the user's answer confirming correctness of the
response may be
used by the intent classifier 114 and/or the question identifier 160 to
increase their
effectiveness for future questions that may be similar. However, if the
question is not verified as
correct, then the method proceeds to step 211.
[73] When a question is derived by the intent classifier 114 from the
natural language
input, and the confidence score is above the threshold, then the derived
question is referred to as
a "likely question". A "likely question" is a question that the system 102
determines was likely
conveyed by the natural language input. In step 208, it is the likely question
that is returned.
However, it is only a "likely" question because it is not necessarily the
actual question that was
asked, e.g. if the intent determined by the intent classifier 114 does not
correctly reflect the
user's intent.
[74] If step 211 is reached, it means that the initial question presented
to the user for
verification in steps 208 and 209 is not verified as correct. In step 211, the
data processing unit
110 determines whether one or more keywords were recognized and extracted by
the keyword
extractor 112 in step 204. If no keywords were recognized, then the method
proceeds from step
211 to step 230. Step 230 is explained later. Otherwise, if one or more
keywords were
recognized and extracted, then the method proceeds from step 211 to 212.
[75] In step 212, the intent classifier 114 identifies n alternative
intents based on the
keywords extracted in step 204, where n is a natural number. n may vary
depending upon how
many alternative intents can be determined, and n may also be capped. For
example, if only one
alternative intent is determined by the intent classifier 114, then n is
limited to n = 1. As another
example, if five alternative intents are determined by the intent classifier
114, then n may be
capped at four, e.g. only the top four alternative intents are identified.
[76] An alternative intent is identified by the intent classifier 114 as
follows: the
keywords are processed, but instead of identifying the most likely intent
(identified in step 206),
a different intent is identified that is determined to be less likely, e.g.
has a lower confidence
score. For example, the user's question may be "What is the rate on your
cashback Mastercard?"
The intent classifier 114 determines two possible intents: (1) the user is
requesting the cashback
rate for the Mastercard™ brand credit card, and the confidence score of this
determined intent is
75%; or (2) the user is requesting the interest rate for the Mastercard™
brand credit card, and the
confidence score of this determined intent is 65%. The intent identified in
step 206 is the one
with the higher confidence score, which in this example is cashback rate. The
(n = 1) alternative
intent identified in step 212 is the one with the lower confidence score,
which in this example is
interest rate. Each alternative intent corresponds to an alternative question
the user might be
asking, which is derived from the natural language input conveying the
question that was
received at step 202.
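Step 212 amounts to taking the ranked intents with the most likely one removed, capped at n. A minimal sketch (the cap of four follows the example above):

```python
def alternative_intents(ranked_intents, cap=4):
    """Step 212: the n alternative intents are the ranked intents minus the
    most likely one (already used in step 206), limited to `cap` entries."""
    return ranked_intents[1:1 + cap]
```

With the 75%/65% example above, the single (n = 1) alternative returned is the 65% interest-rate intent.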
[77] The n alternative intents correspond to n alternative questions, and
in step 214 the
n alternative questions are generated by the question identifier module 160
and returned to the
user via the user interface 104.
[78] In step 215 it is determined whether one of the n alternative
questions is identified
as correct by the user, by the question identifier 160. Step 215 may be
performed by determining
the intent of an input received from the user after the n alternative
questions are presented to the
user. For example, if the user responds "First question", then the question
identifier 160
determines that the first question of the n alternative questions is the
correct question.
[79] If one of the n alternative questions is identified as correct, then
in step 216 the
response generator 116 returns, via the answer identifier 162 and the answer
generator 164, the
corresponding answer and the method ends. If none of the n alternative
questions is identified as
correct, then the method proceeds to step 230. Step 230 is described later.
The intent classifier and question identifier module can also be readjusted at this point, for example by providing the user's answer to a neural network in order to modify the weights and biases of its layers.
[80] Returning to step 207, if the confidence score of the intent determined by the intent classifier 114 in step 206 is below the threshold, then the method proceeds to step 221. If step 221 is
reached, it means
that an intent has been determined from the natural language input, but the
intent classifier 114 is
not particularly confident that the determined intent is correct.
[81] Therefore, in step 221, the data processing unit 110 determines
whether one or
more keywords were recognized and extracted by the keyword extractor 112 in
step 204. If no
keywords were recognized, then the method proceeds from step 221 to step 230.
Step 230 is
explained later. Otherwise, if one or more keywords were recognized and
extracted, then the
method proceeds from step 221 to step 222. In step 222, the intent classifier
114 identifies the k
most likely intents, where k is a natural number greater than or equal to one.
k does not need to
have any relation to n, but in some embodiments k = n or k = n + 1. k may vary
depending
upon how many intents can be determined, and k may also be capped. The k
intents returned
may be the k intents having the highest confidence scores. For example, the
user's question may
be "What is the big deal about your Mastercard?" The intent classifier 114
determines two
possible intents: (1) the user is requesting a summary of the features of the
Mastercard™ brand
credit card, and the confidence score of this determined intent is 45%; or (2)
the user is
requesting information on promotional offers for signing up for the Mastercard™
brand credit
card, and the confidence score of this determined intent is 35%. Neither
intent has a high enough
confidence score to proceed to step 208, but in step 222 both intents (k = 2)
are identified. Each
intent corresponds to an alternative question the user might be asking, which
is derived from the
natural language input conveying the question that was received at step 202.
[82] The k intents correspond to k questions, generated by the question
identifier
module 160, and in step 224 the k questions are returned to the user via the
user interface 104.
[83] In step 225 it is determined whether one of the k questions is
identified as correct
by the user, via the question identifier module 160. Step 225 may be performed
by determining
the intent of an input received from the user after the k questions are
presented to the user. If one
of the k questions is identified as correct, then in step 226 the response
generator 116 returns, via
the answer identifier module 162 and the answer generator module 164, the
corresponding
answer and the method ends. If none of the k questions is identified as
correct, then the method
proceeds to step 230. Again here, the user's question selection can be fed
back to the intent
classifier 114 and/or the question identifier 160 to improve accuracy of the
validation/reformulated questions.
[84] If step 230 is reached in the method of FIG. 2 it means that the
system 102 is not
able to determine the question the user is asking. In step 230 the response
generator 116 sends a
reply to the user indicating this, e.g. "Sorry, I do not understand your
question. Please try to
rephrase your question".
[85] FIG. 1D thus summarizes the back-and-forth validation process described in relation to FIG. 2, occurring between the front end 120 and the back end 140
of the system
102. Questions and responses are sent to and captured via the user interface
104 and the back end
modules 158, 116a and 116b process the user's responses and generate the
reformulated
question(s). As explained previously, the question entered by the user is
first analyzed by the
question analyzer 158, and intents, keywords and/or text strings are fed to
either one of the first
response generator 116a and alternate response generator 116b, depending on
the confidence
score of the intent. In some embodiments, when the confidence score is above a
given threshold,
a single validation question is returned to the user interface. In other
embodiments, the validation
question with the highest confidence score is returned to the user interface.
In other
embodiments, several alternate questions are proposed, one of which can be
selected by the user
via the user interface if it correctly reflects the original question asked.
[86] FIG. 3 illustrates an example message exchange on a user interface
104,
according to one embodiment. The message exchange corresponds to steps 202,
204, 206, 207,
208, 209, 211, 212, 214, 215, and 216 of FIG. 2. The number of responses (in
the form of
questions) is progressively increased during the interaction with the user. In
particular, initially
only one question is presented for verification at 382. However, upon
receiving user feedback
indicating that the initial question is incorrect, n = 3 alternative questions
are provided at 384.
The user indicates that the first one of the three alternative questions is
correct at 386, and the
answer corresponding to that question is returned at 388.
[87] FIG. 4 illustrates an example message exchange on a user interface
104,
according to another embodiment. The message exchange corresponds to steps
202, 204, 206,
207, 221, 222, 224, 225, and 226 of FIG. 2. The confidence score relating to
the most likely
intent does not exceed the threshold, and so the k = 3 most likely responses
are returned at 392.
The user indicates that the first one of the three questions is correct at
396, and the answer
corresponding to that question is returned at 398.
[88] Returning to FIG. 2, optionally, in step 234, the learning component
118 updates
the keyword extractor 112 and/or the intent classifier 114 to reflect the
user's response that
indicates which question is the correct question. For example, the learning
component 118 may
receive the output of the "Yes" branch of step 215 and/or step 225, which
indicates the correct
question, and the learning component 118 may use this indication to update or
train the intent
classifier 114 and/or the keyword extractor 112. Two examples follow.
[89] One example: The user initially asks the question "What is the big
deal about your
Mastercard?" The system does not determine an intent with a high enough
confidence score and
so three questions are returned to the user, as shown at 392 of FIG. 4. The
user replies that the
first question is the correct one, i.e. the correct question is "What are the
features of the Mastercard
credit card?". The learning component 118 then updates the keyword extractor
112 and/or intent
classifier 114 to add the vocabulary "big deal" and to indicate that "big
deal" is a synonym of
"features". Then, if in the future a user asks a question including "big
deal", e.g. "What is the big
deal regarding your savings account", then the intent classifier 114 will more
confidently
determine that the user intent is that the user wants to learn about the
features of the savings
account.
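The "big deal" to "features" update in this example can be sketched as a synonym table consulted before classification. The table and the normalize step are illustrative, not the weight-adjustment mechanism of a neural classifier:

```python
# Illustrative synonym table; a neural classifier would instead have its
# weights and biases readjusted by the learning component 118.
SYNONYMS = {}

def learn_synonym(user_phrase: str, known_word: str) -> None:
    """Record that a user's phrasing maps to a word already in the vocabulary,
    e.g. "big deal" confirmed to mean "features"."""
    SYNONYMS[user_phrase.lower()] = known_word.lower()

def normalize(text: str) -> str:
    """Rewrite learned synonyms so future questions classify more confidently."""
    result = text.lower()
    for phrase, word in SYNONYMS.items():
        result = result.replace(phrase, word)
    return result
```

After `learn_synonym("big deal", "features")`, a later question about the "big deal regarding your savings account" is normalized to mention "features" before intent classification.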
[90] Another example: The user initially asks the question "What is the
rate on your
cashback Mastercard?" The system initially returns the incorrect question, as
shown at 382 of
FIG. 3, and so three alternative questions are returned to the user, as shown
at 384 of FIG. 3. The
user replies at 386 that the first question is correct, i.e. the correct
question is "What is the
interest rate on the Mastercard credit card". The learning component 118 then
updates the intent
classifier 114 to increase the confidence score of the entity value "interest
rate" when "rate" is
used in the user's question. Then, if in the future a user asks a similar
question, e.g. "What is the
rate on your Visa card", then the intent classifier will more confidently
determine that the user
intent is that the user wants to know the interest rate for the Visa brand
credit card.
[91] An example of a learning algorithm that may be implemented by the
learning
component 118 is: Schatzmann, J., Weilhammer, K., Stuttle, M., & Young, S.
(2006), "A survey
of statistical user simulation techniques for reinforcement-learning of
dialogue management
strategies", The Knowledge Engineering Review, 21(2), 97-126.
[92] In alternative embodiments, steps 208 and 209 of FIG. 2 may be
modified to
instead just return an answer to the determined question, and ask for
validation that the returned
answer is correct, in which case step 210 is not needed. For example, box 382
in FIG. 3 may
instead be: "The cashback rate on the MasterCard credit card is 1%. Did that
answer your
question?". If the user answers "Yes" then the method ends, whereas if the
user answers "No,
that did not answer my question", then the method proceeds to step 211.
[93] In some embodiments, the original question asked in the natural
language input
received at step 202 may actually consist of more than one question, in which
case the system
102 may extract and process each question separately, or the intent classifier
114 may try to
determine an overall intent. For example, if the natural language input from
the user in step 202
is "Does your bank offer multiple credit cards? What is the rate of each
one?", then the intent
classifier 114 may determine that the intent is that the user wants a
comparison of the rate of
each of the bank's credit cards.
[94] In some embodiments, the natural language input received at step 202
may not be
a question, but may instead be a request or an instruction to perform an
action. For example, the
input may be a request for information. The reply may then be a question that
confirms whether
particular information is being requested. For example, the natural language
input received in
step 202 may be "Provide me with the rate on your cashback MasterCard", and
the initial
question returned in step 208 may be "Please confirm that you are asking: What
is the cashback
rate on the MasterCard credit card?" Similarly, the alternative questions in
steps 214 and 224
may ask whether particular information is being requested.
[95] In some embodiments, the natural language input received at step 202
may be an
instruction to perform an action, e.g. "open a new account", in which case the
question(s)
returned may relate to clarification or confirmation before proceeding, e.g.
in step 214 "Do you
mean any one of the following actions: (1) Open a new savings account?; or (2)
open a new
chequing account?; or (3) open a new student account?".
[96] In some embodiments, when a user asks a question or requests an
action, the
response returned by the system 102 may be formulated based on information
specific to the
user. In some embodiments, the response may be in reply to a user's finance
related question or
finance related action. The response may be a function of the user's financial
information, e.g.
the user's prior financial transactions. The response may be a question or an
answer or an action.
[97] FIG. 5 illustrates a flowchart of a computer-implemented method for
interacting
with a user, according to one embodiment. In step 452, the data processing
unit 110 receives, in
text form, a natural language input originating from a user via the user
interface 104. The natural
language input conveys a finance related question or a finance related action
to be performed. As
an example, the user may be asking "What is the monthly fee for your savings
account?" (a
finance related question), or the user may be instructing "Please open a new
savings account" (a
finance related action).
[98] In step 454, an intent is determined from the natural language input,
possibly
using keywords extracted from the natural language input. In step 456, the
response generator
116 formulates a response (e.g. a question, an answer, or an action) based on
the intent.
However, the response formulated by the response generator 116 is based on
user-specific
financial information, as explained below.
[99] Stored in memory 122 is the identity of the user. The system 102 knows
and
stores the identity of the user because the identity of the user has been
previously provided to the
system 102. As one example, the user may have previously provided their bank
card number to
the system 102, which is used to uniquely identify the user. As another
example, the system 102
may be part of an online banking platform, and the user is signed into their
online banking, such
that the system 102 is aware of the identity of the user.
[100] Stored in a data structure, e.g. a database, is user-specific
financial information.
User-specific financial information is financial information that is specific
or unique to the user.
A non-exhaustive list of user-specific financial information includes any one,
some, or all of the
following: prior financial transactions performed by the user, e.g. a stored
record of previous
financial transactions; and/or quantity, quality, or type of financial
transactions performed by the
user; and/or user account balances; and/or number or type of accounts held by
a user (examples
of accounts include banking accounts, mortgage accounts, investment accounts,
etc.); and/or
credit information for the user; and/or information relating to banking
products utilized by the
user, e.g. whether the user has a mortgage, a credit card, investments, etc.
The data structure may
be stored in memory 122 or at another location, e.g. a database connected to
data processing unit
110 via a network.
[101] There are multiple candidate responses that may be returned to the
user, which are
selected or weighted based on the user-specific financial information. Some
examples are
provided below.
[102] Example: The natural language input originating from the user in step
452
conveys the following finance-related question: "What is the monthly fee for
your savings
account?" The intent determined in step 454 is that the user is requesting the
monthly fee for a
savings account. The response generator 116 determines the following, e.g. by
querying a
database: the standard monthly fee is $10 per month, but the fee is reduced to
$5 per month if the
user has a mortgage account or an investment account with the bank, and the
fee is reduced to $0
per month if the user has both a mortgage account and an investment account
with the bank.
Therefore, there are three candidate responses: $10, $5, or $0. The response
generator 116 uses
the user identification stored in memory 122 to query a database that lists
the accounts held by
the user. The accounts held by the user include a mortgage account, but not an
investment
account, and so the response returned to the user in step 456 is that the
monthly fee is $5, or the
response may be a question, e.g. "We can offer you a savings account for a
monthly fee of only
$5, are you interested?".
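The three candidate responses of this example reduce to a simple rule over the accounts held by the user; a sketch (the account-type labels are assumed names):

```python
def monthly_savings_fee(accounts_held) -> int:
    """Fee schedule from the example: $10 standard, $5 with a mortgage or an
    investment account, $0 with both. Account-type labels are illustrative."""
    has_mortgage = "mortgage" in accounts_held
    has_investment = "investment" in accounts_held
    if has_mortgage and has_investment:
        return 0
    if has_mortgage or has_investment:
        return 5
    return 10
```

For the user above, who holds a mortgage account but no investment account, the function returns the $5 fee quoted in the response.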
[103] Another example: The natural language input originating from the user
in step
452 conveys the following finance-related question: "What is this month's fee
for my savings
account?" The intent determined in step 454 is that the user wants to know
this month's fee for
the user's savings account. The fee is a function of the number of financial
transactions
performed by the user involving the user's savings account, e.g. $1 fee for
every transfer into or
out of the savings account in the month. The response generator 116 uses the
user identification
stored in memory 122 to query a database that lists the number of transactions
that month. The
database returns a value indicating that there were three transfers since the
beginning of the
month, and so the response returned to the user in step 456 is that the fee
will be $3.
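The fee computation in this example reduces to a simple multiplication over the queried transaction count; a minimal sketch (the function name is an assumption, the $1-per-transfer rule comes from the example):

```python
def monthly_savings_fee_to_date(num_transfers, fee_per_transfer=1):
    # $1 fee for every transfer into or out of the savings
    # account in the month, per the example above.
    return num_transfers * fee_per_transfer

# Three transfers since the beginning of the month -> a $3 fee.
fee = monthly_savings_fee_to_date(3)
```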
[104] Another example: The natural language input originating from the user
in step
452 conveys the following finance-related action: "transfer $100 from my
savings account to my
chequing account". The intent determined in step 454 is that $100 is to be
transferred from the
user's savings account to the user's chequing account. The response generator
116 determines
that the user has two savings accounts ("A" and "B"), and so there are two
candidate responses:
either transfer the $100 from the user's savings account A or transfer the
$100 from the user's
savings account B. The response generator 116 uses the user identification
stored in memory 122
to query the account balances for savings accounts A and B and determines that
savings account
B has no money in it. In response, the response generator 116 performs the
transfer from savings
account A, perhaps after sending a question to the user confirming that the
money is to be
transferred from savings account A.
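Resolving the ambiguity between the two candidate transfers can be sketched as below. The function and data shapes are hypothetical; savings account B being empty follows the example, while the balance of account A is an assumed value.

```python
def choose_source_account(balances, amount):
    """Pick the first candidate savings account whose balance can
    cover the transfer; return None if no account can."""
    for name in sorted(balances):          # deterministic order
        if balances[name] >= amount:
            return name
    return None

# Account B has no money in it, so the $100 transfer is
# performed from savings account A (balance assumed here).
source = choose_source_account({"A": 250, "B": 0}, 100)
```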
[105] In some embodiments, the method of FIG. 2 may be modified to
incorporate
generating a response based on user-specific financial information. For
example, the answer
returned in steps 210, 216, and/or 226 may be based on user-specific
financial information.
In a variation of FIG. 2, answers (instead of questions) may be returned in
steps 208/209,
214/215, and 224/225 (in which case steps 210, 216 and 226 are not needed).
The initial answer
returned in step 208/209 may be formulated based on financial information
specific to the user. If
in step 209 the user found the answer to be unsatisfactory (e.g. incorrect),
then the alternative
intents or answers (e.g. of step 214/215) may or may not be based on the
user's financial
information.
[106] An example: The natural language input originating from the user
conveys the
following finance-related question: "What is the rate of your savings
account?" The intent
determined is that the user is requesting the interest rate of a savings
account, and the confidence
score is high enough to immediately supply an answer to the question. The
standard interest rate
for a savings account is 1% but can be offered at 1.5% if the user has a
mortgage account with
the bank. The response generator 116 uses the user identification stored in
memory 122 to query
a database that lists the accounts held by the user. The accounts held by the
user include a
mortgage account, and so the response returned to the user is that the
interest rate is 1.5%. It is
then determined that the user is not satisfied with the answer, e.g. the user
actually wanted to
know the fee for the savings account. n alternative intents are therefore
identified, and n
corresponding alternative answers are returned to the user. However, the n
corresponding
alternative answers are not formulated based on user-specific financial
information because the
system 102 is now not as confident about whether the alternative answers even
reflect the
question actually asked by the user. This is because the confidence scores
associated with the
alternative intents are lower than the confidence score associated with the intent initially determined.
[107] In some embodiments, the response may only be formulated based on
user-
specific financial information if the confidence score of the intent
associated with the response is
above a particular threshold. For example, if the intent has a confidence
score of 90% or above,
then modify the corresponding response based on the user-specific financial
information;
otherwise, do not modify the corresponding response based on the user-specific
financial
information.
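The thresholding described here can be sketched as a simple guard. The 90% value comes from the text; the function and argument names are assumptions.

```python
CONFIDENCE_THRESHOLD = 0.90  # "90% or above" per the text

def final_response(generic, personalized, intent_confidence):
    # Personalize the response only when the intent's confidence
    # score meets the threshold; otherwise fall back to the
    # generic (non-user-specific) response.
    if intent_confidence >= CONFIDENCE_THRESHOLD:
        return personalized
    return generic
```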
[108] FIG. 6 illustrates a flowchart of a computer-implemented method for
performing
automated interactive conversation with a user, according to one embodiment.
The automated
interactive conversation may be performed in order to provide an answer to a
question from the
user.
[109] In step 502, a user interface 104 is provided at which the user can
provide a
natural language input. The natural language input is processed by the data
processing unit 110.
The data processing unit 110 comprises at least one processor executing
instructions. The
instructions are configured to cause the data processing unit 110 to perform
the remaining steps
of FIG. 6.
[110] In step 504, the data processing unit 110 derives, from the natural
language input,
a possible question the user might be asking. An example of step 504 is
described earlier in
relation to steps 202 to 208 of FIG. 2.
[111] In step 506, the data processing unit 110 conveys the possible
question to the user
through the user interface 104 for verification by the user. An example of
step 506 is described
earlier in relation to steps 208 and 209 of FIG. 2.
[112] In step 508, the data processing unit 110 processes user input at the
user interface
104 indicating that the possible question is incorrect (e.g. the "No" branch
of step 209 of FIG. 2).
[113] In step 510, the data processing unit 110 derives a series of
alternate questions
that the user might be asking (e.g. step 212 of FIG. 2). In some embodiments,
step 510 may only
be performed if at least one keyword was recognized and extracted from the
natural language
input. In some embodiments, at least one keyword is recognized and extracted
from the natural
language input, and step 510 includes deriving the series of alternate
questions based on the at
least one keyword.
[114] In step 512, the data processing unit 110 presents the series of
alternate questions to
the user through the user interface 104.
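The control flow of steps 504 to 512 can be summarized as follows. This is a sketch only: the callables passed in are assumptions standing in for the data processing unit 110 and the user interface 104.

```python
def handle_input(nl_input, derive_question, user_confirms,
                 derive_alternates, present):
    possible = derive_question(nl_input)          # step 504
    # Steps 506/508: convey the possible question and check
    # whether the user indicates it is incorrect.
    if user_confirms(possible):
        return [possible]
    alternates = derive_alternates(nl_input)      # step 510
    present(alternates)                           # step 512
    return alternates

# Example session where the first guess is rejected:
shown = []
result = handle_input(
    "fee savings",
    derive_question=lambda s: "What is the savings account rate?",
    user_confirms=lambda q: False,
    derive_alternates=lambda s: ["What is the savings account fee?"],
    present=shown.extend,
)
```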
[115] In some embodiments, deriving the possible question the user might be
asking in
step 504 includes determining a user intent from the natural language input.
In some
embodiments, deriving the possible question the user might be asking is
performed without an
extracted keyword.
[116] In some embodiments, an algorithm for extracting at least one keyword
and/or an
algorithm for determining user intent is modified based on an indication from
the user of which
one of the alternate questions is a correct question.
[117] In some embodiments, the method further includes receiving an
indication, from
the user, that a particular question of the series of alternate questions is
correct, and presenting to
the user through the user interface an answer to the particular question. In
some embodiments,
the method includes generating the answer to the particular question using
user-specific financial
information. In some embodiments, the particular question is a finance-related
question, and the
user-specific financial information relates to financial transactions
previously performed by the
user and/or accounts held by the user.
[118] FIG. 7 illustrates a flowchart of a computer-implemented method for
interacting
with a user, according to one embodiment, wherein the question from the user
relates to finance
and wherein responses returned by the system to the user interface are based
on user-specific
information. FIG. 8 provides an example of possible sub-steps that can be performed when the response to a user's question depends on the user's profile.
[119] In step 552 of FIG. 7, a user interface 104 is provided at which the user can provide a natural language input conveying a finance-related question or a finance-related action to be performed. The natural language input is processed by the data
to be performed. The natural language input is processed by the data
processing unit 110. The
data processing unit 110 comprises at least one processor executing
instructions. The instructions
are configured to cause the data processing unit 110 to perform the remaining
steps of FIG. 7.
[120] In step 554, the data processing unit 110 derives, from the natural language input, a possible finance-related question or possible finance-related action. In step 556, the data processing unit 110 obtains a series of candidate responses, for example using a query lookup table, each of which is a response to the possible finance-related question or the possible finance-related action. In step 558, the data processing unit 110 selects one of the candidate responses on the basis of user-specific financial information.
[121] In step 560, the data processing unit 110 presents the selected
candidate response
to the user through the user interface 104. Other examples of financial
related questions and
responses are provided earlier when describing FIG. 5 and related embodiments.
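One way to picture the query lookup table of step 556 and the user-specific selection of step 558 is shown below. The table contents, intent names, and condition keys are invented for illustration; the patent only says a query lookup table is used.

```python
# Hypothetical lookup table: intent -> ordered candidate responses,
# each keyed by a condition on the user's financial profile
# (None marks the default response).
LOOKUP_TABLE = {
    "savings_fee": [
        ("has_mortgage", "The monthly fee is $5."),
        (None, "The monthly fee is $10."),
    ],
}

def select_response(intent, profile):
    for condition, response in LOOKUP_TABLE[intent]:      # step 556
        if condition is None or profile.get(condition):   # step 558
            return response

answer = select_response("savings_fee", {"has_mortgage": True})
```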
[122] In some embodiments, the candidate responses are a series of answers,
each
answer corresponding to a respective possible finance related question. In
other embodiments,
the candidate responses are a series of actions, each action corresponding to
a respective possible
finance related action instructed by the user. In other embodiments, the
candidate responses are a
series of questions. The questions may each correspond to a possible finance-related question
being asked. In some embodiments, the user-specific financial information
relates to financial
transactions previously performed by the user and/or accounts held by the
user. In some
embodiments, the user-specific financial information is retrieved using an
identifier of the user
stored in memory.
[123] For instance, as per the example presented in FIG. 8, the confirmed
or validated
question can be: "What is the interest rate on the Master Card?" In the example of FIG. 8, the
answer relates to the interest rate of a credit card, as determined by
querying a lookup table, as in
step 802. For some questions, once confirmed, the answer will be independent
of the user's
profile, and therefore the answer will be the same for all users. This scenario
is reflected by the
left side of the flowchart, where the answer, corresponding in the example to
a 4% credit card
interest rate, is the same for all users. In other cases, the answer can vary
depending on the
user's profile, such as based on the user's financial profile, as per step
804. For example, a user
with three or more bank accounts can be offered a lower interest rate than
users having a single
bank account. An ID database can thus be queried, as per step 806, to uniquely
identify the user
with a user ID. Once the user has been uniquely identified, financial databases can in turn be
queried to determine the personal or financial profile associated with the
unique ID (step 808).
The financial profile can include for example the number of accounts, cards,
financial services,
amount per account and loans associated with the user ID. In the example, for
users having more
than three accounts linked to their unique identifier, the system can be
configured such that a
lower interest rate is offered to these users. The rules for modulating the
answers based on
financial information parameters can be stored for instance in a lookup table
for answers
associated with financial information, as in step 810. In the example, the
user has four different
accounts, and thus has access to a reduced interest rate of 3%, instead of 4%.
The answer is
returned to the answer generator module 164 at step 812.
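The modulation rule of FIG. 8 reduces to a single comparison on the profile retrieved in step 808. The rates and the more-than-three-accounts condition come from the example; the function name is an assumption.

```python
BASE_RATE = 4.0     # % credit card interest rate common to all users
REDUCED_RATE = 3.0  # % for profiles with more than three accounts

def credit_card_rate(num_accounts):
    # Step 810: modulate the answer based on a rule stored in the
    # lookup table for answers associated with financial information.
    return REDUCED_RATE if num_accounts > 3 else BASE_RATE

# The example user has four accounts, so the returned rate is 3%.
rate = credit_card_rate(4)
```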
[124] The predominant modus operandi of dialogue systems consists in
determining the
question asked by a user, and in returning an answer corresponding to the
determined question.
Despite recent advances and developments in the field, the success rate of
dialogue systems is
still surprisingly low, wherein the success rate corresponds to the ability to
correctly determine
the question asked by a user, and to return a satisfactory/accurate answer
following a dialogue
session with the user. The proposed system and method improve on existing
dialogue systems
and methods by prompting users to confirm the system's "understanding" of
their question, such
as by reformulating the possible question. In cases where the level of
confidence in the
determined question is low, or if the initial attempt at determining the
original question has
failed, the system generates several candidate questions (alternate questions)
to be validated by
the user, instead of simply providing what would most likely be an inaccurate
answer, as a
typical chatbot would. The proposed method and system have been shown to significantly increase the success rate in providing accurate answers to users' questions.
[125] A prototype of the proposed automated interactive conversation/dialogue system
has been tested and compared with an existing commercially available system.
In the
experiment, the same 100 predetermined questions with known answers were asked
of both the
proposed system 102 and an existing commercially available system. With
respect to the
proposed system 102, with reference to FIG. 3, dialogue box 382 is considered
a first attempt by
the proposed system 102, and dialogue box 384 is considered a second attempt by
the proposed
system 102. The dialogue session of FIG. 3 would thus be attributed a score of
0.5. The existing
commercially available system on the other hand provides an answer to the
question in its first
attempt and if the first attempt is incorrect, the existing commercially
available system provides
one or more alternative answers. This method of responding with a first answer
and then
subsequently with other alternative answers is common amongst existing chatbot
dialogue
systems. If the commercially available system provided a correct answer at the first attempt, a score of 1 was assigned, and if a correct answer was provided at the second attempt, a score of 0.5 was assigned. The assigned scores for all 100 questions were summed and the result divided by 100,
for each system. The results are shown in the table below:
System tested Accuracy score
Proposed system and method 77.2%
Existing commercially available system 51.3%
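The scoring scheme used in this experiment can be reproduced as follows. The input encoding, which records for each question the attempt at which a correct answer was given, is an assumption; the per-attempt scores (1 and 0.5) come from the text.

```python
def accuracy_score(correct_attempts):
    """correct_attempts: one entry per test question, holding 1 or 2
    for the attempt at which a correct answer was given, or None if
    no correct answer was given. Returns the mean score."""
    per_attempt = {1: 1.0, 2: 0.5}
    total = sum(per_attempt.get(a, 0.0) for a in correct_attempts)
    return total / len(correct_attempts)

# A session like the one of FIG. 3 (correct on the second attempt)
# contributes 0.5 to the sum.
score = accuracy_score([1, 2, None, 1])  # -> 0.625
```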
[126] As can be appreciated, a relative increase of 50.4% in the accuracy of the dialogue process is achieved with the proposed system and method.
[127] Although the foregoing has been described with reference to certain
specific
embodiments, various modifications thereof will be apparent to those skilled
in the art without
departing from the scope of the claims appended hereto.
[128] Moreover, any module, component, or device exemplified herein that
executes
instructions may include or otherwise have access to a non-transitory
computer/processor
readable storage medium or media for storage of information, such as
computer/processor
readable instructions, data structures, program modules, and/or other data. A
non-exhaustive list
of examples of non-transitory computer/processor readable storage media
includes magnetic
cassettes, magnetic tape, magnetic disk storage or other magnetic storage
devices, optical disks
such as compact disc read-only memory (CD-ROM), digital video discs or digital
versatile disc
(DVDs), Blu-ray DiscTM, or other optical storage, volatile and non-volatile,
removable and non-
removable media implemented in any method or technology, random access memory
(RAM),
read-only memory (ROM), electrically erasable programmable read-only memory
(EEPROM),
flash memory or other memory technology. Any such non-transitory
computer/processor storage
media may be part of a device or accessible or connectable thereto. Any
application or module
herein described may be implemented using computer/processor
readable/executable instructions
that may be stored or otherwise held by such non-transitory computer/processor
readable storage
media.