Patent 2811942 Summary


Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract is posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2811942
(54) English Title: USER FEEDBACK IN SEMI-AUTOMATIC QUESTION ANSWERING SYSTEMS
(54) French Title: RETROACTION DE L'UTILISATEUR DANS DES SYSTEMES DE REPONSE AUX QUESTIONS SEMI-AUTOMATIQUES
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 30/04 (2012.01)
  • G06F 17/27 (2006.01)
(72) Inventors:
  • KOLL, DETLEF (United States of America)
  • POLZIN, THOMAS (United States of America)
(73) Owners:
  • MMODAL IP LLC (United States of America)
(71) Applicants:
  • MMODAL IP LLC (United States of America)
(74) Agent: FASKEN MARTINEAU DUMOULIN LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2011-09-23
(87) Open to Public Inspection: 2012-03-29
Examination requested: 2016-09-20
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2011/052983
(87) International Publication Number: WO2012/040578
(85) National Entry: 2013-03-20

(30) Application Priority Data:
Application No. Country/Territory Date
61/385,838 United States of America 2010-09-23
13/025,051 United States of America 2011-02-10

Abstracts

English Abstract

A system applies rules to a set of documents to generate codes, such as billing codes for use in medical billing. A human operator provides input specifying whether the generated codes are correct. Based on the input from the human operator, the system attempts to identify which clause(s) in the rules relied on to generate a particular code are correct and which are incorrect. The system then assigns praise to the components of the system responsible for generating codes from the correct clauses, and assigns blame to the components responsible for generating codes from the incorrect clauses. Such blame and praise may then be used to determine whether particular code-generating components are insufficiently reliable. The system may disable, or take other remedial action in response to, insufficiently reliable code-generating components.


French Abstract

La présente invention a trait à un système qui applique des règles à un ensemble de documents en vue de générer des codes, tels que des indicatifs de facturation destinés à être utilisés dans la facturation médicale. Un opérateur humain fournit une entrée spécifiant si les codes générés sont corrects. En fonction de l'entrée fournie par l'opérateur humain, le système tente d'identifier la ou les clauses, parmi des règles, qui ont servi pour générer le code particulier et qui sont correctes et la ou les clauses qui sont incorrectes. Le système attribue alors un point positif aux composantes du système responsables de la génération des codes dans les clauses correctes, et il attribue un point négatif aux composantes du système responsables de la génération de codes dans les clauses incorrectes. Ces points positifs et négatifs peuvent ensuite être utilisés de manière à déterminer si des composantes de génération de code particulières ne sont pas suffisamment fiables. Le système peut désactiver les composantes de génération de code qui ne sont pas suffisamment fiables ou prendre toute autre action corrective en réponse à ces dernières.

Claims

Note: Claims are shown in the official language in which they were submitted.



CLAIMS

1. A method performed by at least one computer processor executing computer program
instructions tangibly stored on at least one non-transitory computer-readable medium, the
method for use with a system including a data source and a first billing code, the method
comprising:
    (A) receiving input from a user, wherein the input represents a verification status of
    the first billing code;
    (B) applying first inverse logic to the input, the billing code, and a set of forward
    logic, to identify first and second concept extraction components; and
    (C) applying reinforcement to the first and second concept extraction components,
    comprising:
        (E)(1) determining whether the verification status indicates that the first billing
        code is accurate;
        (E)(2) if the verification status indicates that the first billing code is inaccurate,
        then applying negative reinforcement to the first and second concept extraction
        components, comprising apportioning the negative reinforcement between the
        first and second concept extraction components.
2. The method of claim 1, wherein (C) further comprises:
    (E)(3) if the verification status does not indicate that the first billing code is
    inaccurate, then applying positive reinforcement to the first and second concept
    extraction components, comprising apportioning the positive reinforcement to the
    first and second concept extraction components.
3. The method of claim 1, further comprising:
    (D) determining whether the first concept extraction component is unreliable at
    generating concept codes; and
    (E) if the first concept extraction component is determined to be unreliable at
    generating concept codes, then:
        (E)(1) at the first concept extraction component, generating a concept code; and
        (E)(2) requiring human review of the concept code before adding the concept
        code to the data source.
4. The method of claim 1, further comprising:
    (D) determining whether the first concept extraction component is unreliable at
    generating concept codes; and
    (E) if the first concept extraction component is determined to be unreliable at
    generating concept codes, then:
        (E)(1) at the identified concept extraction component, generating a concept code;
        (E)(2) at a logic component in the system, generating a second billing code based
        on the concept code; and
        (E)(3) requiring human review of the second billing code before adding the
        billing code to the system.
5. The method of claim 1, wherein (B) comprises:
    (B)(1) determining that the first concept extraction component includes means for
    generating concept codes representing instances of a first concept;
    (B)(2) determining that the first billing code was generated by a first logic
    component in reliance on a concept code representing an instance of the first concept;
    (B)(3) identifying the first concept extraction component based on the determination
    that the first billing code was generated by the first logic component.



6. The method of claim 1, wherein a first reliability score is associated with the first
concept extraction component, wherein the first reliability score represents an estimate of
a first degree to which the first concept extraction component generates concept codes
accurately, and
wherein applying the negative reinforcement comprises associating a second reliability
score with the first concept extraction component, wherein the second reliability score
represents an estimate of a second degree to which the first concept extraction component
generates concept codes accurately, wherein the second degree is lower than the first
degree.
7. The method of claim 1, wherein (B) comprises:
    (B)(1) identifying a first logic component that generated the first billing code;
    (B)(2) identifying, based on the input from the user, a concept relied upon by the
    first logic component to generate the first billing code; and
    (B)(3) identifying the first concept extraction component based upon the concept
    relied upon by the first logic component.
8. The method of claim 7, wherein (B)(3) comprises identifying the first concept
extraction component by determining that the first concept extraction component
generates concept codes representing instances of the concept relied upon by the first
logic component.
9. The method of claim 1, wherein (B) comprises:
    (B)(1) identifying a first logic component that generated the first billing code,
    wherein the first logic component comprises means for implementing first logic,
    wherein the first logic includes a first condition, wherein the first condition includes
    a first sub-condition and a second sub-condition; and
    (B)(2) applying first inverse logic to the input received from the user to identify at
    least one of the first and second sub-conditions.
10. The method of claim 9, wherein (B)(2) comprises identifying exactly one of the first
and second sub-conditions, and wherein (B) further comprises:
    (B)(3) identifying a first concept that satisfies the identified one of the first and
    second sub-conditions; and
    (B)(4) identifying a concept extraction component comprising means for generating
    concept codes representing instances of the first concept.
11. The method of claim 9, wherein (B)(2) comprises identifying both of the first and
second sub-conditions.



12. A non-transitory computer-readable medium comprising computer-readable
instructions tangibly stored on the computer-readable medium, wherein the instructions
are executable by at least one computer processor to perform a method for use with a
system including a data source and a first billing code, the method comprising:
    (A) receiving input from a user, wherein the input represents a verification status of
    the first billing code;
    (B) applying first inverse logic to the input, the billing code, and a set of forward
    logic, to identify first and second concept extraction components; and
    (C) applying reinforcement to the first and second concept extraction components,
    comprising:
        (E)(1) determining whether the verification status indicates that the first billing
        code is accurate;
        (E)(2) if the verification status indicates that the first billing code is inaccurate,
        then applying negative reinforcement to the first and second concept extraction
        components, comprising apportioning the negative reinforcement between the
        first and second concept extraction components.
13. The computer-readable medium of claim 12, wherein (C) further comprises:
    (E)(3) if the verification status does not indicate that the first billing code is
    inaccurate, then applying positive reinforcement to the first and second concept
    extraction components, comprising apportioning the positive reinforcement to the
    first and second concept extraction components.
14. The computer-readable medium of claim 12, further comprising:
    (D) determining whether the first concept extraction component is unreliable at
    generating concept codes; and
    (E) if the first concept extraction component is determined to be unreliable at
    generating concept codes, then:
        (E)(1) at the first concept extraction component, generating a concept code; and
        (E)(2) requiring human review of the concept code before adding the concept
        code to the data source.
15. The computer-readable medium of claim 12, further comprising:
    (F) determining whether the first concept extraction component is unreliable at
    generating concept codes; and
    (G) if the first concept extraction component is determined to be unreliable at
    generating concept codes, then:
        (E)(4) at the identified concept extraction component, generating a concept code;
        (E)(5) at a logic component in the system, generating a second billing code based
        on the concept code; and
        (E)(6) requiring human review of the second billing code before adding the
        billing code to the system.
16. The computer-readable medium of claim 12, wherein (B) comprises:
    (B)(4) determining that the first concept extraction component includes means for
    generating concept codes representing instances of a first concept;
    (B)(5) determining that the first billing code was generated by a first logic
    component in reliance on a concept code representing an instance of the first concept;
    (B)(6) identifying the first concept extraction component based on the determination
    that the first billing code was generated by the first logic component.
17. The computer-readable medium of claim 12, wherein a first reliability score is
associated with the first concept extraction component, wherein the first reliability score
represents an estimate of a first degree to which the first concept extraction component
generates concept codes accurately, and
wherein applying the negative reinforcement comprises associating a second reliability
score with the first concept extraction component, wherein the second reliability score
represents an estimate of a second degree to which the first concept extraction component
generates concept codes accurately, wherein the second degree is lower than the first
degree.
18. The computer-readable medium of claim 12, wherein (B) comprises:
    (B)(4) identifying a first logic component that generated the first billing code;
    (B)(5) identifying, based on the input from the user, a concept relied upon by the
    first logic component to generate the first billing code; and
    (B)(6) identifying the first concept extraction component based upon the concept
    relied upon by the first logic component.
19. The computer-readable medium of claim 18, wherein (B)(3) comprises identifying
the first concept extraction component by determining that the first concept extraction
component generates concept codes representing instances of the concept relied upon by
the first logic component.
20. The computer-readable medium of claim 12, wherein (B) comprises:
    (B)(5) identifying a first logic component that generated the first billing code,
    wherein the first logic component comprises means for implementing first logic,
    wherein the first logic includes a first condition, wherein the first condition includes
    a first sub-condition and a second sub-condition; and
    (B)(6) applying first inverse logic to the input received from the user to identify at
    least one of the first and second sub-conditions.


21. The computer-readable medium of claim 20, wherein (B)(2) comprises identifying
exactly one of the first and second sub-conditions, and wherein (B) further comprises:
    (B)(7) identifying a first concept that satisfies the identified one of the first and
    second sub-conditions; and
    (B)(8) identifying a concept extraction component comprising means for generating
    concept codes representing instances of the first concept.
22. The computer-readable medium of claim 20, wherein (B)(2) comprises identifying
both of the first and second sub-conditions.

Description

Note: Descriptions are shown in the official language in which they were submitted.


CA 02811942 2013-03-20
WO 2012/040578 PCT/US2011/052983
User Feedback in Semi-Automatic Question Answering Systems
BACKGROUND
[0001] There are a variety of situations in which a human operator has to
answer a set of
discrete questions given a corpus of documents containing information
pertaining to the
questions. One example of such a situation is that in which a human operator
is tasked with
associating billing codes with a hospital stay of a patient, based on a
collection of all documents
containing information about the patient's hospital stay. Such documents may,
for example,
contain information about the medical procedures that were performed on the
patient during the
stay and other billable activities performed by hospital staff in connection
with the patient during
the stay.
[0002] This set of documents may be viewed as a corpus of evidence for the
billing codes
that need to be generated and provided to an insurer for reimbursement. The
task of the human
operator, a billing coding expert in this example, is to derive a set of
billing codes that are
justified by the given corpus of documents, considering applicable rules and
regulations.
Mapping the content of the documents to a set of billing codes is a demanding
cognitive task. It
may involve, for example, reading reports of surgeries performed on the
patient and determining
not only which surgeries were performed, but also identifying the personnel
who participated in
such surgeries, and the type and quantity of materials used in such surgeries
(e.g., the number of
stents inserted into the patient's arteries), since such information may
influence the billing codes
that need to be generated to obtain appropriate reimbursement. Such
information may not be
presented within the documents in a format that matches the requirements of
the billing code
system. As a result, the human operator may need to carefully examine the
document corpus to
extract such information.
[0003] Because of such difficulties inherent in generating billing codes based
on a
document corpus, various computer-based support systems have been developed to
guide human
coders through the process of deciding which billing codes to generate based
on the available
evidence. Despite such guidance, it can still be difficult for the human coder
to identify the
information necessary to answer each question.
[0004] To address this problem, the above-referenced patent application
entitled,
"Providing Computable Guidance to Relevant Evidence in Question-Answering
Systems" (U.S.
Pat. App. Ser. No. 13/025,051) discloses various techniques for pointing the
human coder to
specific regions within the document corpus that may contain evidence of the
answers to
particular questions. The human coder may then focus initially or solely on
those regions to
generate answers, thereby generating such answers more quickly than if it were
necessary to
review the entire document corpus manually. The answers may themselves take
the form of
billing codes or may be used, individually or in combination with each other,
to select billing
codes.
[0005] For example, an automated inference engine may be used to generate
billing
codes automatically based on the document corpus and possibly also based on
answers generated
manually and/or automatically. The conclusions drawn by such an inference
engine may,
however, not be correct. What is needed, therefore, are techniques for
improving the accuracy of
billing codes and other data generated by automated inference engines.
SUMMARY
[0006] A system applies rules to a set of documents to generate codes, such as billing
codes for use in medical billing. A human operator provides input specifying whether the
generated codes are correct. Based on the input from the human operator, the system
attempts to identify which clause(s) in the rules relied on to generate a particular code are
correct and which are incorrect. The system then assigns praise to the components of the
system responsible for generating codes from the correct clauses, and assigns blame to
the components responsible for generating codes from the incorrect clauses. Such blame
and praise may then be used to determine whether particular code-generating components
are insufficiently reliable. The system may disable, or take other remedial action in
response to, insufficiently reliable code-generating components.
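The feedback loop summarized above lends itself to a short sketch. Everything concrete below — the class, the even split of blame among contributing components, the 0.1 step size, and the 0.5 disable threshold — is an illustrative assumption, not a detail taken from this document:

```python
# Illustrative sketch of praise/blame apportionment across code-generating
# components. All numeric values here are assumptions for demonstration.

class CodeGenerator:
    """A code-generating component with a tracked reliability estimate."""
    def __init__(self, name, reliability=1.0):
        self.name = name
        self.reliability = reliability
        self.enabled = True

def apportion_feedback(components, code_is_correct, step=0.1, threshold=0.5):
    """Split praise (or blame) evenly across the components that contributed
    to a generated code; disable any component whose estimate falls too low."""
    share = step / len(components)
    for c in components:
        if code_is_correct:
            c.reliability = min(1.0, c.reliability + share)   # praise
        else:
            c.reliability = max(0.0, c.reliability - share)   # blame
        if c.reliability < threshold:
            c.enabled = False   # remedial action: stop trusting this component

a = CodeGenerator("extract_procedures")
b = CodeGenerator("extract_materials")
for _ in range(12):                       # the operator rejects twelve codes
    apportion_feedback([a, b], code_is_correct=False)
print(a.reliability, a.enabled)
```

Here repeated blame drives both hypothetical components below the threshold, so both end up disabled, mirroring the remedial action taken against insufficiently reliable components.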
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1A is a dataflow diagram of a system for extracting concepts from
speech
and for encoding such concepts within codes according to one embodiment of the
present
invention;
[0008] FIG. 1B is a dataflow diagram of a system for deriving propositions
from content
according to one embodiment of the present invention;
[0009] FIG. 2 is a flowchart of a method performed by the system of FIG. 1A
according
to one embodiment of the present invention;
[0010] FIG. 3 is a diagram of a concept ontology according to one embodiment
of the
present invention;
[0011] FIG. 4 is a dataflow diagram of a system for receiving feedback on
billing codes
from a human reviewer and for automatically assessing and improving the
performance of the
system according to one embodiment of the present invention;
[0012] FIG. 5A is a flowchart of a method performed by the system of FIG. 4;
[0013] FIGS. 5B-5C are flowcharts of methods for implementing particular
operations of
the method of FIG. 5A according to one embodiment of the present invention;
and
[0014] FIG. 6 is a dataflow diagram of a system for using inverse reasoning to
identify
components of a system that were responsible for generating billing codes
according to one
embodiment of the present invention.
DETAILED DESCRIPTION
[0015] Embodiments of the present invention may be used to improve the quality
of
computer-based components that are used to identify concepts within documents,
such as
components that identify concepts within speech and that encode such concepts
in codes (e.g.,
XML tags) within transcriptions of such speech. Such codes are referred to
herein as "concept
codes" to distinguish them from other kinds of codes. One example of a system
for performing
such encoding of concepts within concept codes is disclosed in U.S. Pat. No.
7,584,103, entitled,
"Automated Extraction of Semantic Content and Generation of a Structured
Document from
Speech," which is hereby incorporated by reference herein. Embodiments of the
present
invention may generate transcripts of speech and encode concepts represented
by such speech
within concept codes in those transcripts using, for example, any of the
techniques disclosed in
U.S. Pat. No. 7,584,103.
[0016] For example, by way of high-level overview, FIG. 1A is a dataflow
diagram of a
system 100a for extracting concepts from speech and for encoding such concepts
within concept
codes according to one embodiment of the present invention. FIG. 2 is a
flowchart of a method
200 performed by the system 100a of FIG. 1A according to one embodiment of the
present
invention.
[0017] A transcription system 104 transcribes a spoken audio stream 102 to
produce a
draft transcript 106 (operation 202). The spoken audio stream 102 may, for
example, be
dictation by a doctor describing a patient visit. The spoken audio stream 102
may take any form.
For example, it may be a live audio stream received directly or indirectly
(such as over a
telephone or IP connection), or an audio stream recorded on any medium and in
any format.
[0018] The transcription system 104 may produce the draft transcript 106
using, for
example, an automated speech recognizer or a combination of an automated
speech recognizer
and a physician or other human reviewer. The transcription system 104 may, for
example,
produce the draft transcript 106 using any of the techniques disclosed in the
above-referenced
U.S. Pat. No. 7,584,103. As described therein, the draft transcript 106 may
include text that is
either a literal (verbatim) transcript or a non-literal transcript of the
spoken audio stream 102. As
further described therein, although the draft transcript 106 may include or
solely contain plain
text, the draft transcript 106 may also, for example, additionally or
alternatively contain
structured content, such as XML tags which delineate document sections and
other kinds of
document structure. Various standards exist for encoding structured documents,
and for
annotating parts of the structured text with discrete facts (data) that are in
some way related to
the structured text. Examples of existing techniques for encoding medical
documents include the
HL7 CDA v2 XML standard (ANSI-approved since May 2005), SNOMED CT, LOINC, CPT,
ICD-9 and ICD-10, and UMLS.
[0019] As shown in FIG. 1A, the draft transcript 106 includes one or more
concept codes
108a-c, each of which encodes an instance of a "concept" extracted from the
spoken audio
stream 102. The term "concept" is used herein as defined in U.S. Pat. No.
7,584,103. Reference
numeral 108 is used herein to refer generally to all of the concept codes 108a-
c within the draft
transcript 106. Although in FIG. 1A only three concept codes 108a-c are shown,
the draft
transcript 106 may include any number of codes. In the context of a medical
report, each of the
codes 108 may, for example, encode an allergy, prescription, diagnosis, or
prognosis. Although
the draft transcript 106 is shown in FIG. 1A as only containing text that has
corresponding codes,
the draft transcript 106 may also include unencoded text (i.e., text without
any corresponding
codes), also referred to as "plain text."
[0020] Codes 108 may encode instances of concepts represented by corresponding
text in
the draft transcript 106. For example, in FIG. 1A, concept code 108a encodes
an instance of a
concept represented by corresponding text 118a, concept code 108b encodes an
instance of a
concept represented by corresponding text 118b, and concept code 108c encodes
an instance of a
concept represented by corresponding text 118c. Although each unit of text
118a-c is shown as
disjoint in FIG. 1A, any two or more of the texts 118a-c may overlap with
and/or contain each
other. The correspondence between a code and its corresponding text may be
stored in the
system 100a, such as by storing each of the concept codes 108a-c as one or
more tags (e.g., XML
tags) that mark up the corresponding text. For example, concept code 108a may
be implemented
as a pair of tags within the transcript 106 that delimits the corresponding
text 118a, concept code
108b may be implemented as a pair of tags within the transcript 106 that
delimits the
corresponding text 118b, and concept code 108c may be implemented as a pair of
tags within the
transcript 106 that delimits the corresponding text 118c.
[0021] Transcription system 104 may include components for extracting
instances of
discrete concepts from the spoken audio stream 102 and for encoding such
concepts into the
draft transcript 106. For example, assume that first concept extraction
component 120a extracts
instances of a first concept from the audio stream 102, that the second
concept extraction
component 120b extracts instances of a second concept from the audio stream
102, and that the
third concept extraction component 120c extracts instances of a third concept
from the audio
stream 102. As a result, the first concept extraction component 120a may
extract an instance of
the first concept from a first portion of the audio stream 102 (FIG. 2,
operation 202a); the second
concept extraction component 120b may extract an instance of the second
concept from a second
portion of the audio stream 102 (FIG. 2, operation 202b); and the third
concept extraction
component 120c may extract an instance of the third concept from a third
portion of the audio
stream 102 (FIG. 2, operation 202c).
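The three per-component extraction operations above can be pictured as independent extractors applied to different portions of the (already transcribed) stream. The keyword-matching stubs below are a deliberate oversimplification standing in for real concept extraction components; the keywords and sample portions are assumptions for illustration only:

```python
# Toy stand-ins for concept extraction components: each recognizes one
# concept by keyword and emits tagged concept content, or None.
def make_component(concept, keywords):
    def extract(text):
        if any(k in text for k in keywords):
            return f"<{concept}>{text}</{concept}>"
        return None
    return extract

components = [
    make_component("DATE", ["ninety", "twenty"]),        # cf. component 120a
    make_component("MEDICATION", ["aspirin", "mg"]),     # cf. component 120b
    make_component("ALLERGY", ["allergic", "allergy"]),  # cf. component 120c
]

# Three hypothetical portions of a transcribed audio stream.
portions = ["ten one ninety three", "aspirin 81 mg daily", "allergic to penicillin"]
contents = [extract(p) for extract, p in zip(components, portions)]
print(contents)
```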
[0022] The concept extraction components 120a-c may use natural language
processing
(NLP) techniques to extract instances of concepts from the spoken audio stream
102. The
concept extraction components 120a-c may, therefore, also be referred to
herein as "natural
language processing (NLP) components."
[0023] The first, second, and third concepts may differ from each other. As
just one
example, the first concept may be a "date" concept, the second concept may be
a "medications"
concept, and the third concept may be an "allergies" concept. As a result, the
concept
extractions performed by operations 202a, 202b, and 202c in FIG. 2 may involve
extracting
instances of concepts that differ from each other.
[0024] The first, second, and third portions of the spoken audio stream 102
may be
disjoint, contain each other, or otherwise overlap with each other in any
combination.
[0025] As used herein, "extracting an instance of a concept from an audio
stream" refers
to generating content that represents the instance of the concept, based on a
portion of the audio
stream 102 that represents the instance of the concept. Such generated content
is referred to
herein as "concept content." For example, in the case of a "date" concept, an
example of
extracting an instance of the date concept from the audio stream 102 is
generating the text
"<DATE>October 1, 1993</DATE>" based on a portion of the audio stream in which
"ten one
ninety three" is spoken, because both the text "<DATE>October 1, 1993</DATE>"
and the
speech "ten one ninety three" represent the same instance of the "date" concept,
namely the date
October 1, 1993. In this example, the text "<DATE>October 1, 1993</DATE>" is
an example
of concept content.
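The "ten one ninety three" example above can be made concrete with a toy extractor. The word-to-number tables and the "month day tens ones" pattern are minimal assumptions; a real NLP component would handle far more varied speech:

```python
# Toy "date" concept extraction: turns the spoken fragment
# "ten one ninety three" into the concept content
# "<DATE>October 1, 1993</DATE>".
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6,
         "seven": 7, "eight": 8, "nine": 9, "ten": 10, "eleven": 11, "twelve": 12}
TENS = {"seventy": 70, "eighty": 80, "ninety": 90}
MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

def extract_date(spoken):
    """Parse '<month> <day> <tens> <ones>' into <DATE>...</DATE> content."""
    month_w, day_w, tens_w, ones_w = spoken.split()
    month = MONTHS[WORDS[month_w] - 1]
    year = 1900 + TENS[tens_w] + WORDS[ones_w]
    return f"<DATE>{month} {WORDS[day_w]}, {year}</DATE>"

print(extract_date("ten one ninety three"))  # <DATE>October 1, 1993</DATE>
```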
[0026] As this example illustrates, concept content may include a code and
corresponding text. For example, the first concept extraction component 120a
may extract an
instance of the first concept to generate first concept content 122a
(operation 202a) by encoding
the instance of the first concept in concept code 108a and corresponding text
118a in the draft
transcript 106, where the concept code 108a specifies the first concept (e.g.,
the "date" concept)
and wherein the first text 118a represents (i.e., is a literal or non-literal
transcription of) the first
portion of spoken audio stream 102. Similarly, the second concept extraction
component 120b
may extract an instance of the second concept to generate second concept
content 122b
(operation 202b) by encoding the instance of the second concept in concept
code 108b and
corresponding text 118b in the draft transcript 106, where the concept code
108b specifies the
second concept (e.g., the "medications" concept) and wherein the second text
118b represents the
second portion of spoken audio stream 102. Finally, the third concept
extraction component
120c may extract an instance of the third concept to generate third concept
content 122c
(operation 202c) by encoding the instance of the third concept in concept code
108c and
corresponding text 118c in the draft transcript 106, where the concept code
108c specifies the third concept (e.g., the "allergies" concept) and wherein the third text
118c represents the third portion of spoken audio stream 102.
[0027] As stated above, in this example, the text "<DATE>October 1,
1993</DATE>" is
an example of concept content that represents an instance of the "date"
concept. Concept content
need not, however, include both a code and text. Instead, for example, concept
content may
include only a code (or other specifier of the instance of the concept
represented by the code) but
not any corresponding text. For example, the concept content 122a in FIG. 1A
may alternatively
include the concept code 108a but not the text 118a. As another example,
concept content may
include text but not a corresponding code (or other specifier of the instance
of the concept
represented by the text). For example, the concept content 122a in FIG. 1A may
alternatively
include the text 118a but not the concept code 108a. Therefore, any references
herein to concept
content 122a-c should be understood to include embodiments of such content
122a-c other than
the embodiment shown in FIG. 1A.
[0028] The concept extraction components 120a-c may take any form. For
example, they
might be distinct rules, heuristics, statistical measures, sets of data, or
any combination thereof.
Each of the concept extraction components 120a-c may take the form of a
distinct computer
program module, but this is not required. Instead, for example, some or all of
the concept
extraction components may be implemented and integrated into a single
computer program
module.
[0029] As described in more detail below, embodiments of the present invention
may
track the reliability of each of the concept extraction components 120a-c,
such as by associating
a distinct reliability score or other measure of reliability with each of the
concept extraction
CA 02811942 2013-03-20
WO 2012/040578 PCT/US2011/052983
components 120a-c. Such reliability scores may, for example, be implemented by
associating
and storing a distinct reliability score in connection with each of the
concepts extracted by the
concept extraction components 120a-c. For example, a first reliability score
may be associated
and stored in connection with the concept generated by concept extraction
component 120a; a
second reliability score may be associated and stored in connection with the
concept generated
by concept extraction component 120b; and a third reliability score may be
associated and stored
in connection with the concept generated by concept extraction component 120c.
If some or all
of the concept extraction components 120a-c are integrated into a single
computer program
module, then the distinct concept extraction components 120a-c shown in FIG. 1A may merely
represent the association of distinct reliability scores with distinct
concepts, rather than distinct
computer program modules or distinct physical components.
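The association of a distinct reliability score with each concept extraction component can be sketched as follows. Tracking reliability as a running accuracy over observed outcomes is one possible implementation among many; the class and the default score for unobserved components are assumptions, not specified by the text.

```python
class ReliabilityTracker:
    """Illustrative tracker: one distinct reliability score per
    concept extraction component, computed as a running accuracy."""

    def __init__(self):
        self.counts = {}  # component id -> (times correct, times observed)

    def record(self, component: str, correct: bool) -> None:
        right, total = self.counts.get(component, (0, 0))
        self.counts[component] = (right + int(correct), total + 1)

    def score(self, component: str) -> float:
        right, total = self.counts.get(component, (0, 0))
        # Default of 1.0 for unobserved components is an arbitrary choice.
        return right / total if total else 1.0

tracker = ReliabilityTracker()
tracker.record("extractor_120a", True)
tracker.record("extractor_120a", False)
```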
[0014] As described above, each of the concept contents 122a-c in the draft
transcript
106 may be created by a corresponding one of the concept extraction components
120a-c. Links
124a-c in FIG. 1A illustrate the correspondence between concept contents 122a-
c and the
corresponding concept extraction components 120a-c, respectively, that created
them (or that
caused transcription system 104 to create them). More specifically, link 124a
indicates that
concept extraction component 120a created or caused the creation of concept
content 122a; link
124b indicates that concept extraction component 120b created or caused the
creation of concept
content 122b; and link 124c indicates that concept extraction component 120c
created or caused
the creation of concept content 122c.
[0015] Links 124a-c may or may not be generated and/or stored as elements of
the
system 100a. For example, links 124a-c may be stored within data structures in
the system 100a,
such as in data structures within the draft transcript 106. For example, each
of the links 124a-c
may be stored within a data structure within the corresponding one of the
concept contents 122a-
c. Such data structures may, for example, be created by or using the concept
extraction
components 120a-c as part of the process of generating the concept contents 122a-
c (FIG. 2,
operations 202a-c). As will be clear from the description below, whether or
not the links 124a-c
are stored within data structures in the system 100a, the information
represented by links 124a-c
may later be used to take action based on the correspondence between concept
contents 122a-c
and concept extraction components 120a-c.
[0017] Embodiments of the present invention may be used in connection with a
question-
answering system, such as the type described in the above-referenced patent
application entitled,
"Providing Computable Guidance to Relevant Evidence in Question-Answering
Systems." As
described therein, one use of question-answering systems is for generating
billing codes based on
a corpus of clinical medical reports. In this task, a human operator (coder)
has to review the
content of the clinical medical reports and, based on that content, generate a
set of codes within a
controlled vocabulary (e.g., CPT and ICD-9 or ICD-10) that can be submitted to
a payer for
reimbursement. This is a cognitively demanding task which requires abstracting
from the
document content to generate appropriate billing codes.
[0018] In particular, once the draft transcript 106 has been generated, a
reasoning module
130 (also referred to herein as an "inference engine") may be used to generate
or select
appropriate billing codes 140 based on the content of the draft transcript 106
and/or additional
data sources. The reasoning module 130 may use any of the techniques disclosed
in the above-
referenced U.S. Pat. App. Ser. No. 13/025,051 ("Providing Computable Guidance
to Relevant
Evidence in Question-Answering Systems") to generate billing codes 140. For
example, the
reasoning module 130 may be a fully automated reasoning module, or combine
automated
reasoning with human reasoning provided by a human billing code expert.
[0019] Although billing codes 140 are shown in FIG. 1A as containing three billing
codes 142a-c, billing codes 140 may contain fewer or more than three
billing codes. The
billing codes 140 may be stored and represented in any manner. For example,
the billing codes
140 may be integrated with and stored within the draft transcript 106.
[0020] The reasoning module 130 may encode the applicable rules and
regulations for
billing coding published by, e.g., insurance companies and state agencies. The
reasoning module
130 may, for example, include forward logic components 132a-c, each of which
implements a
distinct set of logic for mapping document content to billing codes. Although
three forward
logic components 132a-c are shown in FIG. 1A for purposes of example, the
reasoning module
130 may include any number of forward logic components, which need not be the
same as the
number of concept extraction components 120a-c or the number of concept
contents 122a-c.
[0021] Although the reasoning module 130 is shown in FIG. 1A as receiving the
draft
transcript 106 as input, this is merely one example and does not constitute a
limitation of the
present invention. The reasoning module 130 may receive input from, and apply
forward logic
components 132a-c to, data sources in addition to and/or instead of the draft
transcript 106. For
example, the reasoning module 130 may receive multiple documents (e.g.,
multiple draft
transcripts created in the same manner as draft transcript 106) as input. Such
multiple documents
may, for example, be a plurality of reports about the same patient. As another
example, the
reasoning module 130 may receive a database record, such as an Electronic
Medical Record
(EMR), as input. Such a database record may, for example, contain information
about a
particular patient, and may have been created and/or updated using data
derived from the draft
transcript 106 and/or other document(s). The database record may, for example,
contain text
and/or discrete facts (e.g., encoded concepts of the same or similar form as
concept contents
122a-c). The transcription system 104 may apply concept extraction components
120a-c to text
in the database record but not apply concept extraction components 120a-c to
any discrete facts
in the database record, thereby leaving such discrete facts unchanged.
[0022] As another example, the reasoning module 130 may receive a text
document (e.g.,
in ASCII or HTML), which is then processed by data extraction components (not
shown) to
encode the text document with concept content in a manner similar to that in
which the concept
extraction components 120a-c encode concept contents based on an audio stream.
Therefore,
any reference herein to the use of the draft transcript 106 by the reasoning
module 130 should be
understood to refer more generally to the use of any data source (such as a
data source containing
data relating to a particular patient or a particular procedure) by the
reasoning module 130 to
generate billing codes 140.
[0023] Furthermore, although in the example of FIG. 1A the reasoning module
130
receives concept content 122a-c as input, this is merely an example.
Alternatively or
additionally, for example, and as shown in FIG. 1B, the reasoning module 130
may receive
propositions 160 (also referred to herein as "facts") as input. Propositions
160 may include data
representing information derived from one or more draft transcripts 106a-c
(which may include
the draft transcript 106 of FIG. 1A). For example, propositions 160 may
include any number of
propositions 162a-c derived from draft transcripts 106a-c by a reconciliation
module 150. A
proposition may, for example, represent information about a particular
patient, such as the fact
that the patient has diabetes.
[0024] The reconciliation module 150 may derive the propositions 162a-c from
the draft
transcripts 106a-c by, for example, applying reconciliation logic modules 152a-
c to the draft
transcripts 106a-c (e.g., to the concept contents 122a-c within the draft
transcripts 106a-c). Each
of the reconciliation logic modules 152a-c may implement distinct logic for
deriving
propositions from draft transcripts 106a-c. A reconciliation logic module may,
for example,
derive a proposition from a single concept content (such as by deriving the
proposition "patient
has diabetes" from a <DIABETES NOT FURTHER SPECIFIED> code). As another
example,
a reconciliation logic module may derive a proposition from multiple concept
contents, such as
by deriving the proposition "patient has uncontrolled diabetes" from a
<DIABETES NOT FURTHER SPECIFIED> code and a <DIABETES UNCONTROLLED>
code. The reconciliation module 150 may perform such derivation of a
proposition from
multiple concept contents by first deriving distinct propositions from each of
the concept contents
and then applying a reconciliation logic module to the distinct propositions
to derive a further
proposition.
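The reconciliation just described can be sketched as follows. The code strings, the code-to-proposition mapping, and the subsumption check are simplified stand-ins for the reconciliation logic modules 152a-c; none of the names are from the patent.

```python
# Illustrative reconciliation in the style of paragraph [0024].
def derive_proposition(code: str) -> str:
    mapping = {
        "DIABETES NOT FURTHER SPECIFIED": "patient has diabetes",
        "DIABETES UNCONTROLLED": "patient has uncontrolled diabetes",
    }
    return mapping[code]

def reconcile(propositions) -> set:
    # Reconciliation logic: a specialization subsumes the general proposition.
    props = set(propositions)
    if "patient has uncontrolled diabetes" in props:
        props.discard("patient has diabetes")
    return props

codes = ["DIABETES NOT FURTHER SPECIFIED", "DIABETES UNCONTROLLED"]
result = reconcile(derive_proposition(c) for c in codes)
```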
[0025] This is an example of reconciling a general concept with a
specialization of the
general concept by deriving a proposition representing the specialization of
the general concept.
Those having ordinary skill in the art will understand how to implement other
reconciliation
logic for reconciling multiple concepts to generate propositions resulting
from such
reconciliation. Furthermore, the reconciliation module 150 need not be limited
to applying
reconciliation logic modules 152a-c to draft transcripts 106a-c in a single
iteration. More
generally, reconciliation module 150 may, for example, repeatedly (e.g.,
periodically) apply
reconciliation logic modules 152a-c to the current set of propositions 162a-c
to refine existing
propositions and to add new propositions to the set of propositions 160. As
new draft transcripts
are provided as input to the reconciliation module 150, the reconciliation
module 150 may derive
new propositions from those draft transcripts, add the new propositions to the
set of propositions
160, and again apply reconciliation logic modules 152a-c to the new set of
propositions 160.
[0026] As described in more detail below, embodiments of the present invention
may
track the reliability of various components of the systems 100a-b, such as
individual concept
extraction components 120a-c. The reconciliation module 150 may propagate the
reliability of
one concept to other concepts that are derived from that concept using the
reconciliation logic
modules 152a-c. For example, if a first concept has a reliability score of
50%, then the
reconciliation module 150 may assign a reliability score of 50% to any
proposition that the
reconciliation module 150 derives from the first concept. When the
reconciliation module 150
derives a proposition from multiple propositions, the reconciliation module
150 may assign a
reliability score to the derived proposition based on the reliability scores
of the multiple
propositions in any of a variety of ways.
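The score propagation described above can be sketched as follows. A proposition derived from a single concept inherits that concept's score; for multiple sources, the text leaves the combination rule open, so the minimum is used here purely as one conservative choice among the "variety of ways."

```python
# Illustrative propagation of reliability scores ([0026]).
def propagate(source_scores) -> float:
    scores = list(source_scores)
    # Single source: inherit its score directly.
    # Multiple sources: take the minimum (one possible combination rule).
    return scores[0] if len(scores) == 1 else min(scores)
```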
[0027] The propositions 160 may be represented in a different form than the
concept
contents 122a-c in the draft transcripts 106a-c. For example, the concept
contents 122a-c may be
represented in a format such as SNOMED, while the propositions 162a-c may be
represented in a
format such as ICD-10.
[0028] The reasoning module 130 may reason on the propositions 160 instead of
or in
addition to the concepts represented by the draft transcripts 106a-c. For
example, the systems
100a (FIG. 1A) and 100b (FIG. 1B) may be combined with each other to produce a
system
which: (1) uses the transcription system 104 to extract concept contents from
one or more spoken
audio streams (e.g., audio stream 102); (2) uses the reconciliation module 150
to derive
propositions 160 from the draft transcripts 106a-c; and (3) uses reasoning
module 130 to apply
forward logic components 132a-c to the derived propositions 160 and thereby to
generate billing
codes 140 based on the propositions 160. Any reference herein to applying the
reasoning
module 130 to concept content should be understood to refer to applying the
reasoning module
130 to propositions 160 in addition to or instead of concept content.
[0029] Although the reasoning module 130 may, for example, be either
statistical or
symbolic (e.g., decision logic), for ease of explanation and without
limitation the reasoning
module 130 in the following description will be assumed to reason based on
symbolic rules. For
example, each of the forward logic components 132a-c may implement a distinct
symbolic rule
for generating or selecting billing codes 140 based on information derived
from the draft
transcript 106. Each such rule includes a condition (also referred to herein
as a premise) and a
conclusion. The conclusion may specify one or more billing codes. As described
in more detail
below, if the condition of a rule is satisfied by content (e.g., concept
content) of a data source,
then the reasoning module 130 may generate the billing code specified by the
rule's conclusion.
[0030] A condition may, for example, require the presence in the data source
of a concept
code representing an instance of a particular concept. Therefore, in the
description herein,
"condition A" may refer to a condition which is satisfied if the data source
contains a concept
code representing an instance of concept A, whereas "condition B" may refer to
a condition
which is satisfied if the data source contains a concept code representing an
instance of concept
B, where concept A may differ from concept B. Similarly, "condition A" may
refer to a
condition which is satisfied by the presence of a proposition representing
concept A in the
propositions 160, while "condition B" may refer to a condition which is
satisfied by the presence
of a proposition representing concept B in the propositions 160. These are
merely examples of
conditions, however, not limitations of the present invention. A condition
may, for example,
include multiple sub-conditions (also referred to herein as clauses) joined by
one or more
Boolean operators.
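Conditions of this kind can be sketched as small predicate functions, with clauses joined by Boolean operators. This is only an illustration under the assumption that the data source is represented as a set of concept codes; the combinator names are not from the patent.

```python
# Illustrative conditions in the style of [0030]: a condition is satisfied
# when the data source contains a code for a given concept, and
# sub-conditions (clauses) may be joined by Boolean operators.
def condition(concept):
    return lambda codes: concept in codes

def AND(*clauses):
    return lambda codes: all(clause(codes) for clause in clauses)

def OR(*clauses):
    return lambda codes: any(clause(codes) for clause in clauses)

# "Condition A AND (condition B OR condition C)":
cond = AND(condition("A"), OR(condition("B"), condition("C")))
```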
[0031] One advantage of symbolic rule systems is that, as rules and regulations change,
the symbolic rules represented by the forward logic components 132a-c may be adjusted
manually, without the need to re-learn the new set of rules from an annotated corpus
or from observing operator feedback.
[0032] Furthermore, not all elements of the systems 100a (FIG. 1A) and 100b
(FIG. 1B)
are required. For example, embodiments of the present invention may omit the
transcription
system 104 and receive as input one or more draft transcripts 106a-c,
regardless of how such
draft transcripts 106a-c were generated. The draft transcripts 106a-c may
already contain
concept contents. Alternatively, the draft transcripts 106a-c may not contain
concept contents, in
which case embodiments of the present invention may create concept contents
within the draft
transcripts 106a-c, such as by marking up existing text within the draft
transcripts 106a-c with
concept codes using the concept extraction components 120a-c or other
components. As these
examples illustrate, embodiments of the present invention need not receive or
act on audio
streams, such as audio stream 102.
[0033] Furthermore, although transcript 106 and transcripts 106a-c are
referred to herein
as "draft" transcripts, embodiments of the present invention may be applied
not only to draft
documents but more generally to any document, such as documents that have been
reviewed,
revised, and finalized, so that they are no longer drafts.
[0034] An example of three rules that may be implemented by forward logic
components
132a-c, respectively, are shown in Table 1:
Rule No. | Premise | Conclusion
1 | patient has_problem <DIABETES> : p | addBillingCode(<DIABETES NOT FURTHER SPECIFIED>)
2 | patient has_problem <DIABETES> : p AND p.getStatus() == <UNCONTROLLED> | addBillingCode(<UNCONTROLLED DIABETES>)
3 | patient has_problem <DIABETES> : p AND p.getStatus() == <UNCONTROLLED> AND p.hasRelatedFinding(hyperosmolarity) | addBillingCode(<UNCONTROLLED DIABETES>)
Table 1
[0035] Each of the three rules is of the form "if (premise) then
(conclusion)," where the
premise and conclusion of each rule is as shown in Table 1. More specifically,
in the example of
Table 1:
• Rule #1 is for generating a billing code if the data source specifies
that the patient
has diabetes, but the data source does not mention that the patient has any
complications in connection with diabetes. In particular, Rule #1 indicates
that if
the data source specifies that the patient has diabetes, then the reasoning
module
130 should add the billing code <DIABETES NOT FURTHER SPECIFIED> to
the billing codes 140.
• Rule #2 is for generating a billing code if the data source specifies
that the patient
has uncontrolled diabetes. In particular, Rule #2 indicates that if the data
source
specifies that the patient has diabetes and that the status of the patient's
diabetes is
uncontrolled, then the reasoning module 130 should add the billing code
<UNCONTROLLED DIABETES> to the billing codes 140.
• Rule #3 is for generating a billing code if the data source specifies
that the patient
has diabetes with hyperosmolarity. In particular, Rule #3 indicates that if
the data
source specifies that the patient has diabetes and that the patient has
hyperosmolarity, then the reasoning module 130 should add the billing code
<UNCONTROLLED DIABETES> to the billing codes 140.
[0036] The reasoning module 130 may generate the set of billing codes 140
based on the
data source (e.g., draft transcript 106) by initializing the set of billing
codes 140 (e.g., creating an
empty set of billing codes) (FIG. 2, operation 204) and then applying all of
the forward logic
components 132a-c (e.g., symbolic rules) to the data source (FIG. 2, operation
206). For each
forward logic component L, the reasoning module 130 determines whether the
data source
satisfies the conditions of forward logic component L (FIG. 2, operation 208).
If such conditions
are satisfied, the reasoning module 130 adds one or more billing codes
specified by forward logic
component L to the set of billing codes 140 (FIG. 2, operation 210). In the
particular case of
forward logic components 132a-c that take the form of rules, if the data source
satisfies the
premise of such a rule, then the reasoning module 130 adds the billing code(s)
specified by the
conclusion of the rule to the set of billing codes 140. If the conditions
specified by forward logic
component L are not satisfied, then the reasoning module 130 does not add any
billing codes to
the set of billing codes 140 (FIG. 2, operation 212).
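The generation loop of operations 204-212 can be sketched as follows. Each forward logic component is modeled as a (premise, conclusion) pair and the data source as a plain dict; this representation, and all names, are illustrative assumptions rather than the patent's implementation.

```python
# A minimal sketch of operations 204-212 of FIG. 2.
def generate_billing_codes(source: dict, components) -> set:
    codes = set()                           # operation 204: initialize
    for premise, conclusion in components:  # operation 206: apply each component
        if premise(source):                 # operation 208: premise satisfied?
            codes.add(conclusion)           # operation 210: add the code
        # operation 212: otherwise, add nothing
    return codes

# Rules paraphrasing Table 1:
components = [
    (lambda s: s.get("diabetes"), "DIABETES NOT FURTHER SPECIFIED"),
    (lambda s: s.get("diabetes") and s.get("status") == "UNCONTROLLED",
     "UNCONTROLLED DIABETES"),
    (lambda s: s.get("diabetes") and s.get("status") == "UNCONTROLLED"
               and "hyperosmolarity" in s.get("findings", ()),
     "UNCONTROLLED DIABETES"),
]

source = {"diabetes": True, "status": "UNCONTROLLED"}
codes = generate_billing_codes(source, components)
```

With this source, both the general and the uncontrolled code are generated; a subsequent re-combination step would remove the general one.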
[0037] As previously mentioned, the reasoning module 130 may generate the set
of
billing codes 140 based on the propositions 160 instead of the data source
(e.g., draft transcript
106), in which case any reference herein to applying forward logic components
132a-c to
concept codes or to the data source should be understood to refer to applying
forward logic
components 132a-c to the propositions 160. For example, the conditions of the
rules in Table 1
may be applied to the propositions 160 instead of to codes in the data source.
[0038] Billing codes may represent concepts organized in an ontology. For
example,
FIG. 3 shows a highly simplified example of an ontology 300 including concepts
relating to
diabetes. The ontology includes: (1) a root node 302 representing the general
concept of
diabetes; (2) a first child node 304a of root node 302, representing the
concept of unspecified
diabetes; and (3) a second child node 304b of root node 302, representing the
concept of
uncontrolled diabetes. Any particular node in the ontology 300 may or may not
have a
corresponding code (e.g., billing code). For example, in the ontology 300 of
FIG. 3, the general
concept of diabetes (represented by root node 302) may not have any
corresponding code,
whereas the child nodes 304a-b may both have corresponding codes.
[0039] If a particular node represents a first concept, and a child node of
the particular
node represents a second concept, then the second concept may be a
"specialization" of the first
concept. For example, in the ontology 300 of FIG. 3, the concept of
unspecified diabetes
(represented by node 304a) is a specialization of the general concept of
diabetes (represented by
node 302), and the concepts of uncontrolled diabetes (represented by node
304b) and diabetes
with hyperosmolarity (represented by node 304c) are specializations of the
general concept of
diabetes (represented by node 302). More generally, the concept represented by
a node may be a
specialization of the concept represented by any ancestor (e.g., parent,
grandparent, or great-
grandparent) of that node.
[0040] Operation 208 of the method 200 of FIG. 2 may treat a condition as
satisfied by
data in the data source if the concept represented by that data satisfies the
condition or if the
concept represented by that data is a specialization of a concept that
satisfies the condition. For
example, if a particular condition is satisfied by the concept of diabetes
(represented by node 302
in FIG. 3), then operation 208 may treat data that represents unspecified
diabetes (represented by
node 304a in FIG. 3) as satisfying the particular condition, because
unspecified diabetes is a
specialization of diabetes.
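The specialization test used in operation 208 can be sketched as a walk up the ontology. The parent map mirrors the simplified diabetes example of FIG. 3; the dict representation and function name are illustrative assumptions.

```python
# Illustrative specialization test ([0039]-[0040]): a concept satisfies a
# condition if it matches the condition's concept directly or is a
# specialization (descendant) of it in the ontology.
parent = {
    "unspecified diabetes": "diabetes",
    "uncontrolled diabetes": "diabetes",
    "diabetes with hyperosmolarity": "diabetes",
}

def satisfies(concept: str, condition_concept: str) -> bool:
    node = concept
    while node is not None:
        if node == condition_concept:
            return True
        node = parent.get(node)  # walk up to the ancestor, if any
    return False
```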
[0041] To further understand the method 200 of FIG. 2, consider a particular
example in
which the reasoning module 130 finds that the draft transcript 106 contains a
finding related to a
patient that has been marked up with a code indicating that the patient has
diabetes or any
specializations of that code within the corresponding ontology. In this case,
the condition of
forward logic component 132a (e.g., Rule #1) would be satisfied, and the
reasoning module 130
would add a billing code <DIABETES NOT FURTHER SPECIFIED> to the current set
of
billing codes 140 being generated. Assume for purposes of example that billing
code 142a in
FIG. lA is the billing code <DIABETES NOT FURTHER SPECIFIED>.
[0042] Similarly, assume that the reasoning module 130 finds that the draft
transcript 106
contains a finding related to the same patient that has been marked up with a
code of
"<DIABETES UNCONTROLLED>." In this case, the condition of forward logic
component
132b (e.g., Rule #2) would be satisfied, and the reasoning module 130 would
add a billing code
<DIABETES UNCONTROLLED> to the current set of billing codes 140 being
generated.
Assume for purposes of example that billing code 142b is the billing code
<DIABETES UNCONTROLLED>.
[0043] Further assume that the draft transcript 106 contains no evidence that
the same
patient suffers from hyperosmolarity. As a result, the reasoning module 130
would not find that
the condition of forward logic component 132c (e.g., Rule #3) is satisfied
and, as a result,
forward logic component 132c would not cause any billing codes to be added to
the set of billing
codes 140 in this example.
[0044] In this example, although the set of billing codes 140 would now
contain both the
billing code <DIABETES NOT FURTHER SPECIFIED> and the billing code
<UNCONTROLLED DIABETES>, the code <UNCONTROLLED DIABETES> should take
precedence over the code <DIABETES NOT FURTHER SPECIFIED>. The reasoning
module 130 may remove the now-moot code <DIABETES NOT FURTHER SPECIFIED>, for
example, by applying a re-combination step. For example, if a generated code A
represents a
specialization of the concept represented by a generated code B, then the two
codes A and B may
be combined with each other. As another example, if a clause Z1 of a rule
that generates a
code Y1 strictly implies a clause Z2 of a rule that generates a code Y2, then
the two codes Y1
and Y2 may be combined with each other (e.g., so that code Y1 survives the
combination but
code Y2 does not). As another example, codes may be combined based on a rule,
e.g., a rule that
specifies that if codes A and B have been generated, then codes A and B should
be combined
(e.g., so that code A survives the combination but code B does not). As yet
another example,
statistical or other learned measures of recombination may be used.
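The first re-combination example above (a specialization subsuming the general code) can be sketched as follows. The mapping is a simplified stand-in for an ontology lookup; the names are illustrative.

```python
# Illustrative re-combination ([0044]): when one generated code represents a
# specialization of the concept behind another generated code, the codes are
# combined so that only the more specific code survives.
specialization_of = {
    "UNCONTROLLED DIABETES": "DIABETES NOT FURTHER SPECIFIED",
}

def recombine(codes: set) -> set:
    # Collect the general codes made moot by a present specialization.
    moot = {specialization_of[c] for c in codes if c in specialization_of}
    return codes - moot

billing = {"DIABETES NOT FURTHER SPECIFIED", "UNCONTROLLED DIABETES"}
combined = recombine(billing)
```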
[0045] FIG. 1A also shows links 134a-b between concept contents 122a-c in the
data
source (e.g., draft transcript 106) and forward logic components 132a-b having
conditions that
were satisfied by such concept contents 122a-c in operation 208 of FIG. 2. For
example, link
134a indicates that concept content 122a (e.g., the concept code 108a)
satisfied the condition of
forward logic component 132a, and that the reasoning module 130 generated the
billing code
142a in response to such satisfaction. Similarly, link 134b indicates that
concept content 122b
(e.g., the concept code 108b) satisfied the condition of forward logic
component 132b, and that
the reasoning module 130 generated the billing code 142b in response to such
satisfaction.
[0046] Links 134a-b may or may not be generated and/or stored as elements of
the
system 100a. For example, links 134a-b may be stored within data structures in
the system 100a,
such as in data structures within the set of billing codes 140. For example,
each of the billing
codes may contain data identifying the concept content
(or part
thereof) that caused the billing code to be generated. The reasoning module
130 may, for
example, generate and store data representing the links 134a-b as part of the
process of adding
individual billing codes 142a-b, respectively, to the system 100a in operation
210 of FIG. 2.
[0047] FIG. 1A also shows links 144a-b between forward logic components 132a-b
and
the billing codes 142a-b generated by the reasoning module 130 as a result of,
and in response to,
determining that the conditions of the forward logic components 132a-b were
satisfied by the
data source (e.g., draft transcript 106). More specifically, link 144a
indicates that billing code
142a was generated as a result of, and in response to, the reasoning module
130 determining that
the data source satisfied the condition of forward logic component 132a.
Similarly, link 144b
indicates that billing code 142b was generated as a result of, and in response
to, the reasoning
module 130 determining that the data source satisfied the condition of forward
logic component
132b.
[0048] Links 144a-b may or may not be generated and/or stored as elements of
the
system 100a. For example, links 144a-b may be stored within data structures in
the system 100a,
such as in data structures within the set of billing codes 140. For example,
each of the billing
codes may contain data identifying the forward logic component that caused the
billing code to
be generated. The reasoning module 130 may, for example, generate and store
data representing
the links 144a-b as part of the process of adding individual billing codes
142a-b, respectively, to
the system 100a in operation 210 of FIG. 2.
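The provenance links described in paragraphs [0045]-[0048] can be sketched as fields stored within each generated billing code's data structure. The class and field names are assumptions; the identifiers are stand-ins for the patent's reference numerals.

```python
from dataclasses import dataclass

# Illustrative storage of links 134 and 144 within a billing code.
@dataclass
class GeneratedBillingCode:
    code: str
    source_content: str    # link 134: concept content that satisfied the rule
    logic_component: str   # link 144: forward logic component that fired

entry = GeneratedBillingCode(
    code="UNCONTROLLED DIABETES",
    source_content="concept_content_122b",
    logic_component="forward_logic_132b",
)
```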
[0049] The set of billing codes 140 that is output by the reasoning module 130
may be
reviewed by a human operator, who may accept or reject/modify the billing
codes 140 generated
by the automatic system 100a. More specifically, FIG. 4 is a dataflow diagram
of a system 400
for receiving feedback on the billing codes 140 from a human reviewer 406 and
for
automatically assessing and improving the performance of the system 100a in
response to and
based on such feedback according to one embodiment of the present invention.
FIG. 5A is a
flowchart of a method 500 performed by the system 400 of FIG. 4 according to
one embodiment
of the present invention.
[0050] A billing code output module 402 provides output 404, representing some
or all of
the billing codes 142a-c, to the human reviewer 406 (FIG. 5A, operation 502).
The billing code
output 404 may take any form, such as textual representations of the billing
codes 142a-c (e.g.,
"DIABETES NOT FURTHER SPECIFIED" and/or "Unspecified Diabetes" in the case of
billing code 142a). The output 404 may also include output representing any
element(s) of the
system 100a, such as output representing some or all of the data source (e.g.,
draft transcript 106)
and/or spoken audio stream 102. Such additional output may assist the reviewer
406 in
evaluating the accuracy of the billing codes 140. Embodiments of the present
invention are not
limited to any particular form of the output 404.
[0051] The human reviewer 406 may evaluate some or all of the billing codes
140 and
make a determination regarding whether some or all of the billing codes 140
are accurate. The
human reviewer 406 may make this determination in any way, and embodiments of
the present
invention do not depend on this determination being made in any particular
way. The human
reviewer 406 may, for example, determine that a particular one of the billing
codes 140 is
inaccurate because it is inconsistent with information represented by the
spoken audio stream
102 and/or the draft transcript 106.
[0052] For example, the human reviewer 406 may conclude that one of the
billing codes
142a is inaccurate because the billing code is inconsistent with the meaning
of some or all of the
text (e.g., text 118a-c) in the data source. As one particular example of
this, the human reviewer
406 may conclude that one of the billing codes 142a is inaccurate because the
billing code is
inconsistent with the meaning of text in the data source that has been encoded
incorrectly by the
transcription system 104. For example, the human reviewer 406 may conclude
that billing code
142a is inaccurate as a result of concept extraction component 120a
incorrectly encoding text
118a with concept code 108a. In this case, concept code 108a may represent a
concept that is not
represented by text 118a or by the speech in the spoken audio stream 102 that
caused the
transcription system 104 to generate the text 118a. As this example
illustrates, the reasoning
module 130 may generate an incorrect billing code as the result of providing
an invalid premise
(e.g., inaccurate concept content 122a) to one of the forward logic components
132a-c, where the
invalid premise includes concept content that was generated by one of the
concept extraction
components 120a-c.
[0053] The system 400 also includes a billing code feedback module 410. Once
the
human reviewer 406 has determined whether a particular billing code is
accurate, the reviewer
406 provides input 408 representing that determination to a billing code
feedback module 410
(FIG. 5A, operation 504). In general, the input 408 represents a verification
status of the
reviewed billing code, where the verification status may have a value selected
from a set of
permissible values, such as "accurate" and "inaccurate" or "true" and "false."
The feedback 408
may include feedback on the accuracy of one or more of the billing codes 142a-
c.
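The verification status described above can be sketched as a small enumeration of permissible values. The enum and the feedback record layout are illustrative assumptions; the text also permits other value pairs such as "true"/"false".

```python
from enum import Enum

# Illustrative verification status ([0053]).
class VerificationStatus(Enum):
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"

feedback = {
    "billing_code": "billing_code_142a",
    "status": VerificationStatus.INACCURATE,
}
```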
[0054] As will now be described in more detail, the feedback 408 provided by
the
reviewing human operator 406 may be captured and interpreted automatically to
assess the
performance of the automatic billing coding system 100a. In particular,
embodiments of the
present invention are directed to techniques for inverting the reasoning
process of the reasoning
module 130 in a probabilistic way to assign blame and/or praise for an
incorrectly/correctly-
generated billing code to the constituent logic clauses that led to the
generation of the billing
code.
[0055] In general, the billing code feedback module 410 may identify one or
more
components of the billing code generation system 100a that were responsible for
generating the
billing code corresponding to the feedback 408 (FIG. 5A, operation 506), and
associate either
blame (e.g., a penalty or other negative reinforcement) or praise (e.g., a
reward or other positive
reinforcement) with that component.
[0056] Examples of components that may be identified as responsible for
generating the
billing code associated with the feedback 408 are the concept extraction
components 120a-c and
the forward logic components 132a-c. The system 400 may identify the forward
logic
component responsible for generating a billing code by, for example, following
the link from the
billing code back to the corresponding forward logic component. For example,
if the reviewer
406 provides feedback 408 on billing code 142b, then the feedback module 410
may identify
forward logic component 132b as the forward logic component that generated
billing code 142b
by following the link 144b from billing code 142b to forward logic component
132b. It is not
necessary, however, to use links to identify the forward logic component
responsible for
generating a billing code. Instead, and as will be described in more detail
below, inverse logic
may be applied to identify the responsible forward logic component without the
use of links.
[0057] The billing code feedback module 410 may associate a truth value with
the
identified forward logic component. For example, if the reviewer's feedback
408 confirms the
reviewed billing code, then the billing code feedback module 410 may associate
a truth value of
"true" with the identified forward logic component; if the reviewer's feedback
408 disconfirms
the reviewed billing code, then the billing code feedback module 410 may
associate a truth value
of "false" with the identified forward logic component. The billing code
feedback module 410
may, for example, store such a truth value in or in association with the
corresponding forward
logic component.
[0058] The system 400 (in operation 506) may identify the concept extraction
component
responsible for generating the billing code by, for example, following the
series of links from the
billing code back to the corresponding forward logic component. For example,
if the reviewer
406 provides feedback 408 on billing code 142b, then the feedback module 410
may identify the
concept extraction component 120b as the concept extraction component that
generated billing
code 142b by following the link 144b from billing code 142b to forward logic
component 132b,
by following the link 134b from the forward logic component 132b to the
concept content 122b,
and by following the link 124b from the concept content 122b to the concept
extraction
component 120b. It is not necessary, however, to use links to identify the
concept extraction
component responsible for generating a billing code. Instead, and as will be
described in more
detail below, inverse logic may be applied to identify the responsible concept
extraction
component without the use of links.
[0059] The system 400 (in operation 506) may identify more than one component
as
being responsible for generating a billing code, including components of
different types. For
example, the system 400 may identify both the forward logic component 132b and
the concept
extraction component 120b as being responsible for generating billing code
142b.
[0060] The system 400 (in operation 506) may, additionally or alternatively,
identify one
or more sub-components of a component as being responsible for generating a
billing code. For
example, as illustrated by the example rules above, a forward logic component
may represent
logic having multiple clauses (sub-conditions). For example, consider a
forward logic
component that implements a rule of the form "if A AND B, Then C." Such a rule
contains two
clauses (sub-conditions): A and B. In the description herein, each such clause
is said to
correspond to and be implemented by a "sub-component" of the forward logic
component that
implements the rule containing the clauses.
[0061] The system 400 (in operation 506) may identify, for example, one or
both of these
clauses individually as being responsible for generating a billing code.
Therefore, any reference
herein to taking action in connection with (such as associating blame or
praise with) a
"component" of the system 100a should also be understood to refer to taking
the action in
connection with one or more sub-components of the component. In particular,
each sub-
component of a forward logic component may correspond to and implement a
distinct clause
(sub-condition) of the logic represented by the forward logic component.
[0062] The billing code feedback module 410 may associate reinforcement with
the
component identified in operation 506 in a variety of ways. Associating
reinforcement with a
component is also referred to herein as "applying" reinforcement to the
component.
[0063] The billing code feedback module 410 may, for example, determine
whether the
feedback 408 provided by the human reviewer 406 is positive, i.e., whether the
feedback 408
indicates that the corresponding billing code is accurate (FIG. 5A, operation
508). If the
feedback 408 is positive, the billing code feedback module 410 associates
praise with the system
component(s) identified in operation 506 (FIG. 5A, operation 510). If the
feedback 408 is
negative, the billing code feedback module 410 associates blame with the
system component(s)
identified in operation 506 (FIG. 5A, operation 512).
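The dispatch of operations 508-512 may be sketched in code. In the following illustrative sketch, the names (`apply_feedback`, `component_scores`, `step`) are hypothetical and not part of the disclosed system; positive feedback raises a component's reliability score and negative feedback lowers it:

```python
# Hypothetical sketch of the feedback dispatch (FIG. 5A, operations 508-512):
# positive feedback routes to praise, negative feedback routes to blame.

def apply_feedback(component_scores, component_id, is_accurate, step=0.1):
    """Adjust a component's reliability score based on reviewer feedback.

    component_scores: dict mapping component id -> reliability in [0, 1].
    is_accurate: the reviewer's verification status for the billing code.
    """
    score = component_scores.get(component_id, 0.5)  # neutral starting value
    if is_accurate:
        score = min(1.0, score + step)   # praise (operation 510)
    else:
        score = max(0.0, score - step)   # blame (operation 512)
    component_scores[component_id] = score
    return score
```

Under this sketch, confirming a billing code generated by a component raises that component's score by one step, and disconfirming it lowers the score by the same amount, with the score clamped to the 0-to-1 scale described below.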
[0064] Both praise and blame are examples of "reinforcement" as that term is
used
herein. Therefore, in general the billing code feedback module 410 may
generate reinforcement
output 412, representing praise and/or blame, as part of operations 510 and
512 in FIG. 5A.
Such reinforcement output 412 may take any of a variety of forms. For example,
a score,
referred to herein as a "reliability score," may be associated with each of
one or more
components (e.g., concept extraction components 120a-c and forward logic
components 132a-c)
in the system 100a. The reliability score of a particular component represents
an estimate of the
degree to which the particular component reliably generates accurate output
(e.g., accurate
concept codes 108a-c or billing codes 142a-c). Assume for purposes of example
that the value
of a reliability score may be a real number that ranges from 0 (representing
complete
unreliability) to 1 (representing complete reliability). The reliability score
associated with each
particular component may be initialized to some initial value, such as 0, 1,
or 0.5.
[0065] As mentioned above, reliability scores may be associated and stored in
connection
with representations of concepts, rather than in connection with concept
extraction components.
In either case, a concept may have one or more attributes, and reliability
scores may be
associated with attributes of the concept in addition to being associated with
the concept itself.
For example, if a concept has two attributes, then a first reliability score
may be associated with
the concept, a second reliability score may be associated with the first
attribute, and a third
reliability score may be associated with the second attribute.
[0066] This particular reliability score scheme is merely one example and does
not
constitute a limitation of the present invention, which may implement
reinforcement output 412
in any way. For example, the scale of reliability scores may be inverted, so
that 0 represents
complete reliability and 1 represents complete unreliability. In this case,
the reliability score
may be thought of as a likelihood of error, ranging from 0% to 100%.
[0067] Associating praise (positive reinforcement) with a particular component
(FIG. 5A,
operation 510) may include increasing (e.g., incrementing) a reliability score
counter associated
with the component, assigning a particular reliability score to the component
(e.g., 0, 0.5, or .1),
increasing the reliability score associated with the particular component,
such as by a
predetermined amount (e.g., 0.01 or 0.1), by a particular percentage (e.g.,
1%, 5%, or 10%), or
by using the output of an algorithm. Similarly, associating blame (negative
reinforcement) with
a particular component (FIG. 5A, operation 512) may include decreasing (e.g.,
decrementing) a reliability score counter associated with the
component, assigning a
particular reliability score to the component (e.g., 0, 0.5, or .1),
decreasing the reliability score
associated with the particular component, such as by a predetermined amount
(e.g., 0.01 or 0.1),
by a particular percentage (e.g., 1%, 5%, or 10%), or by using the output of
an algorithm.
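The alternative update strategies listed above (a predetermined amount, a percentage, or direct assignment of a particular score) might be captured as follows. This is a sketch only, assuming the 0-to-1 scale described above; the helper name and parameters are illustrative, not part of the disclosed system:

```python
# Illustrative reliability-score update (hypothetical helper, not the
# disclosed implementation). "positive" selects praise; otherwise blame.

def update_reliability(score, positive, mode="amount", value=0.05):
    if mode == "amount":              # adjust by a predetermined amount
        score += value if positive else -value
    elif mode == "percent":           # adjust by a percentage, e.g. 5%
        score *= (1 + value) if positive else (1 - value)
    elif mode == "assign":            # assign a particular score outright
        score = value
    return max(0.0, min(1.0, score))  # clamp to the [0, 1] scale
```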
[0068] In addition to or instead of associating a reliability score with a
component, a
measure of relevance may be associated with the component. Such a measure of
relevance may,
for example, be a counter having a value that is equal or proportional to the
number of observed
occurrences of instances of the concept generated by the component. For
example, each time an
instance of a concept generated by a particular component is observed, the
relevance counter
associated with that component
[0069] If the billing code feedback module 410 applies reinforcement (i.e.,
blame or
praise) to multiple components of the same type (e.g., multiple forward logic
components, or
multiple clauses of a single forward logic component), the billing code
feedback module 410
may divide (apportion) the reinforcement among the multiple components of the
same type,
whether evenly or unevenly. For example, if the billing code feedback module
410 determines
that two clauses of forward logic component 132b are responsible for
generating incorrect billing
code 142b, then the billing code feedback module 410 may assign half of the
blame to the first
clause and half of the blame to the second clause, such as by dividing
(apportioning) the total
blame to be assigned in half (e.g., by dividing a blame value of 0.1 into a
blame value of 0.05
assigned to the first clause and a blame value of 0.05 assigned to the second
clause).
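The even apportionment described above amounts to dividing a total reinforcement value by the number of responsible components. A minimal sketch, with hypothetical names:

```python
# Hypothetical sketch of apportioning reinforcement evenly among several
# components (or clauses) of the same type.

def apportion(total, component_ids):
    """Divide a total praise or blame value evenly among components."""
    share = total / len(component_ids)
    return {cid: share for cid in component_ids}
```

Apportioning a blame value of 0.1 between two responsible clauses, for example, assigns 0.05 to each.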
[0070] As yet another example, the billing code feedback module 410 may apply
reinforcement to a particular component (or sub-component) of the system 100a
by assigning, to
the component, a prior known likelihood of error associated with the
component. For example, a
particular component may be observed in a closed feedback loop in connection
with a plurality
of different rules. The accuracy of the component may be observed, recorded,
and then used as a
prior known likelihood of error for that component by the billing code
feedback module 410.
[0071] The results of applying reinforcement output 412 to the component
identified in
operation 506 may be stored within the system 100a. For example, the
reliability score
associated with a particular component may be stored within, or in association
with, the
particular component. For example, reliability scores associated with concept
extraction
components 120a-c may be stored within concept extraction components 120a-c,
respectively, or
within transcription system 104 and be associated with concept extraction
components 120a-c.
Similarly, reliability scores associated with forward logic components 132a-c
may be stored
within forward logic components 132a-c, respectively, or within reasoning
module 130 and be
associated with forward logic components 132a-c. As another example,
reliability scores may be
stored in, or in association with, billing codes 142a-c. For example, the
reliability score(s) for
the forward logic component and/or concept extraction component responsible
for generating
billing code 142a may be stored within billing code 142a, or be stored within
billing codes 140
and be associated with billing code 142a.
[0072] As mentioned above, the component that generated a billing code may be
identified in operation 506 by, for example, following one or more links from
the billing code to
the component. Following such links, however, merely identifies the component
responsible for
generating the billing code. Such identification, however, may identify a
component that
includes multiple sub-components, some of which relied on accurate data to
generate the billing
code, and some of which relied on inaccurate data to generate the billing
code. It is not desirable
to assign blame to sub-components that relied on accurate data or to assign
praise to sub-
components that relied on inaccurate data.
[0073] Some embodiments of the present invention, therefore, distinguish
between the
responsibilities of sub-components within a component. For example, referring
to FIG. 5B, a
flowchart is shown of a method that is performed in one embodiment of the
present invention to
implement operation 512 of FIG. 5A (associating blame with a component that
was responsible
for generating the billing code on which feedback 408 was provided by the
reviewer 406). The
method 512 identifies all sub-components of the component identified in
operation 506 (FIG.
5B, operation 522). Then, for each such sub-component S (FIG. 5B, operation
524), the method
512 determines whether the reviewer's feedback 408 indicates that sub-
component S is
responsible for the inaccuracy of the billing code (FIG. 5B, operation 526).
If sub-component S
is determined to be responsible, then method 512 assigns blame to sub-
component S in any of the
ways described above (FIG. 5B, operation 528).
[0074] If sub-component S is not determined to be responsible, then method 512
may
either assign praise to sub-component S in any of the ways described above
(FIG. 5B, operation
530) or take no action in connection with sub-component S. The method 512
repeats the
operations described above for the remaining sub-components (FIG. 5B,
operation 532). One
consequence of the methods of FIGS. 5A and 5B is that the feedback module 410
may apply
reinforcement to one sub-component of a component but not to another sub-
component of the
component, and that the feedback module 410 may apply one type of
reinforcement (e.g., praise)
to one sub-component of a component and another type of reinforcement (e.g.,
blame) to another
sub-component of the component.
[0075] Similar techniques may be applied to assign praise to sub-components of
a
particular component. For example, referring to FIG. 5C, a flowchart is shown
of a method that
is performed in one embodiment of the present invention to implement operation
510 of FIG. 5A
(associating praise with a component that was responsible for generating the
billing code on
which feedback 408 was provided by the reviewer 406). The method 510
identifies all sub-
components of the component identified in operation 506 (FIG. 5C, operation
542). Then, for
each such sub-component S (FIG. 5C, operation 544), the method 510 determines
whether the
reviewer's feedback 408 indicates that sub-component S is responsible for the
accuracy of the
billing code (FIG. 5C, operation 546). If sub-component S is determined to be
responsible, then
method 510 assigns praise to sub-component S in any of the ways described above
(FIG. 5C,
operation 548).
[0076] If sub-component S is not determined to be responsible, then method 510
may
either assign blame to sub-component S in any of the ways described above (FIG.
5C, operation
550) or take no action in connection with sub-component S. The method 510
repeats the
operations described above for the remaining sub-components (FIG. 5C,
operation 552).
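The loops of FIGS. 5B and 5C share one shape: visit each sub-component, decide whether the feedback implicates it, and apply (or withhold) reinforcement. The following sketch uses hypothetical names, with the optional opposite-reinforcement branch of operations 530 and 550 controlled by a flag:

```python
# Hypothetical sketch of the per-sub-component reinforcement loop
# (FIG. 5B for blame, FIG. 5C for praise).

def reinforce_subcomponents(subcomponents, responsible, positive_feedback,
                            assign_opposite=False):
    """Return a dict mapping each sub-component id to "praise", "blame",
    or None (no action), given the set of responsible sub-components."""
    result = {}
    for s in subcomponents:
        if s in responsible:
            # Operation 528 (assign blame) or 548 (assign praise).
            result[s] = "praise" if positive_feedback else "blame"
        elif assign_opposite:
            # Optional operation 530/550: reinforce the other way.
            result[s] = "blame" if positive_feedback else "praise"
        else:
            result[s] = None  # take no action for this sub-component
    return result
```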
[0077] The billing code feedback module 410 may implement either or both of
the
methods shown in FIGS. 5B and 5C. For example, the billing code feedback
module 410 may
assign blame on a sub-component basis (and optionally also on a component
basis) but only
assign praise on a component basis. As another example, the billing code
feedback module 410
may assign praise on a sub-component basis (and optionally also on a component
basis) but only
assign blame on a component basis. As yet another example, the billing code
feedback module
410 may assign blame on a sub-component basis (and optionally also on a
component basis) and
also assign praise on a sub-component basis (and optionally also on a
component basis). As yet
another example, the billing code feedback module 410 may assign blame only on
a component
basis and assign praise only on a component basis.
[0078] The billing code feedback module 410 may use any of a variety of
techniques to
determine (e.g., in operations 526 of FIG. 5B and 546 of FIG. 5C) whether the
billing code
feedback 408 indicates that a particular sub-component S is responsible for
the accuracy or
inaccuracy of a particular billing code. For example, referring to FIG. 6, a
dataflow diagram is
shown of a system 600 in which billing code feedback module 410 uses an
inverse reasoning
module 630 to identify responsible components.
[0079] Inverse reasoning module 630 includes inverse logic components 632a-c,
each of
which may be implemented in any of the ways disclosed above in connection with
forward logic
components 132a-c of reasoning module 130 (FIG. 1A). Each of the inverse logic
components
632a-c may implement distinct logic for reasoning backwards over the set of
logic (e.g., set of
rules) represented and implemented by the reasoning module 130 as a whole. The
set of logic
represented and implemented by the reasoning module 130 as a whole will be
referred to herein
as the "rule set" of the reasoning module 130, although it should be
understood more generally
that the reasoning module 130 may implement logic in addition to or other than
rules, and that
the term "rule set" refers generally herein to any such logic.
[0080] Inverse logic component 632a may implement first logic for reasoning
backwards
over the rule set of reasoning module 130, inverse logic component 632b may
implement second
logic for reasoning backwards over the rule set of reasoning module 130, and
inverse logic
component 632c may implement third logic for reasoning backwards over the rule
set of
reasoning module 130.
[0081] For example, each of the inverse logic components 632a-c may contain
both a
confirmatory logic component and a disconfirmatory logic component, both of
which may be
implemented in any of the ways disclosed above in connection with forward
logic components
132a-c of reasoning module 130 (FIG. 1A). More specifically, inverse logic
component 632a
contains confirmatory logic component 634a and disconfirmatory logic component
634b; inverse
logic component 632b contains confirmatory logic component 634c and
disconfirmatory logic
component 634d; and inverse logic component 632c contains confirmatory logic
component
634e and disconfirmatory logic component 634f.
[0082] The billing code feedback module 410 may use a confirmatory logic
component
to invert the logic of the rule set of reasoning module 130 if the feedback
408 confirms the
accuracy of the reviewed billing code (i.e., if the feedback 408 indicates
that the reviewed billing
code is accurate). In other words, a confirmatory logic component specifies a
conclusion that
may be drawn from: (1) the rule set of reasoning module 130; (2) the
propositions 160; (3) the
billing code under review; and (4) feedback indicating that a reviewed billing
code is accurate.
Such a conclusion may, for example, be that the premise (i.e., condition) of
the logic represented
by a particular forward logic component in the rule set of the reasoning
module 130 is valid
(accurate), or that no conclusion can be drawn about the validity of the
premise.
[0083] Conversely, the billing code feedback module 410 may use a
disconfirmatory
logic component to invert the logic of the rule set of reasoning module 130 if
the feedback 408
disconfirms the accuracy of the reviewed billing code (i.e., if the feedback
408 indicates that the
reviewed billing code is inaccurate). In other words, a disconfirmatory logic
component
specifies a conclusion that may be drawn from: (1) the rule set of reasoning
module 130; (2) the
propositions 160; (3) the billing code under review; and (4) feedback
indicating that a reviewed
billing code is inaccurate. Such a conclusion may, for example, be that the
premise (i.e.,
condition) of the logic represented by a particular forward logic component in
the rule set of the
reasoning module 130 is invalid (inaccurate), or that no conclusion can be
drawn about the
validity of the premise.
[0084] Consider a simple example in which forward logic component 132a
represents
logic of the following form: "If A, Then B." The reasoning module 130 may
apply such a rule to
mean, "if concept A is represented by the data source (e.g., draft transcript
106), then add a
billing code representing concept B to the billing codes 140." Assuming that
inverse logic
component 632a corresponds to forward logic component 132a, the confirmatory
logic
component 634a and disconfirmatory logic component 634b of inverse logic
component 632a
may represent the logic indicated by Table 2.
Inverse Logic Type    Conditions                        Conclusion
Confirmatory          (If A, Then B); B Confirmed       A is accurate
Disconfirmatory       (If A, Then B); B Disconfirmed    A is inaccurate

Table 2
[0085] As indicated by Table 2, the confirmatory logic component 634a may
represent
logic indicating that the combination of: (1) the rule "If A, Then B"; and (2)
feedback indicating
that B is true (e.g., that a billing code representing B has been confirmed to
be accurate) justifies
the conclusion that (3) A is true (e.g., that the code representing A is
accurate). Such a
conclusion may be justified if it is also known that the rule set of reasoning
module 130 contains
no logic, other than the rule "If A, Then B," for generating B. Confirmatory
logic component
634a may, therefore, draw the conclusion that A is accurate by applying
inverse reasoning to the
rule set of the reasoning module 130 (including rules other than the rule "If
A, Then B" which
generated B), based on feedback indicating that B is true. In this case, the
billing code feedback
module 410 may assign praise to the component(s) that generated the billing
code representing
B. If confirmatory logic component 634a cannot determine that "If A, Then B"
is the only rule
in the rule set of the reasoning module 130 that can generate B, then the
billing code feedback
module 410 may assign neither praise nor blame to the component(s) that generated
the billing code
representing B.
[0086] Now consider the disconfirmatory logic component 634b of inverse logic
component 632a. As indicated by Table 2, disconfirmatory logic component 634b
may, for
example, represent logic indicating that the combination of: (1) the rule "If
A, Then B"; and (2)
disconfirmation of B justifies the conclusion that (3) A is false (e.g., that
the code representing
concept A is inaccurate). In this case, the billing code feedback module 410
may assign blame to
the component(s) that generated the billing code representing concept B (e.g.,
the component(s)
that generated the concept code representing concept A).
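The inversion of a rule of the form "If A, Then B" summarized in Table 2, together with the only-rule caveat of paragraph [0085], can be sketched as follows. The function returns the conclusion drawn about premise A, or None when no conclusion is justified; all names are hypothetical:

```python
# Hypothetical sketch of inverting the rule "If A, Then B" (Table 2).

def invert_simple_rule(b_confirmed, only_rule_for_b):
    """b_confirmed: reviewer feedback on the billing code representing B.
    only_rule_for_b: whether "If A, Then B" is the only rule in the rule
    set of the reasoning module that can generate B."""
    if b_confirmed:
        # Confirmation of B justifies "A is accurate" only if no other
        # rule could have generated B.
        return "A is accurate" if only_rule_for_b else None
    # Disconfirmation of B under this single-premise rule implicates A.
    return "A is inaccurate"
```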
[0087] The techniques disclosed above may be used to identify components
responsible
for generating a billing code without using all of the various links 124a-c,
134a-c, and 144a-c
shown in FIG. 1A. In particular, consider again a rule of the form "If A, Then
B." Assume that
one of the concept extraction components 120a is solely responsible for
generating concept
codes representing instances of concept A (i.e., that none of the other
concept extraction
components 120b-c generates concept codes representing instances of concept
A). In this case, if
the billing code feedback module 410 concludes, based on the rule "If A, Then
B" and feedback
provided on a billing code representing concept B, that reinforcement (praise
or blame) should
be assigned to the concept extraction component responsible for generating the
concept code
representing concept A, the billing code feedback module 410 may identify the
appropriate
concept extraction component 120a by matching the concept A from the rule "If
A, Then B"
with the concept A corresponding to concept extraction component 120a. In
other words, the
billing code feedback module 410 may identify the responsible concept
extraction component
120a on the fly (i.e., during performance of operation 506 in FIG. 5A),
without needing to create,
store, or read from any record of the concept extraction component that
actually generated the
concept code representing concept A.
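The on-the-fly identification described above reduces to matching a rule's premise concept against the concept each extraction component generates. A minimal sketch, assuming a registry mapping component ids to the concepts they generate (the registry and all names are hypothetical):

```python
# Hypothetical sketch of identifying the responsible concept extraction
# component without stored links, by matching the premise concept.

def find_extractor(premise_concept, extractor_registry):
    """extractor_registry: dict mapping component id -> concept generated.
    Returns the sole matching component id, or None when the match is
    absent or ambiguous (more than one extractor generates the concept)."""
    matches = [cid for cid, concept in extractor_registry.items()
               if concept == premise_concept]
    return matches[0] if len(matches) == 1 else None
```

Returning None in the ambiguous case reflects the assumption in this paragraph that a single concept extraction component is solely responsible for generating codes representing the premise concept.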
[0088] The inverse reasoning module 630 may, alternatively or additionally,
use inverse
logic components 632a-c to identify sub-components that are and are not
responsible for the
accuracy or inaccuracy of a reviewed billing code, and thereby to enable
operations 526 (FIG.
5B) and 546 (FIG. 5C). For example, assume that forward logic component 132a
represents a
rule of the form "If (A AND B), Then C." The forward reasoning module 130 may
apply such a
rule to mean, "if concept A and concept B are represented by the data source
(e.g., draft
transcript 106), then add a billing code representing concept C to the billing
codes 140." The
confirmatory logic component 634a and disconfirmatory logic component 634b of
inverse logic
component 632a may represent the logic indicated by Table 3.
Inverse Logic Type    Conditions                              Conclusion
Confirmatory          If (A AND B), Then C; C Confirmed       A is accurate and B is accurate
Disconfirmatory       If (A AND B), Then C; C Disconfirmed    A is inaccurate, B is inaccurate,
                                                              or both A and B are inaccurate

Table 3
[0089] As indicated by Table 3, confirmatory logic component 634a may, for
example,
represent logic indicating that if the rule "If (A AND B), Then C" is inverted
based on feedback
indicating that C is true (e.g., that a billing code representing concept C is
accurate), then it can
be concluded that A is true (e.g., that the concept code representing concept
A and relied upon by
the rule is accurate) and that B is true (e.g., that the concept code
representing concept B and
relied upon by the rule is accurate), if no other rule in the rule set of the
reasoning module 130
can generate C. In this case, the billing code feedback module 410 may assign
praise to the
component(s) that generated the code representing concept A and to the
component(s) that
generated the code representing concept B.
[0090] As indicated by Table 3, disconfirmatory logic component 634b may, for
example, represent logic indicating that if the rule "If (A AND B), Then C" is
inverted based on
feedback indicating that C is false (e.g., that a billing code representing
concept C is inaccurate),
then either A is false, B is false, or both A and B are false. In this case,
the billing code feedback
module 410 may assign blame to both the component(s) responsible for
generating A and the
component(s) responsible for generating B. For example, the billing code
feedback module 410
may divide the blame evenly, such as by assigning 50% of the blame to the
component
responsible for generating concept A and 50% of the blame to the component
responsible for
generating concept B.
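Table 3's inversion of "If (A AND B), Then C", with praise for both premises on confirmation (when no other rule can generate C) and evenly divided blame on disconfirmation, might be sketched as follows (hypothetical names; positive values denote praise, negative values blame):

```python
# Hypothetical sketch of inverting "If (A AND B), Then C" (Table 3).

def invert_and_rule(c_confirmed, only_rule_for_c=False, amount=0.1):
    """Return reinforcement for the generators of premises A and B."""
    if c_confirmed:
        if only_rule_for_c:
            # Both premises must have been accurate: praise each.
            return {"A": amount, "B": amount}
        return {}  # another rule could have generated C; no conclusion
    # C disconfirmed: A, B, or both are inaccurate, so the blame is
    # apportioned evenly between the two premise generators.
    return {"A": -amount / 2, "B": -amount / 2}
```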
[0091] Although such a technique may result in assigning blame to a component
that
does not deserve such blame in a specific case, as the billing code feedback module
410 assigns blame
and praise to the same component repeatedly over time, and to a variety of
components in the
systems 100a-b over time, the resulting reliability scores associated with the
various components
are likely to reflect the actual reliabilities of such components. Therefore,
one advantage of
embodiments of the present invention is that they are capable of assigning
praise and blame to
components with increasing accuracy over time, even while assigning praise and
blame
inaccurately in certain individual cases.
[0092] Alternatively, for example, if it is not immediately possible to assign
any praise or
blame to the components responsible for generating codes A or B, the billing
code feedback
module 410 may associate and store a truth value of "false" with the rule "If
(A AND B), Then
C" (e.g., with the forward logic component representing that rule). As
described in more detail
below, this truth value may be used to draw inferences about the truth values
of A and/or B
individually.
[0093] Now assume that forward logic component 132a represents a rule of the
form "If
(A OR B), Then C." The forward reasoning module 130 may apply such a rule to
mean, "if
concept A is represented by the data source (e.g., draft transcript 106) or
concept B is
represented by the data source, then add a billing code representing concept C
to the billing
codes 140." The confirmatory logic component 634a and disconfirmatory logic
component
634b of inverse logic component 632a may represent the logic indicated by
Table 4.
Inverse Logic Type    Conditions                             Conclusion
Confirmatory          If (A OR B), Then C; C Confirmed       A is accurate, B is accurate,
                                                             or both A and B are accurate
Disconfirmatory       If (A OR B), Then C; C Disconfirmed    A is inaccurate and B is inaccurate

Table 4
[0094] As indicated by Table 4, confirmatory logic component 634a may, for
example,
represent logic indicating that if the rule "If (A OR B), Then C" is inverted
based on feedback
indicating that C is true (e.g., that a billing code representing concept C is
accurate), then either
A is true, B is true, or both A and B are true. In this case, the billing code
feedback module 410
may assign praise to both the component(s) responsible for generating A and
the component(s)
responsible for generating B. For example, the billing code feedback module
410 may divide the
praise evenly, such as by assigning 50% of the praise to the component
responsible for
generating concept A and 50% of the praise to the component responsible for
generating concept
B.
[0095] Alternatively, for example, if it is not immediately possible to assign
any praise or
blame to the components responsible for generating codes A or B, the billing
code feedback
module 410 may associate and store a truth value of "true" with the rule "If
(A OR B), Then C"
(e.g., with the forward logic component representing that rule). As described
in more detail
below, this truth value may be used to draw inferences about the truth values
of A and/or B
individually.
[0096] As indicated by Table 4, disconfirmatory logic component 634b may, for
example, represent logic indicating that if the rule "If (A OR B), Then C" is
inverted based on
feedback indicating that C is false (e.g., that a billing code representing
concept C is inaccurate),
then A must be false and B must be false. In this case, the billing code
feedback module may
assign blame to both the component(s) responsible for generating the code
representing concept
A and the component(s) responsible for generating the code representing
concept B.
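The two inverse-logic cases of Table 4 can be sketched in code. The following is an illustrative reading only, not the patent's implementation: the component names, the even 50/50 praise split, and the unit blame value are assumptions chosen for the sketch.

```python
def invert_or_rule(c_confirmed, components_a, components_b):
    """Invert the rule "If (A OR B), Then C" per Table 4.

    c_confirmed: True if the reviewer confirmed billing code C,
                 False if the reviewer disconfirmed it.
    Returns a dict mapping each responsible component to a praise (+)
    or blame (-) delta for its reliability score.
    """
    deltas = {}
    if c_confirmed:
        # Confirmatory: A is accurate, B is accurate, or both are.
        # One policy: divide the praise evenly, 50% per disjunct.
        for comp in components_a:
            deltas[comp] = 0.5 / len(components_a)
        for comp in components_b:
            deltas[comp] = 0.5 / len(components_b)
    else:
        # Disconfirmatory: A and B must both be inaccurate, so every
        # component on either side of the disjunction is blamed.
        for comp in components_a + components_b:
            deltas[comp] = -1.0
    return deltas
```

Confirmation of C thus yields half the praise to each disjunct's components; disconfirmation of C blames all of them.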
[0097] The particular inversion logic described above is merely illustrative
and does not
constitute a limitation of the present invention. Those having ordinary skill
in the art will
appreciate that other inversion logic will be applicable to logic having forms
other than those
specifically listed above.
[0098] The feedback provided by the reviewer 406 may include, in addition to
or instead
of an indication of whether the reviewed billing code is accurate, a revision
to the reviewed
billing code. For example, the reviewer 406 may indicate, via the feedback
408, a replacement
billing code. In response to receiving such a replacement billing code, the
billing code feedback
module 410 may replace the reviewed billing code with the replacement billing
code. The
reviewer 406 may specify the replacement billing code, such as by typing the
text of such a code,
selecting the code from a list, or using any user interface to select a
description of the
replacement billing code, in response to which the billing code feedback
module 410 may select
the replacement billing code and use it to replace the reviewed billing code
in the data source.
[0099] For example, referring again to Table 1, assume that the forward
reasoning
module 130 had used Rule #2 to generate billing code 142b representing
"<UNCONTROLLED DIABETES>," and that the reviewer 406 has provided feedback 408

indicating that "<UNCONTROLLED DIABETES>" should be replaced with
"<DIABETES NOT FURTHER SPECIFIED>." In response, the billing code feedback
module 410 may replace the code "<UNCONTROLLED DIABETES>" with the code
"<DIABETES NOT FURTHER SPECIFIED>" in the draft transcript 106.
[0100] More generally, the billing code feedback module 410 may treat the
receipt of
such a replacement billing code as: (1) disconfirmation by the reviewer 406 of
the reviewed
billing code (i.e., the billing code replaced by the reviewer 406, which in
this example is
"<UNCONTROLLED DIABETES>"); and (2) confirmation by the reviewer 406 of the
replacement billing code (which in this example is
"<DIABETES NOT FURTHER SPECIFIED>"). In other words, a single feedback input
provided by the reviewer 406 may be treated by the billing code feedback
module 410 as a
disconfirmation of one billing code and a confirmation of another billing
code. In response, the
feedback module 410 may: (1) take any of the steps described above in response
to a
disconfirmation of a billing code in connection with the reviewed billing code
that has
effectively been disconfirmed by the reviewer 406; and (2) take any of the
steps described above
in response to a confirmation of a billing code in connection with the
reviewed billing code that
has effectively been confirmed by the reviewer 406.
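The dual treatment of a replacement described in paragraph [0100] might be sketched as follows; the function name and callback interface are hypothetical, not the patent's API.

```python
def apply_replacement(codes, reviewed, replacement, on_disconfirm, on_confirm):
    """Replace `reviewed` with `replacement` in the code list, treating the
    single reviewer input as two feedback events."""
    updated = [replacement if c == reviewed else c for c in codes]
    on_disconfirm(reviewed)     # as if the reviewer marked it "inaccurate"
    on_confirm(replacement)     # as if the reviewer marked it "accurate"
    return updated
```

Both callbacks may then take any of the confirmation or disconfirmation steps described above.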
[0101] As described above, reviewer feedback 408 may cause the feedback module
410
to associate truth values with particular forward logic components (e.g.,
rules). The feedback
module 410 may use such truth values to automatically confirm or disconfirm
individual forward
logic components and/or sub-components thereof. In general, the feedback module
410 may
follow any available chains of logic represented by the forward logic
components 132a-c and
their associated truth values at any given time, and draw any conclusions
justified by such chains
of logic.
[0102] As a result, the feedback module 410 may confirm or disconfirm the
accuracy of a
component of the system 100a, even if such a component was not directly
confirmed or
disconfirmed by the reviewer's feedback 408. For example, the reviewer 406 may
provide
feedback 408 on a billing code that disconfirms a first component (e.g.,
forward logic
component) of the system 100a. Such disconfirmation may cause the feedback
module to
confirm or disconfirm a second component (e.g., forward logic component) of
the system 100a,
even if the second component was not responsible for generating the billing
code on which
feedback 408 was provided by the reviewer 406. Automatic
confirmation/disconfirmation of a
system component by the feedback module 410 may include taking any of the
actions disclosed
herein in connection with manual confirmation/disconfirmation of a system
component. The
feedback module 410 may follow chains of logic through any number of
components of the
system 100a in this way.
[0103] As described above, the term "component" as used herein includes one or
more
sub-components of a component. Therefore, for example, if the reviewer's
feedback 408
disconfirms the reviewed billing code, this may cause the feedback module 410
to disconfirm a
first sub-component (e.g., condition) of a first one of the forward logic
components 142a-c,
which may in turn cause the feedback module 410 to confirm a sub-component
(e.g., condition)
of a second one of the forward logic components 142a-c, which may in turn
cause the feedback
module 410 to disconfirm (and thereby to assign blame to) a second sub-
component of the first
one of the forward logic components 142a-c.
[0104] As a particular example, consider again the case in which the
reviewer's feedback
408 replaces the billing code "<UNCONTROLLED DIABETES>" generated by Rule #2
of
Table 1 with the billing code "<DIABETES NOT FURTHER SPECIFIED>". In response,
the
feedback module 410 may assign a truth value of "false" (i.e., disconfirm) to
Rule #2, but not yet
determine which sub-component (e.g., the clause "patient
has_problem<DIABETES>" or the
clause "p.getStatus() == <UNCONTROLLED>") is to blame for the disconfirmation
of the rule
as a whole.
[0105] Since the user has now also confirmed the billing code
"<DIABETES NOT FURTHER SPECIFIED>," the feedback module 410 may use the
inverse
reasoning of inverse reasoning module 630 to automatically confirm Rule #1 of
Table 1, assigning it a truth value of "true". Now that Rule #1
has been confirmed, it
is known that the clause "patient has_problem<DIABETES>" is true (confirmed).
It is also
known, as described above, that the truth value of Rule #2 is false.
Therefore, the feedback
module 410 may apply the logic "If (NOT (A AND B)) AND A, Then (NOT B)" to
Rule #2 to
conclude that "p.getStatus() == <UNCONTROLLED>" is false (where A is
"patient has_problem<DIABETES>" and where B is "p.getStatus() ==
<UNCONTROLLED>"). The feedback module 410 may, in response to drawing this
conclusion, associate blame with the component(s) responsible for generating
the code
"<UNCONTROLLED>."
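The inference chain of paragraphs [0104]-[0105] can be sketched with a simple dictionary of truth values. This is an illustrative sketch only; the key names are hypothetical stand-ins for the rule and clause identifiers.

```python
def propagate(truth):
    """Apply 'If (NOT (A AND B)) AND A, Then (NOT B)' to known truth values.

    `truth` maps clause/rule names to True or False; absent keys are unknown.
    "rule2_antecedent" stands for the conjunction (A AND B) of Rule #2, which
    is known to be false once Rule #2 as a whole has been disconfirmed.
    """
    if truth.get("rule2_antecedent") is False and truth.get("A") is True:
        # A is confirmed, so B must be the false conjunct: blame the
        # component that generated B's code (e.g., "<UNCONTROLLED>").
        truth["B"] = False
    return truth
```

With only Rule #2's disconfirmation and no knowledge of A, no conclusion about B is drawn; once Rule #1 confirms A, B's falsity follows.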
[0106] Assigning blame and praise to components responsible for generating
codes
enables the system 400 to independently track the accuracy of constituent
components (e.g.,
clauses) in the forward reasoning module 130 (e.g., rule set), and thereby to
identify components
of the system 100a that are not reliable at generating concept codes and/or
billing codes. The
feedback module 410 may take any of a variety of actions in response to
determining that a
particular component is unreliable. More generally, the feedback module 410
may take any of a
variety of actions based on the reliability of a component, as may be
represented by the
reliability score of the component (FIG. 5A, operation 514).
[0107] The feedback module 410 may consider a particular component to be
"unreliable"
if, for example, the component has a reliability score falling below (or
above) some
predetermined threshold. For example, a component may be considered
"unreliable" if the
component has generated concept codes that have been disconfirmed more than a
predetermined
minimum number of times. For purposes of determining whether a component is
unreliable, the
feedback module 410 may take into account only manual disconfirmations by
human reviewers,
or both manual disconfirmations and automatic disconfirmations resulting from
application of
chains of logic by the feedback module 410.
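One plausible reading of paragraph [0107] is the following sketch; the threshold value and the counting policy flag are assumptions, not values taken from the patent.

```python
def is_unreliable(manual_disconfirms, auto_disconfirms,
                  min_disconfirms=3, count_automatic=True):
    """Return True if a component's disconfirmation count exceeds the
    predetermined minimum. Counting automatic disconfirmations (those
    inferred through chains of logic) is optional, per paragraph [0107]."""
    total = manual_disconfirms
    if count_automatic:
        total += auto_disconfirms
    return total > min_disconfirms
```

A reliability-score formulation would compare the score against a threshold in the same way.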
[0108] The system 400 may take any of a variety of actions in response to
concluding
that a component is unreliable. For example, the system 100a may subsequently
and
automatically require the human operator 406 to review and approve of any
concept codes
(subsequently and/or previously) generated by the unreliable concept
extraction component,
while allowing codes (subsequently and/or previously) generated by other
concept extraction
components to be used without requiring human review. For example, if a
particular concept
extraction component is deemed by the feedback module 410 to be unreliable,
then when the
particular concept extraction component next generates a concept code, the
system 100a may
require the human reviewer to review and provide input indicating whether the
reviewer
approves of the generated concept code. The system 100a may insert the
generated concept code
into the draft transcript 106 in response to input indicating that the
reviewer 406 approves of the
generated concept code, and not insert the generated concept code into the
draft transcript 106 in
response to input indicating that the reviewer 406 does not approve of the
generated concept
code.
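The review gating described in paragraph [0108] might be sketched as follows, assuming a hypothetical `approve` callback that returns the reviewer's decision.

```python
def maybe_insert(transcript, code, component_unreliable, approve):
    """Insert `code` into the draft transcript, gating on human review only
    when the generating component has been deemed unreliable."""
    if component_unreliable and not approve(code):
        return transcript          # reviewer withheld approval: skip the code
    return transcript + [code]
```

Codes from reliable components pass through without invoking the reviewer at all.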
[0109] Additionally or alternatively, the system 100a may subsequently and
automatically require the human operator 406 to review and approve of any
billing codes
(subsequently and/or previously) generated based on concept codes generated by
the unreliable
concept extraction component, while allowing billing codes (subsequently
and/or previously)
generated without reliance on the unreliable concept extraction component to
be used without
requiring human review. For example, if a particular concept extraction
component is deemed
by the feedback module 410 to be unreliable, then when any of the forward
logic components
132a-c next generates a billing code based on logic that references such a
concept code (e.g., a
condition which requires the data source to contain a concept code generated
by the unreliable
concept extraction component), the system 100a may require the human reviewer
to review and
provide input indicating whether the reviewer approves of the generated
billing code and/or
concept code. The system 100a may insert the generated billing code into the
draft transcript
106 in response to input indicating that the reviewer 406 approves of the
generated billing code
and/or concept code, and not insert the generated billing code into the draft
transcript 106 in
response to input indicating that the reviewer 406 does not approve of the
generated billing code
and/or concept code.
[0110] As another example, in response to concluding that a particular concept
extraction
component is unreliable, the system 400 may notify the human reviewer 406 of
such insufficient
reliability, in response to which the human reviewer 406 or other person may
modify (e.g., by
reprogramming) the identified concept extraction component in an attempt to
improve its
reliability.
[0111] Although certain examples described above refer to applying
reinforcement (i.e.,
assigning praise and/or blame) to components of systems 100a-b, embodiments of
the present
invention may also be used to apply reinforcement to one or more human
reviewers 406 who
provide feedback on the billing codes 140. For example, the system 400 may
associate a
reliability score with the human reviewer 406, and associate distinct
reliability scores with each
of one or more additional human reviewers (not shown) who provide feedback to
the system 400
in the same manner as that described above in connection with the reviewer
406.
[0112] As described above in connection with FIGS. 4 and 5A, the billing code
feedback
module 410 may solicit feedback 408 from the human reviewer 406 in connection
with a
particular one of the billing codes 142a-c. The billing code feedback module
410 may further
identify a reference reliability score associated with the billing code under
review. Such a
reliability score may, for example, be implemented in any of the ways
disclosed herein, and may
therefore, for example, have a value of "accurate" or "inaccurate" or any
value representing an
intermediate verification status. The billing code feedback module 410 may
identify the
reference reliability score of the billing code in any manner, such as by
initially associating a
default reliability score with the billing code (e.g., 0.0, 1.0, or 0.5) and
then revising the
reference reliability score in response to feedback 408 provided by the
reviewer 406 and other
reviewers over time on the billing code.
[0113] As a result, as many reviewers provide feedback on a plurality of
billing codes,
the system 400 may refine the reliability scores that are associated with
concept extraction
components 120a-c over time. The billing code feedback module 410 may use such
a refined
reliability score for a billing code as the reference reliability score for
the billing code in the
process described below. The billing code feedback module 410 may, for
example, first wait
until the billing code's reliability score achieves some predetermined degree
of confirmation,
such as by waiting until some minimum predetermined amount of feedback has
been provided on
the billing code, or until some minimum predetermined number of reviewers have
provided
feedback on the billing code.
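Paragraphs [0112]-[0113] can be sketched with a running-mean update; the default score, minimum feedback count, and running-mean policy are assumptions chosen for illustration.

```python
class ReferenceReliability:
    """Reference reliability score for a billing code, per [0112]-[0113]."""

    def __init__(self, default=0.5, min_feedback=5):
        self.score = default          # initial default (e.g., 0.0, 1.0, or 0.5)
        self.count = 0
        self.min_feedback = min_feedback

    def add_feedback(self, accurate):
        # Revise the score as a running mean of confirmations (1.0) and
        # disconfirmations (0.0) provided by reviewers over time.
        self.count += 1
        self.score += ((1.0 if accurate else 0.0) - self.score) / self.count

    def confirmed(self):
        # Usable as a reference only after a minimum amount of feedback.
        return self.count >= self.min_feedback
```

Only once `confirmed()` holds would the score serve as the reference in the divergence check described next.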
[0114] As reviewers (such as reviewer 406 and other reviewers) continue to
provide
feedback to the billing code feedback module 410 in connection with the
billing code, the billing
code feedback module may determine whether the feedback provided by the human
reviewers,
individually or in aggregate, diverges sufficiently (e.g., by more than some
predetermined degree) from the reference reliability scores (e.g., the
sufficiently-confirmed reliability scores). If the
determination indicates that the reviewers' feedback does sufficiently diverge
from the reference
reliability score, then the billing code feedback module 410 may take any of a
variety of actions,
such as one or more of the following: (1) assigning blame to one or more of
the human reviewers
who provided the diverging feedback; and (2) preventing any blame resulting from
the diverging
feedback from propagating backwards through the systems 100a-b to the
corresponding
components (e.g., concept extraction components 120a-c and/or forward logic
components 132a-
c). Performing both (1) and (2) is an example in which the system 400 assigns
blame to one
component of the system (the human reviewer 406) but does not propagate such
blame
backwards up to any of the system components.
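The divergence handling of paragraph [0114] might be sketched as follows; the divergence threshold and the unit blame value are assumptions, not figures from the patent.

```python
def route_feedback(blame_reviewer, reference_score, feedback_score,
                   max_divergence=0.4):
    """Decide whether reviewer feedback propagates back to system components.

    If the feedback diverges from the sufficiently-confirmed reference
    reliability score by more than `max_divergence`, (1) blame the diverging
    reviewer and (2) block backward propagation. Returns True when the
    feedback should propagate normally."""
    if abs(feedback_score - reference_score) > max_divergence:
        blame_reviewer(-1.0)    # assign blame to the reviewer, not the system
        return False
    return True
```

This is the sense in which the reviewer is treated as the first component in the chain of inverse logic.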
[0115] The billing code feedback module may apply the same techniques to any
number
of human reviewers 406 to modify the distinct reliability scores associated
with such reviewers
over time based on the feedback they provide. Such a method in effect treats
the human
reviewer 406 as the first component in the chain of inverse logic implemented
by the inverse
reasoning component 630.
[0116] It is to be understood that although the invention has been described
above in
terms of particular embodiments, the foregoing embodiments are provided as
illustrative only,
and do not limit or define the scope of the invention. Various other
embodiments, including but
not limited to the following, are also within the scope of the claims. For
example, elements and
components described herein may be further divided into additional components
or joined
together to form fewer components for performing the same functions.
[0117] Any of the functions disclosed herein may be implemented using means
for
performing those functions. Such means include, but are not limited to, any of
the components
disclosed herein, such as the computer-related components described below.
[0118] Although certain examples herein involve "billing codes," such examples
are not
limitations of the present invention. More generally, embodiments of the
present invention may
be applied in connection with codes other than billing codes, and in
connection with data
structures other than codes, such as data stored in databases and in forms
other than structured
documents.
[0119] The techniques described above may be implemented, for example, in
hardware,
one or more computer programs tangibly stored on one or more computer-readable
media,
firmware, or any combination thereof. The techniques described above may be
implemented in
one or more computer programs executing on (or executable by) a programmable
computer
including any combination of any number of the following: a processor, a
storage medium
readable by the processor (including, for example, volatile and non-volatile
memory and/or
storage elements), an input device, and an output device. Program code may be
applied to input
entered using the input device to perform the functions described and to
generate output using
the output device.
[0120] Each computer program within the scope of the claims below may be
implemented in any programming language, such as assembly language, machine
language, a
high-level procedural programming language, or an object-oriented programming
language. The
programming language may, for example, be a compiled or interpreted
programming language.
[0121] Each such computer program may be implemented in a computer program
product tangibly embodied in a machine-readable storage device for execution
by a computer
processor. Method steps of the invention may be performed by a computer
processor executing a
program tangibly embodied on a computer-readable medium to perform functions
of the
invention by operating on input and generating output. Suitable processors
include, by way of
example, both general and special purpose microprocessors. Generally, the
processor receives
instructions and data from a read-only memory and/or a random access memory.
Storage
devices suitable for tangibly embodying computer program instructions include,
for example, all
forms of non-volatile memory, such as semiconductor memory devices, including
EPROM,
EEPROM, and flash memory devices; magnetic disks such as internal hard disks
and removable
disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be
supplemented by, or
incorporated in, specially-designed ASICs (application-specific integrated
circuits) or FPGAs
(Field-Programmable Gate Arrays). A computer can generally also receive
programs and data
from a storage medium such as an internal disk (not shown) or a removable
disk. These elements
will also be found in a conventional desktop or workstation computer as well
as other computers
suitable for executing computer programs implementing the methods described
herein, which
may be used in conjunction with any digital print engine or marking engine,
display monitor, or
other raster output device capable of producing color or gray scale pixels on
paper, film, display
screen, or other output medium.
[0122] What is claimed is:
Administrative Status

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2011-09-23
(87) PCT Publication Date 2012-03-29
(85) National Entry 2013-03-20
Examination Requested 2016-09-20
Dead Application 2021-10-29

Abandonment History

Abandonment Date Reason Reinstatement Date
2020-10-29 R86(2) - Failure to Respond
2021-03-23 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2013-03-20
Maintenance Fee - Application - New Act 2 2013-09-23 $100.00 2013-09-05
Maintenance Fee - Application - New Act 3 2014-09-23 $100.00 2014-09-15
Maintenance Fee - Application - New Act 4 2015-09-23 $100.00 2015-08-31
Request for Examination $800.00 2016-09-20
Maintenance Fee - Application - New Act 5 2016-09-23 $200.00 2016-09-21
Maintenance Fee - Application - New Act 6 2017-09-25 $200.00 2017-08-22
Maintenance Fee - Application - New Act 7 2018-09-24 $200.00 2018-08-29
Maintenance Fee - Application - New Act 8 2019-09-23 $200.00 2019-08-22
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
MMODAL IP LLC
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Amendment 2019-12-18 94 5,138
Claims 2019-12-18 8 295
Description 2019-12-18 40 2,286
Examiner Requisition 2020-06-29 7 357
Abstract 2013-03-20 2 77
Claims 2013-03-20 8 255
Drawings 2013-03-20 9 131
Description 2013-03-20 40 2,258
Representative Drawing 2013-04-22 1 9
Cover Page 2013-06-04 1 47
Examiner Requisition 2017-07-10 5 362
Amendment 2017-11-03 9 350
Description 2017-11-03 40 2,117
Examiner Requisition 2018-06-08 6 371
Amendment 2018-12-10 13 498
Claims 2018-12-10 7 275
Request for Examination 2016-09-20 2 58
Examiner Requisition 2019-06-18 6 405
Correspondence 2013-06-26 3 103
PCT 2013-03-20 7 298
Assignment 2013-03-20 8 175
Correspondence 2013-07-08 1 12
Correspondence 2013-07-08 1 19
Fees 2016-09-21 1 33
Change to the Method of Correspondence 2016-10-25 3 78
Prosecution-Amendment 2016-10-25 3 78