Patent 2371730 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2371730
(54) English Title: ACCOUNT FRAUD SCORING
(54) French Title: CALCUL DES UTILISATIONS FRAUDULEUSES D'UN COMPTE
Status: Deemed Abandoned and Beyond the Period of Reinstatement - Pending Response to Notice of Disregarded Communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 99/00 (2006.01)
  • H04M 15/00 (2006.01)
(72) Inventors :
  • GRADY, BEN (United Kingdom)
  • HOBSON, PHILIP WILLIAM (United Kingdom)
  • JOLLIFFE, GRAHAM (United Kingdom)
(73) Owners :
  • CEREBRUS SOLUTIONS LIMITED
(71) Applicants :
  • CEREBRUS SOLUTIONS LIMITED (United Kingdom)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2000-04-28
(87) Open to Public Inspection: 2000-11-09
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/GB2000/001669
(87) International Publication Number: WO 2000067168
(85) National Entry: 2001-11-07

(30) Application Priority Data:
Application No. Country/Territory Date
9910111.5 (United Kingdom) 1999-04-30

Abstracts

English Abstract


A method and apparatus for prioritising alarms in an account fraud detection
system. The method involves assigning a numeric weight to each of a plurality
of behavioural characteristics of an alarm raised against an account, and
computing a fraud score for that alarm responsive to those numeric weights.
Numeric bounds may be imposed on the score, and a term may be added dependent
on the number of alarms raised on the account.


French Abstract

L'invention concerne un procédé et un appareil donnant la priorité à des alarmes déclenchées dans un système de détection d'utilisations frauduleuses d'un compte. Ce procédé consiste à attribuer un poids numérique à chaque caractéristique de comportement d'une alarme déclenchée et à calculer les utilisations frauduleuses détectées par l'alarme réagissant à ces poids numériques. On peut définir des limites numériques selon le résultat et on peut ajouter une échéance d'après le nombre d'alarmes déclenchées sur le compte.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
1. A method of prioritising alarms in an account fraud detection
system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural
characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said
numeric weights.
2. A method according to claim 1 wherein said step of computing
comprises the step of:
forming a product of a plurality of said numeric weights.
3. A method of prioritising alarms in an account fraud detection
system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural
characteristics of each of one or more alarms raised against an
account;
computing a fraud score for each of said one or more alarms
responsive to said numeric weights;
computing an account fraud score responsive to said one or
more fraud scores.
4. A method according to claim 3 wherein said step of computing a
fraud score for each of said one or more alarms comprises the step of:
forming a product of a plurality of said numeric weights.
5. A method according to claim 3 wherein said step of computing
an account fraud score comprises the step of:
selecting a largest of said one or more fraud scores.
6. A method according to any one of claims 3 - 5 wherein said
step of computing an account fraud score comprises the step of:

imposing a numeric bound on the value of said account fraud
score.
7. A method according to any one of claims 3 - 6 wherein said
step of computing an account fraud score for each of said one or more
alarms comprises the step of:
adding a term dependent on the number of alarms raised.
8. A method of prioritising alarms in an account fraud detection
system comprising the steps of:
performing the method of any one of claims 3 - 7 on a plurality
of accounts whereby to compute an account fraud score for each of said
accounts;
providing a sorted list of accounts responsive to said account
fraud scores.
9. A method according to claim 8 additionally comprising the step
of:
displaying said sorted list of accounts.
10. A method according to claim 9 wherein the step of displaying
said sorted list of accounts comprises the step of:
displaying with each account an indication of its associated
account fraud score.
11. A method according to any one of claims 3 - 10 wherein said
characteristics include one or more characteristics drawn from the set
consisting of: alarm capability, alarm sub-capability, velocity, bucket size,
and account age.
12. Apparatus arranged for prioritising alarms in an account fraud
detection system comprising:

first apparatus arranged to assign a numeric weight to each of a plurality
of behavioural characteristics of an alarm raised against an account;
second apparatus arranged to compute a fraud score for said
alarm responsive to said numeric weights.
13. Apparatus arranged for prioritising alarms in an account fraud
detection system comprising:
first apparatus arranged to assign a numeric weight to each of a
plurality of behavioural characteristics of each of one or more alarms
raised against an account;
second apparatus arranged to compute a fraud score for each
of said one or more alarms responsive to said numeric weights;
third apparatus arranged to compute an account fraud score
responsive to said one or more fraud scores.
14. Software on a machine readable medium arranged for
prioritising alarms in an account fraud detection system and arranged to
perform the steps of:
assigning a numeric weight to each of a plurality of behavioural
characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said
numeric weights.
15. Software on a machine readable medium arranged for
prioritising alarms in an account fraud detection system and arranged to
perform the steps of:
assigning a numeric weight to each of a plurality of behavioural
characteristics of each of one or more alarms raised against an
account;
computing a fraud score for each of said one or more alarms
responsive to said numeric weights;
computing an account fraud score responsive to said one or
more fraud scores.

Description

Note: Descriptions are shown in the official language in which they were submitted.


CA 02371730 2001-11-07
WO 00/67168 PCT/GB00/01669
ACCOUNT FRAUD SCORING
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for account
fraud scoring and a system incorporating the same.
BACKGROUND TO THE INVENTION
In recent years there has been a rapid increase in the number of
commercially operated telecommunications networks in general and
wireless telecommunication networks in particular. Associated with this
proliferation of networks is a rise in fraudulent use of such networks, the
fraud typically taking the form of gaining illicit access to the network and
then using the network in such a way that the fraudulent user hopes
subsequently to avoid paying for the resources used. This may, for
example, involve misuse of a third party's account on the network, so that
the perpetrated fraud becomes apparent only when the third party is
charged for resources which he did not use.
In response to this form of attack on the network, fraud detection tools
have been developed to assist in the identification of such fraudulent use.
Such a fraud detection tool may, however, produce thousands of alarms
in one day. In the past these alarms have been ordered either
chronologically according to when they occurred, or in terms of their
importance, or a combination of both. Alarm importance provided a
rudimentary ordering based on the significance of the alarm raised,
although it has many failings: such a system takes no account of how
alarms interact.
Since fraudulent use of a single account can cost a network operator a
large sum of money within a short space of time, it is important that the
operator be able to identify and deal with the most costly forms of fraud at
the earliest possible time. The existing methods of chronological ordering
and alarm importance ordering are, however, inadequate in that regard.
SUBSTITUTE SHEET (RULE 26)

OBJECT OF THE INVENTION
The invention seeks to provide an improved method and apparatus for
classifying and prioritising identified instances of potential account fraud.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a
method of prioritising alarms in an account fraud detection system
comprising the steps of: assigning a numeric weight to each of a plurality
of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric
weights.
Advantageously, the score gives a meaningful representation of the
seriousness of a potential fraud associated with the raised alarm.
Preferably, said step of computing comprises the step of: forming a
product of a plurality of said numeric weights.
According to a further aspect of the present invention there is provided a
method of prioritising alarms in an account fraud detection system
comprising the steps of: assigning a numeric weight to each of a plurality
of behavioural characteristics of each of one or more alarms raised
against an account; computing a fraud score for each of said one or more
alarms responsive to said numeric weights; computing an account fraud
score responsive to said one or more fraud scores.
Preferably, said step of computing a fraud score comprises the step of:
forming a product of a plurality of said numeric weights.
Preferably, said step of computing an account fraud score comprises the
step of: selecting a largest of said one or more fraud scores.
Preferably, said step of computing an account fraud score comprises the
step of: imposing a numeric bound on the value of said account fraud
score.
Preferably, said step of computing an account fraud score comprises the
step of: adding a term dependent on the number of alarms raised.

Preferably, said step of computing an account fraud score comprises the
steps of: selecting a largest of said fraud scores; adding a term
dependent on the number of alarms raised.
Advantageously, this prioritises accounts according to the seriousness of
potential fraud associated with them.
According to a further aspect of the present invention there is provided a
method of prioritising alarms in an account fraud detection system
comprising the steps of: performing the method of claim 3 on a plurality of
accounts whereby to compute an account fraud score for each of said
accounts; providing a sorted list of accounts responsive to said account
fraud scores.
The method may also comprise the step of: displaying said sorted list of
accounts.
Advantageously, this allows an operator to rapidly identify high-risk
account usage and hence concentrate resources on those high-risk,
potentially high-cost frauds.
Preferably, the step of displaying said sorted list of accounts comprises
the step of: displaying with each account an indication of its associated
account fraud score.
In a preferred embodiment, said characteristics include one or more
characteristics drawn from the set consisting of: alarm capability, alarm
sub-capability, velocity, bucket size, and account age.
The invention also provides for a system for the purposes of fraud
detection which comprises one or more instances of apparatus
embodying the present invention, together with other additional
apparatus.
According to a further aspect of the present invention there is provided an
apparatus arranged for prioritising alarms in an account fraud detection
system comprising: first apparatus arranged to assign a numeric weight
to each of a plurality of behavioural characteristics of an alarm raised
against an account; second apparatus arranged to compute a fraud score
for said alarm responsive to said numeric weights.

According to a further aspect of the present invention there is provided an
apparatus arranged for prioritising alarms in an account fraud detection
system comprising: first apparatus arranged to assign a numeric weight
to each of a plurality of behavioural characteristics of each of one or more
alarms raised against an account; second apparatus arranged to compute
a fraud score for each of said one or more alarms responsive to said
numeric weights; third apparatus arranged to compute an account fraud
score responsive to said one or more fraud scores.
According to a further aspect of the present invention there is provided
software on a machine readable medium arranged for prioritising alarms
in an account fraud detection system and arranged to perform the steps
of: assigning a numeric weight to each of a plurality of behavioural
characteristics of an alarm raised against an account; computing a fraud
score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided
software on a machine readable medium arranged for prioritising alarms
in an account fraud detection system and arranged to perform the steps
of: assigning a numeric weight to each of a plurality of behavioural
characteristics of each of one or more alarms raised against an account;
computing a fraud score for each of said one or more alarms responsive
to said numeric weights; computing an account fraud score responsive to
said one or more fraud scores.
The preferred features may be combined as appropriate, as would be
apparent to a skilled person, and may be combined with any of the
aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to show how the invention may be carried into effect,
embodiments of the invention are now described below by way of
example only and with reference to the accompanying figures in which:
Figure 1 shows a schematic diagram of an account fraud scoring
apparatus in accordance with the present invention.
Figure 2 shows a schematic diagram of an account fraud prioritising
apparatus in accordance with the present invention.

Figures 3(a)-(d) show successive columns of a table showing examples
of account fraud score calculations in accordance with the present
invention.
DETAILED DESCRIPTION OF INVENTION
Referring to Figure 1, there is shown a schematic diagram of a system
arranged to perform account fraud scoring. In particular, the system
shown relates to telecommunications system account fraud scoring and
comprises a source 100 of Call Detail Records (CDRs) arranged to
provide CDRs to a plurality of fraud detectors 110, 120. In this specific
embodiment, a first detector 110 is a neural network whilst a second
detector 120 is arranged to apply thresholds (and/or rules) to the received
CDRs.
The neural network fraud detector 110 is arranged to receive a
succession of CDRs and to provide in response a series of outputs
indicating either a Neural Network Fraudulent Alarm (NN(F)), a Neural
Network Expected Alarm (NN(E)), or a third category not indicative of an
alarm. (The third category may be implemented by the neural network not
generating an output.)
Each NN(E) alarm provided by the neural network 110 is then mapped
111 to an associated Alarm Capability Factor (ACF), which is a numeric
value indicative of the importance or risk associated with the alarm.
Each NN(F) provided by the neural network 110 is mapped 112 to a
confidence level indicative of the confidence with which the neural
network predicts that the account behaviour which raised the alarm is
fraudulent. This confidence level may then be normalised with respect to
the Alarm Capability Factors arising from NN(E)s and Threshold alarms
(described below) to provide an Alarm Capability Factor for each NN(F).
The threshold detector 120 is arranged to receive a succession of CDRs
from the CDR source 100 and to provide in response a series of outputs
indicative of whether the series of CDRs to date has exceeded any of one
or more threshold values associated with different characteristics of the
CDR series, any one of which might be indicative of fraudulent account
usage.

Fraud score 140 is then calculated 130 from the Alarm Capability Factors
(ACF), Velocity Factors (VF), and Bucket Factors (BF), which are
described in detail below. In a preferred embodiment, the score is
calculated as a product:

Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor (1)

In a preferred embodiment, a further factor, a sub-capability factor, is
added to the equation to cater for variations of risk within a given broad
category of alarms associated with the alarm capability factor:

Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor
x Alarm Sub-Capability Factor (2)
Fraud scores are computed for each alarm type raised against a given
account and the highest of these scores is taken as the base account
fraud score.

An additional term is then added which takes into account the fact that
multiple alarms on the same account may be more indicative of a
potential fraud risk than a single alarm. In a most preferred embodiment a
fixed multiple alarm factor is determined and a multiple of this factor is
added to the base account fraud score to give a final account fraud
score. The multiple used is simply the number of alarms on the account.
Details of these specific factors and others are given in more detail below.
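The scoring described above can be sketched in a few lines of Python. This is an illustrative sketch only; the factor values in the example call are hypothetical and not taken from the patent's table, though the fixed multiple alarm factor of 0.65 matches the worked example later in the description.

```python
# Sketch of the account fraud scoring scheme. All factor values in the
# example below are hypothetical, chosen only to illustrate the arithmetic.

def alarm_fraud_score(capability, velocity, bucket,
                      sub_capability=1.0, account_age=1.0):
    # Equation (2): product of the weighting factors, with the account
    # age factor also applied as described for column 13 of the table.
    return capability * velocity * bucket * sub_capability * account_age

def account_fraud_score(alarm_scores, multiple_alarm_factor=0.65):
    # Base score is the largest per-alarm score; a term equal to the
    # fixed multiple alarm factor times the number of alarms is added.
    base = max(alarm_scores)
    return base + multiple_alarm_factor * len(alarm_scores)

# Two hypothetical alarms raised against one account:
scores = [
    alarm_fraud_score(60, velocity=1.35, bucket=1.0, sub_capability=1.2),
    alarm_fraud_score(30, velocity=1.0, bucket=1.1),
]
print(round(account_fraud_score(scores), 2))  # 97.2 + 0.65 * 2 = 98.5
```

Keeping the per-alarm score as a pure product makes each factor a multiplicative risk weight, which is why a nominal value of 1 (discussed later for threshold-free alarms) leaves the score unchanged.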
Turning now to Figure 2, the account fraud scoring system 1 of Figure 1
typically forms part of a fraud detection system.
The CDR data 100 provided to the scoring mechanism 210 described
above is obtained from the telecommunications network 200.
The resulting account fraud scores calculated per account may then be
sorted (220) so as to identify those accounts most suspected of being
used fraudulently. This information may then be presented to an operator
via, for example, a Graphical User Interface (GUI) 230, either simply by
listing the accounts in order of fraud likelihood, or by also showing some
indication of the associated account fraud score (for example by
displaying the actual account fraud score), or by any other appropriate
means.
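The sort-and-display step can be sketched as follows. The account names and the score for "account 3" are invented for illustration; 90.65 and 60.25 are the extreme scores from the worked example in this description.

```python
# Hypothetical sketch: rank accounts by account fraud score, highest
# risk first, for presentation to an analyst.
account_scores = {"account 6": 90.65, "account 3": 75.00, "account 7": 60.25}

ranked = sorted(account_scores.items(), key=lambda item: item[1], reverse=True)
for name, score in ranked:
    # Display each account alongside its associated fraud score.
    print(f"{name}: {score:.2f}")
```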
Referring now to the table shown in Figures 3(a)-(d), an example is given
of the numerical values assigned to the various account characteristics.
The first column simply assigns a number to each of the main alarm types
listed in column 2. Rows having no explicitly named alarm type relate to
the same alarm type as appears most closely above.
Column 4 similarly lists alarm sub-types where applicable, whilst column 9
indicates bucket size for two applicable alarm types.

Columns 3, 5, 8, and 10 respectively list the alarm capability factors, sub-
capability factors, velocity factors, and bucket factors associated with
each alarm variant.
In the table shown no specific traffic values and threshold values are
shown, since these are specific to a particular account at a particular time.
Instead, typical resulting velocity factor values (e.g. 1, 1.35) are shown in
column 8 for illustrative purposes.

Column 11 shows the effect of applying the sub-capability factor, velocity
factor and bucket factor to each basic alarm capability factor.
Column 12 is blank, indicating that all the accounts listed in columns 15-
32 are considered in this example to be well-established accounts, with a
default account age factor of 1.0. In the case of newly opened accounts a
higher account age factor, for example 1.2, might be employed.

Column 13 shows the effect of applying the account age factor to the
product of preceding factors shown in column 11.
Columns 15-32 show nine examples of account fraud score calculations
for separate accounts. Each successive pair of columns shows how
many of each kind of alarm have been raised against that account,
alongside the fraud score associated with that alarm.

At the foot of each pair of columns, a base account fraud score is shown
(being the maximum fraud score computed for any alarm raised against
that account) along with the total number of alarms raised against that
account.

These two figures, in conjunction with the fixed multiple alarm fraud factor,
set in this example at 0.65, are used to compute the final account fraud
score in each case by adding to the base account fraud score a term
being the fixed multiple alarm fraud factor times the number of alarms
raised.

In the example shown, the resulting account fraud scores range from
60.25 on account 7 to 90.65 on account 6.
The selection of precise values for the various factors used in the
calculation is a matter of experience and experiment and will vary
according to the field of application. In the example shown, sub-capability
factors, velocity factors, and bucket factors all fall approximately in the
range 1-1.5, whilst the basic alarm capability factors range from 30 to 90.
To achieve the desired scoring, one associates with each alarm a level of
risk that is factored by a number of related elements. With each increase
in the number of such related elements, there is an increase in the level
of granularity in the scoring mechanism and a consequent potential
increase in precision and efficiency of the scoring mechanism.
Too many elements in the scoring equation, however, tend to make it
very volatile, with a higher probability of algorithmic inaccuracies and an
increased risk of any such errors causing a ricochet effect through the
fraud scoring engine. The margin for error in configuring the scoring
mechanism, and indeed the parameters for the rules and thresholds
themselves, is also reduced as the number of elements increases, since
they are the building blocks on which scoring is based.
In short, too few factors result in a robust but insufficiently accurate
system, whilst too many factors produce an initially more labour-intensive
set-up with the potential for being highly accurate, although if configured
incorrectly the opposite could be true. The solution is a compromise
between the two extremes: the system needs to be durable yet accurate.
In the most preferred embodiment, therefore, five significant factors are
employed:
  • Alarm Capability Factor
  • Sub-Capability Factor
  • Bucket Factor
  • Velocity Factor
  • Account Age Factor
The Alarm Capability Factor indicates the relative hierarchical position of
the risk associated with a given alarm relative to risks associated with
other alarms.

The Sub-Capability Factor gives a further refinement of the indication of
the hierarchical position of the risk associated with a given alarm relative
to risks associated with other alarms.
The Bucket Factor is a measure of the volume of the potential fraud.

The Velocity Factor is a measure of the rate at which the fraud is being
perpetrated.

The Account Age Factor is a measure of how old the account is: new
accounts' behaviour may be less predictable than older, established
usage patterns, and more susceptible to fraud.
All neural network and threshold alarm capabilities are apportioned a
figure upon which further calculations are made, increasing or decreasing
the score as commensurate with the risk present. The Account Fraud
Score created should accurately reflect the level of risk associated with
the course of events causing the production of an alarm. This calculation
should primarily consider the speed with which money is and may be
defrauded, and the volume of revenue defrauded, as these indicate loss
to the telecommunications company concerned; questions of cost are
always paramount. For example, if a criminal has used $5,000 worth of
traffic over 4 hours, this is more significant than if the same individual had
done so over 8 hours.
The Sub-Capability Factor is added to increase or decrease the risk
associated with specific types of alarm. Many alarm types have a finer
level of granularity as appropriate to that specific alarm. Many alarm types
are sub-divided, for example, into different sub-types of alarms for
different call destinations, as the inherent risk is different for different
destinations. For example, international calls are more often associated
with fraud than calls to mobile telephones.
The longer that an account is in operation fraudulently, the greater the
cost will be, so a good fraud management system will aim to detect fraud
as early as possible. Thus the analyst wishes, ideally, to see all alarms
after the shortest time period, in order that he may stop the illegal action
at the earliest opportunity.
The problem is addressed by calculating a ratio between a) the quantity of
traffic pertinent to the particular alarm type within a poll and b) a threshold
value for the alarm. Trigger Value divided by Threshold Value accurately
and expeditiously alarms any account where there is a large sudden
increase in traffic for that customer. This is because, for example, the 1
hour bucket will always have the lowest threshold for a given capability,
and therefore any increase in traffic will proportionately increase the fraud
score more in any 1 hour bucket than in a corresponding longer period. In
the example in Table 1 below, a single extra unit of traffic represents a
2% rise in the 1 hour bucket but only a 1% rise in the 4 hour bucket:
Table 1: Example velocity calculation

                              1 Hour Bucket     4 Hour Bucket
  Threshold Value             50                100
  Poll 1  Trigger Value       10                10
          Velocity Factor     10/50 = 0.2       10/100 = 0.1
  Poll 2  Trigger Value       10+1              10+1
          Velocity Factor     11/50 = 0.22      11/100 = 0.11
  Difference Relative
  to Threshold                2%                1%

This then gives an additional factor, namely rate of change of traffic
relative to given thresholds, whereby to allow the account fraud scoring
system to prioritise alarms so that the high velocity frauds can be
investigated earlier than slower, and hence potentially less costly,
examples of fraud.
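The velocity factor of Table 1 is a simple ratio, and the table's Poll 2 figures can be reproduced directly:

```python
# Velocity factor as defined above: trigger value divided by the
# threshold value for the relevant time bucket (see Table 1).

def velocity_factor(trigger_value, threshold_value):
    return trigger_value / threshold_value

# Poll 2 of Table 1: one extra unit of traffic on top of 10 units.
one_hour = velocity_factor(11, 50)    # 0.22
four_hour = velocity_factor(11, 100)  # 0.11
print(one_hour, four_hour)
```

The same extra unit moves the 1 hour factor twice as far as the 4 hour factor, which is the proportional sensitivity the text describes.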
In addition to the above, an account age factor may be applied to increase
the risk score associated with new accounts. Over time, the account
operators' knowledge of each customer will improve as more data (such
as payment information, bank details, and call patterns) is received
about normal usage patterns and, as a consequence, it will become less
likely that the customer will attempt to perpetrate a fraud.

For example, for new accounts an account age factor of 1.2 might be
applied, whilst an established account may have a factor of 1.
Furthermore, performance of certain confirmatory functions by the
account owner may be required after certain time periods, and if the
account owner fails to perform these then the account will be suspended.
As well as considering the volume or momentum of the fraud, it is also
relevant to consider the immediate volume of potential fraud present in
any given situation. Therefore a factor indicative of increases in the
bucket size associated with the alarm can be applied to ensure that a
measure of the quantity of fraud is directly represented in the resulting
fraud score, independent of a factor representative of the velocity. A
bucket is a time duration over which an alarm has been raised.
In the normal course of events, the 1 hour bucket alarms will be alarmed
first because they have the smallest thresholds assigned to them. In the
unlikely event that a fraudster manages to perpetrate fraud over a longer
period without triggering an alarm on such a small bucket, then it is
desirable to generate an indication at the earliest opportunity should an
alarm on a larger bucket be triggered.
Therefore if a 168 hour (1 week) alarm is raised, this is of considerable
significance and should be weighted accordingly. Consequently, it is
appropriate to increase the weighting applied to larger time buckets. The
aim is to ensure that such a larger bucket alarm would be proportionately
more prominent dependent upon the size of the time bucket and the
associated risk.
Some alarms do not lend themselves directly to thresholds, but are
concerned simply with whether a specific event has occurred. For
example, in a telecommunications network account system, the Neural
Network Fraudulent, Neural Network Expected, Hot A Numbers,
Hot B Numbers, Overlapping Calls, Single IMEI/Multiple IMSI and Single
IMSI/Multiple IMEI alarms, by their very nature, do not lend themselves to
thresholds. In these cases the only significance is that a particular CDR
has been involved in a particular kind of call or that the profile has
exhibited a particular form of suspect behaviour.
The velocity factor (Trigger Value / Threshold Value) and Bucket Factor
are both superfluous in conjunction with the above alarm types (though
they may for simplicity be assigned nominal values of 1, which when
applied will have a null modifying effect), and the only true modifier is the
Account Age Factor. This is not a serious issue since Hot A & B Numbers,
Single IMEI/Multiple IMSI, and Single IMSI/Multiple IMEI will typically be
allocated a high basic Alarm Capability Factor, since these kinds of alarm
will certainly need to be examined as priorities by a reviewing fraud
analyst.
This approach serves once again to achieve the overall aim that the risk
associated with an alarm be accurately reflected in the final score
allocated to that alarm.
In some cases it is possible that the score resulting directly from the
combinations of factors listed above may exceed reasonable bounds, for
example in cases where many factors each have a high value individually
indicative of high fraud risk. This may give rise to fraud scores well
outside the normal range. Whilst such scores may be left unamended, since
their high value will clearly stand out relative to other scores, it is also
reasonable to take the approach that score values beyond a given
threshold all be treated equally since, with such high scores all indicative
of high fraud risk, there is little benefit in differentiating between them: at
those score levels the difference in score is more likely to be an artefact of
the scoring system than the actual differentiation of fraud risk. The same
approach may be applied to very low scores. In such cases then, scores
may be normalised to lie within fixed bounds: scores lying above or below
those bounds being amended to the maximum or minimum bound as
appropriate. In practice such a situation should not be common due to the
accuracy of the various factor figures given.
For example, an Account Fraud Score may be normalised within the
calculation to ensure that a normalised score between 0 and 100 is
produced. All scores less than or equal to 0 will be mapped to 0; all scores
greater than or equal to 100 will be mapped to 100.
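As a minimal sketch, the normalisation described here is a simple clamp to fixed bounds (the 0 to 100 range is taken from the example above; the function name is ours):

```python
# Clamp an Account Fraud Score to fixed bounds: scores at or below the lower
# bound map to the lower bound, scores at or above the upper bound map to
# the upper bound, and anything in between passes through unchanged.
def normalise_score(raw_score, lower=0.0, upper=100.0):
    return max(lower, min(upper, raw_score))
```

For instance, `normalise_score(-12.5)` and `normalise_score(137.0)` map to 0.0 and 100.0 respectively, while an in-range score such as 63.0 is unchanged.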
A situation may occur where multiple alarms are raised for one account in
one poll and it is desirable to cater for this in determining an Account
Fraud Score. The decision on how to treat multiple alarm breaches is
based on an assessment of whether there is a greater chance of fraud in
an account with multiple threshold breaches or alarms.
It is inappropriate to aggregate the scores produced by multiple alarms
since the increase in risk signalled by multiple alarms is not normally
proportional to the increase in score that would be created by aggregation
of the scores produced.
It is reasonable however to assume that there would be an increase in the
risk associated with an account if another alarm were added to an already
present alarm: that is, for example, the risk associated with a given alarm
is less than the risk associated with two or more of those alarms.
It is also reasonable however to assume that two separate alarms of
different types may or may not be as significant a concern as one other
alarm. The level of concern must be translated to the Account Fraud
Score and should not be influenced by the number of alarms arbitrarily.
That is, the risk associated with an alarm of type A and an alarm of type
B together may be less than, equal to, or greater than the risk associated
with one alarm of type C.
This means that the Account Fraud Score should be increased for
multiple alarms but the risk associated with the highest risk alarm
generated must first be considered. Accordingly, a fixed addition is made
to the score dependent upon the number of alarms as described below:

Number of Alarms x Fixed Multiple Alarm Factor (3)
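Read together with the preceding paragraph, formula (3) suggests a combination rule along the following lines. The factor value used here is illustrative only, and the exact way the fixed addition combines with the highest alarm score is our reading of the text:

```python
# Sketch: take the highest-risk alarm's score as the base, then make a fixed
# addition that grows with the number of alarms raised for the account in
# the poll (formula (3): Number of Alarms x Fixed Multiple Alarm Factor).
FIXED_MULTIPLE_ALARM_FACTOR = 2.0  # illustrative value, not from the specification

def account_fraud_score(alarm_scores):
    if not alarm_scores:
        return 0.0
    return max(alarm_scores) + len(alarm_scores) * FIXED_MULTIPLE_ALARM_FACTOR
```

Three alarms scoring 40, 55 and 30 would give 55 + 3 x 2 = 61, whereas naive aggregation would give 125 and overstate the risk in the way the text cautions against.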
It is beneficial to be able to assign different factors for determination of
fraud scores for each account type as the increase or decrease in the
level of risk associated is not uniform for all account types. For example, a
business account calling PRS (premium rate services) might indicate a
greater risk compared to a residential customer, whereas a business
calling the USA would be of less concern than in a residential account.
In isolation or if combined with Account Type, time slot will add an extra
dimension to the calculation of Account Fraud Score. Different frauds may
be perpetrated at different times of day, with certain traffic types
representing a greater risk at night or at the weekend.
We now consider how to incorporate the neural network alarms in the
Account Fraud Scoring mechanism since, with neural network alarms, a
confidence is calculated as to the accuracy of the decision.
The percentage confidence calculated by the neural network is used as
the alarm capability factor and processed as per other alarms. The
confidence given by the neural network must be integral to the score
given for that alarm, since the confidence is a statement as to the
probability that an account is exhibiting fraudulent behaviour.
The confidence should be the basis for any calculation and accordingly is
used as the prime factor in calculating the Account Fraud Score, the alarm
capability factor. Furthermore, the alarm confidence for fraudulent neural
network alarms must be unaffected in the calculation from alarm
confidence to individual alarm capability factor except for a
standardisation factor which converts the percentage into an alarm priority
proportionate to the other alarm priorities and proportionate to its value in
terms of assessing and quantifying risk. In short, the figure should be
adjusted to ensure it is relative to other alarm capability factors. It is again
true that it would be a detraction from the value of the neural network
confidence calculation process if it were changed more than minimally.
The method for converting the confidence into an Alarm Capability Factor is
as described below:

Alarm Capability Factor = AlarmConfidence(NN(F)) / X (4)
where AlarmConfidence(NN(F)) is the Neural Network Fraudulent Alarm
Confidence and X is a standardisation factor for Neural Network
Fraudulent Alarms.
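Formula (4) can be written directly as code. The value of the standardisation factor X is left open by the text, so the default below is purely an assumption for illustration:

```python
# Convert a Neural Network Fraudulent alarm's percentage confidence into an
# Alarm Capability Factor by dividing by the standardisation factor X,
# leaving the confidence otherwise unaffected (formula (4)).
def nn_fraudulent_capability_factor(alarm_confidence_percent, x=2.0):
    # x is assumed to be 2.0 here for illustration; in practice it would be
    # chosen so the result is proportionate to the other alarm priorities.
    return alarm_confidence_percent / x
```

A 90% confidence with X = 2 yields a capability factor of 45, which is then processed as per any other alarm.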
Neural Network Fraudulent alarms must be assessed with all other alarms
generated, or persisting, for an account in order to ensure that the alarm,
and the account, posing the most risk is prioritised above the remainder.
This proposed "clean" processing keeps the ordering by Account Fraud
Scoring as pure as possible; the assigned confidence is not adjusted by
other factors outside the neural network although it is integrated within the
scoring process.
We have thought through the elements to be included within the Account
Fraud Scoring mechanism, why they are to be included, how they
represent risk and the appropriate method of dealing with each alarm
type. The conclusion is that all alarms are processed through the scoring
mechanism in the same fashion; only the prime figure, the Alarm
Capability Factor, differs: it is a fixed figure for Neural Network Expected
Alarms and Threshold alarms, while for Neural Network Fraudulent Alarms
the confidence is standardised to associate a relational and reasonable
level of significance.
For Neural Network Expected alarms, the confidence values will be 0-20%
as opposed to a range of 0-100% for fraudulent neural network alarms.
These expected alarms tend to indicate behaviour which is suspicious or
unusual although not immediately identifiable as fraud. By their very
nature, they will alert the user to areas of uncertainty. There is no
suggestion that the expected behavioural neural network alarms are not
valid; quite the opposite, since it is important that this task be performed.
The idea that small deviations in the neural network's confidence can be
interpreted is a little spurious, because the neural network is judging how
unfamiliar the behaviour being presented to it is.
Thus there is more to be lost, in terms of complication and processing,
than would be gained by allowing the percentage confidence to affect the
Alarm Capability factor. Indeed it might also prove misleading, reducing
the accuracy of the alarm generation engine. Use of a fixed value for the
Alarm Capability factor, as opposed to a variable level, resolves this issue.
So for Neural Network Fraudulent alarms the percentage confidence is
normalised and integrated into the scoring mechanism; for Neural Network
Expected alarms a fixed Alarm Capability factor is used, as per threshold
alarms.
In summary then, the method takes different alarms or other types of
information, homogenises them through scoring the risk embodied in each
element of the mechanism, taking the highest scored alarm for each
account at any one time and then adding an extra value to the score
dependent upon the number of alarms raised. The resulting value is the
account fraud score.
Any range or device value given herein may be extended or altered
without losing the effect sought, as will be apparent to the skilled person
having an understanding of the teachings herein.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Inactive: IPC from PCS 2022-09-10
Inactive: IPC from PCS 2022-09-10
Application Not Reinstated by Deadline 2006-04-28
Inactive: Dead - RFE never made 2006-04-28
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2006-04-28
Inactive: Status info is complete as of Log entry date 2005-10-05
Inactive: Abandoned - No reply to Office letter 2005-08-18
Inactive: Transfer information requested 2005-05-18
Inactive: Abandon-RFE+Late fee unpaid-Correspondence sent 2005-04-28
Inactive: Single transfer 2005-03-21
Amendment Received - Voluntary Amendment 2005-01-20
Inactive: Transfer information requested 2005-01-12
Inactive: Correspondence - Transfer 2004-08-09
Extension of Time for Taking Action Requirements Determined Compliant 2004-04-05
Letter Sent 2004-04-05
Letter Sent 2004-04-05
Extension of Time for Taking Action Requirements Determined Compliant 2004-04-05
Letter Sent 2004-04-05
Inactive: Extension of time for transfer 2004-03-11
Inactive: Transfer reinstatement 2004-03-11
Reinstatement Requirements Deemed Compliant for All Abandonment Reasons 2004-03-11
Inactive: Delete abandonment 2003-07-10
Inactive: Status info is complete as of Log entry date 2003-04-22
Inactive: Status info is complete as of Log entry date 2003-03-27
Letter Sent 2003-03-26
Inactive: Abandoned - No reply to Office letter 2003-03-11
Inactive: Abandoned - No reply to Office letter 2003-02-12
Amendment Received - Voluntary Amendment 2003-01-30
Inactive: Transfer information requested 2002-12-11
Inactive: Transfer information requested 2002-12-11
Inactive: Office letter 2002-12-11
Inactive: Cover page published 2002-09-18
Amendment Received - Voluntary Amendment 2002-08-16
Inactive: Courtesy letter - Evidence 2002-08-06
Inactive: Notice - National entry - No RFE 2002-07-30
Inactive: Office letter 2002-04-30
Inactive: Single transfer 2002-04-29
Application Received - PCT 2002-03-12
Application Published (Open to Public Inspection) 2000-11-09

Abandonment History

Abandonment Date Reason Reinstatement Date
2006-04-28

Maintenance Fee

The last payment was received on 2005-04-25

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
MF (application, 2nd anniv.) - standard 02 2002-04-29 2001-11-07
Basic national fee - standard 2001-11-07
Reinstatement (national entry) 2001-11-07
Registration of a document 2003-02-13
MF (application, 3rd anniv.) - standard 03 2003-04-28 2003-04-04
Extension of time 2004-03-11
Reinstatement 2004-03-11
MF (application, 4th anniv.) - standard 04 2004-04-28 2004-04-01
MF (application, 5th anniv.) - standard 05 2005-04-28 2005-04-25
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
CEREBRUS SOLUTIONS LIMITED
Past Owners on Record
BEN GRADY
GRAHAM JOLLIFFE
PHILIP WILLIAM HOBSON
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Representative drawing 2002-08-05 1 5
Cover Page 2002-09-16 1 33
Abstract 2001-11-07 1 48
Claims 2001-11-07 3 104
Description 2001-11-07 16 759
Drawings 2001-11-07 6 191
Notice of National Entry 2002-07-30 1 208
Request for evidence or missing transfer 2002-11-12 1 105
Courtesy - Abandonment Letter (Office letter) 2003-04-15 1 167
Notice of Reinstatement 2004-04-05 1 170
Reminder - Request for Examination 2004-12-30 1 115
Courtesy - Abandonment Letter (Request for Examination) 2005-07-07 1 167
Courtesy - Abandonment Letter (Office letter) 2005-09-29 1 166
Courtesy - Abandonment Letter (Maintenance Fee) 2006-06-27 1 175
PCT 2001-11-07 1 35
PCT 2001-11-29 1 54
PCT 2001-11-08 1 37
PCT 2002-04-26 1 20
Correspondence 2002-07-30 1 31
PCT 2001-11-08 2 73
Correspondence 2002-12-11 1 20
Correspondence 2004-03-11 1 37
Correspondence 2004-04-08 1 15
Correspondence 2005-01-12 1 22
Correspondence 2005-05-18 1 23