Patent 2618938 Summary

(12) Patent: (11) CA 2618938
(54) English Title: DATA CONSISTENCY CONTROL METHOD AND SOFTWARE FOR A DISTRIBUTED REPLICATED DATABASE SYSTEM
(54) French Title: METHODE ET LOGICIEL DE CONTROLE DE LA COHERENCE DES DONNEES D'UN SYSTEME DE BASES DE DONNEES REPARTIES ET DUPLIQUEES
Status: Granted and Issued
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 16/178 (2019.01)
  • G06F 16/11 (2019.01)
  • G06F 16/18 (2019.01)
(72) Inventors:
  • WALL, JOHN (Canada)
  • LOESER, JOHN PAUL (Canada)
  • TRAN, KHOA (Canada)
  • STROE, MARIUS DAN (Canada)
(73) Owners:
  • SYMCOR INC.
(71) Applicants:
  • SYMCOR INC. (Canada)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued: 2018-04-10
(22) Filed Date: 2008-01-24
(41) Open to Public Inspection: 2009-07-24
Examination requested: 2013-01-15
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract


A distributed replicated database system having a plurality of federated database systems, and methods of updating and reading database records from the distributed replicated database system, are disclosed. Each federated database system contains a complete copy of a database. Moreover, each federated database system comprises at least one server divided into at least one logical partition. A logical partition contains records of the database, and all logical partitions in a federated database system cumulatively store all records in the database. A data structure is maintained which indicates whether the records in a given logical partition are up-to-date. When an update or insert request is received, the data structure is modified to indicate that all logical partitions storing a copy of the record to be updated, or partitions into which the new record is to be inserted, are not up-to-date. When the record has been updated or inserted, the data structure is modified to indicate that the logical partition storing the record is up-to-date. When a read request is received, the record is read from an up-to-date logical partition storing the record.


French Abstract

Un système de bases de données dupliquées distribué comportant une pluralité de systèmes de bases de données fédérés et des méthodes de mise à jour et de lecture des enregistrements de base de données à partir du système de bases de données dupliquées distribué sont divulgués. Chaque système de bases de données fédéré contient une copie complète d'une base de données. De plus, chaque système de bases de données fédéré comprend au moins un serveur divisé en au moins une partition logique. Une partition logique contient des enregistrements de la base de données et toutes les partitions logiques dans un système de bases de données fédéré enregistrent de manière cumulative tous les enregistrements dans la base de données. Une structure de données est maintenue qui indique que les enregistrements d'une partition logique donnée sont à jour. Lorsqu'une demande de mise à jour ou d'insertion est reçue, la structure de données est modifiée pour indiquer que toutes les partitions logiques enregistrant une copie de l'enregistrement doivent être mises à jour ou les partitions dans lesquelles le nouvel enregistrement doit être inséré ne doivent pas être mises à jour. Lorsque l'enregistrement a été mis à jour ou inséré, la structure de données est modifiée pour indiquer que la partition logique dans laquelle l'enregistrement est enregistré doit être mise à jour. Lorsqu'une demande de lecture est reçue, l'enregistrement est lu à partir de la partition logique mise à jour stockant l'enregistrement.

Claims

Note: Claims are shown in the official language in which they were submitted.


WHAT IS CLAIMED IS:
1. A distributed replicated database system storing a database having a
plurality of
records, said database system comprising:
a plurality of federated database systems each storing a complete copy of said
database, each federated database system comprising at least one server with
said at least one server divided into at least one logical partition, said at
least one
logical partition containing at least some of said plurality of records in
said
database and all logical partitions at a federated database system
cumulatively
storing all of said plurality of records in said database; and
a computing device hosting an application for updating said distributed
replicated
database system so that changes to said database are first stored in at least
one
copy of said database and then propagated to any other copies of said
database,
said computing device in communication with a computer readable medium
storing a data structure, said data structure containing an indicator for each
logical partition in each of said federated database systems of said plurality
of
federated database systems, each indicator indicating whether all those
records
in a given logical partition have been updated to reflect changes to those of
said
plurality of records in said database that are stored in said given logical
partition,
as a change in said database is propagated to each copy of said database in
said plurality of federated database systems,
wherein read requests for a given record may be serviced from any one of said plurality of federated database systems having a logical partition containing said
said
given record where the indicator for that logical partition indicates that all
records
in that logical partition have been updated to reflect changes to those of
said
plurality of records in said database that are stored in that logical
partition even if
the indicator for another logical partition in another federated database
system
containing said given record indicates that all records in said another logical partition have not been updated to reflect changes to those of said plurality
of
records in said database stored in said another logical partition.
2. The distributed replicated database system of claim 1, wherein one of said
plurality
of federated database systems comprises said computer readable medium.
3. The distributed replicated database system of claim 1, wherein a copy of
said data
structure is stored at each of said plurality of federated database systems
and
updates to a data structure at one of said plurality of federated database
systems
are propagated to other copies of said data structure.
4. The distributed replicated database system of claim 1, wherein each of said
at least
one server hosts a database management system.
5. The distributed replicated database system of claim 1, wherein said
plurality of said
federated database systems are interconnected by a computer communications
network.
6. The distributed replicated database system of claim 1, wherein said data
structure
contains an association between said at least one logical partition and said
at least
one server.
7. The distributed replicated database system of claim 1, wherein said data
structure
contains an indicator of the records of said database that are contained in
each of
said at least one logical partition, across said federated database systems.
8. A method of managing a distributed replicated database system storing a
database
having a plurality of records, said database system comprising:
a plurality of federated database systems each storing a complete copy of said
database, each federated database system comprising at least one server with
said at least one server divided into at least one logical partition, said at least
one logical
partition containing at least some of said plurality of records in said
database and all
logical partitions at a federated database system cumulatively storing all of
said plurality
of records in said database; and
a computing device hosting an application for updating said distributed
replicated
database system so that changes in said database are propagated to each copy
of said
database; said method comprising:
maintaining at said computing device a data structure, said data structure
containing an
indicator for each logical partition in said database system across said
plurality of
federated database systems, each indicator indicating whether all those
records in a
given logical partition have been updated to reflect changes to those of said
plurality of
records in said database that are stored in said given logical partition, as a
change in
said database is propagated to each copy of said database in said plurality of
federated
database systems;
servicing read requests for a given record to said database from one of said plurality of federated database systems having a logical partition containing said given
record where
the indicator for that logical partition indicates that all records in that
logical partition
have been updated to reflect changes to those of said plurality of records in
said
database that are stored in that logical partition and the indicator for
another logical
partition in another federated database system containing said given record
indicates
that records in said another logical partition have not been updated to
reflect changes to
those of said plurality of records in said database stored in said another
logical partition.
9. The method of claim 8, further comprising:
receiving an instruction to update one of said plurality of records;
identifying from said data structure all logical partitions across said
plurality of
federated database systems in said replicated database system storing said one
of
said plurality of records;
modifying said data structure to indicate that said all logical partitions
across said
plurality of federated database systems storing said one of said plurality of
records
are not up-to-date;
updating said record at a first federated database system; and
modifying said data structure to indicate that the logical partition at said
first
federated database system storing said record is up-to-date.
10. The method of claim 9, further comprising:
updating said record at all other logical partitions that are not up-to-date;
and
modifying said data structure to indicate that all other logical partitions across said plurality of federated database systems storing said record are up-to-date.
11. The method of claim 9, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.
12. The method of claim 9, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.

13. The method of claim 10, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.
14. The method of claim 10, wherein said updating of said record at all other
logical
partitions that are not up-to-date comprises sending an update request to the
federated
database systems hosting said not up-to-date partitions.
15. The method of claim 8, further comprising:
at a first federated database system, receiving an instruction to read one of
said
plurality of records;
identifying an up-to-date logical partition for servicing said read, said
identifying
comprising selecting, based on said data structure, a logical partition
storing said
record where the indicator for that logical partition indicates that all
records in that
logical partition have been updated to reflect changes to those of said
plurality of
records in said database that are stored in that logical partition; and
reading said one of said plurality of records from a server storing said up-to-date logical partition.
16. The method of claim 15, wherein where said up-to-date logical partition
storing said
record is not located at said first federated database system, said reading
said record
comprises sending a read request to the federated database system hosting said up-to-date logical partition.
17. The method of claim 8 further comprising inserting a record into said
distributed
replicated database system, said method comprising:
receiving an instruction to insert one of said plurality of records;
identifying all logical partitions in said replicated database system into
which said
one of said plurality of records is to be inserted;
modifying said data structure to indicate that said all logical partitions
across said
plurality of federated database systems into which said one of said plurality
of
records is to be inserted are not up-to-date;
inserting said record at a first federated database system; and
modifying said data structure to indicate that the logical partition at said
first
federated database system into which said record was inserted is up-to-date.
18. The method of claim 17, further comprising:
inserting said record at all other logical partitions that are not up-to-date;
and
modifying said data structure to indicate that all other logical partitions into which said record was inserted are up-to-date.
19. The method of claim 17, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.
20. The method of claim 17, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.

21. The method of claim 18, wherein a copy of said data structure is stored at
each of
said plurality of federated database systems, and wherein said modifying said
data
structure further comprises propagating said modification to said data
structure to all
copies of said data structure.
22. The method of claim 18, wherein said inserting of said record at all other
logical
partitions that are not up-to-date comprises sending an insert request to the
federated
database systems hosting said not up-to-date partitions.
23. Computer readable medium storing processor executable instructions that
when
loaded at a computing device comprising a processor, cause said computing
device to
perform the method of claim 8.
24. Computer readable medium storing processor executable instructions that
when
loaded at a computing device comprising a processor, cause said computing
device to
perform the method of claim 15.
25. Computer readable medium storing processor executable instructions that
when
loaded at a computing device comprising a processor, cause said computing
device to
perform the method of claim 17.
26. The distributed replicated database system of claim 1, wherein said
indicator for
each partition further indicates whether records in a given logical partition
are currently
being updated, and that the given partition is therefore not available.
27. The method of claim 8, wherein each said indicator further indicates
whether
records in a given logical partition are currently being updated, and that the
given logical
partition is therefore not available.

28. The method of claim 15, wherein each said indicator further indicates
whether
records in a given logical partition are currently being updated, and that the
given logical
partition is therefore not available.
29. The method of claim 17, wherein each said indicator further indicates
whether
records in a given logical partition are currently being updated, and that the
given logical
partition is therefore not available.

Description

Note: Descriptions are shown in the official language in which they were submitted.


DATA CONSISTENCY CONTROL METHOD AND SOFTWARE FOR A
DISTRIBUTED REPLICATED DATABASE SYSTEM
FIELD OF THE INVENTION
[0001] The present invention relates generally to concurrency control in a
distributed replicated database system, and more particularly to maintaining
data
consistency in a replicated database system comprising a plurality of
federated
databases.
BACKGROUND OF THE INVENTION
[0002] Modern computer data storage applications rely on the storage of high
volumes of data in a redundant, fault tolerant manner. For example, archiving
of
document images requires such storage.
[0003] To this end, databases that are distributed and allow for redundancy
are known. Typically, the database is hosted on multiple physical computers,
and is either replicated or mirrored. In this way, multiple instances of the
data or
even the entire database may be maintained. In the event one instance of the
database fails, or is lost, the other instance may be accessed.
[0004] One known database architecture designed for the storage of large
amounts of data, integrates multiple autonomous database systems into a single database, referred to as a federated database. In this way, conventional
smaller databases using readily available software and hardware may be
arranged to co-operate and be combined to form a single, larger logical
database. Federated databases are, for example, described in McLeod and Heimbigner (1985), "A federated architecture for information management", ACM Transactions on Information Systems, Vol. 3, Issue 3: 253-278; Sheth and Larson (1990), "Federated Database Systems for Managing Distributed, Heterogeneous, and Autonomous Databases", ACM Computing Surveys, Vol. 22, No. 3: 183-236; and Barclay, T., Gray, J., and Chong, W., "TerraServer Bricks - A High Availability Cluster Alternative" (2004), Microsoft Research Technical Report MSR-TR-2004-107.
[0005] As data is replicated across multiple instances of the databases,
maintaining
coherency between the instances of the database, and ensuring that only up-to-date data is used, presents challenges. These challenges become more pronounced in a federated database as the number of autonomous database systems increases.
[0006] Accordingly, there remains a need for methods, and software for
maintaining
data consistency in a replicated database system formed from one or more
federated
databases.
SUMMARY OF THE INVENTION
[0007] In accordance with one aspect, there is provided a distributed
replicated
database system storing a database having a plurality of records, the database
system
comprising: a plurality of federated database systems each storing a complete
copy of
the database, each federated database system comprising at least one server
with the
at least one server divided into at least one logical partition, the at least
one logical
partition containing at least some of the plurality of records in the database
and all
logical partitions at a federated database system cumulatively storing all of
the plurality
of records in the database; and a computing device hosting an application for
updating
the distributed replicated database system so that changes to the database are
first
stored in at least one copy of the database and then propagated to any other
copies of
the database, the computing device in communication with a computer readable
medium storing a data structure, the data structure containing an indicator
for each
logical partition in each of the federated database systems of the plurality
of federated
database systems, each indicator indicating whether all those records in a
given logical partition have been updated to reflect changes to those of the plurality of
records in the
database that are stored in the given logical partition, as a change in the
database is
propagated to each copy of the database in the plurality of federated database
systems,
wherein read requests for a given record may be serviced from any one of the
plurality
of federated database systems having a logical partition containing the given
record
where the indicator for that logical partition indicates that all records in
that logical
partition have been updated to reflect changes to those of the plurality of
records in the
database that are stored in that logical partition even if the indicator for
another logical
partition in another federated database system containing the given record
indicates
that all records in the another logical partition have not been updated to
reflect changes
to those of the plurality of records in the database stored in the another
logical partition.
[0008] In
accordance with another aspect, there is provided a method of managing a
distributed replicated database system storing a database having a plurality
of records,
the database system comprising: a plurality of federated database systems each
storing
a complete copy of the database, each federated database system comprising at
least
one server with the at least one server divided into at least one logical
partition, the at
least one logical partition containing at least some of the plurality of
records in the
database and all logical partitions at a federated database system
cumulatively storing
all of the plurality of records in the database; and a computing device
hosting an
application for updating the distributed replicated database system so that
changes in
the database are propagated to each copy of the database; the method
comprising:
maintaining at the computing device a data structure, the data structure
containing an
indicator for each logical partition in the database system across the
plurality of
federated database systems, each indicator indicating whether all those
records in a
given logical partition have been updated to reflect changes to those of the
plurality of
records in the database that are stored in the given logical partition, as a
change in the
database is propagated to each copy of the database in the plurality of
federated
database systems; servicing read requests for a given record to the database
from one
of the plurality of federated database systems having a logical partition
containing the
given record where the indicator for that logical partition indicates that all records in that logical partition
have been updated
to reflect changes to those of the plurality of records in the database that
are stored in
that logical partition and the indicator for another logical partition in
another federated
database system containing the given record indicates that records in the
another
logical partition have not been updated to reflect changes to those of the
plurality of
records in the database stored in the another logical partition.
[0011] Other aspects and features of the present invention will become
apparent to
those of ordinary skill in the art upon review of the following description of
specific
embodiments of the invention in conjunction with the accompanying figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] In the figures which illustrate by way of example only, embodiments of the present invention,
[0013] FIG. 1 is a simplified block diagram of a database system, including
a
plurality of federated database systems, exemplary of an embodiment of the
present invention;
[0014] FIG. 2 is a block diagram depicting the contents of temporary and
persistent memory storage of a computer system in the database system of FIG.
1;
[0015] FIG. 3 is a block diagram depicting the contents of temporary and
persistent memory storage of a computer system hosting various database
applications in the database system of FIG. 1;
[0016] FIG. 4 is a block diagram illustrating the relationship between the
data
stored at selected federated database systems in the database system of FIG.
1;
[0017] FIG. 5 is a block diagram illustrating exemplary logical
partitioning of
data to be stored in a federated database system in the database system of
FIG.
1;
[0018] FIG. 6 is a state diagram illustrating exemplary states of a logical
partition in the database system of FIG.1 ;
[0019] FIGS. 7A to 7E illustrate an exemplary format of a state table for
the
database system of FIG. 1;
[0020] FIGS. 8A to 8E illustrate an exemplary format of a table associating
database records with logical partitions in the database system of FIG. 1;
[0021] FIG. 9 is a flow chart depicting operation of the computer system of
FIG. 3 upon receiving a read request;
[0022] FIG. 10 is a flow chart depicting operation of the computer system
of
FIG. 3 upon receiving an update request;

[0023] FIG. 11 is a flow chart depicting operation of the computer system
of
FIG. 3 upon receiving an update request propagated from a computer system of
FIG. 2 or FIG. 3;
[0024] FIG. 12 illustrates an exemplary progression of an update of a
database record across logical partitions in the database system of FIG. 1;
[0025] FIG. 13 is a flow chart depicting operation of the computer system
of
FIG. 3 upon receiving an insert request; and
[0026] FIG. 14 is a flow chart depicting operation of the computer system
of
FIG. 3 upon receiving an insert request propagated from a computer system of
FIG. 2 or FIG. 3.
DETAILED DESCRIPTION
[0027] FIG. 1 depicts a database system 10, exemplary of an embodiment of
the present invention. Database system 10 is formed from a plurality of
independent federated database systems, namely, federated database system 1,
federated database system 2, federated database system 3 and federated
database system 4, all interconnected by way of computer communications
network 100. Network 100 may be a wide area network or may be a local area
network (packet switched, token ring, or other computer communications
network, known to those of ordinary skill). Conveniently, federated database
systems 1, 2, 3 and 4 may be interconnected by way of the Internet and may be
geographically remote from one another.
[0028] As may be appreciated by those of ordinary skill, a database system
includes a database (i.e. a structured collection of data ("records")) and
software
for managing the database, typically referred to as a database management
system (DBMS). Known DBMSs include, for example, Oracle and Microsoft SQL
database systems. A database system may be centralized, with a single DBMS managing a single database residing on the same computer system. Likewise, a
database system may be distributed, with a single DBMS managing multiple
databases possibly residing on several different computer systems. Within a
database system, data may be replicated, to avoid data loss in the presence of
a
physical or logical failure of a component of the database system; such a system
system
may be considered a replicated database system.
[0029] A federated database system is a collection of several autonomous
database systems managed by a federated database management system
(FDBMS) software. The component databases in a federated database system
may be autonomous (i.e. each component database may function as a stand-
alone database system) but may cooperate in the federated database system to
allow sharing of data between component database systems in the federated
database system.
[0030] In a federated database, data is accessed through the FDBMS, which
in turn makes use of one or more of the DBMS of the component databases. To
end users, the presence of the component databases (and DBMS) is typically
entirely transparent.
[0031] In the
database system of FIG. 1, each of federated database systems
1, 2, 3 and 4 may be hosted on a plurality of computer systems 22 hosting
component autonomous database systems, and one or more computer systems
24 hosting the FDBMS controlling the federated database system. For example,
federated database system 1 may have a plurality of computer systems 22-1
hosting component autonomous database systems. Each of the component
autonomous database systems hosted on computer systems 22-1 may
cooperate to form federated database system 1. Computer system 24-1 may
host the FDBMS through which the component federated databases residing on
computer systems 22-1 may be accessed. Computer systems 22 and 24 in each
of federated database system 1, 2, 3 and 4 may be interconnected by way of a
computer communications network 102, such as a local area network or a wide area network.
[0032] As depicted, exemplary database system 10 has four component
federated database systems 1, 2, 3, 4; however, it may be appreciated that
database system 10 may have fewer or more than four component database
systems. As will become apparent, each federated database system 1, 2, 3 and
4 hosts an instance of a database. As such, the entire database is replicated
four times, once on each federated database system 1, 2, 3 and 4.
[0033] Moreover, as depicted, database systems 1, 2, 3 and 4 may be
autonomous federated database systems, each having its own FDBMS.
However, it may be appreciated that the functionality of one FDBMS may be
shared across more than one of database systems 1, 2, 3 and 4, such that one
FDBMS may function as an FDBMS for more than one of the database systems.
[0034] As
illustrated in FIG. 2, each of computer systems 22-1 to 22-4 of FIG.
1 may be a conventional computer system having a processor 25, a network
interface 21, storage device 20, and memory 27 hosting operating system 23,
such as Windows XP, Vista, Linux, or Apple OSX, database application 26, other
applications 28 and application data 29. Other applications 28 may include
conventional applications typically running on conventional computer systems
(e.g. word processing applications). Database application 26 may be a database
management system (DBMS). Known database management systems include,
for example, Oracle™, Microsoft SQL™, and xBase database systems.
Processor 25 may execute operating system 23, database application 26 and
other applications 28. Network interface 21 may be any conventional piece of
hardware, such as an Ethernet card, token ring card, etc. that allows computer
system(s) 22 to communicate over networks 100 and 102.
[0035] Each of computer systems 24-1 to 24-4 may be similar to computer
system(s) 22 and may also be a conventional computer system (FIG. 3). Briefly,
computer system 24 may include a processor 30, a network interface 31, storage
device 32, and memory 37 hosting operating system 33, other applications 39 and application data 36. However, computer system 24 may further host
load/update application 34 and controller application 38.
[0036] In particular, controller application 38 and load/update application
34
may receive, over network 100, queries and update requests directed to the copy
of database 40 stored at database systems 1, 2, 3 and 4 respectively. Also,
data
may be sent to and from database systems 1, 2, 3, and 4 over network 100 by
way of load/update application 34 and controller application 38. Specifically,
controller application 38 may service read requests and load/update
application
34 may service insert and update requests. In the course of servicing inserts
and
updates, load/update application 34 may change states in the system state
table(s), as will be further described below.
[0037] As discussed above, data may sometimes be replicated in a database
system. A replicated database system may be desired so that, for example, the
database system is tolerant of hardware failures, for example, hard disk
failures
resulting in corruption of data. Moreover, it may be desired to replicate data
so
that one or more other copies of data may be accessible should the primary
copy, or the copy that is to be accessed, be unavailable.
[0038] FIG. 4 depicts an exemplary data replication scheme across selected
federated database systems in database system 10 (namely, database systems
1, 2, and 3). In an exemplary embodiment of database system 10, each of
federated database systems 1, 2 and 3 stores a complete instance of a database
40. Thus, data is replicated across database systems 1, 2 and 3. However, data
may also be replicated within each of database systems 1, 2 and 3 as described
below.
[0039] Exemplary database system 1 includes computer systems 22-1a, 22-1b, 22-1c and 22-1d (FIG. 4). Database 40 may be distributed across computer systems 22-1a and 22-1b such that computer systems 22-1a and 22-1b each store some but not all of database 40, but together, computer systems 22-1a and 22-1b store all of database 40. As noted, computer system 22-1a may itself host an autonomous centralized database system, controlled by database application 26. Similarly, computer system 22-1b may itself host an autonomous centralized database system, controlled by database application 26. The two autonomous database systems hosted on computer systems 22-1a and 22-1b may be linked together, over network 102-1, to form federated database system 1 (FIG. 1).
[0040] Data on computer systems 22-1a and 22-1b may be mirrored on
computer systems 22-1c and 22-1d. As may be appreciated by those of ordinary
skill, data mirroring is a technique wherein identical copies of data are
stored on
separate storage devices (e.g. the second physical copy of the data is
identical to
the primary copy). Also, as one of ordinary skill may appreciate, multiple
techniques for mirroring data exist. For example, data may be mirrored to
storage
devices in computer systems 22-1c and 22-1d using the redundant array of
independent drives (RAID) scheme. Thus, in this manner, database system 1
itself hosts two complete copies of database 40.
[0041] Moreover, data may also be mirrored across database systems 1 and
2, allowing each of computer systems 22-2a and 22-2b to store a complete copy
of database 40 with data on computer systems 22-2a and 22-2b mirrored on
storage devices of computer systems 22-2c and 22-2d. Thus, database system 2
also contains two complete copies of database 40. Conveniently, transactions
performed on a copy of database 40 may immediately be propagated to mirrored
copies of database 40 in a manner known to those skilled in the art.
[0042] Federated database system 3, at computer systems 22-3a, 22-3b, 22-
3c and 22-3d, also stores a complete copy of database 40; however, database
40 at database system 3 may be structured differently than at database systems
1 and 2. Specifically, the data of database 40 may be physically distributed
across computer systems 22-3a-d differently from how it is distributed across
computer systems 22-1a and 22-1b (and mirrored systems 22-1c and 22-1d, 22-
2a and 22-2b, and 22-2c and 22-2d). An exemplary way of maintaining
coherency of the copy of database 40 in database system 1 with the copy of database 40 in database system 3 is for database system 3 to perform the same
database transactions (e.g. inserts, updates and deletes) on its copy of
database
40 as was performed at database system 1. As a result, the copy of database 40
at database system 3 may contain identical data to the copy of database 40 at
database system 1, though the data may be physically structured differently.
[0043] As previously noted, within each of database systems 1, 2, 3 and 4,
database 40 may be physically distributed across several computer systems 22,
and more specifically, across several different autonomous database systems.
Schemes for distributing a database across several different computer systems,
and across several database systems, may be known to those of ordinary skill.
[0044] In addition to physically distributing the data across several
computer
systems 22, database 40 may also be logically partitioned. Conveniently, at a
particular physical computer system, the data may be further partitioned into
logical partitions. Consequently, database 40 may be both physically and
logically distributed as further detailed below.
[0045] For purposes of illustration, FIG. 5 depicts exemplary partitioning
of
data in database 40, into a plurality of logical partitions 52a-52f (i.e. six
logical
partitions), and physically distributing those logical partitions 52a-52f
across two
computer systems, such as computer systems 22-1a and 22-1b (FIG. 4). In the
depicted embodiment, exemplary database 40 may be a structured collection of
financial statement data 50. Further, financial statement data 50 may have at
least two attributes: a client name and a transaction date. Using these two
attributes, statement data 50 may be partitioned into logical partitions 52a-
52f as
follows.
[0046] First, statement data 50, containing financial transaction data for
two
clients, Client 1 and Client 2, may be split or divided by client name (FIG.
5,
S500). Statement data 50 may further be divided into two groups, one
corresponding to Client 1's statement data and the other corresponding to
Client
2's statement data.

[0047] Client 1's statement data may be further divided by date (FIG. 5,
S502), and likewise Client 2's statement data (FIG. 5, S504). For example,
each
of Client 1 and Client 2's statement data may be divided into transactions
taking
place during the first third of the year (placed into logical partition 52a
for Client 1
or partition 52d for Client 2), the second third of the year (placed into
logical
partition 52b for Client 1 or partition 52e for Client 2) and the last third
of the year
(placed into logical partition 52c for Client 1 or partition 52f for Client
2). Logical
partitions 52a to 52c may be stored on computer system 22-1a, and logical
partitions 52d to 52f stored on computer system 22-1b.
[0048] As previously explained, computer systems 22-1a and 22-1b may each
host an autonomous database system. Thus, records contained in logical
partitions 52a to 52c may be stored in an autonomous database system hosted
on computer system 22-1a. Likewise, records contained in logical partitions
52d
to 52f may be stored in an autonomous database system hosted on computer
system 22-1 b. In this manner, statement data 50 is logically distributed
across six
logical partitions (i.e. 52a-52f) and those six logical partitions physically
distributed across two computer systems (i.e. 22-la and 22-lb), each hosting
an
autonomous database system. It may also be apparent that each of logical
partitions 52a-52f contains a subset of the records of database 40, and
cumulatively, partitions 52a-52f contain all of the records of database 40.
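By way of illustration only, the partitioning of FIG. 5 might be expressed as the following minimal Python sketch; the attribute names, the date_third helper and the PARTITION_MAP layout are assumptions for illustration, not part of the patent:

    from datetime import date

    # Map (client, third-of-year) to the logical partitions 52a-52f of FIG. 5.
    # Client 1's partitions (52a-52c) reside on computer system 22-1a and
    # Client 2's partitions (52d-52f) on computer system 22-1b.
    PARTITION_MAP = {
        ("Client 1", 0): "52a", ("Client 1", 1): "52b", ("Client 1", 2): "52c",
        ("Client 2", 0): "52d", ("Client 2", 1): "52e", ("Client 2", 2): "52f",
    }

    def date_third(d: date) -> int:
        """Return 0, 1 or 2 for the first, second or last third of the year."""
        return (d.month - 1) // 4

    def assign_partition(client: str, txn_date: date) -> str:
        """Pick the logical partition for one statement record (S500-S504)."""
        return PARTITION_MAP[(client, date_third(txn_date))]

    # Example: a Client 2 transaction in August falls in the second third of
    # the year and is therefore placed in logical partition 52e.
    assert assign_partition("Client 2", date(2008, 8, 15)) == "52e"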
[0049] Moreover, as shown in FIG. 4, the six logical partitions 52a-52f may
be
physically distributed across four computer systems 22-3a, 22-3b, 22-3c, and
22-
3d in database system 3. Of course, other manners of dividing and distributing
statement data 50 may also be employed.
[0050] As noted, maintaining concurrency in distributed databases is often
difficult. Specifically, multiple users may attempt to access database 40 at
one
time and difficulties may arise when a user attempts to read a record in
database
40 at the same time that an update operation is occurring on that record.
Different algorithms or techniques to address these difficulties have been proposed and are known to those of ordinary skill. For example, locks may be
employed to lock the record that is being updated so that no other users may
access (e.g. read) the record. Disadvantageously, if there is only one copy of
the
record in the database system, all other users must wait for the update
operation
to complete before their read requests may be completed.
[0051] In the case of a replicated distributed database, when a particular
record is being updated, other users trying to access (e.g. read) that record
may
be directed to other copies of the record in database system 10. However, when
a particular copy of a record is being, or has been, updated but that update
has
not yet been propagated to other copies of the record, those other copies of
the
record may not reflect the most recent update to the record.
[0052] To this end, a method of concurrency control exemplary of an
embodiment of the present invention will be explained with reference to FIGS.
6
to 14. Significantly, one of three states may be associated with each logical
partition in database system 10. A state diagram showing the three states,
active
(A) state 602, unavailable (U) state 604, and dirty (D) state 606, and state
transitions is illustrated in FIG. 6.
[0053] A partition may be in an active state 602 when its component records
reflect the most recent update to database 40. A partition may be in the dirty
state 606 when an update has been performed at an equivalent partition (i.e.
another partition containing a copy of at least one record in the partition)
but the
partition has not yet been updated (i.e. the component records in the
partition do
not reflect the most recent update to database 40), or when a new record is
awaiting insertion into the partition. A partition may be in the unavailable
state
when one or more of its component records is in the process of being updated
or
new records are being inserted. When a partition is in the unavailable state
604,
component records may not be accessed (e.g. read).
[0054] A partition may remain in an active state 602 while its component
records still reflect the most recent update to database 40. Moreover, a
partition may transition from the active state 602 to the unavailable state 604 when an
update operation is initiated on one or more of its component records or
records
are being inserted into the partition. A partition may transition from the
active
state to the dirty state 606 when an update is being, or has been, performed
on
an equivalent partition (i.e. another partition containing a copy of the
subject
record) but the partition has not yet performed the update, or when the
partition is
awaiting insertion of new records.
[0055] A partition may remain in a dirty state 606 while the
partition continues
to not reflect the most recent update to database 40 (i.e. either because the
partition has not yet been updated, or because the partition is awaiting
insertion
of records). A partition may transition from the dirty state 606 to an
unavailable
state 604 when an update or insert operation is initiated on one or more
records
at that partition.
[0056] A partition may remain in an unavailable state 604 when
the insert or
update operation on one or more of its component records remains in progress.
Upon completion of the update/insert, the partition may transition from an
unavailable state 604 back to an active state 602.
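The state diagram of FIG. 6 might be captured in code along the following lines (an illustrative Python sketch; the enum and transition-table names are assumptions):

    from enum import Enum

    class State(Enum):
        ACTIVE = "A"       # 602: records reflect the most recent update
        UNAVAILABLE = "U"  # 604: an update/insert is in progress; no reads
        DIRTY = "D"        # 606: an equivalent partition was updated first

    # Allowed transitions per FIG. 6: active -> unavailable or dirty,
    # dirty -> unavailable, and unavailable -> active upon completion.
    TRANSITIONS = {
        State.ACTIVE: {State.UNAVAILABLE, State.DIRTY},
        State.DIRTY: {State.UNAVAILABLE},
        State.UNAVAILABLE: {State.ACTIVE},
    }

    def transition(current: State, new: State) -> State:
        """Validate and apply a state transition for a logical partition."""
        if new not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {new}")
        return new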
[0057] Conveniently, the state of each logical partition in
database system 10
may be tracked using state tables. FIGS. 7A to 7E illustrate an exemplary
format
of a state table for the partitions in database systems 1-5. Specifically,
each of
state tables 70, 72 and 74 may associate each partition with a server and a
state.
[0058] As more specifically detailed in FIG. 7A, state table 70
for database
system 1 has six logical partitions (identified as Partition 1 to Partition 6) distributed across two computer systems, 22-1a (identified as Server 1) and 22-1b (identified as Server 2). Each computer system on which a logical
partition
resides may act as a server, which server performs the function of accepting operation requests (e.g. read/update/insert requests) and returning the
requested
information (e.g. database records). Thus, if it is desired, for example, to
access
Partition 1 in database system 1, an access request may be directed to Server
1 in database system 1 over networks 100 and/or network 102. Hereinafter, a
logical
partition may be considered to reside on a particular physical server (i.e.
"Server X" is a
way of identifying which computer system the records of a partition reside
upon). Also,
as previously explained, each of the partitions may be in one of three states:
active,
dirty and unavailable.
[0059] A component record of a partition may be read via controller
application 38,
and updated via load/update application 34 for the database system. Moreover,
a new
record may be inserted into a partition via load/update application 34. In
particular,
controller applications 38 and load/update application 34 may pass the
read/update/insert request to the database application 26 of the database
system
storing the record.
[0060] Returning to FIG. 7A, the six logical partitions of the instance of
database 40
stored in database system 1 is distributed across two servers (corresponding
to
computer systems 22-la and 22-1b in FIG. 5). More specifically, row 76 of
state table
70 specifies that Partition 1 resides on Server 1 and is in the active state;
row 78
specifies that Partition 2 resides on Server 1 and is in the dirty state; row
80 specifies
that Partition 3 resides on Server 1 and is in the active state; row 82
specifies that
Partition 4 resides on Server 2 and is in the unavailable state; row 84
specifies that
Partition 5 resides on Server 2 and is in the active state; and row 86
specifies that
Partition 6 resides on Server 2 and is in the dirty state.
[0061] FIG. 7B, depicting state table 72 for database system 2, is
identical to state
table 70 because the copy of database 40 stored in database system 2 is a
mirrored
copy of database 40 stored in database system 1.
[0062] FIG. 7C depicts state table 74 for database system 3. As previously
discussed, the instance of database 40 stored in database system 3 may be
structured
differently than at database system 1. Specifically, the six logical
partitions of database
40 may be distributed over four servers. As shown, the instances, at database
system 3, of Partitions 1, 2, 3 and 4 are in the active state, Partition 5 in the
dirty state and
Partition 6 in the unavailable state.

[0063] FIG. 7D depicts state table 81 for database system 4. The six
logical
partitions of the instance of database 40 stored at database system 4 may be
distributed over six servers. As shown, Partitions 1, 2, 3 and 5 are in the
active
state, and Partitions 4 and 6 in the dirty and unavailable states
respectively.
[0064] FIG. 7E depicts state table 83 for database system 5. The six
logical
partitions of the instance of database 40 stored at database system 5 may be
distributed across three servers. Partitions 1 and 5 are in the active state,
Partitions 2, 3 and 4 are in the dirty state and Partition 6 is in the
unavailable
state.
[0065] It may be appreciated that FIGS. 7A to 7E depict the states of
partitions in database systems 1, 2, 3, 4 and 5 at a point in time and that
the
state of any one of the partitions may change to another state if an event
occurs
that prompts a state transition (FIG. 6).
[0066] FIGS. 8A to 8E depict an exemplary format for partition tables 90,
92,
94, 96 and 98 corresponding to database systems 1, 2, 3, 4 and 5 (not shown in
FIG. 1) respectively. Tables 90, 92, 94, 96 and 98 specify for each logical
partition start and end record identifiers for the component records of the
partition. A record identifier may be a unique number assigned to each record
(e.g. a primary key) providing means by which a record may be uniquely
identified. In this example, the record identifier is an ordinal number,
however, it
may be appreciated that other unique record identifiers may be employed (e.g.
client account number, client's social insurance number, client name, etc.).
Also,
it may be appreciated that other manners of identifying partitions, other than
by
number, may also be employed.
[0067] Referring to table 90 in FIG. 8A, row 100 specifies that Partition 1
contains records having record identifiers 0000-0100; row 102 specifies that
Partition 2 contains records having record identifiers 0101-0205; row 104
specifies that Partition 3 contains records having record identifiers 0206-
0389;
row 106 specifies that Partition 4 contains records having record identifiers
0390-0450; row 108 specifies that Partition 5 contains records having record
identifiers
0451-0898; and row 110 specifies that Partition 6 contains records having
record
identifiers 0899-1000.
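A partition table such as table 90 lends itself to a simple range lookup, the step performed whenever a record identifier must be resolved to its logical partition. A minimal sketch, assuming an in-memory list of (partition, start, end) rows (illustrative Python, not the patent's implementation):

    # Partition table 90 (FIG. 8A): (partition, first record ID, last record ID).
    PARTITION_TABLE = [
        (1, 0, 100), (2, 101, 205), (3, 206, 389),
        (4, 390, 450), (5, 451, 898), (6, 899, 1000),
    ]

    def find_partition(record_id: int) -> int:
        """Return the logical partition whose ID range contains record_id."""
        for partition, start, end in PARTITION_TABLE:
            if start <= record_id <= end:
                return partition
        raise KeyError(f"record {record_id} is not in any partition")

    # Example from the text: record ID 0396 falls within row 106's range,
    # so it resides in Partition 4.
    assert find_partition(396) == 4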
[0068] Although as described, two tables are kept for each database system
(e.g. for database system 1, state table 70 and partition table 90), the two
tables
may be joined into one (i.e. a table having the fields Server No., Partition
No.,
State, Record ID Start, and Record ID End). Moreover, separate tables need not
be kept for each database system, but rather, one table for all the partitions
in
database system 10 may be kept. That is, tables 70, 72, 74, 81, 83, 90, 92,
94,
96 and 98 may be joined into one table.
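Such a joined system state table row might, for instance, be modelled as follows (an illustrative Python sketch; the field names follow the text above, and the db_system column is added so that one table can cover all of database system 10, as the paragraph contemplates):

    from dataclasses import dataclass

    @dataclass
    class SystemStateRow:
        """One row of the joined system state table described above."""
        db_system: int       # which federated database system
        server: int          # Server No. hosting the partition
        partition: int       # Partition No.
        state: str           # "A" (active), "U" (unavailable) or "D" (dirty)
        record_id_start: int
        record_id_end: int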
[0069] Tables 70, 72, 74, 81, 83, 90, 92, 94, 96 and 98, or the single
table if
joined into one table (collectively the "system state table"), may be stored at one of computer systems 22, or at another computing device (not shown), at each of
database systems 1, 2, 3, 4 or 5. Alternatively, one copy of the table(s) may
be
kept at a location accessible to controller applications 38 and load/update
applications 34, for example, at one of computer systems 22 at each of
database
systems 1, 2, 3, 4 or 5 to which all other controller applications 38 and
load/update applications 34 in database system 10 have access (over network
100 and/or network 102).
[0070] In operation and with reference to FIGS. 9 to 12, assume that a user
of
database system 10 desires to obtain an exemplary record having record
identifier 0396. An appropriate query (e.g. the query may originate as an XML
request, and then translated into an SQL statement) may be directed, over network 100, to the controller application 38 hosted on one or more of computer system(s) 22 of one of database systems 1, 2, 3, 4 or 5. The query may, for
example, be generated by an application making use of system 22. The
application could, for example, be a document or data archiving application,
an
accounting application, or any other application requiring storage of data on
database 40.

[0071] Flow diagram 9000 (FIG. 9) depicts operation of controller
application 38
upon receiving a read request. First, controller application 38 may receive a
read
request for record ID 0396 (S9002) from, e.g., another application. Using
partition table
90 (row 106), controller application 38 may determine that record ID 0396 is
stored on
Partition 4 residing on Server 2 (S9004). Controller application 38 may then
determine
from state table 70 whether Partition 4 is available (S9006). As illustrated,
Partition 4 is
in an unavailable state (table 70, row 82). If, however, Partition 4 is in an
active state,
then controller application 38 may return record ID 0396, by e.g. sending a
request to
database application 26 hosted on Server 2 (S9010), in a manner understood by
those
skilled in the art.
[0072] However, in the present example, since Partition 4 is in an
unavailable state
(indicating, for example, that one or more records in Partition 4 may be in
the process of
being updated or inserted) controller application 38 may identify another
instance of
Partition 4 (either within database system 1, or another one of database
systems 2, 3, 4,
or 5) (S9012). Operation continues until controller application 38 identifies
an available
instance of Partition 4 from which record 0396 may be read.
[0073] At step S9012, controller application 38 may identify another
instance of
Partition 4 by consulting the system state table. For example, controller
application 38
may identify from the system state table that other instances of Partition 4
exist on
Server 2 in database systems 2, 3, and 5, and on Server 4 in database system
4
(FIGS. 7B to 7E). Notably, the instance of Partition 4 in database system 3,
hosted on
Server 2, is in the available state. Accordingly, controller application 38
may send a
read request for record 0396 to database application 26 hosted on Server 2 in
database
system 3 (or controller application 38 for database system 3).
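The read path of flow diagram 9000 might be sketched as follows, assuming an in-memory stand-in for the system state table holding the Partition 4 entries of FIGS. 7A to 7E (illustrative Python; the structure and names are assumptions):

    # (database system, partition) -> (server, state); "A" = active,
    # "U" = unavailable, "D" = dirty, per tables 70, 72, 74, 81 and 83.
    STATE_TABLE = {
        (1, 4): ("Server 2", "U"),
        (2, 4): ("Server 2", "U"),  # mirror of database system 1
        (3, 4): ("Server 2", "A"),
        (4, 4): ("Server 4", "D"),
        (5, 4): ("Server 2", "D"),
    }

    def service_read(partition: int) -> tuple[int, str]:
        """Find an up-to-date instance of a partition (S9006-S9012) and
        return the (database system, server) the read should be sent to."""
        for (db_system, part), (server, state) in STATE_TABLE.items():
            if part == partition and state == "A":
                return db_system, server
        raise RuntimeError("no up-to-date instance currently available")

    # Record ID 0396 maps to Partition 4; the instances at database systems
    # 1 and 2 are unavailable, so the read goes to Server 2 at system 3.
    assert service_read(4) == (3, "Server 2")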
[0074] Also notably, if there is a failure (e.g. of hardware or
software) at a particular
database system, controller application 38 may redirect a read request to
another
database application 26 in another database system, and that other database application 26 may service the read request. Example failures may include
network
failures, physical failures (e.g. of the server hosting the desired
partition), or software
failures. When an instance of controller application 38 fails, requests may be
redirected
to another operable instance of controller application 38.
[0075] The method of updating a record in a database system 10 is described
with
reference to FIGS. 10, 11, and 12. Flow diagram 1000 (FIG. 10) depicts
operation of
load/update application 34 upon receiving an update request. Load/update
application
34 may receive an instruction (e.g. in the form of an XML request) over
network 100 to
update a particular record (S1002) in database 40 from a user or from an
application
such as a database application, a data storage application, or the like.
[0076] Using the system state table, load/update application 34 may
first identify the
logical partition storing the record to be updated (S1004). Load/update
application 34
may then change the state of the local partition to unavailable (indicating that
the local
instance of the partition storing the record is being updated) and change the
state of
remote instances of the partition to dirty (indicating that the partition(s)
do not contain
up-to-date data) (S1006).
[0077] Next, load/update application 34 may update (e.g. through
database
application 26) the record in the local instance of the partition (S1008).
Upon completion
of the update, load/update application 34 may change the state of the local
instance of
the partition to available thus indicating that records in the local instance
of the partition
are up-to-date (S1010).
[0078] Lastly, load/update application 34 may propagate the "update"
request to the
other database systems hosting the other remote instances of the partition
(S1012).
[0079] Notably, changes to the state table may be propagated to other
instances of
the state table (S1006A) at other database systems after steps S1006 and S1010. Such changes may be propagated by, e.g. broadcasting a
change request, or propagating a copy of the state table itself. Other methods
of
communicating changes to a copy of the state table to other copies of the
state
table may be apparent to those skilled in the art.
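Steps S1002 to S1012, together with the state-table propagation of S1006A, might be sketched as follows (illustrative Python; the in-memory STATE and DATA stand-ins for the system state table and the copies of database 40 are assumptions):

    # Assumed in-memory stand-ins: one state entry per (database system,
    # partition), and one record dictionary per copy of database 40.
    STATE = {(db, p): "A" for db in (1, 2, 3, 4, 5) for p in range(1, 7)}
    DATA = {db: {} for db in (1, 2, 3, 4, 5)}

    def handle_update(record_id, value, partition, local_db, all_dbs=(1, 2, 3, 4, 5)):
        """Originating side of an update, per flow diagram 1000 (FIG. 10)."""
        remotes = [db for db in all_dbs if db != local_db]
        STATE[(local_db, partition)] = "U"  # S1006: local instance unavailable
        for db in remotes:
            STATE[(db, partition)] = "D"    # S1006: remote instances dirty
        # S1006A: each state change would also be propagated to all other
        # copies of the state table (omitted in this sketch).
        DATA[local_db][record_id] = value   # S1008: update via database application 26
        STATE[(local_db, partition)] = "A"  # S1010: local instance up-to-date
        return remotes                      # S1012: propagate the update to these

    # Example: update record ID 0396 (Partition 4) at database system 1.
    pending = handle_update(396, "updated statement", 4, 1)
    assert STATE[(1, 4)] == "A" and STATE[(3, 4)] == "D" and pending == [2, 3, 4, 5]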
[0080] FIG. 11 depicts operation of load/update application 34 upon
receiving
an "update" request that has been propagated by another load/update
application
34 at another database system. In this case, load/update application 34
receives
a propagated update request (S1102) and changes the state of the local
instance
of the partition storing the record to be updated to "unavailable" (S1104).
Next,
load/update application 34 may update the record, via database application 26,
in
the local instance of the partition (S1106). Upon completion of the update,
load/update application 34 may change the state of the local partition back to
available (S1108). Changes to the state table may be propagated (S1104A) to
other copies of the state table after steps S1104 and S1108.
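The receiving side of FIG. 11 is symmetric; a short sketch under the same assumptions and in-memory stand-ins as the previous sketch:

    def handle_propagated_update(record_id, value, partition, db):
        """Apply an update propagated from another database system (FIG. 11)."""
        STATE[(db, partition)] = "U"   # S1104: local instance unavailable
        DATA[db][record_id] = value    # S1106: update via database application 26
        STATE[(db, partition)] = "A"   # S1108: local instance active again
        # S1104A: state changes propagated to other copies of the state table.

    # Continuing the example, the staggered hand-off of FIG. 12: each remote
    # instance is updated in turn until all instances are active again.
    for db in pending:
        handle_propagated_update(396, "updated statement", 4, db)
    assert all(STATE[(db, 4)] == "A" for db in (1, 2, 3, 4, 5))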
[0081] To further illustrate, FIG. 12 depicts an exemplary progression of
an
update to a record, e.g. record ID 0396 stored in Partition 4, in database
system 10,
from steps S1200 (initial state) to S1210 (completion of the update), across
database systems 1-5.
[0082] At the initial state, S1200 (table 1212), all instances of Partition
4 are
available.
[0083] At S1202 (table 1214) the instances of Partitions 4 at database
systems 1 and 2 undergo the update (as indicated by the unavailable state). It
may be appreciated that since database system 2 is a mirror of database system
1, the instances of Partition 4 at the two database systems may be updated
simultaneously. However, the instances of Partition 4 at database systems 3, 4
and 5 are marked dirty, thus indicating that these instances of Partition 4 are
not
up-to-date.
[0084] At time S1204 (table 1216), record ID 0396 has been updated, and the
instances of Partition 4 at database systems 1 and 2 return to the available
state.
The update request may next be propagated (e.g. via an instruction sent over
network 100) to database system 3, and accordingly, the instance of Partition
4
at database system 3 may be marked unavailable. The instances of Partition 4
at
database systems 4 and 5 remain in the dirty state.
[0085] At S1206 (table 1218), record ID 0396 at Partition 4 at database
system 3 has been updated and therefore this instance of Partition 4 returns to
the available state. The update request may next be propagated to database system
4, and accordingly, the instance of Partition 4 at database system 4 may be
marked unavailable.
[0086] At S1208 (table 1220), the update to record ID 0396 in Partition 4
at
database system 4 is complete and the partition returns to the available state.
The
update request may lastly be propagated to database system 5. The instance of
Partition 4 at database system 5 may accordingly be marked unavailable.
[0087] At S1210 (table 1222), all updates to all instances of Partition 4
(and
hence, copies of record ID 0396) in database system 10 are complete and all
instances of Partition 4 return to the available state.
[0088] Conveniently, because updates are propagated in a staggered manner
(with the exception of mirrored partitions, which may be, but need not
necessarily be, updated simultaneously), at least one up-to-date copy of record
ID 0396 may always be available in database system 10. Moreover, at least one
copy of record ID 0396 remains readable at all times, even though not every
copy may reflect the most recent update. Consequently, the need for locks may
be obviated.
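
The progression of FIG. 12 can be checked mechanically. In the illustrative
sketch below, the step tables are transcribed from paragraphs [0082] to [0087],
and the assertion captures the property relied upon in paragraph [0088]: a
readable copy of record ID 0396 exists at every step, since dirty instances
still hold the prior value.

```python
# States of Partition 4 at database systems 1-5 for steps S1200-S1210,
# transcribed from tables 1212-1222 of FIG. 12.
steps = {
    "S1200": ["available"] * 5,
    "S1202": ["unavailable", "unavailable", "dirty", "dirty", "dirty"],
    "S1204": ["available", "available", "unavailable", "dirty", "dirty"],
    "S1206": ["available", "available", "available", "unavailable", "dirty"],
    "S1208": ["available", "available", "available", "available", "unavailable"],
    "S1210": ["available"] * 5,
}

READABLE = {"available", "dirty"}  # dirty copies still hold the prior value

for step, states in steps.items():
    # At least one readable copy of record ID 0396 exists at every step;
    # from S1204 onward at least one copy is also fully up-to-date.
    assert any(s in READABLE for s in states), step
print("record ID 0396 readable at every step")
```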
[0089] The method of inserting a new record into database system 10
proceeds in a similar manner, as illustrated in flow diagrams 1300 and 1400
(FIGS. 13 and 14). Specifically, load/update application 34 receives a request
to
insert a new record (FIG. 13, S1302). Load/update application 34 then
identifies
which partition the new record will be inserted into (S1304). For example, new
records may be added to the last partition, and if the last partition is full,
then a
new partition may be created. Other methods of identifying which partition a
new
record will be inserted into may be known to those skilled in the art.
[0090] Once the partition is identified, load/update application 34 may
change
the state of the local instance of the partition in the system state table to
"unavailable" and the states of remote instances of the partition to "dirty"
(S1306).
It may be appreciated that should an instance of controller application 38
attempt
to read a record that is awaiting insertion at an instance of the partition,
the
record may not yet exist. However, the requestor may be alerted that the partition
is in the dirty state and therefore may not reflect the most recent update to
database 40. Conveniently, controller application 38 may then attempt to read
from another instance of the partition that is available (ref. FIG. 9).
[0091] Next, the new record may be inserted into the local instance of the
partition (S1308), via database application 26, and the state of the local
instance
of the partition changed to "available" (S1310).
[0092] Finally, the insert request may be propagated to the other database
systems awaiting the insertion of the new record (S1312). Changes to the state
table may be propagated to other copies of the state table (S1306A) after
S1306
and S1310.
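
By way of illustration, the insert flow of FIG. 13 might be sketched as
follows. The partition-selection rule follows the example of paragraph [0089];
PARTITION_CAPACITY and the other names are assumptions, not taken from the
patent.

```python
# Sketch of the insert flow of FIG. 13 (S1302-S1312); hypothetical names.

PARTITION_CAPACITY = 1000         # assumed limit, not specified in the patent

partitions = [[]]                 # local instances of each logical partition
states = ["available"]            # local state of each partition

def handle_insert(record, remote_systems):
    # S1304: identify the partition the new record will be inserted into.
    if len(partitions[-1]) >= PARTITION_CAPACITY:
        partitions.append([])     # last partition full: create a new one
        states.append("available")
    target = len(partitions) - 1

    states[target] = "unavailable"       # S1306; remote instances marked dirty
    partitions[target].append(record)    # S1308, via database application 26
    states[target] = "available"         # S1310

    for system in remote_systems:        # S1312: propagate the insert request
        pass                             # sent over network 100, as in FIG. 14

handle_insert({"id": "0397"}, remote_systems=[2, 3, 4, 5])
```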
[0093] Upon receiving a propagated insert request (S1402), load/update
application 34 may change the state of the local instance of the partition to
unavailable
(S1404), may insert the new record into the local partition (S1406), via
database
application 26, and may change the state of the local partition back to
available
(S1408). Changes to the state table may be propagated to other copies of the
state table (S1404A) after S1404 and S1408.
[0094] Having described the functions of controller application 38 and
load/update application 34, it may be appreciated that each of computer
systems
22 may host any number of operable instances of controller application 38 at
each of database systems 1-5. In this manner, a plurality of read requests may
be serviced concurrently. However, while multiple instances of load/update
application 34 may also exist, only one instance of load/update application 34
may be operable at each of database systems 1-5. Load/update application 34
may only modify the copy of the system state table stored at the local
database
system. This is to ensure that the copies of the system state table at each of
database systems 1-5 are modifiable by one application only, to minimize any
difficulties that may arise with respect to concurrency control.
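
The single-writer rule of paragraph [0094] might be enforced as in the
following sketch (the class and its methods are hypothetical):

```python
# Sketch of the single-writer rule: each copy of the system state table is
# modifiable only by the load/update application at its own database system.

class StateTableCopy:
    def __init__(self, owner_system_id):
        self.owner = owner_system_id   # the only permitted writer
        self.states = {}               # partition -> state

    def set_state(self, writer_system_id, partition, new_state):
        if writer_system_id != self.owner:
            raise PermissionError(
                "only the local load/update application may modify this copy")
        self.states[partition] = new_state
        # The change is then propagated to the other copies (S1006A etc.).

table = StateTableCopy(owner_system_id=1)
table.set_state(1, 4, "unavailable")   # permitted: the local writer
```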
[0095] Moreover, to further minimize concurrency issues, an insert/update
may proceed to completion across all database systems 1-5 in database system 10
before the next insert/update may be initiated. To this end, a flag may, for
example, be kept in the system state table indicating that an insert/update is
in
progress. Only when the flag has been cleared, i.e. indicating that no inserts
or
updates are in progress, may the next insert/update be initiated by a
load/update
application 34. Other methods of ensuring that a load/update request proceeds
to
completion before the next insert/update is initiated may be apparent to those
skilled in the art.
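
One possible realization of such a flag, sketched under assumed names, is the
following; the patent leaves the mechanism open:

```python
# Sketch of the insert/update-in-progress flag of paragraph [0095]: a new
# insert/update may be initiated only after the flag has been cleared.

state_table = {"update_in_progress": False}

def try_begin():
    if state_table["update_in_progress"]:
        return False               # a previous insert/update is still running
    state_table["update_in_progress"] = True
    return True

def complete():
    # Called once the request has completed across database systems 1-5.
    state_table["update_in_progress"] = False

if try_begin():
    # ... carry the insert/update through steps S1002-S1012 here ...
    complete()
```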
[0096] Other modifications to database system 10 may be possible without
affecting the functionality described above, as known to those of ordinary
skill.
[0097] For example, since each of computer systems 22 may be a
conventional computer system, controller application 38 and load/update
application 34 may be hosted on any one of computer systems 22, instead of on
a separate computer system 24, as described above.
[0098] In another embodiment of the invention, instead of one load/update
application 34 instance servicing inserts and updates of records at one
database
system as described above (i.e. one operable instance of load/update
application 34 per database system 1-5), an instance of a load/update
application
34 operating at one of database systems 1-5 may instead be responsible for
servicing a specific set of logical partitions across database system 10. That
is,
using the above example, one instance of load/update application 34 (e.g.
operating at database system 1) may, for example, be responsible for servicing
all instances of Partition 3 across database system 10. In this embodiment,
when
an insert or update request to Partition 3 is received by the load/update
application 34, it may be responsible for inserting and updating records in
all
instances of Partition 3 (i.e. copies of Partition 3 at its local database
system 1
and copies of Partition 3 at remote database systems 2, 3, 4 and 5).
Load/update
application 34 may update remote instances of Partition 3 by, for example,
calling the relevant database applications 26 at the remote database systems.
To
minimize concurrency issues, each load/update application 34 may only modify
the state(s) of the partition(s) under its responsibility.
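
This partition-ownership embodiment might be sketched as follows (the
ownership map and helper names are assumptions for illustration):

```python
# Sketch of the partition-ownership embodiment of paragraph [0098]: each
# load/update instance services all instances of its partitions across
# database system 10.

OWNERS = {1: {3}, 2: {1}, 3: {2}, 4: {4}, 5: {5}}  # system -> owned partitions

def owner_of(partition):
    return next(s for s, owned in OWNERS.items() if partition in owned)

def route_update(partition, record_id, value):
    owner = owner_of(partition)
    # The owning instance updates its local copy and then calls the
    # database applications 26 at the remote systems for their copies.
    for system in (1, 2, 3, 4, 5):
        print(f"owner {owner}: record {record_id} := {value} "
              f"in Partition {partition} at system {system}")

route_update(3, "0396", "new")   # serviced by the instance at system 1
```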
[0099] In yet another embodiment of the invention, only one instance of
load/update application 34 (the "master load/updater") may be operable across
database system 10. In this alternate embodiment of system 10, load/update
application 34 may not propagate insert/update requests to other database
systems (S1012, S1312), but the master load/updater may instead directly
invoke the appropriate database application 26 at the relevant database
system(s). Thus, the operations depicted in flow diagrams 1100 and 1400 would
not be required.
[00100] In yet another embodiment of the invention, only one copy of the
system state table may be maintained in database system 10, for example, at
one of computer systems 22 or 24. In this embodiment, it may be desirable that
there be only one operable instance of load/update application 34 (the "master
load/updater") across database system 10 so as to minimize concurrency control
difficulties, as detailed above. In this embodiment, changes to the system
state
table by load/update application 34 would not need to be propagated (i.e.
steps
S1006A, S1104A, S1306A and S1404A would not be needed) since only one
copy of the system state table is maintained.
[00101] Conveniently, in all embodiments, should the computer system
hosting the operable instances of applications 38 and 34 malfunction, another
copy of applications 38 and 34 hosted on another computer system 22 or 24 in
the database system may take over as the operable instance(s). It may be
appreciated that this duplication increases the fault tolerance of each of
database
systems 1, 2, 3, 4 and 5 and of database system 10 as a whole.
[00102] Of course, the above-described embodiments are intended to be
illustrative only and in no way limiting. The described embodiments of
carrying
out the invention are susceptible to many modifications of form, arrangement
of
parts, details and order of operation. The invention, rather, is intended to
encompass all such modifications within its scope, as defined by the claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Inactive: IPC deactivated 2021-10-09
Common Representative Appointed 2019-10-30
Inactive: IPC assigned 2019-03-31
Inactive: IPC assigned 2019-03-31
Inactive: First IPC assigned 2019-03-31
Inactive: IPC assigned 2019-03-31
Inactive: IPC expired 2019-01-01
Grant by Issuance 2018-04-10
Inactive: Cover page published 2018-04-09
Inactive: Final fee received 2018-02-23
Pre-grant 2018-02-23
Maintenance Request Received 2017-10-23
Notice of Allowance is Issued 2017-08-28
Letter Sent 2017-08-28
Inactive: Approved for allowance (AFA) 2017-08-22
Inactive: Q2 passed 2017-08-22
Examiner's Interview 2017-08-03
Amendment Received - Voluntary Amendment 2017-08-02
Amendment Received - Voluntary Amendment 2017-07-04
Inactive: Multiple transfers 2017-06-23
Maintenance Request Received 2017-01-23
Inactive: S.30(2) Rules - Examiner requisition 2017-01-03
Inactive: Report - No QC 2016-12-30
Amendment Received - Voluntary Amendment 2016-11-09
Inactive: S.30(2) Rules - Examiner requisition 2016-05-09
Inactive: Report - No QC 2016-05-07
Inactive: Correspondence - Formalities 2016-04-15
Change of Address or Method of Correspondence Request Received 2016-04-15
Maintenance Request Received 2016-01-22
Amendment Received - Voluntary Amendment 2015-12-04
Inactive: S.30(2) Rules - Examiner requisition 2015-06-04
Inactive: Report - No QC 2015-05-29
Maintenance Request Received 2015-01-20
Amendment Received - Voluntary Amendment 2015-01-19
Inactive: S.30(2) Rules - Examiner requisition 2014-07-17
Inactive: Report - No QC 2014-06-30
Maintenance Request Received 2014-01-20
Letter Sent 2013-01-22
Amendment Received - Voluntary Amendment 2013-01-15
Request for Examination Requirements Determined Compliant 2013-01-15
All Requirements for Examination Determined Compliant 2013-01-15
Request for Examination Received 2013-01-15
Maintenance Request Received 2013-01-11
Application Published (Open to Public Inspection) 2009-07-24
Inactive: Cover page published 2009-07-23
Letter Sent 2009-05-28
Inactive: Declaration of entitlement - Formalities 2009-04-16
Inactive: Single transfer 2009-04-16
Inactive: IPC assigned 2008-05-07
Inactive: First IPC assigned 2008-05-07
Inactive: Filing certificate - No RFE (English) 2008-03-07
Application Received - Regular National 2008-02-29
Amendment Received - Voluntary Amendment 2008-02-05

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2017-10-23

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • the additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
SYMCOR INC.
Past Owners on Record
JOHN PAUL LOESER
JOHN WALL
KHOA TRAN
MARIUS DAN STROE
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Description 2008-01-23 24 1,144
Claims 2008-01-23 7 252
Abstract 2008-01-23 1 30
Drawings 2008-01-23 16 269
Representative drawing 2009-06-25 1 5
Cover Page 2009-07-19 2 47
Drawings 2008-02-04 16 269
Description 2015-01-18 27 1,291
Claims 2015-01-18 8 351
Description 2016-11-08 26 1,203
Claims 2016-11-08 8 295
Abstract 2016-11-08 1 28
Description 2017-07-03 26 1,125
Claims 2017-07-03 8 269
Claims 2017-08-01 8 269
Representative drawing 2018-03-07 1 5
Cover Page 2018-03-07 2 45
Filing Certificate (English) 2008-03-06 1 158
Courtesy - Certificate of registration (related document(s)) 2009-05-27 1 102
Reminder of maintenance fee due 2009-09-27 1 111
Reminder - Request for Examination 2012-09-24 1 118
Acknowledgement of Request for Examination 2013-01-21 1 176
Commissioner's Notice - Application Found Allowable 2017-08-27 1 163
Correspondence 2008-03-06 1 17
Correspondence 2009-04-15 3 69
Correspondence 2009-05-27 1 16
Fees 2009-11-22 1 35
Fees 2013-01-10 1 68
Fees 2014-01-19 2 78
Fees 2015-01-19 2 81
Amendment / response to report 2015-12-03 5 288
Maintenance fee payment 2016-01-21 2 78
Correspondence 2016-04-14 2 89
Examiner Requisition 2016-05-08 12 634
Amendment / response to report 2016-11-08 24 1,035
Examiner Requisition 2017-01-02 5 234
Maintenance fee payment 2017-01-22 2 78
Amendment / response to report 2017-07-03 21 778
Interview Record 2017-08-02 1 26
Amendment / response to report 2017-08-01 3 111
Maintenance fee payment 2017-10-22 2 84
Final fee 2018-02-22 2 67