Patent 2345309 Summary

(12) Patent Application: (11) CA 2345309
(54) French Title: SYSTEME DE GESTION DE BASE DE DONNEES RELATIONNELLES A HAUTES PERFORMANCES
(54) English Title: HIGH PERFORMANCE RELATIONAL DATABASE MANAGEMENT SYSTEM
Status: Deemed abandoned and beyond the time limit for reinstatement - pending response to the notice of rejected communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 16/21 (2019.01)
  • G06F 16/27 (2019.01)
  • G06F 16/28 (2019.01)
(72) Inventors:
  • CHRISTENSEN, LOREN (Canada)
(73) Owners:
  • LINMOR INC.
(71) Applicants:
  • LINMOR INC. (Canada)
(74) Agent: GOWLING WLG (CANADA) LLP
(74) Associate agent:
(45) Issued:
(22) Filed: 2001-04-26
(41) Open to Public Inspection: 2002-03-18
Examination requested: 2001-04-26
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No.   Country/Territory   Date
2,319,918         (Canada)            2000-09-18

Abstracts

English Abstract


A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising at least one performance monitor server computer connected to the network for receiving network management data objects from at least one data collector node device so as to create a distributed database. A histogram routine running on the performance monitoring server computers partitions the distributed database into data hunks. The data hunks are then imported into a plurality of delegated database engine instances running on the performance monitoring server computers so as to parallel process the data hunks. A performance monitor client computer connected to the network is then typically used to access the processed data to monitor object performance.

Claims

Note: The claims are shown in the official language in which they were submitted.


What is claimed is:

1. A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising the steps of:
(i) receiving collected data objects from at least one data collection node using at least one performance monitoring computer whereby a distributed database is created;
(ii) partitioning the distributed database into data hunks using a histogram routine running on at least one performance monitoring server computer;
(iii) importing the data hunks into a plurality of delegated database engine instances located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated; and
(iv) accessing the processed data using at least one performance client computer to monitor data object performance.

2. The system according to claim 1, wherein at least one database engine instance is located on the performance monitor server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed database.

3. The system according to claim 2, wherein at least one database engine instance is used to maintain a versioned master vector table.

4. The system according to claim 3, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.

5. The system according to claim 4, wherein the histogram routine comprises the steps of:
(i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n point histogram of desired granularity from the active indices; and
(iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.

6. The system according to claim 1, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.
7. A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising:
(i) at least one performance monitor server computer connected to the network for receiving network management data objects from at least one data collection node device whereby a distributed database is created;
(ii) a histogram routine running on the performance monitoring server computers for partitioning the distributed database into data hunks;
(iii) at least two database engine instances running on the performance monitoring server computers so as to parallel process the data hunks whereby processed data is generated; and
(iv) at least one performance monitor client computer connected to the network for accessing the processed data whereby data object performance is monitored.

8. The system according to claim 7, wherein at least one database engine instance is located on the performance monitoring server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances for the system is at least two so as to enable the parallel processing of the distributed database.

9. The system according to claim 8, wherein at least one database engine instance is used to maintain a versioned master vector table.

10. The system according to claim 9, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.

11. The system according to claim 10, wherein the histogram routine comprises the steps of:
(i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n point histogram of desired granularity from the active indices; and
(iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.

12. The system according to claim 7, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.

13. The system according to claim 7, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.

14. A storage medium readable by an install server computer in a high performance relational database management system including the install server, leveraging the functionality of a high speed communications network, the storage medium encoding a computer process comprising:

(i) a processing portion for receiving collected data objects from at least one data collection node using at least one performance monitoring computer whereby a distributed database is created;
(ii) a processing portion for partitioning the distributed database into data hunks using a histogram routine running on at least one performance monitoring server computer;
(iii) a processing portion for importing the data hunks into a plurality of delegated database engine instances located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated; and
(iv) a processing portion for accessing the processed data using at least one performance client computer to monitor data object performance.

15. The system according to claim 14, wherein at least one database engine instance is located on the data processor server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed database.

16. The system according to claim 15, wherein one of the database engine instances is designated as a prime database engine instance used to maintain a versioned master vector table.

17. The system according to claim 16, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.

18. The system according to claim 14, wherein the histogram routine comprises the steps of:
(i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n point histogram of desired granularity from the active indices; and
(iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.

19. The system according to claim 14, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.

Description

Note: The descriptions are shown in the official language in which they were submitted.


CA 02345309 2001-04-26
High Performance Relational Database Management System
Field of the Invention
The present invention relates to the parallel processing of relational databases within a high speed data network, and more particularly to a system for the high performance management of relational databases.
Background of the Invention
Network management is a large field that is expanding in both users and technology. On UNIX networks, the network manager of choice is the Simple Network Management Protocol (SNMP). This has gained great acceptance and is now spreading rapidly into the field of PC networks. On the Internet, Java-based SNMP applications are becoming readily available.

SNMP consists of a simply composed set of network communication specifications that cover all the basics of network management in a method that can be configured to exert minimal management traffic on an existing network.

The problems seen in high capacity management implementations were only manifested recently with the development of highly scalable versions of relational database management solutions. In the scalability arena, performance degradation becomes apparent when the number of managed objects reaches a few hundred.

The known difficulties relate either to the lack of a relational database engine and query language in the design, or to memory intensive serial processing in the implementation: specifically, access speed scalability limitations, inter-operability problems, and custom-designed query interfaces that do not provide the flexibility and ease-of-use that a commercial interface would offer.

Networks are now having to manage ever larger numbers of network objects as true scalability takes hold, and with vendors developing hardware having ever finer granularity of network objects under management, be they via SNMP or other means, the number of objects being monitored by network management systems is now in the millions. Database sizes are growing at a corresponding rate, leading to increased processing times. As well, the applications that work with the processed data are being called upon to deliver their results in real-time or near-real-time, thereby adding yet another demand on more efficient database methods.

The current trend is towards hundreds of physical devices, which translates to millions of managed objects. A typical example of an object would be a PVC element (VPI/VCI pair on an incoming or outgoing port) on an ATM (Asynchronous Transfer Mode) switch.

The effect of high scalability on the volume of managed objects grew rapidly as industry started increasing the granularity of databases. This uncovered still another problem that typically manifested as processing bottlenecks within the network. As one problem was solved it created another that was previously masked.

In typical management implementations, when scalability processing bottlenecks appear in one area, a plan is developed and implemented to eliminate them, at which point they typically will just "move" down the system to manifest themselves in another area. Each subsequent processing bottleneck is uncovered through performance benchmarking measurements once the previous hurdle has been cleared.

The limitations imposed by the lack of parallel database processing operations, and other scalability bottlenecks, translate to a limit on the number of managed objects that can be reported on in a timely fashion.

The serial nature of the existing accessors precludes their application in reporting on large managed networks. While some speed and throughput improvements have been demonstrated by modifying existing reporting scripts to fork multiple concurrent instances of a program, the repeated and concurrent raw access to the flat files imposes a fundamental limitation on this approach.
For the foregoing reasons, there exists in the industry a need for an improved relational database management system that provides for high capacity, scalability, backwards compatibility and real-time or near-real-time results.
Summary of the Invention
The present invention is directed to a high performance relational database management system that satisfies this need. The system, leveraging the functionality of a high speed communications network, comprises receiving collected data objects from at least one data collection node using at least one performance monitoring server computer whereby a distributed database is created.

The distributed database is then partitioned into data hunks using a histogram routine running on at least one performance monitoring server computer. The data hunks are then imported into at least one delegated database engine instance located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated. The processed data is then accessed using at least one performance monitoring client computer to monitor data object performance.

The performance monitor server computers are comprised of at least one central processing unit. At least one database engine instance is located on the performance monitor server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed database.

At least one database engine instance is used to maintain a versioned master vector table. The versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.

This invention addresses the storage and retrieval of very large volumes of collected network performance data, allowing database operations to be applied in parallel to subsections of the working data set using multiple instances of a database by making parallel the above operations, which were previously executed serially. Complex performance reports consisting of data from millions of managed network objects can now be generated in real time. This results in impressive gains in scalability for real-time performance management solutions. Each component has its own level of scalability.

Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
Brief Description of the Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:

Figure 1 is a schematic overview of the high performance relational database management system;

Figure 2 is a schematic view of the performance monitor server computer and its components; and

Figure 3 is a schematic overview of the high performance relational database management system.
Detailed Description of the Presently Preferred Embodiment
As shown in Figure 1, the high performance relational database management system, leveraging the functionality of a high speed communications network 14, comprises at least one performance monitor server computer 10 connected to the network 14 for receiving network management data objects from at least one data collection node device 12 so as to create a distributed database 16.

As shown in Figure 2, a histogram routine 20 running on the performance monitoring server computers 10 partitions the distributed database 16 into data hunks 24. The data hunks 24 are then imported into a plurality of delegated database engine instances 22 running on the performance monitoring server computers 10 so as to parallel process the data hunks 24 whereby processed data 26 is generated.

As shown in Figure 3, at least one performance monitor client computer 28 connected to the network 14 accesses the processed data 26 whereby data object performance is monitored.
At least one database engine instance 22 is used to maintain a versioned master vector table 30. The versioned master vector table 30 generates the histogram routine 20 used to facilitate the partitioning of the distributed database 16. In order to divide the total number of managed objects among the database engines 22, the histogram routine 20 divides indices active at the time of a topology update into the required number of work ranges. Dividing the highest active index by the number of sub-partitions is not an option, since there is no guarantee that retired objects will be linearly distributed throughout the partitions.
The histogram routine 20 comprises dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition, generating an n point histogram of desired granularity from the active indices, and summing adjacent histogram routine generated values until a target partition size is reached, but not exceeded.
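The three steps of this routine can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (`partition_by_histogram`, `active_ids`, `n_bins`) are assumptions introduced for the example:

```python
# Sketch of the histogram partitioning routine described above (steps i-iii).
# The function and variable names are illustrative assumptions; the patent
# does not specify an implementation.

def partition_by_histogram(active_ids, num_partitions, n_bins):
    """Divide active object identifiers into contiguous index ranges."""
    # (i) optimum number of objects per partition
    target = len(active_ids) // num_partitions

    # (ii) n point histogram of desired granularity over the active indices
    lo, hi = min(active_ids), max(active_ids)
    width = (hi - lo + 1) / n_bins
    counts = [0] * n_bins
    for oid in active_ids:
        counts[min(int((oid - lo) / width), n_bins - 1)] += 1

    # (iii) sum adjacent bins until the target size is reached but not exceeded
    ranges, start, running = [], 0, 0
    for b, c in enumerate(counts):
        if running > 0 and running + c > target:
            ranges.append((int(lo + start * width), int(lo + b * width) - 1))
            start, running = b, 0
        running += c
    ranges.append((int(lo + start * width), hi))
    return ranges

# Example: 334 active identifiers split toward 4 work ranges
ranges = partition_by_histogram(list(range(0, 1000, 3)), num_partitions=4, n_bins=50)
print(ranges)
```

Because each work range stops before exceeding the target, the routine may emit slightly more ranges than requested; adjacent ranges remain contiguous and cover all active indices.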
In order to make the current distribution easily available to all interested processes, a versioned master vector table 30 is created on the prime database engine 32. The topology and data import tasks refer to this table to determine the latest index division information. The table is maintained by the topology import process.

Objects are instantiated in the subservient topological tables by means of a bulk update routine. Most RDBMSs provide a facility for bulk update. This command allows arbitrarily separated and formatted data to be opened and read into a table by the server back end directly. A task is provided which, when invoked, opens up the object table file and reads in each entry sequentially. Each new or redistributed object record is massaged into a format acceptable to an update routine, and the result written to one of n temporary copy files or relations based on the object index ranges in the current histogram. Finally, the task opens a command channel to each back end and issues the copy command; update commands are then issued to set "lastseen" times for objects that have either left the system's management sphere, or been locally reallocated to another back end.

The smaller tables are pre-processed in the same way, and are not divided prior to the copy. This ensures that each back end will see these relations identically. In order to distribute the incoming reporting data across the partitioned database engines, a routine is invoked against the most recent flat file data hunk and its output treated as a streaming data source. The distribution strategy is analogous to that used for the topology data. The data import transforms the routine output into a series of lines suitable for the back end's copy routine. The task compares the object index of each performance record against the ranges in the current histogram, and appends it to the respective copy file. A command channel is opened to each back end and the copy command given. For data import, reallocation tracking is automatic since the histogram ranges are always current.
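The range comparison at the heart of this data import can be sketched as follows. The ranges, the record shape, and all names are illustrative assumptions, with in-memory lists standing in for the temporary copy files handed to each engine's bulk-update command:

```python
import bisect

# Illustrative sketch: route each performance record to the back end whose
# histogram range contains its object index, as in the data import described
# above. Ranges and record format are assumptions for this example.

ranges = [(0, 249), (250, 499), (500, 749), (750, 999)]  # current histogram
starts = [lo for lo, _ in ranges]

def backend_for(object_index):
    """Return the index of the delegated engine instance for this object."""
    i = bisect.bisect_right(starts, object_index) - 1
    lo, hi = ranges[i]
    assert lo <= object_index <= hi
    return i

# One copy buffer per back end; these stand in for the temporary copy files.
copy_buffers = [[] for _ in ranges]
for record in [(17, "ifInOctets", 1042), (612, "ifOutOctets", 99), (750, "cpuLoad", 3)]:
    copy_buffers[backend_for(record[0])].append(record)

print([len(b) for b in copy_buffers])
```

Binary search over the range starts keeps the per-record routing cost logarithmic in the number of partitions, which matters when millions of records per transport interval pass through the import.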
One common paradigm used in distributed-memory parallel computing is data decomposition, or partitioning. This involves dividing the working data set into independent partitions. Identical tasks, running on distinct hardware, can then operate on different portions of the data concurrently. Data decomposition is often favored as a first choice by parallel application designers, since the approach minimizes communication and task synchronization overhead during the computational phase. For a very large relational database, partitioning can lead to impressive gains in performance. When certain conditions are met, many common database operations can be applied in parallel to subsections of the data set.

For example, if a table D is partitioned into work units D0, D1, ..., Dn, then a unary operator f is a candidate for parallelism if and only if

f(D) = f(D0) ∪ f(D1) ∪ ... ∪ f(Dn)

Similarly, if a second relation O is decomposed using the same scheme, then certain binary operators can be invoked in parallel if and only if

f(D, O) = f(D0, O0) ∪ f(D1, O1) ∪ ... ∪ f(Dn, On)
The unary operators projection and selection, and the binary operators union, intersection and set difference, are unconditionally partitionable. Taken together, these operators are members of a class of problems that can collectively be termed "embarrassingly parallel". This could be understood as so inherently parallel that it is embarrassing to attack them serially.
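A minimal check of this partitionability property, with a selection operator applied per partition and the results unioned (in a real deployment the per-partition calls would run on separate engine instances; the data and predicate here are invented for illustration):

```python
# Check of the partitionability identity above: applying a selection to each
# partition and taking the union equals applying it to the whole table.

D = set(range(100))                                              # full table
partitions = [set(range(i, i + 25)) for i in range(0, 100, 25)]  # D0..D3

def select(rows):
    """Unary selection operator: keep rows matching a predicate."""
    return {r for r in rows if r % 7 == 0}

whole = select(D)
parallel = set().union(*(select(p) for p in partitions))
assert whole == parallel
print(sorted(whole))
```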
Certain operators are amenable to parallelism conditionally. Grouping and join are in this category. Grouping works as long as partitioning is done by the grouping attribute. Similarly, a join requires that the join attribute also be used for partitioning.

That said, tables do not grow symmetrically as the number of total managed objects increases. The object and variable tables soon dwarf the others as more objects are placed under management. For one million managed objects and a thirty minute transport interval, the incoming data to be processed can be on the order of 154 Megabytes in size. A million element object table will be about 0.25 Gigabytes at its initial creation. This file will also grow over time, as some objects are retired, and new discoveries appear.

Considering the operations required in the production of a performance report, it is possible to design a parallel database scheme that will allow a parallel join of distributed sub-components of the data and object tables by using the object identifiers as the partitioning attribute. The smaller attribute, class and variable tables need not be partitioned. In order to make them available for binary operators such as joins, they need only be replicated across the separate database engines. This replication is cheap and easy given the small size of the files in question.
The appearance and retirement of entities in tables is tracked by two time-stamp attributes, representing the time the entity became known to the system, and the time it departed, respectively. Versioned entities include monitored objects, collection classes and network management variables.

If a timeline contains an arbitrary interval spanning two instants, start and end, an entity can appear or disappear in one of seven possible relative positions. An entity cannot disappear before it becomes known, and it is not permissible for existence to have a zero duration. This means that there are six possible endings for the first start position, five for the second, and so on until the last.
One extra case is required to express an object that both appears and disappears within the subject interval. Therefore, the final count of the total number of cases is determined by the formula:

1 + Σ (n = 1 to 6) n = 22

There are twenty-two possible entity existence scenarios for any interval with a real duration. Time domain versioning of tables is a salient feature of the design.
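The count can be verified directly; in the sum, n runs over the per-start-position ending counts (six down to one), and the leading 1 is the extra within-interval case:

```python
# 6 endings for the first start position, 5 for the second, ..., 1 for the
# last, plus one extra case for an entity that both appears and disappears
# within the subject interval.
total = 1 + sum(range(1, 7))
print(total)  # 22
```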
A simple and computationally cheap intersection can be used since the domains are equivalent for both selections. Each element of the table need only be processed once, with both conditions applied together.

Application programmers will access the distributed database via an application programming interface (API) providing C, C++, TCL and PERL bindings. Upon initialization the library establishes read-only connections to the partitioned database servers, and queries are executed by broadcasting selection and join criteria to each server. Results returned are aggregated and returned to the application. To minimize memory requirements in large queries, provision is made for returning the results as either an input stream or cache file. This allows applications to process very large data arrays in a flow-through manner.
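The broadcast-and-aggregate behaviour of such a library can be sketched as below. `FakeServer` and all other names are stand-ins invented for this example (the real library holds read-only connections to the partitioned servers), and the `stream` flag mirrors the input-stream option for large result sets:

```python
# Hedged sketch of the client library behaviour described above: broadcast a
# query to every partitioned server and merge the results, optionally as a
# stream. FakeServer stands in for real read-only connections.

class FakeServer:
    def __init__(self, rows):
        self.rows = rows
    def execute(self, predicate):
        return [r for r in self.rows if predicate(r)]

def broadcast_query(servers, predicate, stream=False):
    """Send the same selection criteria to each server and merge the results."""
    def generate():
        for s in servers:          # each call would run on a separate engine
            yield from s.execute(predicate)
    return generate() if stream else list(generate())

servers = [FakeServer([(1, 10), (2, 20)]), FakeServer([(3, 30), (4, 5)])]
big = broadcast_query(servers, lambda r: r[1] >= 10)
print(big)
```

Returning a generator in the `stream` case lets an application consume an arbitrarily large aggregated result without materializing it in memory, matching the flow-through processing the text describes.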
A limited debug and general access user interface is provided in the form of an interactive user interface, familiar to many database users. The monitor handles the multiple connections and uses a simple query rewrite rule system to ensure that returns match the expected behavior of a non-parallel database. To prevent poorly conceived queries from swamping the system's resources, a built-in limit on the maximum number of rows returned is set at monitor startup. Provision is made for increasing the limit during a session.
As the number of total managed objects increases, the corresponding object and variable data tables increase at a non-linear rate. For example, it was found through one test implementation that one million managed objects with a thirty-minute data sample transport interval generated incoming performance management data on the order of 154 Megabytes. A one million element object table will be about 250 Megabytes at its initial creation. This file will also grow over time as some objects are retired and new discoveries appear.
Considering the operations required in the production of a performance report, it is possible to design a parallel database scheme that will allow a parallel join of distributed sub-components of the data and object tables by using the object identifiers as the partitioning attribute. This involves partitioning data and object tables by index, importing the partitioned network topology data delegated to multiple instances of the database engine, and invoking an application routine against the most recent flat file performance data hunk and directing the output to multiple database engines.
The API and user debug and access interfaces are compliant with standard relational database access methods, thereby permitting legacy or in-place implementations to be compatible.
This invention addresses the storage and retrieval of very large volumes of collected network performance data, allowing database operations to be applied in parallel to subsections of the working data set using multiple instances of a database by making parallel the above operations which were previously executed serially. Complex performance reports consisting of data from millions of managed network objects can now be generated in real time. This results in exceptional advancements in scalability for real-time performance management solutions, since each component has its own level of scalability.
Today's small computers are capable of delivering several tens of millions of operations per second, and continuing increases in power are foreseen. Such computer systems' combined computational power, when interconnected by an appropriate high-speed network, can be applied to solve a variety of computationally intensive applications. In other words, network computing, when coupled with prudent application design, can provide supercomputer-level performance. The network-based approach can also be effective in aggregating several similar multiprocessors, resulting in a configuration that might otherwise be economically and technically difficult to achieve, even with prohibitively expensive supercomputer hardware.

With this invention scalability limits are advanced, achieving an unprecedented level of monitoring influence.

Representative Drawing
A single figure which represents a drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the transition to New Generation Patents (NGP), the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a better understanding of the status of the application or patent shown on this page, the Caution heading and the descriptions of Patent, Event History, Maintenance Fees and Payment History should be consulted.

Event History

Description Date
Inactive: IPC expired 2022-01-01
Inactive: First IPC assigned 2019-07-02
Inactive: IPC assigned 2019-07-02
Inactive: IPC assigned 2019-07-02
Inactive: IPC assigned 2019-07-02
Inactive: IPC expired 2019-01-01
Inactive: IPC removed 2018-12-31
Inactive: Dead - No reply to s.30(2) Rules requisition 2006-11-27
Application Not Reinstated by Deadline 2006-11-27
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2006-04-26
Inactive: Abandoned - No reply to s.30(2) Rules requisition 2005-11-28
Inactive: S.30(2) Rules - Examiner requisition 2005-05-27
Amendment Received - Voluntary Amendment 2005-04-01
Amendment Received - Voluntary Amendment 2004-09-13
Inactive: S.30(2) Rules - Examiner requisition 2004-03-12
Inactive: S.29 Rules - Examiner requisition 2004-03-12
Letter Sent 2003-08-08
Inactive: Multiple transfers 2003-06-27
Letter Sent 2002-11-07
Inactive: Delete abandonment 2002-10-28
Inactive: Correspondence - Transfer 2002-09-10
Inactive: Transfer information requested 2002-06-17
Inactive: Correspondence - Transfer 2002-05-07
Inactive: Single transfer 2002-04-24
Application Published (Open to Public Inspection) 2002-03-18
Inactive: Cover page published 2002-03-17
Inactive: First IPC assigned 2001-06-14
Inactive: IPC assigned 2001-06-14
Inactive: Courtesy letter - Evidence 2001-06-05
Inactive: Filing certificate - RFE (English) 2001-05-29
Filing Requirements Determined Compliant 2001-05-29
Application Received - Regular National 2001-05-29
Request for Examination Requirements Determined Compliant 2001-04-26
All Requirements for Examination Determined Compliant 2001-04-26

Abandonment History

Abandonment Date   Reason   Reinstatement Date
2006-04-26

Maintenance Fees

The last payment was received on 2005-04-20

Note: If full payment has not been received on or before the date indicated, a further fee may be required, being one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • the additional fee for the reversal of a deemed expiry.

Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type   Anniversary   Due Date   Paid Date
Request for examination - standard 2001-04-26
Application fee - standard 2001-04-26
Registration of a document 2002-04-24
MF (application, 2nd anniv.) - standard 02 2003-04-28 2003-04-11
Registration of a document 2003-06-27
MF (application, 3rd anniv.) - standard 03 2004-04-26 2004-04-22
MF (application, 4th anniv.) - standard 04 2005-04-26 2005-04-20
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
LINMOR INC.
Past Owners on Record
LOREN CHRISTENSEN
Past owners that do not appear in the "Owners on Record" list will appear in other documentation within the application.
Documents


Document Description   Date (yyyy-mm-dd)   Number of pages   Image size (KB)
Representative drawing 2002-02-20 1 19
Description 2001-04-26 10 469
Abstract 2001-04-26 1 22
Claims 2001-04-26 5 170
Drawings 2001-04-26 2 57
Cover Page 2002-03-15 1 51
Claims 2004-09-13 5 179
Filing Certificate (English) 2001-05-29 1 164
Request for evidence or missing transfer 2002-04-29 1 109
Courtesy - Certificate of registration (related document(s)) 2002-11-07 1 109
Reminder of maintenance fee due 2002-12-30 1 106
Courtesy - Abandonment Letter (R30(2)) 2006-02-06 1 166
Courtesy - Abandonment Letter (Maintenance Fee) 2006-06-21 1 175
Correspondence 2001-05-29 1 24
Correspondence 2002-06-17 1 23
Fees 2003-04-11 1 29
Fees 2004-04-22 1 30
Fees 2005-04-20 1 29