Patent 2345292 Summary

(12) Patent Application: (11) CA 2345292
(54) English Title: HIGH PERFORMANCE DISTRIBUTED DISCOVERY SYSTEM
(54) French Title: SYSTEME DE DECOUVERTE REPARTI A HAUT RENDEMENT
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 41/042 (2022.01)
  • H04L 41/12 (2022.01)
  • H04L 12/24 (2006.01)
(72) Inventors :
  • CHRISTENSEN, LOREN (Canada)
(73) Owners :
  • LINMOR INC. (Canada)
(71) Applicants :
  • LINMOR TECHNOLOGIES INC. (Canada)
(74) Agent: GOWLING WLG (CANADA) LLP
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2001-04-26
(41) Open to Public Inspection: 2002-04-03
Examination requested: 2001-04-26
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
2,322,117 Canada 2000-10-03

Abstracts

English Abstract

A high performance distributed discovery system, leveraging the functionality of a high speed communications network, for the discovery of the network topology of a high speed data network. The system comprises a plurality of discovery engines on at least one, and preferably a plurality of, data collection node computers that poll and register managed network objects, with the resulting distributed record compilation forming a distributed network topology database that is selectively accessed by at least one performance monitoring server computer to provide for network management. A plurality of discovery engine instances are located on the data collection node computers on a ratio of one engine instance to one central processing unit so as to provide for the parallel processing of the distributed network topology database.


Claims

Note: Claims are shown in the official language in which they were submitted.





What is claimed is:


1. A network topology distributed discovery system, leveraging the functionality of a high speed communications network, comprising the steps of:
(i) distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database; and
(ii) importing the distributed network topology database onto at least one performance monitor server computer so as to enable network management.

2. The system according to claim 1, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed network topology database.

3. The system according to claim 1, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.

4. The system according to claim 1, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.

5. A network topology distributed discovery system, leveraging the functionality of a high speed communications network, comprising:
(i) at least one data collection node computer connected to the network for discovering network devices using a plurality of discovery engine instances whereby a distributed network topology database is created; and
(ii) at least one performance monitor server computer having imported the distributed network topology database whereby network management is enabled.

6. The system according to claim 5, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances for the system is at least two so as to enable the parallel processing of the network topology database.

7. The system according to claim 5, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.

8. The system according to claim 5, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.

9. A storage medium readable by an install server computer in a network topology distributed discovery system including the install server, leveraging the functionality of a high speed communications network, the storage medium encoding a computer process comprising:
(i) a processing portion for distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database; and
(ii) a processing portion for importing the distributed network topology database onto at least one performance monitor server computer so as to enable network management.

10. The system according to claim 9, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the network topology database.

11. The system according to claim 9, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.

12. The system according to claim 9, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.

Description

Note: Descriptions are shown in the official language in which they were submitted.



High Performance Distributed Discovery System

Field of the Invention

The present invention relates to the discovery of the network topology of devices comprising a high speed data network, and more particularly to a high performance distributed discovery system.

Background of the Invention

Today's high speed data networks contain an ever-growing number of devices. A network needs to be monitored for the existence, disappearance, reappearance and status of traditional network devices such as routers, hubs and bridges and more recently high speed switching devices such as ATM, Frame Relay, DSL, VoIP and Cable Modems.

In order to enable network monitoring, a process known as discovery is typically performed. Discovery is the process by which network management systems selectively poll a network to discover very large numbers of objects in a very short period of time, without introducing excessive network traffic. It is the function of a discovery system to discover devices on a network and the structure of that network. Discovery is primarily intended to get network management users quickly up to speed, track changes in the network, update network maps, and report on these changes.

Discovery typically further involves discovering the configuration of individual devices, their relationship, as well as discovering interconnection links or implied relationships.

In the past rapid discovery was not an issue, since the level of scalability of performance monitoring did not require the depth of discovery that is now required. Major advances in scalability have recently been achieved in performance monitoring, and as performance monitoring scales to manage larger and larger networks the scalability of discovery must advance accordingly in order to deal with the inevitable increase in the number of network objects and react quickly to changes in network topology.

At present network devices are typically polled over long distances from the network management system. This consumes valuable bandwidth and results in increased processing times and potential data loss. As well, customers often dislike inadvertent access around their firewalls, via the common connection to the network performance monitoring server computer. Therefore, what is needed is a method of object discovery that is proximal to the managed network.

For the foregoing reasons, there is a need for an economical method of network topology discovery that provides for high speed polling, high object capacity, scalability, and proximity to managed networks, while preserving security policies that are inherent in the network domain configuration.

Summary of the Invention

The present invention is directed to a high performance distributed discovery system that satisfies this need. The system, leveraging the functionality of a high speed communications network, comprises distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database. The distributed network topology database is accessed using at least one performance monitor server computer to facilitate network management.

At least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed network topology database.
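
The one-engine-instance-per-CPU arrangement can be pictured with a short sketch. The Python sketch below is not taken from the patent; it is a minimal illustration, using only the standard library and a hypothetical discover_subnet() routine, of how a data collection node might start one discovery engine instance per processor and pool the resulting records.

    import multiprocessing as mp


    def discover_subnet(subnet):
        """Placeholder discovery engine instance: poll one subnet and return
        records of the devices found there."""
        # A real engine would walk the subnet via ICMP/SNMP; this stub returns
        # a single empty record so the sketch stays runnable.
        return [{"subnet": subnet, "address": None, "sysName": None}]


    def run_data_collection_node(subnets):
        """Run one discovery engine instance per CPU on this data collection
        node, so its assigned subnets are polled in parallel."""
        engine_count = mp.cpu_count()            # one engine instance per CPU
        with mp.Pool(processes=engine_count) as pool:
            partial_results = pool.map(discover_subnet, subnets)
        # The flattened result is this node's contribution to the distributed
        # network topology database.
        return [record for chunk in partial_results for record in chunk]


    if __name__ == "__main__":
        print(run_data_collection_node(["10.0.0.0/24", "10.0.1.0/24"]))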


In aspects of the invention a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.

Advances in overall scalability are achieved by dividing the workload of network topology discovery across several computing nodes. The discovery job is distributed across all the data collectors such that the only requirement for each data collector is to be able to reach, typically via TCP/IP and SNMP, the nodes and networks for which it is responsible. This reachability requirement already exists for telemetry, in any case, and has therefore already been provided for.

Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.

Brief Description of the Drawings

These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:

Figure 1 is a schematic overview of the high performance distributed discovery system.

Detailed Description of the Presently Preferred Embodiment

As shown in Figure 1, the high performance distributed discovery system, leveraging the functionality of a high speed communications network 14, comprises at least one data collection (DC) node computer 12 and at least one performance monitor (PM) server computer 18 in network 14 contact with the DC node computers 12.


The DC node computers 12 poll and register managed network 14 objects with the resulting distributed record compilation forming a distributed network topology database 16 that is accessed by the PM server computers 18.
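
As an illustration of this poll-and-register step, the hedged sketch below shows one way a DC node computer 12 could hold its slice of the distributed network topology database 16. The class and field names are assumptions made for illustration, not details taken from the patent.

    from dataclasses import dataclass, field


    @dataclass
    class TopologyRecord:
        address: str                           # managed object's IP address
        sys_object_id: str = ""                # SNMP sysObjectID (identifies the vendor)
        interfaces: list = field(default_factory=list)


    class NodeTopologyStore:
        """One DC node's slice of the distributed network topology database."""

        def __init__(self):
            self.records = {}                  # keyed by device address

        def register(self, record: TopologyRecord):
            # Registering by address means a re-polled device updates its
            # existing record instead of creating a duplicate.
            self.records[record.address] = record
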
A plurality of discovery engine instances 20 are located on the DC node computers 12 on a ratio of one engine instance 20 to one central processing unit so as to provide for the parallel processing of the distributed network topology database 16.

The discovery engine 20 is comprised of a base program and a scalable family of vendor-specific discovery subroutines. The base program is designed to query and register any IP device and subsequently obtain detailed device, state and topology information for any IP device that responds to an SNMP query, such as any device that is managed by an SNMP agent. The base program discovers detailed information for any device that supports the standard MIB-II, but not the vendor's private MIB. The discovery of detailed information from a vendor's private MIB is accomplished through what is known as vendor-specific discovery subroutines.
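
By way of illustration only, a base query against the MIB-II system group could look like the sketch below. The snmp_get() helper is a hypothetical stand-in for whatever SNMP transport the system actually uses, which the patent does not specify; the OIDs are the standard MIB-II system-group objects.

    # OIDs for the standard MIB-II system group (RFC 1213).
    MIB2_SYSTEM_OIDS = {
        "sysDescr": "1.3.6.1.2.1.1.1.0",
        "sysObjectID": "1.3.6.1.2.1.1.2.0",
        "sysUpTime": "1.3.6.1.2.1.1.3.0",
        "sysName": "1.3.6.1.2.1.1.5.0",
    }


    def snmp_get(address, oid, community="public"):
        """Hypothetical placeholder for the SNMP transport: issue a GET and
        return the value, or None on timeout. Wire in a real SNMP library here."""
        return None


    def discover_base(address):
        """Query the MIB-II system group for a device that answers SNMP and
        return a base record, or None if the device does not respond."""
        record = {"address": address}
        for name, oid in MIB2_SYSTEM_OIDS.items():
            value = snmp_get(address, oid)
            if value is None and name == "sysDescr":
                return None    # no SNMP answer: register as a plain IP device elsewhere
            record[name] = value
        return record
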
These discovery subroutines are lightweight independent applications that are launched whenever the main discovery program detects a particular vendor's hardware. The discovery subroutines contain vendor-specific algorithms designed to query the vendor's private MIB.

Launch points for each discovery subroutine are included in the main program. So, if during the normal operation of discovery a valid element value is encountered identifying a specific vendor's hardware, the appropriate discovery subroutine is launched.
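
The launch-point idea can be sketched as a simple dispatch table, assuming, purely for illustration, that the identifying element value is the MIB-II sysObjectID. The enterprise numbers shown (9 for Cisco, 2636 for Juniper) are real IANA assignments; the subroutine names are stubs and not taken from the patent.

    # Enterprise arcs under 1.3.6.1.4.1 identify vendors in the IANA
    # enterprise-numbers registry (9 = Cisco, 2636 = Juniper).
    VENDOR_SUBROUTINES = {
        "1.3.6.1.4.1.9.": "cisco_discovery_subroutine",
        "1.3.6.1.4.1.2636.": "juniper_discovery_subroutine",
    }


    def select_vendor_subroutine(record):
        """Return the name of the vendor-specific discovery subroutine to launch
        for this device, or None if its MIB-II base record is all that is needed."""
        sys_object_id = record.get("sysObjectID") or ""
        for prefix, subroutine in VENDOR_SUBROUTINES.items():
            if sys_object_id.startswith(prefix):
                return subroutine    # the real system would spawn this as a lightweight application
        return None
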
The DC node computers 12 are responsible for telemetry to the managed elements and management of the topology database 16. The PM server computers 18 provide the system control and reporting interface.


The proximal topology of the DC node computers 12 in relation to the managed network 14 provides for inherent scalability and a reduction in required bandwidth. As well, the ability to utilize excess memory and disk storage resources on the DC node computers 12 facilitates the discovery of larger networks. The aggregate resources of many DC node computers 12 are far greater than those available on any one PM server computer 18. Advances in overall scalability are achieved by dividing the workload of network topology discovery across several computing nodes. The discovery job is distributed across all the DC node computers 12 such that the only requirement for each DC node computer 12 is to be able to reach, typically via TCP/IP and SNMP, the nodes and networks for which it is responsible. This reachability requirement already exists for telemetry, in any case, and has therefore already been provided for.
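
A minimal sketch of dividing the workload follows. The round-robin assignment and the node names are assumptions for illustration; in a real deployment the assignment would presumably follow the reachability and proximity considerations described above rather than a simple rotation.

    def partition_subnets(subnets, dc_nodes):
        """Assign each managed subnet to one DC node computer, round-robin."""
        assignment = {node: [] for node in dc_nodes}
        for index, subnet in enumerate(subnets):
            node = dc_nodes[index % len(dc_nodes)]
            assignment[node].append(subnet)
        return assignment


    # Example: three managed subnets shared between two DC node computers.
    print(partition_subnets(
        ["10.1.0.0/24", "10.2.0.0/24", "10.3.0.0/24"],
        ["dc-node-a", "dc-node-b"],
    ))
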
All of the discovery and topology database storage takes place behind the client's firewall, requiring only a minimal amount of management traffic on the network to generate reports. PM server computers 18 are utilized to access the distributed network topology database 16 for object management.
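
On the PM server side, accessing the distributed database amounts to merging the per-node slices into one view. The sketch below is illustrative only; fetch_node_records() is a hypothetical placeholder for however the PM server computer 18 actually reaches each DC node computer 12.

    def fetch_node_records(dc_node):
        """Hypothetical placeholder: return the topology records held by one
        DC node computer, keyed by device address."""
        return {}


    def import_topology(dc_nodes):
        """Build the PM server's view of the distributed network topology
        database by merging the slices held on each DC node computer."""
        topology = {}
        for node in dc_nodes:
            for address, record in fetch_node_records(node).items():
                topology[address] = record    # later nodes win if an address appears twice
        return topology
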
In embodiments of the invention unique algorithms selectively discover network devices based on "clues" picked up from existing information such as router tables and customer input.
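
The patent does not disclose these algorithms, so the following sketch only conveys the flavour of clue-driven seeding: next-hop addresses harvested from a router's route table and addresses supplied by the customer are merged into a de-duplicated candidate list for the discovery engines. All names here are illustrative assumptions.

    def seed_candidates(route_table_next_hops, customer_seeds, already_known):
        """Merge clues from a router's route table and from customer input into
        a de-duplicated list of addresses worth handing to the discovery engines."""
        candidates = []
        for address in list(route_table_next_hops) + list(customer_seeds):
            if address not in already_known and address not in candidates:
                candidates.append(address)
        return candidates
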
The vendor specific discovery subroutines extend the base discovery application to provide for inter-operability with a multiplicity of ATM and FR vendors' equipment.

All of the processing intensive data collection takes place as close to the customer's network and network devices as possible, thereby providing for faster discovery as well as distributed storage and processing. As well, the unwanted side-effect of the PM server computer 18 unwittingly becoming a router is removed, thereby enhancing security.


Devices are reliably re-discovered, thereby enabling the tracking of changes to a network's topology as it evolves in real time or near real time.

The ability to limit what is discovered by criteria such as vendor and device type has been added, thereby eliminating the need to specify the address of each device when discovering the network.

The system will not re-discover existing devices unless explicitly requested to do so, which is significant when discovering a large network that is typically discovered in stages.

The system handles timeouts in a more reliable manner. This is important on wide area networks where timeouts are more common during discovery.
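
One plausible shape for more forgiving timeout handling on wide area links is a retry loop with a growing per-attempt timeout, sketched below. The poll() callable, the retry count and the timeout values are assumptions, not figures from the patent.

    import time


    def poll_with_retries(poll, address, attempts=3, base_timeout=2.0):
        """Call poll(address, timeout); on a timeout (None), back off and retry
        before declaring the device unreachable."""
        for attempt in range(attempts):
            timeout = base_timeout * (2 ** attempt)    # 2 s, 4 s, 8 s ...
            result = poll(address, timeout)
            if result is not None:
                return result
            time.sleep(1.0)                            # brief pause before the next attempt
        return None                                    # still unreachable after all attempts
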
Since all the discovery sub-tasks can be performed simultaneously, the overall time to characterize the customer's network is reduced. This enables discovery to deal with larger networks in a faster manner, and eliminates the reachability requirement of the PM server computer 18 with respect to managed elements.

This invention allows Network Service Providers to automatically discover more of the existing devices in their networks, permitting customers to reconcile what is really out in their network with what their administrative records tell them is out there. It has been shown that such verification can potentially lead to great cost savings in operations, as well as vastly improved discovery times, as speed will now be directly correlated with the number of DC node computers 12 deployed.

The system provides for the rapid automatic mapping of a customer's network for the purpose of object management, down to unprecedentedly fine levels of granularity.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status


Title Date
Forecasted Issue Date Unavailable
(22) Filed 2001-04-26
Examination Requested 2001-04-26
(41) Open to Public Inspection 2002-04-03
Dead Application 2007-02-28

Abandonment History

Abandonment Date Reason Reinstatement Date
2006-02-28 R30(2) - Failure to Respond
2006-04-26 FAILURE TO PAY APPLICATION MAINTENANCE FEE

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Request for Examination $400.00 2001-04-26
Application Fee $300.00 2001-04-26
Registration of a document - section 124 $100.00 2002-04-24
Maintenance Fee - Application - New Act 2 2003-04-28 $100.00 2003-04-11
Maintenance Fee - Application - New Act 3 2004-04-26 $100.00 2004-04-22
Maintenance Fee - Application - New Act 4 2005-04-26 $100.00 2005-04-20
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
LINMOR INC.
Past Owners on Record
CHRISTENSEN, LOREN
LINMOR TECHNOLOGIES INC.
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative Drawing 2002-03-06 1 8
Cover Page 2002-04-05 1 40
Claims 2004-10-01 3 103
Description 2004-10-01 6 260
Abstract 2001-04-26 1 22
Description 2001-04-26 6 261
Claims 2001-04-26 3 101
Drawings 2001-04-26 1 15
Prosecution-Amendment 2004-10-01 10 404
Correspondence 2001-05-29 1 24
Assignment 2001-04-26 3 112
Assignment 2002-04-24 3 139
Assignment 2002-05-07 1 21
Correspondence 2002-06-17 1 23
Assignment 2002-09-10 3 132
Fees 2003-04-11 1 28
Assignment 2003-06-27 3 131
Prosecution-Amendment 2004-04-02 3 101
Fees 2004-04-22 1 29
Prosecution-Amendment 2005-04-01 1 27
Fees 2005-04-20 1 29
Prosecution-Amendment 2005-08-30 2 53