Patent 3162740 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 3162740
(54) English Title: TRAFFIC SWITCHING METHODS AND DEVICES BASED ON MULTIPLE ACTIVE DATA CENTERS
(54) French Title: PROCEDE ET DISPOSITIF DE COMMUTATION DE TRAFIC BASES SUR UN CENTRE DE DONNEES MULTI-ACTIF
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 11/20 (2006.01)
(72) Inventors :
  • GE, YAO (China)
  • YANG, TAO (China)
  • GE, WEI (China)
  • WANG, XIN (China)
  • LIN, RENSHAN (China)
(73) Owners :
  • 10353744 CANADA LTD.
(71) Applicants :
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2020-06-19
(87) Open to Public Inspection: 2021-06-03
Examination requested: 2022-09-16
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/CN2020/097003
(87) International Publication Number: CN2020097003
(85) National Entry: 2022-05-24

(30) Application Priority Data:
Application No. Country/Territory Date
201911174942.0 (China) 2019-11-26

Abstracts

English Abstract

A multi-active data center-based traffic switching method and device. The method comprises: an application server executes an operation of obtaining traffic configuration information after receiving a task scheduling instruction (S41), the traffic configuration information being generated by a multi-active switching platform according to a preset rule when the multi-active switching platform determines that a data transmission fault occurs in a data center according to data transmission state information of each data center, the multi-active data center being provided with at least two data centers, and the traffic configuration information being used for indicating traffic distribution corresponding to each data center; the application server analyzes the traffic configuration information so as to obtain traffic distribution corresponding to the data center (S42); the application server determines, according to the traffic distribution and type information of a current task to be processed, whether the application server has the processing permission of the current task to be processed (S43); and if yes, the application server loads the task to perform task processing (S44). According to the method, automatic traffic switching when a fault occurs to a multi-active data center is realized.


French Abstract

L'invention concerne un procédé et un dispositif de commutation de trafic basés sur un centres de données multi-actif. Le procédé comprend les étapes suivantes : un serveur d'application exécute une opération consistant à obtenir des informations de configuration de trafic après avoir reçu une instruction de planification de tâche (S41), les informations de configuration de trafic étant générées par une plateforme de commutation multi-active selon une règle prédéfinie lorsque la plateforme de commutation multi-active détermine qu'un défaut de transmission de données s'est produit dans un centre de données en fonction des informations d'état de transmission de données de chaque centre de données, le centre de données multi-actif étant pourvu d'au moins deux centres de données, et les informations de configuration de trafic étant utilisées pour indiquer une répartition du trafic correspondant à chaque centre de données; le serveur d'application analyse les informations de configuration de trafic de façon à obtenir une répartition du trafic correspondant au centre de données (S42); le serveur d'application détermine, en fonction de la répartition du trafic et des informations de type d'une tâche actuelle à traiter, si le serveur d'application dispose de l'autorisation de traitement pour la tâche actuelle à traiter (S43); et le cas échéant, le serveur d'application charge la tâche pour effectuer le traitement de la tâche (S44). Selon le procédé, une commutation de trafic automatique est effectuée lorsqu'un défaut se produit dans un centre de données multi-actif.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
What is claimed is:
1. A traffic switching method based on multiple active data centers,
characterized in that the
method comprises:
performing an operation to obtain traffic configuration information after an
application server
has received a task scheduling instruction, wherein the traffic configuration
information is
generated according to a preset rule by a multi-active switching platform when
it judges
according to data transmission status information of various data centers that
traffic switching is
required by a data center, the multiple active data centers include at least
two data centers, and
the traffic configuration information is employed to indicate traffic
distribution to each data
center;
parsing the traffic configuration information by the application server, and
obtaining traffic
distribution to which the data center in which the application server resides
corresponds;
judging by the application server according to the traffic distribution and
type information of a
task to be currently processed whether the application server has a permission
to process the task
to be currently processed; and
if yes, loading the task by the application server to process the task.
2. The method according to Claim 1, characterized in that the application
server obtains the traffic
configuration information through the following steps:
reading cache by the application server and judging whether the traffic
configuration information
is present in the cache;
if not, reading the traffic configuration information from the multi-active
switching platform by
the application server; and
reading, when the application server monitors that change occurs to the
traffic configuration
information of the multi-active switching platform, the changed traffic
configuration information
and synchronizing the changed traffic configuration information into the
cache.
3. The method according to Claim 1, characterized in that the step of judging
by the application
server according to the traffic distribution and type information of a task to
be currently processed
whether the application server has a permission to process the task to be
currently processed
includes:
judging, if the application server judges that the task to be currently
processed is an exclusive
task, whether the traffic distribution to which the data center in which the
application server
resides corresponds is empty;
if not, the application server possesses the permission to process the task to
be currently processed.
4. The method according to Claim 1, characterized in that the multiple active
data centers have a
master data center, that the traffic configuration information further
includes identification of the
master data center; and that the step of judging by the application server
according to the traffic
distribution and type information of a task to be currently processed whether
the application
server has a permission to process the task to be currently processed
includes:
judging, if the application server judges that the task to be currently
processed is a competitive
task, whether a data center identification to which the application server
corresponds is identical
with the master data center identification;
if yes, the application server possesses the permission to process the task to
be currently
processed.
5. The method according to Claim 1, characterized in that the traffic
distribution includes a set
of sub-library numbers with read-write permission to which each data center
corresponds, and
that the step of loading the task by the application server to process the
task includes:
searching by the application server for the task to be currently processed
from a task queue of
the cache, if the task is enquired out, judging whether the application server
has a permission to
process the task to be currently processed according to a sub-library number
to which the task to
be currently processed corresponds and according to a sub-library number with
read-write
permission of the data center in which the application server resides;
if yes, determining by the application server a status of the sub-library
number to which the task
to be currently processed corresponds as being processed and storing the same
in task
configuration information;
changing, if the task is processed to completion, by the application server
the status of the sub-
library number to which the task to be currently processed corresponds to "to be processed" and
storing the same in the task configuration information.
6. A traffic switching method based on multiple active data centers,
characterized in that the
method comprises:
obtaining data transmission status information of various data centers by a
multi-active switching
platform, wherein the multiple active data centers include at least two data
centers; and
judging by the multi-active switching platform according to the status
information and a preset
condition, and generating traffic configuration information according to a
preset rule when it is
judged that traffic switching is required, so that an application server
obtains the traffic
configuration information and loads a task in conjunction with task
configuration information as
obtained to process the task after having received a task scheduling
instruction, wherein the
traffic configuration information is employed to indicate traffic distribution
to which each data
center corresponds.
7. The method according to Claim 6, characterized in that the step of judging
by the multi-active
switching platform according to the status information and a preset condition,
and generating
traffic configuration information according to a preset rule when it is judged
that traffic switching
is required includes:
performing traffic distribution, when the multi-active switching platform
judges according to the
status information that data transmission failure occurs to any data center,
according to current
traffic of data center(s) to which no failure occurs, a traffic threshold, and
a rule to distribute
traffic to which a competitive task corresponds to the same and single data
center, to generate
traffic configuration information that contains traffic distributions to which
the various data
centers correspond and an identification of a master data center that bears
the competitive task.
8. The method according to Claim 6, characterized in further comprising:
synchronizing the traffic configuration information into cache by the multi-
active switching
platform, so that the application server obtains the traffic configuration
information from the
cache; and
sending the latest traffic configuration information to the application server
when the multi-active
switching platform receives a traffic configuration information obtaining
request from the
application server.
9. A traffic switching device based on multiple active data centers,
characterized in that the device
comprises:
a traffic configuration information obtaining unit, for performing an
operation to obtain traffic
configuration information after having received a task scheduling instruction,
wherein the traffic
configuration information is generated according to a preset rule by a multi-
active switching
platform when it judges according to data transmission status information of
various data
centers that traffic switching is required by a data center, the multiple
active data centers include
at least two data centers, and the traffic configuration information is
employed to indicate traffic
distribution to which each data center corresponds;
a parsing unit, for parsing the traffic configuration information, and
obtaining traffic distribution
of the data center in which the application server resides;
a permission judging unit, for judging according to the traffic distribution
and type information
of a task to be currently processed whether there is a permission to process
the task to be currently
processed; and
a task processing unit, for obtaining task configuration information when it
is judged that there
is a processing permission, and loading a task in conjunction with the traffic
distribution to
process the task.
10. A traffic switching device based on multiple active data centers,
characterized in that the
device comprises:
a data transmission status information obtaining unit, for obtaining data
transmission status
information of various data centers, wherein the multiple active data centers
include at least two
data centers; and
a traffic configuration information unit, for judging according to the status
information and a
preset condition, and generating traffic configuration information according
to a preset rule when
it is judged that traffic switching is required, so that an application server
obtains the traffic
configuration information and loads a task in conjunction with task
configuration information as
obtained to process the task after having received a task scheduling
instruction, wherein the
traffic configuration information is employed to indicate traffic distribution
to which each data
center corresponds.

Description

Note: Descriptions are shown in the official language in which they were submitted.


TRAFFIC SWITCHING METHODS AND DEVICES BASED ON MULTIPLE ACTIVE
DATA CENTERS
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present application relates to the field of data processing, and
more particularly to
traffic switching methods and devices based on multiple active data centers.
Description of Related Art
[0002] A disaster recovery system is a system provided for a computer information system to deal with various data disasters. It ensures the safety of user data when the computer system suffers such irresistible natural disasters as fire, flood, earthquake and war, or such man-made disasters as computer crime, computer viruses, power outages, network/communication failures, hardware/software errors and human operational errors, any of which can cause problems such as interruption of data transmission and loss of data.
[0003] At present, disaster recovery mostly employs the active standby mode,
that is, a disaster
recovery backup center is established far from the location where the computer
system is
run. The disaster recovery backup center does not bear any online business
traffic, as it
only periodically backs up data of the computer system and disposes the backed-
up data
in the disaster recovery backup center. When a disaster occurs to cause system
paralysis,
running of the system is then restored at the disaster recovery backup center
through the
backed-up data.
[0004] Because the disaster recovery backup center does not bear the actual online business traffic, should a disaster occur it cannot be ascertained that the backup center is usable; moreover, since the backup system must be started manually, higher demands are placed on the system maintenance personnel, and a manual start is far from quick in responding to the disaster. During the resulting delay it is further impossible to record the various data generated while the machines are down.
[0005] To address the deficiencies of the active standby mode, the multi-
active strategy comes
to the fore as a novel technique to address problems concerning disaster
recovery. The
so-called multi-active means in fact the setup of identical databases at
plural sites
(computer rooms distanced relatively far from one another) to bear the
business traffic at
the same time, and can decide how to share the traffic among the sites
according to
business attributes such as user IDs and regions, etc., for instance, data
processing
requests of users from ID1 to ID49 are assigned to the first site for
processing, and data
processing requests of users from ID50 to ID99 are assigned to the second site
for
processing. When failure occurs to the first site, these business processing
requests can
be switched to the second site relatively quickly (at the minute level) and
smoothly, under
ideal circumstances, damage to the business is extremely small. Relative to
the active
standby mode, each site in the multi-active strategy possesses in real time
the capability
to bear the business traffic, and stability thereof is reliable.
[0006] Of course, the above traffic switching does not necessarily occur only
during failure of
data centers, as traffic switching is sometimes carried out also based on
other
circumstances, for example, the quantity of tasks is greatly increased at a
certain data
center during a special time period, it is then required to assign out a
portion of the tasks.
[0007] At present, when it is required to switch traffic under the multi-
active strategy, for
example when failure occurs to one of the sites, once the failure message is
notified to
the maintenance personnel, the maintenance personnel configures traffic
switching
information, starts the traffic switching process, and switches the traffic
between sites.
[0008] However, manual configuration takes time. Although this is quicker than the active standby mode, the delay is still long enough, under many scenarios, for a system such as an e-commerce platform to produce a great volume of data, and these data cannot be stored and restored.
SUMMARY OF THE INVENTION
[0009] The present application provides traffic switching methods and devices
based on multiple
active data centers, so as to solve prior-art problems in which delay still
exists in traffic
switching of multiple active data centers, and data is lost during the time of
delay.
[0010] The present application provides the following solutions.
[0011] In the first aspect, there is provided a traffic switching method based
on multiple active
data centers, and the method comprises:
[0012] performing an operation to obtain traffic configuration information
after an application
server has received a task scheduling instruction, wherein the traffic
configuration
information is generated according to a preset rule when a multi-active
switching
platform judges according to data transmission status information of various
data centers
that traffic switching is required by a data center, the multiple active data
centers include
at least two data centers, and the traffic configuration information is
employed to indicate
traffic distribution to each data center;
[0013] parsing the traffic configuration information by the application
server, and obtaining
traffic distribution to which the data center in which the application server
resides
corresponds;
[0014] judging by the application server according to the traffic distribution
and type information
of a task to be currently processed whether the application server has a
permission to
process the task to be currently processed; and
[0015] if yes, loading the task by the application server to process the task.
[0016] Preferably, the application server obtains the traffic configuration
information through
the following steps:
[0017] reading cache by the application server and judging whether the traffic
configuration
information is present in the cache;
[0018] if not, reading the traffic configuration information from the multi-
active switching
platform by the application server.
[0019] Preferably, the method further comprises:
[0020] reading, when the application server monitors that change occurs to the
traffic
configuration information of the multi-active switching platform, the changed
traffic
configuration information and synchronizing the changed traffic configuration
information into the cache.
[0021] Preferably, the step of judging by the application server according to
the traffic
distribution and type information of a task to be currently processed whether
the
application server has a permission to process the task to be currently
processed includes:
[0022] judging, if the application server judges that the task to be currently
processed is an
exclusive task, whether the traffic distribution to which the data center in
which the
application server resides corresponds is empty;
[0023] if not, the application server possesses the permission to process the
task to be currently
processed.
[0024] Preferably, the traffic distribution includes a set of sub-library
numbers with read-write
permission to which each data center corresponds, and the step of judging
whether the
traffic distribution to which the data center in which the application server
resides
corresponds is empty includes:
[0025] judging whether the set of sub-library numbers with read-write
permission to which the
data center in which the application server resides corresponds is empty.
[0026] Preferably, the multiple active data centers have a master data center,
the traffic
configuration information further includes identification of the master data
center; and
the step of judging by the application server according to the traffic
distribution and type
information of a task to be currently processed whether the application server
has a
permission to process the task to be currently processed includes:
[0027] judging, if the application server judges that the task to be currently
processed is a
competitive task, whether a data center identification to which the
application server
corresponds is identical with the master data center identification;
[0028] if yes, the application server possesses the permission to process the
task to be currently
processed.
[0029] Preferably, the traffic distribution includes a set of sub-library
numbers with read-write
permission to which each data center corresponds, and the step of loading the
task by the
application server to process the task includes:
[0030] searching by the application server for the task to be currently
processed from a task
queue of the cache, if the task is enquired out, judging whether the
application server has
a permission to process the task to be currently processed according to a sub-
library
number to which the task to be currently processed corresponds and according
to a sub-
library number with read-write permission of the data center in which the
application
server resides;
[0031] if yes, determining by the application server a status of the sub-
library number to which
the task to be currently processed corresponds as being processed and storing
the same in
task configuration information;
[0032] changing, if the task is processed to completion, by the application
server the status of
the sub-library number to which the task to be currently processed corresponds
to "to be processed" and storing the same in the task configuration information.
[0033] In the second aspect, there is provided a traffic switching method
based on multiple active
data centers, and the method comprises:
[0034] obtaining data transmission status information of various data centers
by a multi-active
switching platform, wherein the multiple active data centers include at least
two data
centers; and
[0035] judging by the multi-active switching platform according to the status
information and a
preset condition, and generating traffic configuration information according
to a preset
rule when it is judged that traffic switching is required, so that an
application server
obtains the traffic configuration information and loads a task in conjunction
with task
configuration information as obtained to process the task after having
received a task
scheduling instruction, wherein the traffic configuration information is
employed to
indicate traffic distribution to which each data center corresponds.
[0036] Preferably, the step of judging by the multi-active switching platform
according to the
status information and a preset condition, and generating traffic
configuration information
according to a preset rule when it is judged that traffic switching is
required includes:
[0037] performing traffic distribution, when the multi-active switching
platform judges
according to the status information that data transmission failure occurs to
any data center,
according to current traffic of data center(s) to which no failure occurs, a
traffic threshold,
and a rule to distribute traffic to which a competitive task corresponds to
the same and
single data center, to generate traffic configuration information that
contains traffic
distributions to which the various data centers correspond and an
identification of a
master data center that bears the competitive task.
[0038] Preferably, the method further comprises:
[0039] synchronizing the traffic configuration information into cache by the
multi-active
switching platform, so that the application server obtains the traffic
configuration
information from the cache; and
[0040] sending the latest traffic configuration information to the application
server when the
multi-active switching platform receives a traffic configuration information
obtaining
request from the application server.
[0041] In the third aspect, there is provided a traffic switching device based
on multiple active
data centers, and the device comprises:
[0042] a traffic configuration information obtaining unit, for performing an
operation to obtain
traffic configuration information after having received a task scheduling
instruction,
wherein the traffic configuration information is generated according to a
preset rule when
a multi-active switching platform judges according to data transmission status
information of various data centers that traffic switching is required by a
data center, the
multiple active data centers include at least two data centers, and the
traffic configuration
information is employed to indicate traffic distribution to which each data
center
corresponds;
[0043] a parsing unit, for parsing the traffic configuration information, and
obtaining traffic
distribution to which the data center in which the application server resides
corresponds;
[0044] a permission judging unit, for judging according to the traffic
distribution and type
information of a task to be currently processed whether there is a permission
to process
the task to be currently processed; and
[0045] a task processing unit, for obtaining task configuration information
when it is judged that
there is a processing permission, and loading a task in conjunction with the
traffic
distribution to process the task.
[0046] In the fourth aspect, there is provided a traffic switching device
based on multiple active
data centers, and the device comprises:
[0047] a data transmission status information obtaining unit, for obtaining
data transmission
status information of various data centers, wherein the multiple active data
centers include
at least two data centers; and
[0048] a traffic configuration information unit, for judging according to the
status information
and a preset condition, and generating traffic configuration information
according to a
preset rule when it is judged that traffic switching is required, so that an
application server
obtains the traffic configuration information and loads a task in conjunction
with task
configuration information as obtained to process the task after having
received a task
scheduling instruction, wherein the traffic configuration information is
employed to
indicate traffic distribution to which each data center corresponds.
[0049] According to the specific embodiments provided by the present
application, the present
application has disclosed the following technical effects.
[0050] The technical solutions of the present application make it possible to
generate and obtain
multi-active traffic configuration information automatically in real time
under the
scenario of multiple active data centers, and to proactively compensate to
obtain multi-
active traffic configuration information under the scenario of missing
configuration.
[0051] Scheduling of tasks in the present application is capable of
recognizing and parsing multi-
active traffic configuration information, and supporting automatic switching
of exclusive
tasks and competitive tasks between computer rooms for the execution of
business
operation.
[0052] Task configuration and anti-concurrent operation in the present
application are based on
distributed cache, whereby performance consumption of the database is lowered.
[0053] Of course, it suffices for the product of the present application to
achieve one of these
effects.
BRIEF DESCRIPTION OF THE DRAWINGS
[0054] In order to describe the embodiments of the present application or the
technical solutions
in prior-art technology more clearly, accompanying drawings required to be
used in the
embodiments are briefly introduced below. Apparently, the drawings introduced
below
are merely partial embodiments of the present application, and it is possible
for persons
ordinarily skilled in the art to acquire other drawings based on these
drawings without
spending creative effort in the process.
[0055] Fig. 1 is a view illustrating a system scenario provided by the present
application;
[0056] Fig. 2 is a flowchart illustrating processing of an exclusive task
provided by the present
application;
[0057] Fig. 3 is a flowchart illustrating processing of a competitive task
provided by the present
application;
[0058] Fig. 4 is a flowchart illustrating the method of Embodiment 1 of the
present application;
and
[0059] Fig. 5 is a flowchart illustrating the method of Embodiment 2 of the
present application.
DETAILED DESCRIPTION OF THE INVENTION
[0060] The technical solutions in the embodiments of the present application
will be clearly and
comprehensively described below with reference to the accompanying drawings in
the
embodiments of the present application. Apparently, the embodiments to be
described are
merely partial, rather than the entire embodiments of the present application.
All other
embodiments obtainable by persons ordinarily skilled in the art on the basis
of the
embodiments in the present application shall all be covered by the protection
scope of the
present application.
[0061] To make the present application easily comprehensible, terms appearing
in the present
application are firstly explained.
[0062] Multi-active switching platform: an administration platform developed for configuring, administering, and executing traffic switching among the multiple active data centers. By maintaining information on the various application systems and components within the platform, and through configured switching steps and multi-scenario switching tasks such as master data center level switching and non-master data center level switching, it executes and administers the switching of single data center traffic and of multiple active data center traffic, undertakes the traffic switching task after a failure of the multiple active data centers, and ensures that switching is timely, comprehensive, visualizable and controllable.
[0063] Cell: the minimal split unit of data, and of a data center, obtained by splitting according to designated data dimensions. On the logical level, a cell can complete the entire business on the data split within it. When a user request is assigned to a cell according to the dimension by which the data is split, the subsequent business of that user is completely enclosed within the cell. One cell can be a sub-library.
[0064] Data center LDC: it is a set unit consisting of plural Cells with
enclosable businesses. To
realize disaster recovery, various data centers of the multiple active data
centers are also
referred to as computer rooms whose geographical locations are usually
distanced
relatively far from one another.
[0065] Exclusive task: business data processed thereby exists only in a
certain Cell, without
intercrossing and being shared with other Cells.
[0066] Competitive task: business data processed thereby is competed among
various Cells, in
order to prevent the business data from being respectively processed by
different data
centers to render the data inconsistent, business data of competitive tasks
should be
uniformly controlled by the same and single data center. The data center
capable of
processing competitive tasks is referred to as master data center in the
present application.
[0067] Traffic configuration information: it is information for indicating
traffic distribution of
the various data centers, including identifications of the various data
centers and their
corresponding Cell sets, such as a set of sub-library numbers to which each
data center
corresponds and which represents the set of sub-library data manipulable by
each data
center.
[0068] For example, cache key is LdcInfo, and values are the values of the
various data centers
LDC and the responsible cell set. Exemplarily,
[{"effectiveLdc": "NJYH", "cellList": "0,2,4,6,8,10,12,14"},
{"effectiveLdc": "NJGXYG", "cellList": "1,3,5,7,9,11,13,15"}], if a system
business
library has 16 sub-libraries, the total traffic can be partitioned into 16
Cells, if the entire
traffic is partitioned to the master data center, values configured for the
cellList of the
master data center are 0-15, and the sub data centers are empty; in the case
of two data
centers, traffic is partitioned by 1/2, then values configured for the
cellList of the master
data center are a set of even numbers in 0-15, and values configured for the
cellList of
the sub data center are a set of odd numbers in 0-15. So on and so forth, the
numerical
values configured in the cellList represent sub-library numbers with write
permission.
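As an illustration of the configuration format quoted above, the following is a minimal sketch of how an LdcInfo entry could be parsed into per-data-center Cell sets. The key name, field names and example values come from this paragraph; the helper name parse_ldc_info and the use of Python are assumptions made for illustration only.

```python
import json

def parse_ldc_info(raw):
    """Turn a cached LdcInfo JSON string into {data-center id: set of sub-library numbers}."""
    distribution = {}
    for entry in json.loads(raw):
        cells = entry.get("cellList", "")
        distribution[entry["effectiveLdc"]] = (
            {int(n) for n in cells.split(",")} if cells else set()
        )
    return distribution

# Example with the two-data-center configuration quoted above (16 sub-libraries in total):
raw = ('[{"effectiveLdc": "NJYH", "cellList": "0,2,4,6,8,10,12,14"}, '
       '{"effectiveLdc": "NJGXYG", "cellList": "1,3,5,7,9,11,13,15"}]')
print(parse_ldc_info(raw))  # {'NJYH': {0, 2, 4, ...}, 'NJGXYG': {1, 3, 5, ...}}
```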
[0069] The traffic configuration information further includes data center
configuration where the
master data center resides: the cache key is MasterLdc (master data center),
the value is
an English acronym of the master data center. For example, if the master data
center is a
Yuhua computer room of Nanjing, then the value is NJYH, and such
configuration is used
for the competitive task to judge whether the current server belongs to the
master data
center.
[0070] Environment variable: the variable named ldc is configured in the
server environment
variable, and the value is an English acronym of the data center in which the
current
server is deployed. For example, if it is deployed in the Yuhua computer room
of Nanjing,
NJYH is configured.
[0071] As shown in Fig. 1, the system of the present application includes
multiple active data
centers (Fig. 1 shows three data centers, namely machine rooms), each data
center
includes a multi-active switching platform, a task scheduling platform, an
application
cluster, and a redis distributed cache cluster. Corresponding to whether to
process
competitive tasks, the data centers can be divided into master data center
(master machine
room) and sub data center(s) (sub machine room(s)).
[0072] The multi-active switching platform is used to generate traffic
configuration information,
and it is possible in the present application for the multi-active switching
platform of the
master data center to generate traffic configuration information and
thereafter
synchronize the same to the multi-active switching platforms of the sub data
centers. On
monitoring that there is new traffic configuration information, the
application server of
the application cluster reads the traffic configuration information and
synchronizes the
traffic configuration information to the redis distributed cache cluster. The
task
scheduling platform is used to schedule tasks by dispatching task scheduling
instructions
to various application servers of the application cluster to process the
tasks. The
application server reads the traffic configuration information from the redis
distributed
cache cluster according to a task scheduling instruction, if reading fails,
the application
server reads the traffic configuration information directly from the multi-
active switching
platform and stores the same in the redis distributed cache cluster, to
facilitate quick
reading next time. The application server will subsequently read task
configuration
information from the redis distributed cache cluster, and execute relevant
task processing
according to the traffic configuration information and the task configuration
information.
This step will be described in detail later.
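The read path just described (try the redis distributed cache first, fall back to the multi-active switching platform on a miss, then write the result back for the next read) can be sketched roughly as below. The key name LdcInfo comes from the terminology section; the redis-py client and the fetch_from_switching_platform placeholder are assumptions standing in for the platform's actual interface.

```python
import redis

cache = redis.Redis(decode_responses=True)  # redis distributed cache cluster (assumed local)

def fetch_from_switching_platform():
    """Placeholder for a direct call to the multi-active switching platform."""
    raise NotImplementedError

def get_traffic_config():
    """Read-through cache: prefer Redis, fall back to the switching platform on a miss."""
    raw = cache.get("LdcInfo")
    if raw is None:
        raw = fetch_from_switching_platform()   # read directly from the platform
        cache.set("LdcInfo", raw)               # store it to facilitate quick reading next time
    return raw
```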
[0073] It is mentioned above that the multi-active switching platform
automatically generates
the traffic configuration information, and this is the first problem to be
solved by the
present application. The multi-active switching platform is utilized in the
present
application to monitor data transmission statuses of the various data centers,
such as the
data transmission rate, etc., when it is judged according to the monitoring
that failure
occurs to data transmission of any data center or any other event occurs to
trigger traffic
switching, the traffic configuration information is automatically generated
according to a
preset rule.
[0074] Taking for example the traffic configuration information being directed
to the set of sub-
libraries with readable-writable operations of the various data centers, the
preset rule can
be to distribute sub-libraries of a failed data center to other data centers
as evenly as
possible under the circumstance the original traffic distributions to the
other data centers
remain unchanged, and can also be to distribute sub-libraries of the failed
data center to
the data center with the least traffic at present. The preset rule can as well
be to redistribute
the entire traffic to the remaining data centers.
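A minimal sketch of one such preset rule, handing the failed data center's sub-libraries to the surviving data centers as evenly as possible while leaving their original Cells untouched; the function name and the data shapes are illustrative assumptions, not the patent's prescribed implementation.

```python
def redistribute_evenly(distribution, failed_dc):
    """distribution maps data-center id -> set of sub-library numbers. Returns a new
    distribution in which the failed center's Cells are spread evenly over healthy ones."""
    new_dist = {dc: set(cells) for dc, cells in distribution.items() if dc != failed_dc}
    survivors = sorted(new_dist)                           # deterministic ordering
    orphaned = sorted(distribution.get(failed_dc, set()))
    for i, cell in enumerate(orphaned):
        new_dist[survivors[i % len(survivors)]].add(cell)  # round-robin assignment
    new_dist[failed_dc] = set()                            # the failed center now carries no traffic
    return new_dist

# e.g. redistribute_evenly({"NJYH": {0, 2, 4}, "NJGXYG": {1, 3, 5}, "BJHD": {6, 7}}, "BJHD")
```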
[0075] Moreover, it is further possible to perform operation in conjunction
with the current
statuses of remaining data centers that are not failed, for instance, business
volumes of
certain events of certain data centers are abruptly increased, then the
traffic should not be
distributed to these data centers as far as possible.
[0076] In addition, based on the presence of the aforementioned competitive
tasks, if the data
center to which failure occurs is the data center that is responsible for the
competitive
tasks, namely the master data center, it is further needed to designate a new
master data
center for the competitive tasks in the traffic configuration information.
[0077] In short, the traffic configuring rule can be set in advance at the
traffic switching platform,
so that the platform automatically generates the traffic configuration
information
according to this rule and data transmission statuses of the various data
centers.
[0078] Thereafter is involved how the application servers of the various data
centers recognize
the traffic configuration information and execute task processing based on the
recognition
result.
[0079] Whether the traffic distribution of the data center to which the application server belongs is empty can first be used to preliminarily judge whether the application server can execute the current task.
[0080] Each application server is equipped with information as to which data
center it belongs
at the start of its setup, such information is configured in the environment
variable value
of the application server, for instance, the environment variable value of an
application
server is "Beijing Haidian", then the application server belongs to the data
center named
"Beijing Haidian".
[0081] The application server parses the traffic configuration information and
can thus obtain
traffic distributions to which the various data centers correspond, such as
sets of sub-
library numbers with read-write permission to which the various data centers
correspond.
[0082] For instance, a database has 16 sub-libraries altogether, numbered
respectively as 1 to 16.
The traffic configuration information is parsed to determine that the master
data center
corresponds to sub-libraries 1-7, that the first sub data center corresponds
to sub-libraries
8-12, and that the second sub data center corresponds to sub-libraries 13-16.
If the
application server belongs to the first sub data center, the application
server has read-
write permission with respect to sub-libraries 8-12, that is, it can bear
traffic tasks relevant
to sub-libraries 8-12.
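Following this 16-sub-library example, the preliminary permission check for an exclusive task amounts to asking whether the task's sub-library number lies inside the cellList of the server's own data center. A minimal sketch, assuming the parsed distribution shown earlier and the ldc environment variable described in paragraph [0070]:

```python
import os

def can_process_exclusive(task_sublibrary, distribution):
    """True if the data center this server belongs to (environment variable 'ldc')
    holds read-write permission for the task's sub-library."""
    local_dc = os.environ.get("ldc", "")        # e.g. "NJYH"
    local_cells = distribution.get(local_dc, set())
    if not local_cells:                         # empty traffic distribution:
        return False                            # no permission for any sub-library, exit directly
    return task_sublibrary in local_cells

# With distribution = {"master": {1, ..., 7}, "sub1": {8, ..., 12}, "sub2": {13, ..., 16}},
# a server whose ldc is "sub1" may process tasks on sub-libraries 8-12 only.
```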
[0083] If the traffic configuration information is parsed to determine that
the traffic distribution
of the data center to which the application server belongs is empty, namely
not
corresponding to any sub-library, this indicates that the application server
does not have
read-write permission with respect to any sub-library, cannot execute any
task, and
directly exits the process at this time.
[0084] It is previously mentioned that tasks are divided into exclusive tasks and competitive tasks. As regards exclusive tasks, as shown in Fig. 2, it is judged, by parsing the traffic configuration of the current data center in the multi-active traffic configuration information, whether the current task is operable: it is operable if the traffic configuration, such as the sub-library number, has a value, and it is not operable if the traffic configuration, such as the sub-library number, is an empty set.
[0085] The competitive tasks are processed by the special master data center.
Therefore, when
the task to be currently processed is a competitive task, it is further
required to judge
whether the application server is a server of the master data center. As shown
in Fig. 3,
corresponding to such requirement, an identification of the master data center
is further
provided in the traffic configuration information. The application server
obtains the
identification of the data center based on the environment variable, and
compares the
same with the identification of the master data center, if the two are
consistent, this
indicates that the application server is a server of the master data center
and usable for
executing the competitive task, if the two are inconsistent, this indicates
that the
application server is not a server of the master data center and cannot be
used for
executing the competitive task. When the current task is a competitive task,
the server
can directly exit the process.
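For a competitive task this check reduces to comparing the server's own data center identification (the ldc environment variable) with the MasterLdc entry of the traffic configuration. A minimal sketch; the key name MasterLdc and the variable ldc come from the terminology section, while the redis-py client is an assumption.

```python
import os
import redis

cache = redis.Redis(decode_responses=True)

def can_process_competitive():
    """True only on servers of the master data center, which alone bears competitive tasks."""
    master_dc = cache.get("MasterLdc")          # identification of the master data center
    local_dc = os.environ.get("ldc", "")        # data center of the current server
    return master_dc is not None and local_dc == master_dc
```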
[0086] The application server can judge in advance whether the current task
can be executed
through the task type, the traffic distribution to the data center, and the
information of the
identification of the master data center, and directly exits if it cannot
execute the current
task.
[0087] In the case it is preliminarily judged that there is permission to
execute the current task,
the application server further obtains task configuration, and loads to
execute the specific
task.
[0088] Specifically, the application server takes JOB QUEUE: task name as KEY
to obtain task
from the queue head of a Redis cached task queue, if no task is obtained,
JOB TASKPENDING: task name is taken as KEY to obtain task configuration
information of sub-libraries with write permission from Redis totally
scheduled task
cache and load one-by-one to the queue tail of the Redis cached task queue. If
the task
configuration information is not enquired out of the Redis total cache
according to the
sub-library numbers and the task name, the database is read, and the task is
enquired from
the public library and loaded to the Redis totally scheduled task cache, and
further
synchronized to the Redis cached task queue. The task configuration
information contains
the sub-library number to which the task corresponds, when the task
configuration
information of sub-libraries with write permission is obtained from the Redis
totally
scheduled task cache, the permissioned sub-library number of the application
server can
be combined to get the task that corresponds to the intersection of the
corresponding sub-
library number of the task for loading.
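The loading step just described might be sketched as follows. The key patterns JOB QUEUE: task name and JOB TASKPENDING: task name come from this paragraph (written here with underscores); the hash layout of the pending cache, the JSON serialization and the omission of the public-library fallback are assumptions made for brevity.

```python
import json
import redis

cache = redis.Redis(decode_responses=True)

def next_task(task_name, writable_cells):
    """Pop the next task from the head of the Redis task queue; if the queue is empty,
    reload pending task configurations whose sub-library this server may write."""
    queue_key = f"JOB_QUEUE:{task_name}"           # "JOB QUEUE: task name" in the text
    pending_key = f"JOB_TASKPENDING:{task_name}"   # totally scheduled task cache
    task = cache.lpop(queue_key)
    if task is not None:
        return json.loads(task)
    # Queue empty: push pending configurations for writable sub-libraries onto the queue tail
    # one by one (the database fallback to the public library is omitted in this sketch).
    pending = cache.hgetall(pending_key) or {}     # assumed: hash of {sub-library number: task JSON}
    for cell, raw in pending.items():
        if int(cell) in writable_cells:
            cache.rpush(queue_key, raw)
    task = cache.lpop(queue_key)
    return json.loads(task) if task is not None else None
```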
[0089] If the task is obtained from the queue head of the Redis queue in
accordance with the
JOB QUEUE: task name serving as KEY, it is then judged whether the task as
obtained
is currently within an operable range to avoid traffic switching at the
computer room after
the task has been loaded to the queue. If the task is within the operable
range, the task
status in the task configuration cache is updated from "to be processed" to "being processed",
and business data in the sub-library is processed when update succeeds, and
the task status
is updated to "to be processed" on completion of the processing. If update fails
or the task
status is being processed already or the currently obtained task is not in the
operable range,
task is continually obtained from the queue head of the Redis queue, until
messages in
the Redis queue have all been consumed. The exclusive task is judged whether
being in
the operable range according to the sub-library number of the task and the
CellList
configuration, and the competitive task is judged whether being in the
operable range
according to the master computer room LDC and the LDC in the environment
variable
of the current server.
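The status transition in this paragraph ("to be processed" to "being processed" and back on completion) can be illustrated with a small compare-and-set on the task configuration cache. This sketch uses a Redis WATCH/MULTI transaction purely for illustration; the status key layout is assumed, and the patent's own concurrency control is the shared-lock scheme described in the following paragraphs.

```python
import redis

cache = redis.Redis(decode_responses=True)

def try_claim(status_key):
    """Flip a task's status from 'to be processed' to 'being processed'; True only if
    this caller performed the flip (the check runs inside a WATCH/MULTI transaction)."""
    with cache.pipeline() as pipe:
        while True:
            try:
                pipe.watch(status_key)
                if pipe.get(status_key) != "to be processed":
                    pipe.unwatch()
                    return False                 # already being processed elsewhere
                pipe.multi()
                pipe.set(status_key, "being processed")
                pipe.execute()
                return True
            except redis.WatchError:
                continue                         # the status changed under us, re-check

def release(status_key):
    """On completion of the business data processing, set the status back."""
    cache.set(status_key, "to be processed")
```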
[0090] The problem concerning concurrent operation can also occur in the above
process, in
view thereof, the present application provides the following method to avoid
concurrent
operation on the basis of Redis cache, and the method specifically comprises
the
following:
[0091] the application server takes JOB QUEUE: task name as KEY to obtain task
configuration
from a queue head of a Redis task queue, and judges whether the task
configuration is
obtained.
[0092] If the task configuration is obtained:
[0093] It is judged whether the task configuration is currently operable, to
avoid problematic
switching at the computer room after it has been loaded to the task queue. The
current
task is judged whether being operable by parsing CellList configuration of the
current
computer room LDC in the multi-active configuration in the case of an
exclusive task,
and the current task is judged whether being operable by parsing LDC
configuration of
the master computer room in the multi-active configuration and LDC
configuration in the
environment variable of the current server in the case of a competitive task.
[0094] 1. If not operable, processing of the current task is terminated, and
the next piece of task
configuration is obtained from the task queue to continue execution.
[0095] 2. If operable, task name + sub-library number are taken as KEY to set
up a Redis shared
lock, and timeout is current system time + timeout constant value
(millisecond),
specifically:
[0096] 2.1 If setup of the shared lock fails, processing of the current task
is terminated, and the
next piece of task configuration is obtained from the task queue to continue
execution.
[0097] 2.2 If setup of the shared lock succeeds, cache to which the task
configuration
corresponds is obtained from the totally scheduled task cache:
[0098] 2.21 If the cache to which the task configuration corresponds is not
obtained from the
totally scheduled task cache, task configuration of the sub-library is
enquired in the public
library, and loaded to the total task cache if enquired.
[0099] 2.22 The task status is judged in the total task cache:
[0100] 2.221 If the status is "to be processed", the status is updated to "being processed". If update
fails, the shared lock is released to terminate processing of the current task
configuration,
and the next piece of task configuration is obtained from the task queue to
continue
execution. If update succeeds, the shared lock is released, and specific
business logic to
which the task corresponds is executed; when the business logic is executed to
completion,
the task status is changed back to "to be processed", processing of the current task
task configuration
is terminated, and the next piece of task configuration is obtained from the
task queue to
continue execution.
[0101] 2.222 If the status is being processed, the shared lock is released,
processing of the current
task configuration is terminated, and the next piece of task configuration is
obtained from
the task queue to continue execution.
[0102] If the task configuration is not obtained:
[0103] It is judged whether it is required to load task configuration (if the
task is obtained for the
first time from the queue, the task configuration is empty, then the task
configuration
should be loaded; if the task has previously acquired task configuration, the
task
configuration is empty at the last obtainment, then it is not loaded, to
prevent the task
from being continually scheduled and executed without ending), if it is not
required to
load task configuration, exiting is effected; if it is required to load task
configuration, the
following steps are executed:
[0104] 1. JOB TASK LOAD LOCK: task name is taken as KEY, and current system
time +
invalidation time constant value (millisecond) are taken as Value to perform
setnx
operation on Redis to add shared lock, so as to prevent concurrent scheduling
from
causing repeated loading of the task to be processed to the redis task queue.
[0105] 1.1 If setup of the shared lock fails, the shared lock is checked as to
whether it is
invalidated, to prevent abnormal unlocking from causing the task always in the
locked
status; the shared lock is not invalidated if its value is greater than the
current system time,
and the shared lock is invalidated if its value is smaller than the current
system time:
[0106] 1.11. If the shared lock is not invalidated, exiting is effected;
[0107] 1.12. If the shared lock is invalidated, the current value of the shared lock is first obtained, a GetSet operation of Redis is then performed on the shared lock with the new value being current system time + invalidation time constant value (milliseconds), and the previously obtained value is compared with the value returned by GetSet. If the two values are unequal, there is concurrent operation and direct exiting is effected. If the two values are equal, task configuration can be loaded, and JOB TASKPENDING: task name is taken as KEY to obtain the task total configuration cache from Redis (see the sketch after this list):
[0108] 1.2 It is judged whether task configuration is obtained from cache: if
task configuration
is not obtained, a task scheduling sheet of the public library is enquired
according to the
task name, and the task configuration is loaded to the total task
configuration cache.
[0109] 1.3 Any task configuration whose status is being processed is screened
out.
[0110] 1.4 It is judged whether the current task is a competitive task or an
exclusive task
(differentiation is made according to functional businesses, task types are
determinable
and hard coded in codes before the codes are written):
[0111] 1.41. In the case of an exclusive task, an intersection between
CellList and the library
number of the task to be processed is calculated, the calculated intersection
is pushed to
the Redis queue whose KEY is JOB QUEUE: task name, and
the shared lock whose KEY is JOB TASK LOAD LOCK: task name is released;
[0112] 1.42. In the case of a competitive task, task configurations to be
processed are all pushed
to the Redis queue whose KEY is JOB QUEUE: task name, and
the shared lock whose KEY is JOB TASK LOAD LOCK: task name is released.
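The load-lock handling in steps 1 through 1.12 above follows the classic Redis setnx/GetSet pattern for an expiring shared lock. A minimal sketch under the key naming used in the text (underscores assumed); the timeout constant and the True/False return convention are illustrative assumptions.

```python
import time
import redis

cache = redis.Redis(decode_responses=True)
LOCK_TIMEOUT_MS = 30_000  # invalidation time constant value in milliseconds (assumed)

def acquire_load_lock(task_name):
    """Try to take JOB_TASK_LOAD_LOCK for this task; True means this caller may load."""
    key = f"JOB_TASK_LOAD_LOCK:{task_name}"        # "JOB TASK LOAD LOCK: task name" in the text
    now_ms = int(time.time() * 1000)
    new_deadline = str(now_ms + LOCK_TIMEOUT_MS)
    if cache.setnx(key, new_deadline):             # 1.   setnx succeeded: the lock was free
        return True
    current = cache.get(key)
    if current is not None and int(current) > now_ms:
        return False                               # 1.11 lock not yet invalidated: exit
    # 1.12 lock looks invalidated: GetSet a fresh deadline and compare with the value read above.
    previous = cache.getset(key, new_deadline)
    return previous == current                     # equal: we won; unequal: a concurrent caller won

def release_load_lock(task_name):
    cache.delete(f"JOB_TASK_LOAD_LOCK:{task_name}")
```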
[0113] Seen as such, traffic switching of the multiple active data centers is
changed from the
original manual modification of the configuration file to automatic
recognition of the
instruction of the switching platform to perform switching in real time in the
present
application, whereby availability of the system is enhanced, and business
congestion time
and huge financial loss caused by traffic switching due to failures are
reduced.
[0114] Reading and writing of task configurations and prevention of concurrent
operations are
based on the Redis cache, whereby performance consumption of the database is
greatly
reduced, the upper limit of the concurrent quantity of tasks is enlarged, and
execution
speed of tasks is accelerated.
[0115] Enquiry of tasks to be processed is based on the Redis queue, whereby
the number of
times for which the system traverses the totally scheduled task cache
configurations is
greatly reduced, the number of times for which Redis is accessed is greatly
reduced, and
execution speed of tasks is accelerated.
[0116] Embodiment 1
[0117] In summary, Embodiment 1 of the present application provides a traffic
switching method
based on multiple active data centers, as shown in Fig. 4, the method
comprises the
following.
[0118] S41 - performing an operation to obtain traffic configuration
information after an
application server has received a task scheduling instruction, wherein the
traffic
configuration information is generated according to a preset rule when a multi-
active
switching platform judges according to data transmission status information of
various
data centers that data transmission failure occurs to a data center, the
multiple active data
centers include at least two data centers, and the traffic configuration
information is
employed to indicate traffic distribution to which each data center
corresponds.
[0119] The application server obtains the traffic configuration information
through the following
steps:
[0120] reading cache by the application server and judging whether the traffic
configuration
information is present in the cache;
[0121] if not, reading the traffic configuration information from the multi-
active switching
platform by the application server.
[0122] In addition, when the application server monitors that change occurs to
the traffic
configuration information of the multi-active switching platform, the changed
traffic
configuration information will be read and synchronized into the cache.
[0123] S42 - parsing the traffic configuration information by the application
server, and
obtaining traffic distribution to which the data center in which the
application server
resides corresponds.
[0124] S43 - judging by the application server according to the traffic
distribution and type
information of a task to be currently processed whether the application server
has a
permission to process the task to be currently processed.
[0125] This step specifically includes: judging, if the application server
judges that the task to be
currently processed is an exclusive task, whether the traffic distribution to
which the data
center in which the application server resides corresponds is empty;
[0126] if not, the application server possesses the permission to process the
task to be currently
processed.
[0127] The traffic distribution can include a set of sub-library numbers with
read-write
permission to which each data center corresponds;
[0128] the step of judging whether the traffic distribution to which the data
center in which the
application server resides corresponds is empty includes:
[0129] judging whether the set of sub-library numbers with read-write
permission to which the
data center in which the application server resides corresponds is empty.
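A minimal sketch of the exclusive-task check of steps [0125] to [0129], continuing the hypothetical configuration layout assumed above:

    def may_process_exclusive_task(config, local_dc):
        # For an exclusive task, the application server has processing permission
        # only if the set of sub-library numbers with read-write permission
        # assigned to its own data center is not empty.
        sub_libraries = set(config.get("distribution", {}).get(local_dc, []))
        return len(sub_libraries) > 0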
[0130] S44 - if yes, loading the task by the application server to process the
task.
[0131] In a preferred embodiment, the multiple active data centers have a
master data center, and
the traffic configuration information further includes identification of the
master data
center;
[0132] the step of judging by the application server according to the traffic
distribution and type
information of a task to be currently processed whether the application server
has a
permission to process the task to be currently processed includes:
[0133] judging, if the application server judges that the task to be currently
processed is a
competitive task, whether a data center identification to which the
application server
corresponds is identical with the master data center identification;
[0134] if yes, the application server possesses the permission to process the
task to be currently
processed.
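Under the same assumed layout, the competitive-task check of steps [0133] and [0134] compares the local data center identification with the master data center identification carried in the traffic configuration information:

    def may_process_competitive_task(config, local_dc):
        # A competitive task may only be processed by the data center whose
        # identification equals that of the master data center.
        return config.get("master_dc") == local_dc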
[0135] In a preferred embodiment, the traffic distribution includes a set of
sub-library numbers
with read-write permission to which each data center corresponds, and the step
of loading
the task by the application server to process the task includes:
[0136] searching by the application server for the task to be currently
processed from a task
queue of the cache; if the task is found, judging whether the
application server has
a permission to process the task to be currently processed according to a sub-
library
number to which the task to be currently processed corresponds and according
to a sub-
library number with read-write permission of the data center in which the
application
server resides;
[0137] if yes, setting by the application server the status of the sub-library number to which the task to be currently processed corresponds to "being processed" and storing the same in the task configuration information;
[0138] changing, if the task is processed to completion, by the application
server the status of
the sub-library number to which the task to be currently processed corresponds back to "to be processed", and storing the same in the task configuration information.
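As a hedged sketch of steps [0136] to [0138] (the status key, the task dictionary shape, and the use of redis-py are assumptions of this illustration), the sub-library status is marked "being processed" while the task runs and reset to "to be processed" on completion:

    import redis  # assumption: redis-py client

    r = redis.Redis(decode_responses=True)

    def load_and_process(task, local_sub_libraries, processor):
        # Permission check: the task's sub-library number must belong to the set of
        # sub-library numbers with read-write permission of the local data center.
        if task["sub_library"] not in local_sub_libraries:
            return
        status_key = "sublib:status:%s" % task["sub_library"]  # hypothetical key
        # Record the status as "being processed" in the task configuration information.
        r.set(status_key, "being processed")
        try:
            processor(task)  # the actual business processing is supplied by the caller
        finally:
            # On completion, change the status back to "to be processed".
            r.set(status_key, "to be processed")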
[0139] Embodiment 2
[0140] Corresponding to the aforementioned application server, Embodiment 2 of the present application provides a traffic switching method based on multiple active data centers. As shown in Fig. 5, the method comprises:
[0141] S51 - obtaining data transmission status information of various data
centers by a multi-
active switching platform, wherein the multiple active data centers include at
least two
data centers; and
[0142] S52 - judging by the multi-active switching platform according to the
status information
and a preset condition, and generating traffic configuration information
according to a
preset rule when it is judged that traffic switching is required, so that an
application server
obtains the traffic configuration information and loads a task in conjunction
with task
configuration information as obtained to process the task after having
received a task
scheduling instruction, wherein the traffic configuration information is
employed to
indicate traffic distribution to which each data center corresponds.
[0143] The step of judging by the multi-active switching platform according to
the status
information and a preset condition, and generating traffic configuration
information
according to a preset rule when it is judged that traffic switching is
required includes:
[0144] performing traffic distribution, when the multi-active switching
platform judges
according to the status information that data transmission failure occurs to
any data center,
according to current traffic of data center(s) to which no failure occurs, a
traffic threshold,
and a rule to distribute traffic to which a competitive task corresponds to
one and the same data center, to generate traffic configuration information that contains traffic
contains traffic
distributions to which the various data centers correspond and an
identification of a
master data center that bears the competitive task.
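The following sketch illustrates, under stated assumptions only (the shape of the status information, the threshold model, and the redistribution heuristic are all hypothetical), the kind of rule described in paragraph [0144]: the sub-libraries of failed data centers are redistributed to the healthy ones within their traffic thresholds, and all competitive-task traffic is assigned to a single master data center.

    def generate_traffic_config(status, thresholds):
        # status:     {"dc1": {"failed": False, "sub_libraries": [0, 1, 2, 3]}, ...}
        # thresholds: {"dc1": 8, ...} -- assumed maximum number of sub-libraries
        # (a proxy for traffic) each data center may bear.
        failed = [dc for dc, s in status.items() if s["failed"]]
        if not failed:
            return None  # no data transmission failure: no switching required
        healthy = [dc for dc in status if dc not in failed]
        if not healthy:
            return None  # nothing left to switch to
        distribution = {dc: list(status[dc]["sub_libraries"]) for dc in healthy}
        orphaned = [n for dc in failed for n in status[dc]["sub_libraries"]]
        for n in orphaned:
            # Assign each orphaned sub-library to the healthy data center with the
            # most remaining headroom under its traffic threshold (assumed heuristic).
            target = max(healthy, key=lambda dc: thresholds[dc] - len(distribution[dc]))
            distribution[target].append(n)
        # Traffic of competitive tasks is borne by one and the same (master) data center.
        master = healthy[0]
        return {"master_dc": master, "distribution": distribution}

For example, with two data centers and dc2 failed, dc2's sub-libraries would be appended to dc1's distribution and dc1 would bear the competitive tasks as the master data center.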
[0145] Preferably, the method further comprises:
[0146] synchronizing the traffic configuration information into cache by the
multi-active
switching platform, so that the application server obtains the traffic
configuration
information from the cache; and
[0147] sending the latest traffic configuration information to the application
server when the
multi-active switching platform receives a traffic configuration information
obtaining
request from the application server.
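As a minimal sketch of paragraphs [0146] and [0147] (cache key and redis-py usage assumed, matching the earlier sketches), the platform side writes the generated configuration into the cache and serves the latest copy on request:

    import json
    import redis  # assumption: redis-py client

    r = redis.Redis(decode_responses=True)
    CONFIG_KEY = "traffic:config"  # hypothetical cache key, as in the earlier sketches

    def publish_traffic_config(config):
        # Step [0146]: synchronize the traffic configuration information into the
        # cache so that the application servers can obtain it from there.
        r.set(CONFIG_KEY, json.dumps(config))

    def handle_config_request():
        # Step [0147]: when an application server sends a traffic configuration
        # information obtaining request, return the latest configuration.
        raw = r.get(CONFIG_KEY)
        return json.loads(raw) if raw else {}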
[0148] Embodiment 3
[0149] Corresponding to the above Embodiment 1, Embodiment 3 of the present
application
provides a traffic switching device based on multiple active data centers, and
the device
comprises:
[0150] a traffic configuration information obtaining unit, for performing an
operation to obtain
traffic configuration information after having received a task scheduling
instruction,
wherein the traffic configuration information is generated according to a
preset rule when
a multi-active switching platform judges according to data transmission status
information of various data centers that traffic switching is required by a
data center, the
multiple active data centers include at least two data centers, and the
traffic configuration
information is employed to indicate traffic distribution to which each data
center
corresponds; preferably, the traffic configuration information obtaining unit
is
specifically employed for reading cache and judging whether the traffic
configuration
information is present in the cache; if not, reading the traffic configuration
information
from the multi-active switching platform;
[0151] a parsing unit, for parsing the traffic configuration information, and
obtaining traffic
distribution to which the data center in which the application server resides
corresponds;
[0152] a permission judging unit, for judging according to the traffic
distribution and type
information of a task to be currently processed whether there is a permission
to process
the task to be currently processed; preferably, the permission judging unit is
specifically
employed for judging, when it is judged that the task to be currently
processed is an
exclusive task, whether the traffic distribution to which the data center in
which the
application server resides corresponds is empty; if not, determining that
there is the
permission to process the task to be currently processed; and
[0153] a task processing unit, for obtaining task configuration information
when it is judged that
there is a processing permission, and loading a task in conjunction with the
traffic
distribution to process the task.
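Purely to illustrate how the four units of Embodiment 3 might be composed (all class, method, and collaborator names below are hypothetical and do not represent claimed structure), a skeleton could look like this:

    class TrafficSwitchingDevice:
        """Structural sketch: one method per unit of Embodiment 3."""

        def __init__(self, cache, switching_platform, local_dc):
            self.cache = cache                  # hypothetical cache accessor
            self.platform = switching_platform  # hypothetical platform client
            self.local_dc = local_dc

        def obtain_traffic_config(self):
            # Traffic configuration information obtaining unit: cache first,
            # then fall back to the multi-active switching platform.
            return self.cache.get_config() or self.platform.fetch_config()

        def parse(self, config):
            # Parsing unit: traffic distribution of the local data center.
            return config.get("distribution", {}).get(self.local_dc, [])

        def has_permission(self, config, task):
            # Permission judging unit: exclusive vs. competitive tasks.
            if task["type"] == "exclusive":
                return bool(config.get("distribution", {}).get(self.local_dc))
            if task["type"] == "competitive":
                return config.get("master_dc") == self.local_dc
            return False

        def process(self, task, processor):
            # Task processing unit: load and process the task.
            processor(task)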
[0154] Embodiment 4
[0155] Corresponding to the above Embodiment 2, Embodiment 4 of the present
application
provides a traffic switching device based on multiple active data centers, and
the device
comprises:
[0156] a data transmission status information obtaining unit, for obtaining
data transmission
status information of various data centers, wherein the multiple active data
centers include
at least two data centers; and
[0157] a traffic configuration information unit, for judging according to the
status information
and a preset condition, and generating traffic configuration information
according to a
preset rule when it is judged that traffic switching is required, so that an
application server
obtains the traffic configuration information and loads a task in conjunction
with task
configuration information as obtained to process the task after having
received a task
scheduling instruction, wherein the traffic configuration information is
employed to
indicate traffic distribution to which each data center corresponds.
[0158] Preferably, the traffic configuration information unit is specifically
employed for
performing traffic distribution, when it is judged according to the status
information that
data transmission failure occurs to any data center, according to current
traffic of data
center(s) to which no failure occurs, a traffic threshold, and a rule to
distribute traffic to
which a competitive task corresponds to one and the same data center, to
generate
traffic configuration information that contains traffic distributions to which
the various
data centers correspond and an identification of a master data center that
bears the
competitive task.
[0159] Preferably, the device further comprises:
[0160] a traffic configuration information synchronizing unit, for
synchronizing the traffic
configuration information into cache, so that the application server obtains
the traffic
configuration information from the cache; and
[0161] a traffic configuration information sending unit, for sending the
latest traffic configuration
information to the application server when a traffic configuration information
obtaining
request is received from the application server.
[0162] As can be known from the description of the aforementioned embodiments, it is clear to persons skilled in the art that the present application can be realized through software plus a general hardware platform. Based on such understanding, the technical solutions of the present application, or the contributions made thereby over the prior art, can essentially be embodied in the form of a software product. Such a computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes a plurality of instructions enabling a computer device (such as a personal computer, a cloud server, or a network device) to execute the methods described in the various embodiments, or in some sections of the embodiments, of the present application.
[0163] The various embodiments are described progressively in the Description; identical or similar sections among the various embodiments can be inferred from one another, and each embodiment stresses what differs from the other embodiments. In particular, with respect to the system or system embodiment, since it is essentially similar to the method embodiment, its description is relatively simple, and the relevant sections thereof can be inferred from the corresponding sections of the method embodiment. The system or system embodiment as described above is merely exemplary in nature: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is to say, they can be located at a single site or distributed over a plurality of network units. Some or all of the modules can be selected based on practical requirements to realize the objectives of the embodied solutions, which is understandable and implementable by persons ordinarily skilled in the art without creative effort.
[0164] The traffic switching methods and devices provided by the present application are described in detail above. Specific examples are employed herein to explain the principles and modes of execution of the present application, and the descriptions of the above embodiments are merely meant to help understand the methods and core conceptions of the present application. At the same time, persons ordinarily skilled in the art may make variations to both the specific modes of execution and the scope of application based on the conception of the present application. In summary, the contents of this Description shall not be understood as restricting the present application.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status


Event History

Description Date
Amendment Received - Response to Examiner's Requisition 2024-05-06
Amendment Received - Voluntary Amendment 2024-05-06
Examiner's Report 2024-01-04
Inactive: Report - No QC 2024-01-02
Letter Sent 2023-02-03
Inactive: Correspondence - PAPS 2022-12-23
Request for Examination Received 2022-09-16
All Requirements for Examination Determined Compliant 2022-09-16
Request for Examination Requirements Determined Compliant 2022-09-16
Letter sent 2022-06-23
Priority Claim Requirements Determined Compliant 2022-06-22
Request for Priority Received 2022-06-22
Inactive: IPC assigned 2022-06-22
Inactive: First IPC assigned 2022-06-22
Application Received - PCT 2022-06-22
National Entry Requirements Determined Compliant 2022-05-24
Application Published (Open to Public Inspection) 2021-06-03

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2023-12-15

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
Basic national fee - standard 2022-05-24 2022-05-24
MF (application, 2nd anniv.) - standard 02 2022-06-20 2022-05-24
Request for examination - standard 2024-06-19 2022-09-16
MF (application, 3rd anniv.) - standard 03 2023-06-19 2022-12-15
MF (application, 4th anniv.) - standard 04 2024-06-19 2023-12-15
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
RENSHAN LIN
TAO YANG
WEI GE
XIN WANG
YAO GE
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.



Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Claims 2024-05-05 7 457
Drawings 2022-05-23 4 362
Claims 2022-05-23 5 213
Abstract 2022-05-23 1 23
Description 2022-05-23 27 1,239
Cover Page 2022-09-16 1 74
Representative drawing 2022-09-16 1 44
Amendment / response to report 2024-05-05 16 696
Courtesy - Letter Acknowledging PCT National Phase Entry 2022-06-22 1 592
Courtesy - Acknowledgement of Request for Examination 2023-02-02 1 423
Examiner requisition 2024-01-03 5 189
National entry request 2022-05-23 13 1,320
International search report 2022-05-23 5 170
Amendment - Abstract 2022-05-23 2 122
Request for examination 2022-09-15 8 296
Correspondence for the PAPS 2022-12-22 4 149