Patent 3129984 Summary

(12) Patent Application: (11) CA 3129984
(54) English Title: METHOD AND SYSTEM FOR ACCESSING DISTRIBUTED BLOCK STORAGE SYSTEM IN USER MODE
(54) French Title: METHODE ET SYSTEME POUR ACCEDER A UN SYSTEME DE STOCKAGE DE BLOCS DISTRIBUE DANS UN MODE UTILISATEUR
Status: Examination Requested
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 16/27 (2019.01)
(72) Inventors :
  • SHEN, JIAN (China)
(73) Owners :
  • 10353744 CANADA LTD. (Canada)
(71) Applicants :
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2021-09-03
(41) Open to Public Inspection: 2022-03-03
Examination requested: 2022-09-16
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
202010919809.X China 2020-09-03

Abstracts

English Abstract


The present invention discloses a method and a system for accessing a distributed block storage system in user state, in which the LIO TCMU of a computing node includes an accessing module. The method comprises: receiving, via the accessing module, a data read request sent by the iSCSI initiator after a connection has been established between the LIO TCMU and the iSCSI initiator; judging whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, returning the target data to a data accessor; if not, generating a corresponding thread in a preconfigured thread pool in the accessing module, so that the thread requests the corresponding target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging at the client side, the present invention moves tasks originally processed by the server side forward to the client side, thereby also reducing the bandwidth overhead of the cluster and providing services to more computing nodes, so as to enhance the servicing capability and response speed of the entire cluster and improve access performance.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
What is claimed is:
1. A method of accessing distributed block storage system in user state, the
distributed block
storage system containing a computing node and a distributed block storage
cluster,
characterized in that the computing node includes iSCSI initiator and LIO
TCMU, wherein
the LIO TCMU is provided with an accessing module, and that the method
comprises the
following steps:
receiving a data read request coming from a data accessor and sent by the
iSCSI initiator via
the accessing module, after the LIO TCMU and the iSCSI initiator having been
connected
with each other;
judging whether there is target data corresponding to the data read request in
a cache of the
accessing module; and
if yes, returning the target data to the data accessor; and
if not, generating a corresponding thread in a preconfigured thread pool in
the accessing
module, so as to facilitate the thread to request target data corresponding to
the data read
request from the distributed block storage cluster and return the target data
to the data
accessor.
2. The method of accessing distributed block storage system in user
state according to Claim 1,
characterized in that the step of generating a corresponding thread in a
preconfigured thread
pool in the accessing module, so as to facilitate the thread to request target
data corresponding
to the data read request from the distributed block storage cluster and return
the target data
to the data accessor further includes:
writing the target data into the cache after the target data corresponding to
the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
3. The method of accessing distributed block storage system in user state
according to Claim 1
or 2, characterized in further comprising:
receiving a data write request coming from the data accessor and sent by the
iSCSI initiator
via the accessing module, the data write request including to-be-processed
data to be written
into the distributed block storage cluster;
writing the to-be-processed data into a cache of the computing node, and
generating a
corresponding data write task based on the data write request;
periodically executing the data write task to preprocess the to-be-processed
data; and
generating a corresponding thread in the preconfigured thread pool, and
writing the
preprocessed to-be-processed data into the distributed block storage cluster
through
execution of the thread.
4. The method of accessing distributed block storage system in user state
according to Claim 1
or 2, characterized in that the accessing module is communicable with the
distributed block
storage cluster through a preset communication protocol, so as to perform read
and/or write
operation(s).
5. A method of accessing distributed block storage system in user state, the
distributed block
storage system containing a computing node and a distributed block storage
cluster,
characterized in that the computing node includes a virtual machine deployed
on a physical
server, wherein the virtual machine includes an accessing module, and that the
method
comprises the following steps:
receiving a data read request sent by a data accessor via the accessing
module;
judging whether there is target data corresponding to the data read request in
a cache of the
accessing module; and
if yes, returning the target data to the data accessor; and
if not, generating a corresponding thread in a preconfigured thread pool in
the accessing
module, so as to facilitate the thread to request target data corresponding to
the data read
request from the distributed block storage cluster and return the target data
to the data
accessor.
6. The method of accessing distributed block storage system in user
state according to Claim 5,
characterized in that the step of generating a corresponding thread in a
preconfigured thread
pool in the accessing module, so as to facilitate the thread to request target
data corresponding
to the data read request from the distributed block storage cluster and return
the target data
to the data accessor further includes:
writing the target data into the cache after the target data corresponding to
the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
7. The method of accessing distributed block storage system in user state
according to Claim 5
or 6, characterized in further comprising:
receiving a data write request sent by the data accessor via the accessing
module, the data
write request including to-be-processed data to be written into the
distributed block storage
cluster;
writing the to-be-processed data into the cache of the computing node, and
generating a
corresponding data write task based on the data write request;
periodically executing the data write task to preprocess the to-be-processed
data; and
generating a corresponding thread in the preconfigured thread pool, and
writing the
preprocessed to-be-processed data into the distributed block storage cluster
through
execution of the thread.
8. The method of accessing distributed block storage system in user state
according to Claim 5
or 6, characterized in that the accessing module is communicable with the
distributed block
storage cluster through a preset communication protocol, so as to perform read
and/or write
operation(s).
9. A distributed block storage system, comprising a computing node and a
distributed block
storage cluster, characterized in that the computing node includes iSCSI
initiator and LIO
TCMU, wherein the LIO TCMU is provided with an accessing module, and that the
accessing
module includes:
a data receiving module, for receiving a data read request coming from a data
accessor and
sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator having
been
connected with each other;
a data judging module, for judging whether there is target data corresponding
to the data read
request in a cache of the accessing module;
a data returning module, for returning the target data to the data accessor;
and
a data requesting module, for generating a corresponding thread in a
preconfigured thread
pool in the accessing module, so as to facilitate the thread to request target
data corresponding
to the data read request from the distributed block storage cluster and return
the target data
to the data accessor.
10. A distributed block storage system, comprising a computing node and a
distributed block
storage cluster, characterized in that the computing node includes a virtual
machine deployed
on a physical server, wherein the virtual machine includes an accessing
module, and that the
accessing module includes:
a data receiving module, for receiving a data read request sent by a data
accessor;
a data judging module, for judging whether there is target data corresponding
to the data read
request in a cache of the accessing module;
a data returning module, for returning the target data to the data accessor;
and
a data requesting module, for generating a corresponding thread in a
preconfigured thread
pool in the accessing module, so as to facilitate the thread to request target
data corresponding
to the data read request from the distributed block storage cluster and return
the target data
to the data accessor.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHOD AND SYSTEM FOR ACCESSING DISTRIBUTED BLOCK STORAGE
SYSTEM IN USER MODE
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the field of distributed storage
technology, and more
particularly to a method and a system of accessing distributed block storage
system in user
state.
Description of Related Art
[0002] Conventionally, distributed block storage is mainly employed to provide
cloud disk
services for physical servers and virtual machines. In the state of the art,
the SCSI standard
block interface is generally not directly provided while distributed block
storage is being
realized, rather, the iSCSI mode is employed to uniformly provide accessing
interfaces to
the outside, as shown in Fig. 1. To facilitate realization, many solutions chose to perform secondary development based on TGT at the initial stage of design, so as to realize support by the iSCSI Target for distributed block storage.
[0003] However, this prior-art approach has the following inherent disadvantages:
[0004] Designs are uniformly made with respect to the application scenario of
the physical
server and the application scenario of the virtual machine, and there is no
optimized design
directed to their respective characteristics;
[0005] Realization by the use of TGT exhibits inferior performance in the
scenario in which
many initiators access one unified Target; and
[0006] TGT creates, by default, sixteen IO threads for each LUN, with no automatic adjustment according to load, so resources are relatively wasted.
[0007] In summary, there is an urgent need to propose a novel method of
accessing distributed
block storage system, so as to address the aforementioned problems.
SUMMARY OF THE INVENTION
[0008] In order to solve the problems existent in the state of the art, embodiments of the present invention provide a method and a system of accessing a distributed block storage system in user state.
[0009] To solve one or more of the aforementioned technical problems, the
present
invention proposes the following technical solutions.
[0010] According to the first aspect, there is provided a method of accessing
distributed block
storage system in user state, the distributed block storage system contains a
computing
node and a distributed block storage cluster, the computing node includes
iSCSI initiator
and LIO TCMU, the LIO TCMU is provided therein with an accessing module, and
the
method comprises the following steps:
[0011] receiving a data read request coming from a data accessor and sent by
the iSCSI initiator
via the accessing module, after the LIO TCMU and the iSCSI initiator having
been
connected with each other;
[0012] judging whether there is target data corresponding to the data read
request in a cache of
the accessing module; and
[0013] if yes, returning the target data to the data accessor, if not,
generating a corresponding
thread in a preconfigured thread pool in the accessing module, so as to
facilitate the thread
to request target data corresponding to the data read request from the
distributed block
storage cluster and return the target data to the data accessor.
[0014] Further, the step of generating a corresponding thread in a
preconfigured thread pool in
the accessing module, so as to facilitate the thread to request target data
corresponding to
the data read request from the distributed block storage cluster and return
the target data to
the data accessor further includes:
[0015] writing the target data into the cache after the target data
corresponding to the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
[0016] Further, the method further comprises:
[0017] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written in the distributed block storage cluster;
[0018] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0019] periodically executing the data write task to preprocess the to-be-processed data; and
[0020] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0021] Further, the accessing module is communicable with the distributed
block storage cluster
through a preset communication protocol, so as to perform read and/or write
operation(s).
[0022] According to the second aspect, there is provided a method of accessing
distributed
block storage system in user state, the distributed block storage system
contains a
computing node and a distributed block storage cluster, the computing node
includes a
virtual machine deployed on a physical server, the virtual machine includes an
accessing
module, and the method comprises the following steps:
[0023] receiving a data read request sent by a data accessor via the accessing
module;
[0024] judging whether there is target data corresponding to the data read
request in a cache of
the accessing module; and
[0025] if yes, returning the target data to the data accessor, if not,
generating a corresponding
thread in a preconfigured thread pool in the accessing module, so as to
facilitate the thread
to request target data corresponding to the data read request from the
distributed block
storage cluster and return the target data to the data accessor.
[0026] Further, the step of generating a corresponding thread in a
preconfigured thread pool in
the accessing module, so as to facilitate the thread to request target data
corresponding to
the data read request from the distributed block storage cluster and return
the target data to
the data accessor further includes:
[0027] writing the target data into the cache after the target data
corresponding to the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
[0028] Further, the method further comprises:
[0029] receiving a data write request sent by the data accessor via the
accessing module, the
data write request including to-be-processed data to be written in the
distributed block
storage cluster;
[0030] writing the to-be-processed data into the cache of the computing node,
and generating a
corresponding data write task based on the data write request;
[0031] periodically executing the data write task to preprocess the to-be-processed data; and
[0032] generating a corresponding thread in the preconfigured thread pool, and
writing the
preprocessed to-be-processed data into the distributed block storage cluster
through
execution of the thread.
[0033] Further, the accessing module is communicable with the distributed
block storage cluster
through a preset communication protocol, so as to perform read and/or write
operation(s).
[0034] According to the third aspect, there is provided a distributed block
storage system, the
system comprises a computing node and a distributed block storage cluster, the
computing
node includes iSCSI initiator and LIO TCMU, wherein the LIO TCMU is provided
with
an accessing module, and the accessing module includes:
[0035] a data receiving module, for receiving a data read request coming from
a data accessor
and sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator
having been
connected with each other;
[0036] a data judging module, for judging whether there is target data
corresponding to the data
read request in a cache of the accessing module;
[0037] a data returning module, for returning the target data to the data
accessor; and
[0038] a data requesting module, for generating a corresponding thread in a
preconfigured
thread pool in the accessing module, so as to facilitate the thread to request
target data
corresponding to the data read request from the distributed block storage
cluster and return
the target data to the data accessor.
[0039] According to the fourth aspect, there is provided a distributed block
storage system, the
system comprises a computing node and a distributed block storage cluster, the
computing
node includes a virtual machine deployed on a physical server, the virtual
machine includes
an accessing module, and the accessing module includes:
[0040] a data receiving module, for receiving a data read request sent by a
data accessor;
[0041] a data judging module, for judging whether there is target data
corresponding to the data
read request in a cache of the accessing module;
[0042] a data returning module, for returning the target data to the data
accessor; and
[0043] a data requesting module, for generating a corresponding thread in a
preconfigured
thread pool in the accessing module, so as to facilitate the thread to request
target data
corresponding to the data read request from the distributed block storage
cluster and return
the target data to the data accessor.
[0044] Technical solutions provided by the embodiments of the present
invention bring about
the following advantageous effects.
[0045] In the method and system of accessing a distributed block storage system in user state provided by the embodiments of the present invention, a data read request coming from a data accessor and sent by the iSCSI initiator is received via the accessing module after the LIO TCMU and the iSCSI initiator have been connected with each other; whether there is target data corresponding to the data read request in a cache of the accessing module is judged; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data corresponding to the data read request from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging at the client side, tasks originally processed by the server side are moved forward to the client side, thereby reducing the bandwidth overhead of the cluster, making it possible to provide services to more computing nodes, and lowering the enterprise's total cost of ownership of the cluster, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
[0046] In the method and system of accessing a distributed block storage system in user state provided by the embodiments of the present invention, a data read request sent by a data accessor is received via the accessing module; whether there is target data corresponding to the data read request in a cache of the accessing module is judged; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data corresponding to the data read request from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging at the client side, tasks originally processed by the server side are moved forward to the client side, thereby reducing the bandwidth overhead of the cluster, making it possible to provide services to more computing nodes, lowering the enterprise's total cost of ownership of the cluster, reducing the component parts of the framework as a whole, simplifying the framework, and making maintenance and deployment convenient, while enhancing the servicing capability and response speed of the entire cluster and improving access performance.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required in the following description of the embodiments are briefly introduced below. Apparently, the drawings described below are directed merely to some embodiments of the present invention, and persons ordinarily skilled in the art may further derive other drawings from these drawings without creative effort.
[0048] Fig. 1 is a view illustrating the architecture of a prior-art
distributed block storage system
shown according to an exemplary embodiment;
[0049] Fig. 2 is a view illustrating the architecture of a separately designed
distributed block
storage system shown according to an exemplary embodiment;
[0050] Fig. 3 is a view illustrating the architecture of a distributed block
storage system under
the application scenario of a physical server shown according to an exemplary
embodiment;
[0051] Fig. 4 is a view illustrating the architecture of a distributed block
storage system under
the application scenario of a virtual machine shown according to an exemplary
embodiment;
[0052] Fig. 5 is a flowchart of the method of accessing distributed block
storage system in user
state shown according to an exemplary embodiment;
[0053] Fig. 6 is a flowchart of the method of accessing distributed block
storage system in user
state under the application scenario of a physical server shown according to
an exemplary
embodiment; and
[0054] Fig. 7 is a flowchart of the method of accessing distributed block
storage system in user
state under the application scenario of a virtual machine shown according to
an exemplary
embodiment.
DETAILED DESCRIPTION OF THE INVENTION
[0055] To make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described more clearly and completely below with reference to the accompanying drawings. Apparently, the embodiments described below are some, rather than all, of the embodiments of the present invention. All other embodiments achievable by persons ordinarily skilled in the art on the basis of the embodiments in the present invention without creative effort shall fall within the protection scope of the present invention.
[0056] Embodiment 1
[0057] As noted in the Description of Related Art, and with reference to Fig.
1, the SCSI
standard block interface is generally not directly provided while distributed
block storage
is being realized, rather, the iSCSI mode is employed to provide accessing
interfaces to the
outside. For instance, the physical server still performs access via iSCSI, but the implementation of TGT itself restricts its use in commercial environments, as computing resources cannot be utilized highly effectively. Moreover, the TGT open-source community is not very active, which raises certain concerns about relying on TGT. In view of these problems, and in order to facilitate realization under the application scenario of a physical server, many solutions chose to perform secondary development based on TGT at the initial stage of design, so as to realize support by the iSCSI Target for distributed block storage. However, this approach has the following inherent disadvantages under such an application scenario:
[0058] Realization by the use of TGT exhibits inferior performance in the
scenario in which
plural initiators access a single target; and
[0059] TGT creates, by default, sixteen IO threads for each LUN, with no automatic adjustment according to load, so resources are relatively wasted.
[0060] To achieve function reuse and to reduce the difficulty of engineering realization, a method of accessing a distributed block storage system in user state is creatively proposed in Embodiment 1 of the present invention. The method is adapted for application in the scenario of a physical server: an accessing module that realizes unified distributed block storage is disposed in the LIO TCMU of the computing node, and interface access can be provided in library (Lib) form. This accessing module mainly realizes the client-side functions of the distributed block storage, directly communicates with the distributed block storage cluster via a private protocol, and enables LIO to directly support a self-defined distributed block device type; it can hence directly access the distributed block storage cluster, perform such operations as read and write, and enhance the servicing capability and response speed of the entire cluster.
[0061] Fig. 2 is a view illustrating the architecture of a separately designed
distributed block
storage system shown according to an exemplary embodiment, and Fig. 3 is a
view
illustrating the architecture of a distributed block storage system under the
application
scenario of a physical server shown according to an exemplary embodiment. With

reference to Figs. 2 and 3, in the embodiments of the present invention, in
order to replace
the TGT mode, LIO is selected for use in performing secondary development, so
as to
provide an iSCSI interface to access the distributed block device.
Specifically speaking,
the distributed block storage accessing module is accessed via LIO TCMU
interface, and
LIO backend storage types are extended. LIO TCMU is a user-state interface provided by an LIO kernel module, which avoids the difficulty of kernel development and interference with the system. With further reference to Figs. 2 and 3, the distributed
block storage
system at least includes a computing node and a distributed block storage
cluster, the
distributed block storage cluster includes a plurality of distributed block
storage devices,
the computing node includes iSCSI initiator and LIO TCMU, wherein LIO TCMU is
provided with an accessing module, through which can be realized such
functions as
caching, pre-reading and write merging, etc. directly at the client side, and
the tasks
originally processed by the server side are moved forward to the client side,
whereby
servicing capability and response speed of the entire cluster are enhanced.
[0062] Specifically, the foregoing solution can be realized via the following
steps.
[0063] Step 1 - realizing the distributed block storage accessing module, which includes, but is not limited to, such functions as link management, message transceiving, data caching, pre-reading, and write caching.
[0064] Specifically, in order to enhance the reading/writing performance of the distributed block device, an accessing module (LibBlockSys) is disposed in the LIO TCMU of the computing node. This accessing module externally provides operation interfaces for the block storage device, including, but not limited to, creating a block device, opening the block device, closing the block device, read operation, write operation, capacity expansion, and snapshot operations. The accessing module can at least realize the following functions.
[0065] The accessing module can establish multi-link communications with each node of the distributed block storage cluster, enhancing concurrency, and the number of links is automatically and dynamically adjusted according to the message load.
[0066] The accessing module can be realized in multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. Each type constitutes a thread pool: the IO transceiving thread pool is responsible for transceiving network data, and the IO processing thread pool is responsible for the actual data processing, such as control message analysis, message processing, EC processing, and so on.
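
For illustration only, the sketch below shows a minimal pthread-based worker pool of the kind this design implies; the accessing module would create one instance for the IO transceiving threads and another for the IO processing threads. All names are hypothetical, and this is a sketch rather than the patent's actual code.

    /* Minimal pthread worker pool; one instance would serve as the IO
     * transceiving pool and another as the IO processing pool.
     * Illustrative names only. */
    #include <pthread.h>
    #include <stdlib.h>

    struct job { void (*fn)(void *); void *arg; struct job *next; };

    struct thread_pool {
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        struct job     *queue;                /* FIFO of pending jobs */
    };

    static void *worker(void *p)
    {
        struct thread_pool *tp = p;
        for (;;) {
            pthread_mutex_lock(&tp->lock);
            while (!tp->queue)
                pthread_cond_wait(&tp->wake, &tp->lock);
            struct job *j = tp->queue;        /* pop head */
            tp->queue = j->next;
            pthread_mutex_unlock(&tp->lock);
            j->fn(j->arg);                    /* e.g. message parsing, EC processing */
            free(j);
        }
        return NULL;
    }

    struct thread_pool *pool_create(int nthreads)
    {
        struct thread_pool *tp = calloc(1, sizeof(*tp));
        pthread_mutex_init(&tp->lock, NULL);
        pthread_cond_init(&tp->wake, NULL);
        for (int i = 0; i < nthreads; i++) {
            pthread_t t;
            pthread_create(&t, NULL, worker, tp);
            pthread_detach(t);
        }
        return tp;
    }

    void pool_submit(struct thread_pool *tp, void (*fn)(void *), void *arg)
    {
        struct job *j = malloc(sizeof(*j));
        j->fn = fn; j->arg = arg; j->next = NULL;
        pthread_mutex_lock(&tp->lock);
        struct job **pp = &tp->queue;         /* append at tail (FIFO) */
        while (*pp) pp = &(*pp)->next;
        *pp = j;
        pthread_cond_signal(&tp->wake);
        pthread_mutex_unlock(&tp->lock);
    }

The dynamic adjustment according to load described above would, in such a design, add or retire workers based on queue depth.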
[0067] The accessing module can realize a cache mechanism for local data. In use, a read operation preferentially hits the local cache, and a read request is sent to the cluster only on a cache miss. A write operation is first cached in local memory or SSD; the write data is subsequently merged, aggregated, and deduplicated by timed tasks, and a write request is then sent to the cluster.
[0068] As should be noted here, as a preferred embodiment in the embodiments
of the present
invention, the cache mechanism can adopt B-tree storage and LRU mode to cache
hotspot
data during specific implementation.
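
Purely as a sketch of such a hotspot cache (the patent publishes no code), the snippet below implements LRU eviction over fixed 4 KiB blocks; a linear scan stands in for the B-tree index mentioned above, and all structure names are hypothetical.

    /* Minimal LRU hotspot cache over 4 KiB blocks. A real implementation
     * would index entries with a B-tree, as described above; the linear
     * scan here is for brevity only. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE  4096
    #define CACHE_SLOTS 1024

    struct cache_entry {
        uint64_t lba;                       /* logical block address (key) */
        uint8_t  data[BLOCK_SIZE];          /* cached block contents       */
        struct cache_entry *prev, *next;    /* LRU list links              */
        int valid;
    };

    struct lru_cache {
        struct cache_entry slot[CACHE_SLOTS];
        struct cache_entry *head, *tail;    /* head = most recently used   */
    };

    static void lru_unlink(struct lru_cache *c, struct cache_entry *e)
    {
        if (e->prev) e->prev->next = e->next; else c->head = e->next;
        if (e->next) e->next->prev = e->prev; else c->tail = e->prev;
    }

    static void lru_push_front(struct lru_cache *c, struct cache_entry *e)
    {
        e->prev = NULL;
        e->next = c->head;
        if (c->head) c->head->prev = e;
        c->head = e;
        if (!c->tail) c->tail = e;
    }

    /* Look up a block; on a hit, refresh its LRU position. */
    struct cache_entry *cache_lookup(struct lru_cache *c, uint64_t lba)
    {
        for (size_t i = 0; i < CACHE_SLOTS; i++) {
            if (c->slot[i].valid && c->slot[i].lba == lba) {
                lru_unlink(c, &c->slot[i]);
                lru_push_front(c, &c->slot[i]);
                return &c->slot[i];
            }
        }
        return NULL;                        /* miss: caller fetches from cluster */
    }

    /* Insert a block, evicting the least recently used entry when full. */
    void cache_insert(struct lru_cache *c, uint64_t lba, const uint8_t *buf)
    {
        struct cache_entry *e = NULL;
        for (size_t i = 0; i < CACHE_SLOTS && !e; i++)
            if (!c->slot[i].valid) e = &c->slot[i];
        if (!e) {                           /* cache full: evict LRU tail */
            e = c->tail;
            lru_unlink(c, e);
        }
        e->lba = lba;
        memcpy(e->data, buf, BLOCK_SIZE);
        e->valid = 1;
        lru_push_front(c, e);
    }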
[0069] Step 2 - adding the accessing module to TCMU-runner, and invoking the distributed block storage client-side interface to support the new storage type, wherein TCMU-runner is a daemon that handles the userspace side of the LIO user backstore.
[0070] Specifically, after the accessing module (LibBlockSys) has been realized, it is possible to invoke this accessing module in TCMU and in Hypervisor software. Under the application scenario of the physical server, secondary development is performed on TCMU-runner, and a self-defined block device accessing module is added therein, so that LIO is enabled to support distributed block storage and to directly access the distributed block storage cluster. During specific implementation, the build file is first amended, that is to say, CMakeLists.txt in the open-iscsi/tcmu-runner project is amended to compile the distributed block device accessing module into TCMU. The compiled TCMU is subsequently installed, targetcli is used to configure a LUN under backstore/user:xxx, and the corresponding target is configured in iSCSI. Finally, it is possible to use the iSCSI initiator to access the configured target, and to perform reading and writing operations on the block device that is locally mapped.
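
For orientation, the following is a rough sketch of what registering such a user-backstore handler with TCMU-runner looks like, following the plugin pattern of TCMU-runner's bundled handlers. The callback set is abbreviated, the exact signatures vary across tcmu-runner versions, and the "blocksys" naming is a hypothetical stand-in for the patent's LibBlockSys module.

    /* Sketch of a tcmu-runner user-backstore handler for a self-defined
     * distributed block device. Abbreviated; signatures differ across
     * tcmu-runner versions, and "blocksys" is a hypothetical name. */
    #include <stdbool.h>
    #include "tcmu-runner.h"        /* tcmu-runner plugin header (assumed path) */

    static int blocksys_open(struct tcmu_device *dev, bool reopen)
    {
        /* Connect to the distributed block storage cluster over the
         * private protocol and start the IO transceiving/processing
         * thread pools here. */
        return 0;
    }

    static void blocksys_close(struct tcmu_device *dev)
    {
        /* Tear down cluster links and thread pools. */
    }

    static struct tcmur_handler blocksys_handler = {
        .name     = "Distributed block storage handler",
        .subtype  = "blocksys",     /* targetcli: backstore/user:blocksys */
        .cfg_desc = "volume@cluster-config",
        .open     = blocksys_open,
        .close    = blocksys_close,
        /* read/write callbacks would consult the local cache first and
         * fall back to the cluster, as described in Embodiment 2. */
    };

    /* tcmu-runner loads the shared object and calls handler_init(). */
    int handler_init(void)
    {
        return tcmur_register_handler(&blocksys_handler);
    }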
[0071] As should be noted here, a Hypervisor is middle-layer software running between the physical server and the operating system, allowing a plurality of operating systems and applications to share a set of underlying physical hardware. A hypervisor can be regarded as a "meta" operating system in a virtual environment; it coordinates all physical devices and virtual machines on the server it controls, so it is also referred to as a virtual machine monitor. The hypervisor is the kernel of all virtualization techniques, and uninterrupted support for migration of multiple workloads is its basic function. When the server starts and executes the hypervisor, an adequate amount of memory, CPU, network and disk resources is allocated to each virtual machine, and the guest operating systems of all virtual machines are loaded.
[0072] Embodiment 2
[0073] Fig. 5 is a flowchart of the method of accessing distributed block
storage system in user
state shown according to an exemplary embodiment, and Fig. 6 is a flowchart of
the method
of accessing distributed block storage system in user state under the
application scenario
of a physical server shown according to an exemplary embodiment. Referring to
Figs. 5
and 6, the method comprises the following steps.
[0074] S101 - receiving a data read request coming from a data accessor and sent by the iSCSI initiator via the accessing module, after the LIO TCMU and the iSCSI initiator have been connected with each other.
[0075] Specifically, under the application scenario of a physical server, the computing node includes the iSCSI initiator (the iSCSI initiating party) and LIO TCMU, where LIO means Linux-IO and TCMU means TCM in Userspace. In the LIO TCMU is disposed an accessing module that can realize such functions as caching, pre-reading, and write merging at the client side. After the data accessor sends the data read request, the request is transmitted via the iSCSI initiator to the accessing module in LIO TCMU, and is received and processed by the accessing module. That is to say, after the iSCSI initiator has connected with LIO, LIO directly invokes the distributed device accessing module interface while processing the iSCSI message, opens the block device, and completes the connection with the cluster and the creation of threads via the accessing module. While the initiator reads and writes the block device, LIO directly invokes the reading/writing interface realized by the accessing module, performs read/write request interaction with the cluster, and completes the read/write tasks.
[0076] S102 - judging whether there is target data corresponding to the data
read request in a
cache of the accessing module.
[0077] Specifically, in the embodiments of the present invention, the accessing module is configured with a cache mechanism for local data. After receiving the data read request, the accessing module analyzes the request and judges, based on the analysis result, whether there is target data corresponding to the data read request in a cache of the computing node.
[0078] S103 - if yes, returning the target data to the data accessor, if not,
generating a
corresponding thread in a preconfigured thread pool in the accessing module,
so as to
facilitate the thread to request target data corresponding to the data read
request from the
distributed block storage cluster and return the target data to the data
accessor.
[0079] Specifically, if target data corresponding to the data read request is found in the cache, the target data is directly returned to the data accessor, whereby it is no longer required to request the target data from the distributed block storage cluster, which enhances the servicing capability and response speed of the entire cluster. In the case that target data corresponding to the data read request is not found in the cache, the data read request is sent to the distributed block storage cluster, so as to obtain the target data corresponding to the data read request and return it to the data accessor.
[0080] Specifically, in the embodiments of the present invention, the accessing module can be realized in multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. Each type constitutes a thread pool: the IO transceiving thread pool is responsible for transceiving network data, and the IO processing thread pool is responsible for the actual data processing, such as control message analysis, message processing, EC processing, and so on. When target data corresponding to the data read request is not found in the cache, a corresponding thread is generated in a preconfigured thread pool on the basis of the data read request, the target data corresponding to the data read request is then requested from the distributed block storage cluster through execution of the thread, and the target data returned by the distributed block storage cluster is sent to the data accessor.
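
A condensed sketch of this S102/S103 hit-or-miss branching follows; every helper named here (cache_get, cluster_read, thread_pool_submit, and so on) is a hypothetical stand-in, not an interface disclosed by the patent.

    /* Sketch of the read path: serve from the local cache on a hit,
     * otherwise hand the request to a pool thread that fetches the data
     * from the cluster. All helpers are hypothetical stand-ins. */
    #include <stddef.h>
    #include <stdint.h>

    struct read_req { uint64_t lba; size_t len; void *accessor; };

    extern int  cache_get(uint64_t lba, void *buf, size_t len);      /* 1 = hit */
    extern void cache_put(uint64_t lba, const void *buf, size_t len);
    extern int  cluster_read(uint64_t lba, void *buf, size_t len);   /* private protocol */
    extern void reply_to_accessor(void *accessor, const void *buf, size_t len);
    extern void thread_pool_submit(void (*fn)(void *), void *arg);   /* preconfigured pool */

    /* Worker executed by a pool thread on a cache miss. */
    static void read_worker(void *arg)
    {
        struct read_req *req = arg;
        uint8_t buf[4096];

        cluster_read(req->lba, buf, req->len);      /* request target data */
        cache_put(req->lba, buf, req->len);         /* cache it so later reads hit */
        reply_to_accessor(req->accessor, buf, req->len);
    }

    /* S102/S103: judge the cache; return directly on a hit, else spawn
     * a thread from the preconfigured pool. */
    void handle_read_request(struct read_req *req)
    {
        uint8_t buf[4096];

        if (cache_get(req->lba, buf, req->len)) {            /* hit  */
            reply_to_accessor(req->accessor, buf, req->len);
            return;
        }
        thread_pool_submit(read_worker, req);                /* miss */
    }

The cache_put call in the worker corresponds to paragraph [0083] below: data fetched from the cluster is written back so that a repeated read hits locally.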
[0081] As a preferred embodiment in the embodiments of the present invention,
the step of
generating a corresponding thread in a preconfigured thread pool in the
accessing module,
so as to facilitate the thread to request target data corresponding to the
data read request
from the distributed block storage cluster and return the target data to the
data accessor
further includes:
[0082] writing the target data into the cache after the target data
corresponding to the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
[0083] Specifically, in the embodiments of the present invention, after the target data corresponding to the data read request has been requested from the distributed block storage cluster, the target data further needs to be written into the cache, so that a subsequent read request for the same target data can hit the data directly in the cache, whereby the rounds of switching between user state and kernel state are reduced. During specific operation, the target data can be written into the cache through execution of the thread.
[0084] As a preferred embodiment in the embodiments of the present invention, the method further comprises:
[0085] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0086] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0087] periodically executing the data write task to preprocess the to-be-processed data; and
[0088] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0089] Specifically, likewise in the embodiments of the present invention, after the data accessor sends the data write request, the data write request is transmitted via the iSCSI initiator to the accessing module in LIO TCMU, and is received and processed by the accessing module. When the accessing module receives the data write request, the to-be-processed data carried in the write request is first written into the cache of the computing node (namely the cache of the kernel module); a corresponding data write task is subsequently generated according to the data write request; the data write task is periodically executed to preprocess the to-be-processed data; finally, a corresponding thread is generated in the preconfigured thread pool, and the preprocessed to-be-processed data is written into the distributed block storage cluster through execution of the thread. As should be noted here, in the embodiments of the present invention, preprocessing of the to-be-processed data includes such operations as merging, aggregation and deduplication, which will not be repeated in this context.
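
The sketch below illustrates the shape of this write path: writes land in the local cache, and a timed task later merges, aggregates and deduplicates them before a pool thread pushes the result to the cluster. The helper names and the one-second flush period are assumptions for illustration only.

    /* Sketch of the buffered write path with a periodic flush task.
     * Helpers and the flush period are hypothetical. */
    #include <stddef.h>
    #include <unistd.h>

    struct write_task;   /* opaque batch of pending extents */

    extern void cache_buffer_write(const void *data, size_t len, long long off);
    extern struct write_task *collect_pending_writes(void);
    extern void merge_aggregate_dedupe(struct write_task *t);  /* preprocessing */
    extern void cluster_write_async(struct write_task *t);     /* runs on a pool thread */

    /* Called for each incoming data write request. */
    void handle_write_request(const void *data, size_t len, long long off)
    {
        cache_buffer_write(data, len, off);   /* write into the computing-node cache;
                                               * the cache layer records a write task */
    }

    /* Timed task: periodically preprocess and flush buffered writes. */
    void *flush_loop(void *unused)
    {
        (void)unused;
        for (;;) {
            sleep(1);                                   /* assumed flush period */
            struct write_task *t = collect_pending_writes();
            if (!t)
                continue;
            merge_aggregate_dedupe(t);                  /* preprocess */
            cluster_write_async(t);                     /* pool thread writes to cluster */
        }
        return NULL;
    }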
[0090] As a preferred embodiment in the embodiments of the present invention,
the accessing
module is communicable with the distributed block storage cluster through a
preset
communication protocol, so as to perform read and/or write operation(s).
[0091] Embodiment 3
[0092] In the embodiments of the present invention, there is further provided
a distributed block
storage system corresponding to Embodiment 2, the system comprises a computing
node
and a distributed block storage cluster, the computing node includes iSCSI
initiator and
LIO TCMU, the LIO TCMU is provided therein with an accessing module, and the
accessing module includes:
[0093] a data receiving module, for receiving a data read request coming from
a data accessor
and sent by the iSCSI initiator, after the LIO TCMU and the iSCSI initiator
having been
connected with each other;
[0094] a data judging module, for judging whether there is target data
corresponding to the data
read request in a cache of the accessing module;
[0095] a data returning module, for returning the target data to the data
accessor; and
[0096] a data requesting module, for generating a corresponding thread in a
preconfigured
thread pool in the accessing module, so as to facilitate the thread to request
target data
corresponding to the data read request from the distributed block storage
cluster and return
the target data to the data accessor.
[0097] As a preferred embodiment in the embodiments of the present invention,
the data
requesting module is further employed for:
[0098] writing the target data into the cache after the target data
corresponding to the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
[0099] As a preferred embodiment in the embodiments of the present invention, the accessing module is further employed for:
[0100] receiving a data write request coming from the data accessor and sent by the iSCSI initiator via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0101] writing the to-be-processed data into a cache of the computing node, and generating a corresponding data write task based on the data write request;
[0102] periodically executing the data write task to preprocess the to-be-processed data; and
[0103] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0104] As a preferred embodiment in the embodiments of the present invention,
the accessing
module is communicable with the distributed block storage cluster through a
preset
communication protocol, so as to perform read and/or write operation(s).
[0105] Embodiment 4
[0106] With further reference to Fig. 1, under the virtual-machine application scenario, the currently general practice is to realize corresponding reading/writing interfaces in the open-source iSCSI Target, where the host maps the distributed block device as a local device via the iSCSI Initiator, and hence provides the virtual machine with reading/writing access. At the same time, in order to guarantee fault-free access, a multipath design (using Multipath, for example) is required on the iSCSI channel. Such an application scenario has the following defects.
[0107] The path by which the virtual machine accesses distributed block storage is divided into three sections, namely iSCSI Initiator, iSCSI Target, and the distributed cluster; the reading/writing access paths are long, so the efficiency with which the virtual machine reads and writes a distributed block device is not very high; and
[0108] The multipath design makes the deployment of the entire cluster and the use of the virtual machine more inconvenient; the more component parts the cluster has, the more fault points there are in the cluster, thus increasing use cost and maintenance difficulty.
[0109] Likewise, to achieve function reuse and to reduce the difficulty of engineering realization, a method of accessing a distributed block storage system in user state is creatively proposed in Embodiment 4 of the present invention. The method is adapted for application in the scenario of a virtual machine: the computing node is a virtual machine deployed on a physical server, and an accessing module that realizes unified distributed block storage is disposed in the virtual machine. This accessing module mainly realizes the client-side functions of the distributed block storage and directly communicates with the distributed block storage cluster via a private protocol; it can hence directly access the distributed block storage cluster, perform such operations as read and write, and enhance the servicing capability and response speed of the entire cluster.
[0110] Fig. 4 is a view illustrating the architecture of a distributed block
storage system under
the application scenario of a virtual machine shown according to an exemplary
embodiment.
Referring to Figs. 2 and 4, in the embodiments of the present invention, in order to shorten the access path from the virtual machine to distributed block storage, the iSCSI components are reduced: secondary development is carried out on the virtual machine software, and a backend storage driver is added by disposing an accessing module in the virtual machine, so as to support the self-developed distributed block storage access. Direct access of the virtual machine to distributed block storage via a private protocol through the accessing module not only shortens read/write latency, but also avoids multipath design considerations and reduces the fault points of the entire cluster. Further referring to Figs. 2 and 4, the distributed block storage system at least includes a computing node and a distributed block storage cluster; the distributed block storage cluster includes a plurality of distributed block storage devices; the computing node includes a virtual machine deployed on a physical server; and the virtual machine includes an accessing module, through which such functions as caching, pre-reading and write merging can be realized directly at the client side, and the tasks originally processed by the server side are moved forward to the client side, whereby the servicing capability and response speed of the entire cluster are enhanced.
[0111] Specifically, the foregoing solution can be realized via the following
steps.
[0112] Step 1 - realizing the distributed block storage accessing module, which includes, but is not limited to, such functions as link management, message transceiving, data caching, pre-reading, and write caching.
[0113] Specifically, in order to enhance the reading/writing performance of the distributed block device, an accessing module (LibBlockSys) is disposed in the virtual machine. This accessing module externally provides operation interfaces for the block storage device, including, but not limited to, creating a block device, opening the block device, closing the block device, read operation, write operation, capacity expansion, and snapshot operations. The accessing module can at least realize the following functions.
[0114] The accessing module can establish multi-link communications with the various nodes of the distributed block storage cluster, enhancing concurrency, and the number of links is automatically and dynamically adjusted according to the message load.
[0115] The accessing module can be realized in multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. Each type constitutes a thread pool: the IO transceiving thread pool is responsible for transceiving network data, and the IO processing thread pool is responsible for the actual data processing, such as control message analysis, message processing, EC processing, and so on.
[0116] The accessing module can realize a cache mechanism for local data. In use, a read operation preferentially hits the local cache, and a read request is sent to the cluster only on a cache miss. A write operation is first cached in local memory or SSD; the write data is subsequently merged, aggregated, and deduplicated by timed tasks, and a write request is then sent to the cluster.
[0117] As should be noted here, as a preferred embodiment in the embodiments
of the present
invention, the cache mechanism can adopt B-tree storage and LRU mode to cache
hotspot
data during specific implementation.
[0118] Step 2 - adding the accessing module to the virtual machine.
[0119] Specifically, after the accessing module (LibBlockSys) has been realized, it is possible to invoke this accessing module in the virtual machine. Under the application scenario of the virtual machine, the commonly used virtualization management software includes QEMU/KVM, XEN, VirtualBox, etc.; all of this software is open source, and secondary development can be performed on it to add a self-defined block device backend module (namely the accessing module), so that the virtual machine can directly access the distributed block storage cluster. Taking QEMU/KVM as an example, during specific implementation, invocation of the distributed block device is added to the QEMU block module, and a protocol name for the distributed block device is added, so that QEMU can support the self-developed block storage protocol. The Makefile is amended so that the distributed block device accessing module is compiled into QEMU. The compiled QEMU is thereafter started; the self-defined protocol name and the block storage cluster configuration files are specified at start-up, and QEMU loads the block device based on these configuration items. The distributed block device will then appear in the virtual machine, and formatting, mounting and accessing can be performed on it.
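
For orientation, the sketch below shows the general shape of registering a custom protocol driver in QEMU's block layer, which is the kind of change described above; it follows the pattern of QEMU's in-tree protocol drivers (e.g. block/nbd.c). The "blocksys" name and the callback bodies are hypothetical, and the exact BlockDriver fields and signatures differ across QEMU versions.

    /* Sketch of a custom QEMU block protocol driver. "blocksys" is a
     * hypothetical protocol name; exact BlockDriver fields vary between
     * QEMU versions. */
    #include "qemu/osdep.h"
    #include "qemu/module.h"
    #include "block/block_int.h"

    static int blocksys_open(BlockDriverState *bs, QDict *options,
                             int flags, Error **errp)
    {
        /* Parse the cluster configuration, connect over the private
         * protocol, and initialize the client-side cache and thread
         * pools. */
        return 0;
    }

    static int coroutine_fn blocksys_co_preadv(BlockDriverState *bs,
            int64_t offset, int64_t bytes, QEMUIOVector *qiov,
            BdrvRequestFlags flags)
    {
        /* Check the local cache first; on a miss, request the data from
         * the distributed block storage cluster. */
        return 0;
    }

    static BlockDriver bdrv_blocksys = {
        .format_name    = "blocksys",
        .protocol_name  = "blocksys",   /* enables e.g. -drive file=blocksys:vol1 */
        .bdrv_open      = blocksys_open,  /* older trees use .bdrv_file_open */
        .bdrv_co_preadv = blocksys_co_preadv,
    };

    static void bdrv_blocksys_init(void)
    {
        bdrv_register(&bdrv_blocksys);  /* QEMU block-layer registration hook */
    }
    block_init(bdrv_blocksys_init);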
[0120] Embodiment 5
[0121] Fig. 7 is a flowchart of the method of accessing distributed block
storage system in user
state under the application scenario of a virtual machine shown according to
an exemplary
embodiment. Referring to Figs. 5 and 7, the method comprises the following
steps.
[0122] S101 - receiving a data read request sent by a data accessor via the
accessing module.
[0123] Specifically, under the application scenario of a virtual machine, the computing node includes a virtual machine deployed on a physical server, and the virtual machine includes an accessing module that can realize such functions as caching, pre-reading, and write merging at the client side. After the data accessor sends the data read request, the request is received and processed by the accessing module. During specific implementation, the distributed block device information to be connected is configured in the start parameters of the virtual machine. After the virtual machine is started, it directly invokes the distributed device backend module interface, opens the block device, and completes the connection with the cluster and the creation of threads via the module. While the virtual machine reads and writes the block device, the reading/writing interface realized by the accessing module is directly invoked, to perform read/write request interaction with the cluster and to complete the read/write tasks.
[0124] S102 - judging whether there is target data corresponding to the data
read request in a
cache of the accessing module.
[0125] Specifically, in the embodiments of the present invention, the
accessing module is
configured to have cache mechanism of local data. After receiving the data
read request,
the accessing module performs analysis processing on the data read request,
and judges
whether there is target data corresponding to the data read request in a cache
of the
computing node based on the analysis result.
[0126] S103 - if yes, returning the target data to the data accessor, if not,
generating a
corresponding thread in a preconfigured thread pool in the accessing module,
so as to
facilitate the thread to request target data corresponding to the data read
request from the
distributed block storage cluster and return the target data to the data
accessor.
[0127] Specifically, if target data corresponding to the data read request is found in the cache, the target data is directly returned to the data accessor, whereby it is no longer required to request the target data from the distributed block storage cluster, which enhances the servicing capability and response speed of the entire cluster. In the case that target data corresponding to the data read request is not found in the cache, the data read request is sent to the distributed block storage cluster, so as to obtain the target data corresponding to the data read request and return it to the data accessor.
[0128] Specifically, in the embodiments of the present invention, the accessing module can be realized in multi-thread mode, and the threads are mainly divided into two types: IO transceiving threads and IO processing threads. Each type constitutes a thread pool: the IO transceiving thread pool is responsible for transceiving network data, and the IO processing thread pool is responsible for the actual data processing, such as control message analysis, message processing, EC processing, and so on. When target data corresponding to the data read request is not found in the cache, a corresponding thread is generated in a preconfigured thread pool on the basis of the data read request, the target data corresponding to the data read request is then requested from the distributed block storage cluster through execution of the thread, and the target data returned by the distributed block storage cluster is sent to the data accessor.
[0129] As a preferred embodiment in the embodiments of the present invention,
the step of
generating a corresponding thread in a preconfigured thread pool in the
accessing module,
so as to facilitate the thread to request target data corresponding to the
data read request
from the distributed block storage cluster and return the target data to the
data accessor
further includes:
[0130] writing the target data into the cache after the target data
corresponding to the data read
request has been requested from the distributed block storage cluster through
execution of
the thread.
[0131] Specifically, in the embodiments of the present invention, after the target data corresponding to the data read request has been requested from the distributed block storage cluster, the target data further needs to be written into the cache, so that a subsequent read request for the same target data can hit the data directly in the cache, whereby the rounds of switching between user state and kernel state are reduced. During specific operation, the target data can be written into the cache through execution of the thread.
[0132] As a preferred embodiment in the embodiments of the present invention,
the method
further comprises:
[0133] receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0134] writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
[0135] periodically executing the data write task to preprocess the to-be-processed data; and
[0136] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0137] Specifically, likewise in the embodiments of the present invention, after the data accessor sends the data write request, the data write request is received and processed by the accessing module. When the accessing module receives the data write request, the to-be-processed data carried in the write request is first written into the cache of the computing node (namely the cache of the kernel module); a corresponding data write task is then generated according to the data write request; the data write task is periodically executed to preprocess the to-be-processed data; and finally a corresponding thread is generated in the preconfigured thread pool, and the preprocessed to-be-processed data is written into the distributed block storage cluster through execution of the thread. It should be noted here that, in the embodiments of the present invention, preprocessing of the to-be-processed data includes such operations as merging, aggregation and redundancy removal, which are not repeated in this context.
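For illustration, one possible shape for this write path is sketched below, with the periodic preprocessing reduced to coalescing repeated writes to the same offset (one simple form of write merging); all names are assumptions of this sketch:

```python
# Hedged sketch of the write path: (1) write into the cache, (2) record a
# write task, (3) periodically preprocess pending writes -- here, later
# writes to the same offset win, collapsing overwrites -- and (4) flush
# the result to the cluster via a pool thread.
import threading
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Tuple

pending_writes: List[Tuple[int, bytes]] = []   # recorded write tasks
pending_lock = threading.Lock()
flush_pool = ThreadPoolExecutor(max_workers=4)


def write_to_cluster(offset: int, data: bytes) -> None:
    """Stand-in for the RPC that writes a block to the storage cluster."""


def submit_write(cache: Dict[int, bytes], offset: int, data: bytes) -> None:
    cache[offset] = data                       # (1) cache on the computing node
    with pending_lock:
        pending_writes.append((offset, data))  # (2) generate a write task


def flush_once() -> None:
    # (3) preprocess: dict() keeps the newest data per offset, so repeated
    # overwrites collapse into a single cluster write.
    with pending_lock:
        merged = dict(pending_writes)
        pending_writes.clear()
    for offset, data in merged.items():
        flush_pool.submit(write_to_cluster, offset, data)   # (4) flush


def start_periodic_flush(interval: float = 1.0) -> None:
    flush_once()
    # Re-arm the timer so the write task executes periodically.
    t = threading.Timer(interval, start_periodic_flush, args=(interval,))
    t.daemon = True
    t.start()
```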
[0138] As a preferred embodiment in the embodiments of the present invention, the accessing module can communicate with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operations.
[0139] Embodiment 6
[0140] In the embodiments of the present invention, there is further provided a distributed block storage system corresponding to Embodiment 5. The system comprises a computing node and a distributed block storage cluster; the computing node includes a virtual machine deployed on a physical server; the virtual machine includes an accessing module; and the accessing module includes:
[0141] a data receiving module, for receiving a data read request sent by a data accessor;
[0142] a data judging module, for judging whether there is target data corresponding to the data read request in a cache of the accessing module;
[0143] a data returning module, for returning the target data to the data accessor; and
[0144] a data requesting module, for generating a corresponding thread in a preconfigured thread pool in the accessing module, so that the thread requests the target data corresponding to the data read request from the distributed block storage cluster and returns the target data to the data accessor.
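As a rough illustration of how these four modules might compose, the following hedged Python sketch wires them together; the class and method names are invented for the example and do not come from the patent:

```python
# Illustrative composition of the four modules of Embodiment 6: the judging
# module owns the cache, the requesting module stands in for the thread-pool
# request to the cluster, and the accessing module wires them together.
from typing import Dict, Optional


class DataJudgingModule:
    def __init__(self) -> None:
        self.cache: Dict[int, bytes] = {}

    def lookup(self, offset: int) -> Optional[bytes]:
        return self.cache.get(offset)


class DataRequestingModule:
    def fetch(self, offset: int) -> bytes:
        # Would submit a task to the preconfigured thread pool and query
        # the distributed block storage cluster; stubbed here.
        return b"\x00" * 4096


class AccessingModule:
    """Receives a read, judges the cache, then returns or requests the data."""

    def __init__(self) -> None:
        self.judging = DataJudgingModule()
        self.requesting = DataRequestingModule()

    def on_read(self, offset: int) -> bytes:
        data = self.judging.lookup(offset)
        if data is None:
            data = self.requesting.fetch(offset)
            self.judging.cache[offset] = data  # fill cache, per [0145]-[0146]
        return data
```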
[0145] As a preferred embodiment in the embodiments of the present invention, the data requesting module is further configured for:
[0146] writing the target data into the cache after the target data corresponding to the data read request has been requested from the distributed block storage cluster through execution of the thread.
[0147] As a preferred embodiment in the embodiments of the present invention, the accessing module is further configured for:
[0148] receiving a data write request sent by the data accessor via the accessing module, the data write request including to-be-processed data to be written into the distributed block storage cluster;
[0149] writing the to-be-processed data into the cache of the computing node, and generating a corresponding data write task based on the data write request;
[0150] periodically executing the data write task to preprocess the to-be-processed data; and
[0151] generating a corresponding thread in the preconfigured thread pool, and writing the preprocessed to-be-processed data into the distributed block storage cluster through execution of the thread.
[0152] As a preferred embodiment in the embodiments of the present invention, the accessing module can communicate with the distributed block storage cluster through a preset communication protocol, so as to perform read and/or write operations.
[0153] To sum up, the technical solutions provided by the embodiments of the present invention bring about the following advantageous effects.
[0154] In the method and system of accessing a distributed block storage system in user state provided by the embodiments of the present invention, after the LIO TCMU and the iSCSI initiator have been connected with each other, a data read request coming from a data accessor and sent by the iSCSI initiator is received via the accessing module; it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging in the client side, tasks originally processed by the server side are moved forward to the client side, thereby reducing the bandwidth overhead of the cluster, making it possible to provide services to more computing nodes, and lowering the enterprise's total cost of ownership of the cluster, while enhancing the servicing capability and response speed of the entire cluster and improving accessing performance.
[0155] In the method and system of accessing a distributed block storage system in user state provided by the embodiments of the present invention, a data read request sent by a data accessor is received via the accessing module; it is judged whether there is target data corresponding to the data read request in a cache of the accessing module; if yes, the target data is returned to the data accessor; if not, a corresponding thread is generated in a preconfigured thread pool in the accessing module, so that the thread requests the target data from the distributed block storage cluster and returns it to the data accessor. By adding such functions as caching, pre-reading and write merging in the client side, tasks originally processed by the server side are moved forward to the client side, thereby reducing the bandwidth overhead of the cluster, making it possible to provide services to more computing nodes, lowering the enterprise's total cost of ownership of the cluster, reducing the component parts of the framework as a whole, simplifying the framework, and making it convenient to maintain and deploy, while enhancing the servicing capability and response speed of the entire cluster and improving accessing performance.
[0156] It should be noted that the various embodiments are described progressively in this Description; identical or similar sections of the embodiments can be cross-referenced from one another, while the gist of each embodiment lies in its difference from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their descriptions are relatively simple, and the relevant sections can be cross-referenced from the corresponding sections of the method embodiments. The foregoing descriptions of the system embodiments are merely schematic; units explained as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, that is to say, they can be located at a single site or distributed over a plurality of network units. Some or all of the modules can be selected as practically required to realize the objectives of the embodiment solutions, which are understandable and implementable without creative effort by persons ordinarily skilled in the art.
[0157] As is comprehensible to persons ordinarily skilled in the art, all or some of the steps in the aforementioned embodiments can be completed via hardware, or via a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can be a read-only memory, a magnetic disk or an optical disk, etc.
[0158] The foregoing embodiments are merely preferred embodiments of the present invention, and they are not to be construed as restrictive of the present invention. Any amendment, equivalent substitution or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

Title                            Date
Forecasted Issue Date            Unavailable
(22) Filed                       2021-09-03
(41) Open to Public Inspection   2022-03-03
Examination Requested            2022-09-16

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $100.00 was received on 2023-12-15


 Upcoming maintenance fee amounts

Description                        Date         Amount
Next Payment if small entity fee   2025-09-03   $50.00
Next Payment if standard fee       2025-09-03   $125.00

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee 2021-09-03 $408.00 2021-09-03
Request for Examination 2025-09-03 $814.37 2022-09-16
Maintenance Fee - Application - New Act 2 2023-09-05 $100.00 2023-06-15
Advance an application for a patent out of its routine order 2023-11-22 $526.29 2023-11-22
Maintenance Fee - Application - New Act 3 2024-09-03 $100.00 2023-12-15
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description                        Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
New Application                             2021-09-03          6                 209
Compliance Correspondence                   2021-11-03          74                4,206
Description                                 2021-11-03          26                1,264
Claims                                      2021-11-03          4                 173
Abstract                                    2021-11-03          1                 28
Drawings                                    2021-11-03          4                 484
Representative Drawing                      2022-01-24          1                 57
Cover Page                                  2022-01-24          1                 70
Request for Examination                     2022-09-16          9                 326
Correspondence for the PAPS                 2022-12-23          4                 153
Examiner Requisition                        2023-12-11          12                628
Amendment                                   2024-04-11          140               9,102
Claims                                      2024-04-11          47                3,050
Special Order / Amendment                   2023-11-22          78                3,524
Claims                                      2023-11-22          72                4,647
Acknowledgement of Grant of Special Order   2023-11-29          1                 184