SCHEDULING METHOD AND DEVICE FOR QUERY REQUEST AND COMPUTER SYSTEM
Technical Field
[0001] The present invention relates to the field of big data analysis, in
particular, to a method, a device,
and a computer system for query request scheduling.
Background
[0002] With the rapid growth and development of diverse company big data platforms, various OLAP (online analytical processing) platforms serve different service scenarios. The growing number of scenarios and the increasing difficulty of querying the underlying OLAP engines lead to improper resource distribution and low resource utilization.
[0003] Meanwhile, query requests interact with one another. For example, a large-volume query can cause long waiting times for other small-volume queries, and high-priority queries must wait behind other low-priority queries. Therefore, an appropriate OLAP resource scheduling strategy is urgently needed.
Summary
[0004] In order to solve the limitations of the current technologies, the
present invention aims at providing
a method, a device, and a computer system for query request scheduling to
provide appropriate OLAP
resources scheduling.
[0005] To achieve the aforementioned goals, in a first aspect, a query request scheduling method is provided, comprising:
[0006] receiving a query request from a user;
[0007] parsing to determine the pending query data corresponding to the query
request;
[0008] determining pending query data volume corresponding to the described
query request based on the
metadata for the described pending query data; and
[0009] scheduling the described pending query data to a pre-set thread
according to a pre-set scheduling
strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0010] In some embodiments of the present invention, the described query
request includes an aggregate
query, and the described method comprises:
[0011] determining the base of the described pending query data according to
the metadata for the
described pending query data;
Date Recue/Date Received 2021-10-12
[0012] determining required memory volume to aggregate the described pending
query data according to
the described base;
[0013] identifying the minimum-size memory block from memory blocks with volumes equal to or greater than the described required memory volume, wherein each memory block has a pre-set memory volume; and
[0014] aggregating the described acquired pending query data on the described
minimum-size memory
block, to return the aggregated pending query data to the described user.
[0015] In some embodiments of the present invention, the described query
request includes an associated
target priority, wherein the described pending query data is scheduled to a
pre-set thread according to a pre-
set scheduling strategy corresponding to the described pending query data
volume, to acquire the described
pending query data by the described pre-set thread for returning to the
described user by:
[0016] according to the described pre-set scheduling strategy corresponding to
the pending query data
volume, scheduling the described pending query data to a primary pre-set
thread from a thread pool
corresponding to the described target priority, wherein the described primary
thread is used to acquire the
described pending query data for returning to the described user.
[0017] In some embodiments of the present invention, the described method
includes:
[0018] when the described primary pre-set thread from the thread pool
corresponding to the described
target priority is not applicable, scheduling the described pending query data
to a secondary pre-set thread
from a thread pool lower than the described target priority, wherein the
described secondary thread is used
to acquire the described pending query data for returning to the described
user.
[0019] In some embodiments of the present invention, the described pending
query data is stored in a data
segment, wherein the pending query data includes the amount of the described
data segment; according to
a pre-set scheduling strategy corresponding to the described pending query
data volume, the procedure of
scheduling the described pending query data to a pre-set thread to acquire the
described pending query data
by the described pre-set thread for returning to the described user comprises:
[0020] scheduling the described pending query data to a pre-set thread
according to a pre-set scheduling
strategy corresponding to the amount of the described data segment, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0021] In some embodiments of the present invention, the described query
request includes pending query
SQL statements, wherein the procedure of parsing to determine the pending
query data corresponding to
the query request comprises:
[0022] parsing the described pending query SQL statements, and determining the
pending query table
corresponding to the described query request as well as pending query columns
of the described pending
query table.
[0023] In some embodiments of the present invention, the procedure of
scheduling the described pending
query data to a pre-set thread to acquire the described pending query data by
the described pre-set thread
for returning to the described user comprises:
[0024] if the described pending query data volume is not less than a pre-set threshold, scheduling the described pending query data to a pre-set thread according to the primary scheduling strategy, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user; and
[0025] if the described pending query data volume is less than the pre-set threshold, scheduling the described pending query data to a pre-set thread according to the secondary scheduling strategy, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0026] In a second aspect, a query request scheduling device is provided in the present invention, comprising:
[0027] a receiving module, configured to receive a query request from a user;
[0028] a parsing module, configured to parse the query request to determine the pending query data corresponding to the query request;
[0029] a processing module, configured to determine pending query data volume
corresponding to the
described query request based on the metadata for the described pending query
data; and
[0030] a scheduling module, configured to schedule the described pending query
data to a pre-set thread
according to a pre-set scheduling strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0031] In some embodiments of the present invention, the described query
request includes an aggregate
query, wherein the described device also includes an aggregation module,
configured to determine the base
of the described pending query data according to the metadata for the
described pending query data,
followed by determining required memory volume to aggregate the described
pending query data according
to the described base; identifying the minimum-size memory block from memory
blocks with volumes equal to or greater than the described required memory volume, wherein
each memory block has a
pre-set memory volume, and aggregating the described acquired pending query
data on the described
minimum-size memory block, to return the aggregated pending query data to the
described user.
[0032] In a third aspect, a computer system is provided in the present invention, comprising:
[0033] one or more processors; and
[0034] a memory connected to the described one or more processors, wherein the
described memory is
used to store program commands, for performing the following procedures when
the described program
commands are executed on the described one or more processors:
[0035] receiving a query request from a user;
[0036] parsing to determine the pending query data corresponding to the query
request;
[0037] determining pending query data volume corresponding to the described
query request based on the
metadata for the described pending query data; and
[0038] scheduling the described pending query data to a pre-set thread
according to a pre-set scheduling
strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0039] The present invention provides the following benefits:
[0040] the present invention provides a query request scheduling method,
device, and computer system.
The method comprises: receiving a query request from a user; parsing to
determine the pending query data
corresponding to the query request; determining pending query data volume
corresponding to the described
query request based on the metadata for the described pending query data; and
scheduling the described
pending query data to a pre-set thread according to a pre-set scheduling
strategy corresponding to the
described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user. By selecting a thread scheduling strategy according to the pending query data of a query request and scheduling the pending query data to one or more pre-set threads based on the selected strategy, the method prevents the data blockage caused by an excessively large volume of query data being scheduled to a single thread, and the consequently long response times that degrade processing efficiency for small-volume data queries.
[0041] Furthermore, the present invention allows the procedure of determining
the base of the described
pending query data according to the metadata for the described pending query
data; determining required
memory volume to aggregate the described pending query data according to the
described base; identifying
the minimum-size memory block from memory blocks with volumes equal to or greater than the
described required memory volume, wherein each memory block has a pre-set
memory volume; and
aggregating the described acquired pending query data on the described minimum-
size memory block, to
return the aggregated pending query data to the described user. As a result, the memory volume required for the aggregate query is properly allocated based on the pending query data, preventing the low memory utilization and impaired parallel query operations (and consequent resource waste) caused by allocating pending query data to an unnecessarily large memory block, as well as the compromised query efficiency caused by allocating pending query data to a memory block with insufficient memory.
[0042] The present invention also provides an associated target priority included in the described query request. The described pending query data is scheduled to a primary pre-set thread from a thread pool corresponding to the described target priority, wherein the described primary thread is used to acquire the described pending query data for returning to the described user. This solves the problem of all threads being occupied by low-priority queries, which prevents immediate processing of high-priority queries.
[0043] Products and applications of the present invention do not necessarily carry all the aforementioned features.
Brief descriptions of the drawings
[0044] In order to make the technical strategies of the present invention
clearer, the accompanying
drawings for the present invention will be briefly introduced below.
Obviously, the following drawings in the descriptions show only a portion of the embodiments of the present invention. Those skilled in the art can obtain other configurations from the provided drawings without creative effort.
[0045] Fig. 1 is a structure diagram of the scheduling device provided in the
present invention.
[0046] Fig. 2 is a schematic diagram of the memory block allocation provided
in the present invention.
[0047] Fig. 3 is a schematic diagram of the thread pool scheduling provided in
the present invention.
[0048] Fig. 4 is a flow diagram of the method provided in the present
invention.
[0049] Fig. 5 is a structure diagram of the device provided in the present
invention.
[0050] Fig. 6 is a structure diagram of the computer system provided in the
present invention.
Detailed descriptions
[0051] In order to make the objective, the technical scheme, and the
advantages of the present invention
clearer, the present invention is explained in further detail below with reference to the accompanying drawings. Obviously, the embodiments described below are only a portion of the embodiments of the present invention and cannot represent all possible embodiments. Based on the embodiments of the present invention, all other applications obtained by those skilled in the art without creative effort fall within the scope of the present invention.
[0052] As discussed in the background, in order to solve the aforementioned
problems, the present invention
provides a method for query request scheduling to provide appropriate OLAP
resources scheduling.
[0053] In particular, a query request scheduling system is developed in the
present invention as shown in
Fig. 1, wherein the operations executed on described system for query request
scheduling comprise:
[0054] Step 1, receiving a query request from a user; and
[0055] in particular, a user sends a query request to the described system via a pre-set query port, wherein the query request includes the SQL statements to be queried, or the data tables, columns, and rows to be queried.
[0056] Step 2, parsing to determine the pending query data corresponding to
the query request; and
[0057] particularly, the associated pending query data is determined from the parsed query request. The pending query data includes a pending query table and corresponding pending query columns.
[0058] Step 3, determining pending query data volume corresponding to the
described query request based
on the metadata for the described pending query data.
[0059] A segment is used to store data, wherein the pending query data is stored in one or more storage segments. In order to determine the number of segments in which the pending query data is stored, the metadata for the data is acquired. For example, the metadata of rarely modified data is acquired periodically based on a pre-set time period, wherein the metadata for data from data sources can be acquired periodically at a pre-set time granularity; and data requiring real-time accuracy is acquired by the real-time data collection engine Flink.
[0060] For each data table being collected, the number of rows, the number of segments used to store the table, the data volume, and the mean segment volume should be acquired.
[0061] For each data column being collected, the maximum, the minimum, the base (cardinality), and the number of null values should be acquired.
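The table and column statistics collected above can be modelled, for illustration, by records such as the following; the field names are assumptions, not terms from the invention:

```python
from dataclasses import dataclass

@dataclass
class TableStats:
    """Per-table statistics collected from metadata (illustrative fields)."""
    row_count: int
    segment_count: int        # number of segments storing the table
    data_volume: int          # total stored bytes
    mean_segment_volume: float

@dataclass
class ColumnStats:
    """Per-column statistics collected from metadata (illustrative fields)."""
    maximum: object
    minimum: object
    base: int                 # cardinality ("base") of the column
    null_count: int
```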
[0062] Based on the pending query data table and pending query data column,
the number of segments to
store the pending query data is determined.
[0063] Preferably, according to the exponential backoff algorithm, the filtering factors of the described query request are calculated to determine the columns to be scanned for the described query request.
[0064] Preferably, the base of the pending query data can be determined from
the metadata.
[0065] Step 4, scheduling the query request to a corresponding thread pool.
[0066] When all query requests share all threads, high-volume queries with low priorities will persistently occupy thread resources, forcing high-priority queries to remain
in the waiting queue without
available resources. To solve the problem, the present system includes
multiple pre-set thread pools
corresponding to different priority levels, wherein one or more threads are
included in each thread pool.
[0067] Each query request includes an associated target priority, wherein the
query request is scheduled to
an associated thread pool according to the target priority, to call a thread
in the thread pool for acquiring
pending query data from the segment with the pending query data stored.
[0068] When threads from the thread pool for high-priority query requests are
not available, a high-priority
query request can schedule a thread from a lower-priority thread pool, wherein
the pending query data from
the segment carrying the pending query data is acquired by the thread in the
lower-priority thread pool.
[0069] In particular, a thread is unavailable when it is busy or not responding.
[0070] Step 5, scheduling the described pending query data to a pre-set thread
according to a pre-set
scheduling strategy corresponding to the described pending query data volume.
[0071] In particular, a scheduling strategy is selected according to a pre-set threshold.
[0072] For example, the described system uses Druid for data storage, wherein Druid supports two segment-selection strategies: Connection Count and Random. Connection Count satisfies the demands of low-volume query requests, but a high-volume query request may lead to an uneven distribution of segments, concentrating segment query tasks on one or a few threads with low processing efficiency. Random satisfies the demands of high-volume query requests, but provides low efficiency for low-volume query requests and suppresses parallel query operations. Therefore, a pre-set threshold is set for selecting a proper strategy: if the number of segments carrying pending query data is not less than the threshold, Connection Count is selected as the scheduling strategy; and if the number of segments carrying pending query data is less than the threshold, Random is selected as the scheduling strategy.
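The threshold rule of paragraph [0072] can be sketched as follows; the threshold value of 10 is an assumption, and the strategy labels merely echo Druid's `connectionCount`/`random` balancer names:

```python
def select_strategy(segment_count, threshold=10):
    """Pick the segment-selection strategy from the number of segments
    carrying pending query data, per the rule in [0072]. The default
    threshold of 10 is illustrative only."""
    if segment_count >= threshold:   # "not less than the threshold"
        return "connectionCount"
    return "random"
```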
[0073] After a scheduling strategy is selected, segments are distributed to
the pre-set threads, wherein the
threads acquire pending query data from the segment carrying the pending query
data.
[0074] Step 6, when the described query request is an aggregate query,
allocating a memory unit for the
pending query data, and aggregating the described acquired pending query data
on the described minimum-
size memory block, to return the aggregated pending query data to the
described user.
[0075] When Druid aggregates the results of aggregate queries, an off-heap memory buffer, the MergeBuffer, is allocated accordingly, wherein data aggregation is processed in the MergeBuffer.
[0076] However, currently available Druid can only allocate MergeBuffers with fixed volumes. A MergeBuffer with an unnecessarily large volume leads to low memory utilization and impairs parallel query operations; a MergeBuffer with an insufficient volume leads to frequent disk overflow and consequently lowered query efficiency.
[0077] To solve the problem, as shown in Fig. 2, the present system divides the MergeBuffer into multiple memory blocks with different volumes.
[0078] Preferably, the required memory volume to aggregate the pending query data is estimated from the base of the pending query data according to the equation below:
[0079] E(card(x, y)) = n * (1 - ((n - 1) / n)^p), where n = card(x) * card(y) and p is the row count; x and y denote the pending query columns, E(card(x, y)) is the combined base for x and y, card(x) is the base for column x, and card(y) is the base for column y.
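The estimate in [0079] can be computed directly; a minimal sketch:

```python
def combined_base(card_x, card_y, row_count):
    """Estimate E(card(x, y)) = n * (1 - ((n - 1) / n) ** p),
    with n = card(x) * card(y) and p the row count, as in [0079]."""
    n = card_x * card_y
    return n * (1 - ((n - 1) / n) ** row_count)
```

With card(x) = card(y) = 10 and a large row count, the estimate approaches n = 100, the total number of distinct (x, y) pairs, as expected.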
[0080] Based on the estimated aggregation memory volume, the minimum-size MergeBuffer is identified from the MergeBuffers with volumes equal to or greater than the required memory volume.
[0081] For example, with an estimated requirement of 48 MB, a 64 MB MergeBuffer can be scheduled for the current aggregation operation. When the minimum-size memory block is not available, a memory block larger than the minimum-size MergeBuffer is then selected.
[0082] The aggregated pending query data is returned to the user via a pre-set
method so that the user can
proceed to data analysis and other operations.
[0083] Embodiment two
[0084] Corresponding to the forementioned embodiment, as shown in Fig. 4, a
query request scheduling
method is provided in the present invention, comprising:
[0085] 410, receiving a query request from a user;
[0086] 420, parsing to determine the pending query data corresponding to the
query request;
[0087] 430, determining pending query data volume corresponding to the
described query request based
on the metadata for the described pending query data; and
[0088] 440, scheduling the described pending query data to a pre-set thread
according to a pre-set
scheduling strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0089] Preferably, the described query request includes an associated target
priority, wherein the described
pending query data is scheduled to a pre-set thread according to a pre-set
scheduling strategy corresponding
to the described pending query data volume, to acquire the described pending
query data by the described
pre-set thread for returning to the described user by:
[0090] 441, according to the described pre-set scheduling strategy
corresponding to the pending query data
volume, scheduling the described pending query data to a primary pre-set
thread from a thread pool
corresponding to the described target priority, wherein the described primary
thread is used to acquire the
described pending query data for returning to the described user.
[0091] Preferably, the described method includes:
[0092] when the described primary pre-set thread from the thread pool
corresponding to the described
target priority is not applicable, scheduling the described pending query data
to a secondary pre-set thread
from a thread pool lower than the described target priority, wherein the
described secondary thread is used
to acquire the described pending query data for returning to the described
user.
[0093] Preferably, the described pending query data is stored in a data
segment, wherein the pending query
data includes the amount of the described data segment; according to a pre-set
scheduling strategy
corresponding to the described pending query data volume, the procedure of
scheduling the described
pending query data to a pre-set thread to acquire the described pending query
data by the described pre-set
thread for returning to the described user comprises:
[0094] 443, scheduling the described pending query data to a pre-set thread
according to a pre-set
scheduling strategy corresponding to the amount of the described data segment, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0095] Preferably, the procedure of scheduling the described pending query
data to a pre-set thread to
acquire the described pending query data by the described pre-set thread for
returning to the described user
comprises:
[0096] 444, if the described pending query data volume is not less than a pre-set threshold, scheduling the described pending query data to a pre-set thread according to the primary scheduling strategy, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user; and
[0097] if the described pending query data volume is less than the pre-set threshold, scheduling the described pending query data to a pre-set thread according to the secondary scheduling strategy, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0098] Preferably, the described query request includes an aggregate query,
and the described method
comprises:
[0099] 450, determining the base of the described pending query data according
to the metadata for the
described pending query data;
[0100] 451, determining required memory volume to aggregate the described
pending query data
according to the described base;
[0101] 452, identifying the minimum-size memory block from memory blocks with volumes equal to or greater than the described required memory volume, wherein each memory block has a pre-set memory volume; and
[0102] 453, aggregating the described acquired pending query data on the
described minimum-size
memory block, to return the aggregated pending query data to the described
user.
[0103] Preferably, the described query request includes pending query SQL
statements, wherein the
procedure of parsing to determine the pending query data corresponding to the
query request comprises:
[0104] parsing the described pending query SQL statements, and determining the
pending query table
corresponding to the described query request as well as pending query columns
of the described pending
query table.
[0105] Embodiment three
[0106] Corresponding to the described method, as shown in Fig. 5, a query
request scheduling device is
provided in the present invention, comprising:
[0107] a receiving module 510, configured to receive a query request from a
user;
[0108] a parsing module 520, configured to parse the query request to determine the pending query data corresponding to the query request;
[0109] a processing module 530, configured to determine pending query data
volume corresponding to the
described query request based on the metadata for the described pending query
data; and
[0110] a scheduling module 540, configured to schedule the described pending
query data to a pre-set
thread according to a pre-set scheduling strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0111] Preferably, the described query request includes an aggregate query,
wherein the described device
also includes an aggregation module 505, configured to determine the base of
the described pending query
data according to the metadata for the described pending query data, followed
by determining required
memory volume to aggregate the described pending query data according to the
described base; identifying
the minimum-size memory block from memory blocks with volumes equal to or greater than the described required memory volume, wherein each memory block has a pre-set
memory volume, and
aggregating the described acquired pending query data on the described minimum-
size memory block, to
return the aggregated pending query data to the described user.
[0112] Preferably, the described query request includes an associated target
priority; and the described
scheduling module 540 is also configured to schedule the described pending
query data to a primary pre-
set thread from a thread pool corresponding to the described target priority
according to the described pre-
set scheduling strategy corresponding to the pending query data volume,
wherein the described primary
thread is used to acquire the described pending query data for returning to
the described user.
[0113] Preferably, the described scheduling module 540 is also configured to
schedule the described
pending query data to a secondary pre-set thread from a thread pool lower than
the described target priority
when the described primary pre-set thread from the thread pool corresponding
to the described target
priority is not applicable, wherein the described secondary thread is used to
acquire the described pending
query data for returning to the described user.
[0114] Preferably, the described pending query data is stored in a data
segment, wherein the pending query
data includes the amount of the described data segment. The described
scheduling module 540 is also
configured to schedule the described pending query data to a pre-set thread
according to a pre-set scheduling
strategy corresponding to the amount of the described data segment, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0115] Preferably, the described query request includes pending query SQL
statements, wherein the
parsing module 520 is also configured to parse the described pending query SQL statements and determine the pending query table corresponding to the described query request as well as the pending query columns of the described pending query table.
[0116] Preferably, the described scheduling module 540 is also configured to schedule the described pending query data to a pre-set thread according to the primary scheduling strategy if the described pending query data volume is not less than a pre-set threshold, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user; and to schedule the described pending query data to a pre-set thread according to the secondary scheduling strategy if the described pending query data volume is less than the pre-set threshold, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0117] Embodiment four
[0118] Corresponding to the aforementioned method, device, and system, in the
embodiment four of the
present invention, a computer system is provided, comprising: one or more
processors; and a memory
connected to the described one or more processors, wherein the described
memory is used to store program
commands, for performing the following procedures when the described program
commands are executed
on the described one or more processors:
[0119] receiving a query request from a user;
[0120] parsing to determine the pending query data corresponding to the query
request;
[0121] determining pending query data volume corresponding to the described
query request based on the
metadata for the described pending query data; and
[0122] scheduling the described pending query data to a pre-set thread
according to a pre-set scheduling
strategy corresponding to the described pending query data volume, wherein the described pre-set thread is used to acquire the described pending query data for returning to the described user.
[0123] In particular, Fig. 6 illustrates structures of the computer system,
comprising a processor 1510, a
video display adapter 1511, a disk driver 1512, an input/output connection
port 1513, an internet connection
port 1514, and a memory 1520. The forementioned processor 1510, video display
adapter 1511, disk driver
1512, input/output connection port 1513, and internet connection port 1514 are connected and communicate via the system bus 1530.
[0124] In particular, the processor 1510 can adopt a general-purpose CPU (central processing unit), a microprocessor, an ASIC (application specific integrated circuit), or one or more integrated circuits. The processor is used to execute associated programs to implement the technical strategies provided in the present invention.
[0125] The memory 1520 may be implemented as a read-only memory (ROM), a random access memory (RAM), a
static storage device, a dynamic storage device, etc. The memory 1520 is used to store the
operating system 1521 for controlling the electronic apparatus 1500, and the basic input/output system
(BIOS) for controlling the low-level operations of the electronic apparatus 1500. Meanwhile, the
memory may also store the internet browser 1523, the data storage management system 1524, the device label
information processing system 1525, etc. The device label information processing system 1525 may be a
program that implements the aforementioned methods and procedures of the present invention. In summary,
when the technical solutions are implemented in software or hardware, the code of the associated programs is
stored in the memory 1520, then called and executed by the processor 1510. The input/output connection
port 1513 is used to connect with input/output modules for information input and output. The
input/output modules may be installed as components within the device (not shown in the drawings),
or may be externally connected to the device to provide the described functionality. In particular, the
input devices may include keyboards, mice, touch screens, microphones, various types of sensors, etc.,
and the output devices may include monitors, speakers, vibrators, signal lights, etc.
[0126] The internet connection port 1514 is used to connect with a communication module (not shown
in the drawings) to enable communication and interaction between the described device and other
equipment. In particular, the communication module may communicate by wired connection (such as USB
or Ethernet cables) or wireless connection (such as mobile data, Wi-Fi, Bluetooth, etc.).
[0127] The system bus control 1530 includes a path for transferring data among the components of the device
(such as the processor 1510, the video display adapter 1511, the disk driver 1512, the input/output
connection port 1513, the internet connection port 1514, and the memory 1520).
[0128] In addition, the described electronic device 1500 may obtain collection condition information from
the collection condition information database 1541 via a virtual resource object, for condition
determination and other purposes.
[0129] It should be noted that, although the schematic of the device described above shows only the processor 1510,
the video display adapter 1511, the disk driver 1512, the input/output connection port 1513, the internet
connection port 1514, the memory 1520, and the system bus control 1530, in practical applications the
device may include other components necessary for normal operation. It will also be understood by those
skilled in the art that the device may comprise fewer components than shown in the drawings,
as long as normal operation can be achieved.
[0130] From the foregoing description of the embodiments, those skilled in the art can
understand that the present invention may be implemented by software in combination with a necessary
hardware platform. Based on this understanding, the technical contribution of the present invention may be
embodied in the form of a software product. The computer software product is stored in storage media
such as ROM/RAM, magnetic disks, or compact disks, and includes a number of instructions that cause
a computer device (such as a personal computer, a server, or a network device) to perform the methods
described in each embodiment of the present invention, or in certain portions of the embodiments.
[0131] The embodiments in the description of the present invention are explained progressively:
identical or similar parts among the embodiments may be referred to one another, and each embodiment
emphasizes its differences from the others. In particular, since the system and device embodiments are
substantially similar to the method embodiments, they are described concisely, and related details can
be found in the method embodiments. The described systems and system embodiments are merely illustrative,
where components described as separate may or may not be physically separate, and components shown as
units may or may not be physical units; in other words, they may be located in a single place or
distributed over multiple network units. All or some of the modules may be selected according to
practical needs to achieve the purposes of the embodiments of the present invention. Those skilled in
the art can understand and apply the associated solutions without creative effort.
[0132] The foregoing descriptions of preferred embodiments of the present invention shall not limit the
scope of the present invention. Accordingly, all alterations, modifications, equivalent replacements,
and improvements of the present invention fall within the scope of the present invention.