Patent Summary 3174691

(12) Patent Application: (11) CA 3174691
(54) French Title: PROCEDE ET APPAREIL DE DETECTION DE FLOU DE VISAGE, DISPOSITIF INFORMATIQUE, ET SUPPORT DE STOCKAGE
(54) English Title: HUMAN FACE FUZZINESS DETECTING METHOD, DEVICE, COMPUTER EQUIPMENT AND STORAGE MEDIUM
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06V 40/16 (2022.01)
  • G06N 03/08 (2023.01)
  • G06V 10/82 (2022.01)
(72) Inventors:
  • ZHANG, BENBEN (China)
  • HANG, XIN (China)
(73) Owners:
  • 10353744 CANADA LTD.
(71) Applicants:
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Co-agent:
(45) Issued:
(86) PCT Filing Date: 2020-06-19
(87) Open to Public Inspection: 2021-09-16
Examination Requested: 2022-09-07
Availability of Licence: N/A
Dedicated to the Public: N/A
(25) Language of the documents filed: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Application Number: PCT/CN2020/097009
(87) International Publication Number: CN2020097009
(85) National Entry: 2022-09-07

(30) Application Priority Data:
Application Number        Country/Territory        Date
202010156039.8            (China)                  2020-03-09

Abstracts

French Abstract

La présente invention se rapporte au domaine technique de la vision informatique, et divulgue un procédé et un appareil de détection de flou de visage, un dispositif informatique et un support de stockage. Le procédé comprend les étapes consistant à : extraire respectivement des images de bloc dans lesquelles une pluralité de points de caractéristique faciale sont situés à partir d'une image de visage; effectuer une prédiction sur chaque image de bloc au moyen d'un modèle de détection de flou préentraîné pour obtenir le degré de confiance de chaque image de bloc correspondant à chacune d'une pluralité d'étiquettes de niveau, la pluralité d'étiquettes de niveau comprenant une pluralité de niveaux de définition et une pluralité de niveaux de flou; en fonction du degré de confiance de chaque image de bloc correspondant à chacune de la pluralité d'étiquettes de niveau, acquérir la définition et le flou de chaque image de bloc; et en fonction de la définition et du flou de toutes les images de bloc, calculer le flou de l'image de visage. Dans les modes de réalisation de la présente invention, la précision de la détection de flou de visage peut être efficacement améliorée.


English Abstract

The present invention relates to the technical field of computer vision, and disclosed are a face blur detection method and apparatus, a computer device and a storage medium. The method comprises: respectively extracting block images in which a plurality of facial feature points are located from within a face image; performing prediction on each block image by means of a pre-trained blur detection model to obtain the degree of confidence of each block image corresponding to each of a plurality of level labels, wherein the plurality of level labels comprise a plurality of definition levels and a plurality of blurriness levels; according to the degree of confidence of each block image corresponding to each of the plurality of level labels, acquiring the definition and blurriness of each block image; and according to the definition and blurriness of all of the block images, calculating the blurriness of the face image. In the embodiments of the present invention, the accuracy of face blur detection may be effectively improved.

Claims

Note: The claims are shown in the official language in which they were submitted.


CLAIMS
What is claimed is:
1. A human face fuzziness detecting method, characterized in that the method
comprises:
extracting, from a human face image, block images in which plural human face
feature points
respectively reside;
predicting each of the block images via a previously well-trained fuzziness
detecting model, and
obtaining a confidence degree of each of the block images corresponding to
each grade label in
plural grade labels, wherein the plural grade labels include plural definition
grades and plural
fuzziness grades;
obtaining definition and fuzziness of each of the block images according to
the confidence degree
of each of the block images corresponding to each grade label in plural grade
labels; and
calculating fuzziness of the human face image according to definitions and
fuzziness of all the
block images.
2. The method according to Claim 1, characterized in that the step of
extracting, from a human
face image, feature block images in which plural human face feature points
respectively reside
includes:
detecting the human face image, and locating a human face region and plural
human face feature
points; and
adjusting a size of the human face region to a preset size, and extracting a
block image in which
each of the human face feature points resides from the adjusted human face
region.
3. The method according to Claim 1 or 2, characterized in that the fuzziness
detecting model is
trained to be obtained through the following steps:
extracting, from a human face image sample, a block image sample in which each
of the human
face feature points resides, wherein the human face image sample includes
definite human face
image samples with different definition grades and fuzzy human face image
samples with
different fuzziness grades;
marking a corresponding grade label on each of the block image samples, and
classifying the
plural block image samples marked with grade labels into a training set and a
verifying set; and
iteratively training a preconstructed deep neural network according to the
training set and the
verifying set, and obtaining the fuzziness detecting model.
4. The method according to Claim 3, characterized in that the deep neural
network includes a
data input layer, a feature extraction layer, a first full connection layer,
an activation function
layer, a Dropout layer, a second full connection layer, and a loss function
layer sequentially
connected in cascades, wherein the feature extraction layer includes a
convolution layer, a
maximum pooling layer, a minimum pooling layer, and a concatenate layer, the
data input layer,
the maximum pooling layer, and the minimum pooling layer are respectively
connected with the
convolution layer, and the maximum pooling layer, the minimum pooling layer,
and the first full
connection layer are respectively connected with the concatenate layer.
5. The method according to Claim 3, characterized in that the method further
comprises:
employing different testing sets to calculate an optimum threshold for the
fuzziness detecting
model according to an ROC curve.
6. The method according to Claim 5, characterized in that, after the step of
calculating fuzziness
of the human face image according to definitions and fuzziness of all the
block images, the
method further comprises:
judging whether the fuzziness of the human face image obtained by calculation
is higher than the
optimum threshold;
if yes, deciding the human face image to be a fuzzy image, if not, deciding
the human face image
to be a definite image.
7. A human face fuzziness detecting device, characterized in that the device
comprises:
an extracting module, for extracting, from a human face image, block images in
which plural
human face feature points respectively reside;
a predicting module, for predicting each of the block images via a previously
well-trained
fuzziness detecting model, and obtaining a confidence degree of each of the
block images
corresponding to each grade label in plural grade labels, wherein the plural
grade labels include
plural definition grades and plural fuzziness grades;
an obtaining module, for calculating definition and fuzziness of each of the
block images
according to the confidence degree of each of the block images corresponding
to each grade label
in plural grade labels; and
a calculating module, for calculating fuzziness of the human face image
according to definitions
and fuzziness of all the block images.
8. The device according to Claim 7, characterized in that the device further
comprises a training
module, and that the training module is specifically employed for:
extracting, from a human face image sample, a block image sample in which each
of the human
face feature points resides, wherein the human face image sample includes
definite human face
image samples with different definition grades and fuzzy human face image
samples with
different fuzziness grades;
marking a corresponding grade label on each of the block image samples, and
classifying the
plural block image samples marked with grade labels into a training set and a
verifying set; and
iteratively training a preconstructed deep neural network according to the
training set and the
verifying set, and obtaining the fuzziness detecting model.
9. A computer equipment, comprising a memory, a processor, and a computer
program stored on
the memory and operable on the processor, characterized in that the human face
fuzziness
detecting method as recited in any one of Claims 1 to 6 is realized when the
processor executes
the computer program.
10. A computer-readable storage medium, storing a computer program thereon,
characterized in
that the human face fuzziness detecting method as recited in any one of Claims
1 to 6 is realized
when the computer program is executed by a processor.

Description

Note: The descriptions are shown in the official language in which they were submitted.


HUMAN FACE FUZZINESS DETECTING METHOD, DEVICE, COMPUTER
EQUIPMENT AND STORAGE MEDIUM
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the field of computer vision
technology, and more
particularly to a human face fuzziness detecting method, and corresponding
device, computer
equipment, and storage medium.
Description of Related Art
[0002] With the advent of the age of artificial intelligence, the human face
recognition
technology has become more and more important in such aspects as payment
by face-
swiping, and going through gate by face-swiping, etc., whereby people's life
is made greatly
convenient. However, qualities of human face images input in the human face
recognition
model affect the recognition effect, and it appears particularly important to
reasonably screen
these human face images, to discard images whose fuzziness degrees are unduly
high, for
example.
[0003] The current human face fuzziness detection mainly includes a method
with full reference
and a method without reference:
[0004] (1) By the method with full reference it is required to use an original
human face image
before quality degradation as reference for comparison with fuzzy images, and
this method
is deficient in the fact that the original human face image before quality
degradation is not
easily obtainable;
[0005] (2) By the method without reference it is not required to take any
image as reference, as
fuzziness judgment is directly made on the human face image, and this method
has broader
applicability.
[0006] With respect to the fuzziness detecting method with full reference, a
reference image
whose quality is not degraded is firstly needed, and this is a restriction in
many application
scenarios; moreover, since a human face captured by a camera will be directly
used for
fuzziness judgment, it is impractical to take it as a reference image, so what
is broadly
employed is the fuzziness detecting method without reference.
[0007] As regards the fuzziness detecting method without reference, the usual
modus operandi is to
input an image containing a human face and background, the region of the human
face is
firstly detected in order to exclude interference of the background, such a
gradient function
as Brenner, Tenengrad, and Laplacian algorithms is then used to calculate the
gradient value
of the human face region, the greater the gradient value is, the more definite
will be the
contour of the human face, i.e., the clearer will be the human face image; to
the contrary, the
smaller the gradient value is, the fuzzier will be the contour of the human
face, i.e., the fuzzier
will be the human face image. This method is effective on small numbers of human face
images, but is
ineffective on large batches of human face images, as many definite images are
judged as
fuzzy ones, and there is the problem that the precision rate of detection is
not so high.
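
The gradient-based, no-reference approach summarized above can be illustrated with a short sketch. The use of OpenCV, its bundled Haar cascade face detector, and the cut-off value are assumptions made for the example only; they are not part of the invention described later in this document.

```python
# Illustrative sketch of the prior-art, no-reference approach: score sharpness
# of the detected face region with the variance of the Laplacian.
import cv2

def laplacian_sharpness(image_path: str) -> float:
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Detect the face region first to exclude interference from the background.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # A larger gradient (Laplacian variance) means a more definite face contour.
    return cv2.Laplacian(face, cv2.CV_64F).var()

# A fixed cut-off (hypothetical value) then turns the score into a fuzzy/clear decision:
# is_fuzzy = laplacian_sharpness("face.jpg") < 100.0
```
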
[0008] In addition, with the abrupt development of deep learning, the neural
network exhibits
strong capability to extract image features, there appears the application of
the deep learning
method to the detection of human face fuzziness, and certain progress is
correspondingly
achieved in the process. The deep learning method is usually employed to
classify human
face block images into the two categories of being fuzzy and being definite,
but it has been
found after experimentation that there are still some definite human face
images that are
judged as being fuzzy, and it is impossible to achieve the detection
requirement on high
precision rate.
SUMMARY OF THE INVENTION
[0009] In order to solve at least one of the above problems prevailing in the
state of the art, the
present invention provides a human face fuzziness detecting method, and
corresponding
device, computer equipment, and storage medium, enabling effective enhancement
of
precision rate in the detection of human face fuzziness. Specific technical
solutions provided
by the embodiments of the present invention are as follows.
[0010] According to the first aspect, there is provided a human face fuzziness
detecting method
that comprises:
[0011] extracting, from a human face image, block images in which plural human
face feature
points respectively reside;
[0012] predicting each of the block images via a previously well-trained
fuzziness detecting
model, and obtaining a confidence degree of each of the block images
corresponding to each
grade label in plural grade labels, wherein the plural grade labels include
plural definition
grades and plural fuzziness grades;
[0013] obtaining definition and fuzziness of each of the block images
according to the
confidence degree of each of the block images corresponding to each grade
label in plural
grade labels; and
[0014] calculating fuzziness of the human face image according to definitions
and fuzziness of
all the block images.
[0015] Further, the step of extracting, from a human face image, block images
in which plural
human face feature points respectively reside includes:
[0016] detecting the human face image, and locating a human face region and
plural human
face feature points; and
[0017] adjusting a size of the human face region to a preset size, and
extracting a block image
in which each of the human face feature points resides from the adjusted human
face region.
[0018] Further, the fuzziness detecting model is trained to be obtained
through the following
steps:
[0019] extracting, from plural human face image samples, a block image sample
in which each
of the human face feature points resides, wherein the plural image samples
include definite
human face image samples and fuzzy human face image samples;
[0020] marking a corresponding grade label on each of the block image samples,
and
classifying the plural block image samples marked with grade labels into a
training set and a
verifying set; and
[0021] iteratively training a preconstructed deep neural network according to
the training set
and the verifying set, and obtaining the fuzziness detecting model.
[0022] Further, the deep neural network includes a data input layer, a feature
extraction layer, a
first full connection layer, an activation function layer, a Dropout layer, a
second full
connection layer, and a loss function layer sequentially connected in
cascades, wherein the
feature extraction layer includes a convolution layer, a maximum pooling
layer, a minimum
pooling layer, and a concatenate layer, the data input layer, the maximum
pooling layer, and
the minimum pooling layer are respectively connected with the convolution
layer, and the
maximum pooling layer, the minimum pooling layer, and the first full
connection layer are
respectively connected with the concatenate layer.
[0023] Moreover, the method further comprises:
[0024] employing different testing sets to calculate an optimum threshold for
the fuzziness
detecting model according to an ROC curve.
[0025] Moreover, after the step of calculating fuzziness of the human face
image according to
definitions and fuzziness of all the block images, the method further
comprises:
[0026] judging whether the fuzziness of the human face image obtained by
calculation is higher
than the optimum threshold;
[0027] if yes, deciding the human face image to be a fuzzy image, if not,
deciding the human
face image to be a definite image.
[0028] According to the second aspect, there is provided a human face
fuzziness detecting
device that comprises:
[0029] an extracting module, for extracting, from a human face image, block
images in which
plural human face feature points respectively reside;
[0030] a predicting module, for predicting each of the block images via a
previously well-
trained fuzziness detecting model, and obtaining a confidence degree of each
of the block
images corresponding to each grade label in plural grade labels, wherein the
plural grade
labels include plural definition grades and plural fuzziness grades;
[0031] an obtaining module, for calculating definition and fuzziness of each
of the block images
according to the confidence degree of each of the block images corresponding
to each grade
label in plural grade labels; and
[0032] a calculating module, for calculating fuzziness of the human face image
according to
definitions and fuzziness of all the block images.
[0033] Further, the extracting module is specifically employed for:
[0034] detecting the human face image, and locating a human face region and
plural human
face feature points; and
[0035] adjusting a size of the human face region to a preset size, and
extracting a block image
in which each of the human face feature points resides from the adjusted human
face region.
[0036] Moreover, the device further comprises a training module that is
specifically employed
for:
[0037] extracting, from plural human face image samples, a block image sample
in which each
of the human face feature points resides, wherein the plural image samples
include definite
human face image samples and fuzzy human face image samples;
[0038] marking a corresponding grade label on each of the block image samples,
and
classifying the plural block image samples marked with grade labels into a
training set and a
verifying set; and
[0039] iteratively training a preconstructed deep neural network according to
the training set
and the verifying set, and obtaining the fuzziness detecting model.
[0040] Further, the deep neural network includes a data input layer, a feature
extraction layer, a
first full connection layer, an activation function layer, a Dropout layer, a
second full
connection layer, and a loss function layer sequentially connected in
cascades, wherein the
feature extraction layer includes a convolution layer, a maximum pooling
layer, a minimum
pooling layer, and a concatenate layer, the data input layer, the maximum
pooling layer, and
the minimum pooling layer are respectively connected with the convolution
layer, and the
maximum pooling layer, the minimum pooling layer, and the first full
connection layer are
respectively connected with the concatenate layer.
[0041] Moreover, the training module is specifically further employed for:
[0042] employing different testing sets to calculate an optimum threshold for
the fuzziness
detecting model according to an ROC curve.
[0043] Moreover, the device further comprises a judging module that is
specifically employed
for:
[0044] judging whether the fuzziness of the human face image obtained by
calculation is higher
than the optimum threshold;
[0045] if yes, deciding the human face image to be a fuzzy image, if not,
deciding the human
face image to be a definite image.
[0046] According to the third aspect, there is provided a computer equipment
that comprises a
memory, a processor, and a computer program stored on the memory and operable
on the
processor, and the human face fuzziness detecting method as recited in the
first aspect is
realized when the processor executes the computer program.
[0047] According to the fourth aspect, there is provided a computer-readable
storage medium
that stores a computer program thereon, and the human face fuzziness detecting
method as
recited in the first aspect is realized when the computer program is executed
by a processor.
[0048] As can be known from the above technical solutions, by extracting, from
a human face
image, block images in which plural human face feature points respectively
reside, thereafter
employing a previously well-trained fuzziness detecting model to predict a
confidence degree
of each of the block images corresponding to each grade label in plural grade
labels, and
obtaining definition and fuzziness of each block image according to the
confidence degree of
each block image corresponding to each grade label in plural grade labels, the
present
invention finally calculates fuzziness of the human face image according to
definitions and
fuzziness of all the block images; thusly, the plural block images in the
human face image are
respectively predicted as to fuzziness by means of a block prediction
conception, and the
prediction results are then combined together to predict the fuzziness of the
entire human face
image, whereby it is avoided, to a certain extent, that the entire result is
wrongly made due to
misjudgment of a certain block of the human face, so that accuracy in the
detection of human
face fuzziness is effectively enhanced; in addition, the present invention
employs a previously
well-trained fuzziness detecting model to predict confidence degrees of
different block
images in the human face image corresponding to each grade label in plural
grade labels, and
obtains fuzziness of each block image according to the confidence degree of
each block image
corresponding to each grade label in plural grade labels, since the plural
grade labels include
plural definition grades and plural fuzziness grades, in comparison with the
binary-
classification processing method in prior-art technology in which the deep
learning method
is employed to only differentiate human face block images into the two
categories of being
fuzzy and being definite, the present invention converts the binary-
classification problem to
a multi-classification problem for processing, and thereafter reconverts the
problem back to
binary-classification to obtain the fuzziness result, whereby it is made
possible to effectively
avoid the problem of misjudging a definite image as a fuzzy image, and to
further enhance
precision rate in the detection of image fuzziness.
BRIEF DESCRIPTION OF THE DRAWINGS
[0049] To describe the technical solutions in the embodiments of the present
invention more
clearly, drawings required for use in the description of the embodiments will
be briefly
introduced below. Apparently, the drawings introduced below are merely
directed to some
embodiments of the present invention, and it is possible for persons
ordinarily skilled in the
art to base on these drawings to acquire other drawings without creative
effort being spent in
the process.
[0050] Fig. 1 is a flowchart illustrating a human face fuzziness detecting
method provided by
an embodiment of the present invention;
[0051] Fig. 2 is a flowchart illustrating the process of training a fuzziness
detecting model
provided by an embodiment of the present invention;
[0052] Fig. 3 is a view schematically illustrating the structure of a deep
neural network provided
by an embodiment of the present invention;
[0053] Figs. 4a-4c are views illustrating ROC curves of the fuzziness
detecting model on
different testing sets provided by the embodiments of the present invention;
[0054] Fig. 5 is a view illustrating the structure of a human face fuzziness
detecting device
provided by an embodiment of the present invention; and
[0055] Fig. 6 is a view illustrating the internal structure of a computer
equipment provided by
an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0056] To make more lucid and clear the objectives, technical solutions and
advantages of the
present invention, the technical solutions in the embodiments of the present
invention will be
clearly and comprehensively described below in conjunction with accompanying
drawings
in the embodiments of the present invention. Apparently, the embodiments as
described
below are merely partial, rather than the entire, embodiments of the present
invention. All
other embodiments makeable by persons ordinarily skilled in the art on the
basis of the
embodiments in the present invention without spending any creative effort in
the process
shall all fall within the protection scope of the present invention.
[0057] As should be noted, unless explicitly demanded otherwise in the
context, such wordings
as "comprising", "including", "containing" and their various forms as used
throughout the
Description and Claims shall be understood to denote the meaning of inclusion,
rather than
the meaning of exclusion or exhaustion; in other words, these wordings denote
the meaning
of "including, but not limited to". In addition, unless explained otherwise in
the description
of the present invention, the wordings of "plural" and "a plurality of" denote
the meaning of
"two or more".
[0058] Fig. 1 is a flowchart illustrating a human face fuzziness detecting
method provided by
an embodiment of the present invention, as shown in Fig. 1, the method can
comprise the
following steps.
[0059] Step 101 - extracting, from a human face image, block images in which
plural human
face feature points respectively reside.
[0060] Specifically, a human face region is detected from the human face
image, and block
images in which plural human face feature points respectively reside are
extracted from the
human face region.
[0061] The human face feature points can include feature points to which a
left pupil, a right
pupil, a nose tip, a left corner of the mouth, and a right corner of the mouth
correspond, and
can further include other feature points, such as the feature point to which
the brow
corresponds.
[0062] In this embodiment, block images in which plural human face feature
points respectively
reside are extracted from the human face image, different human face feature
points are
contained in different block images, whereby plural block images can be
extracted, for
example, a left eye block image that contains the left pupil, and a right eye
block image that
contains the right pupil, etc.
[0063] Step 102 - predicting each block image via a previously well-trained
fuzziness detecting
model, and obtaining a confidence degree of each block image corresponding to
each grade
label in plural grade labels, wherein the plural grade labels include plural
definition grades
and plural fuzziness grades.
[0064] The confidence degree of a certain block image corresponding to a
certain grade label
is used to indicate the probability of the block image corresponding to this
grade label.
[0065] The definition grades are previously classified into three grades
according to definition
degrees in a descending order, including highly definite, mediumly definite,
and lightly
definite, and the corresponding grade labels are 0, 1, 2 respectively; the
fuzziness grades are
previously classified into three grades according to fuzziness degrees in an
ascending order,
including lightly fuzzy, mediumly fuzzy, and highly fuzzy, and the
corresponding grade labels
are 3, 4, 5 respectively; understandably, the number of grades of the
definition grades and the
number of grades of the fuzziness grades are both not restricted to three
grades, to which no
specific definition is made in the embodiments of the present invention.
[0066] Specifically, each block image is sequentially input in the fuzziness
detecting model for
prediction, and a confidence degree of each block image corresponding to each
grade label
in plural grade labels is obtained as output from the fuzziness detecting
model.
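
The prediction of Step 102 amounts to running a six-way classifier over the block images. The sketch below assumes the trained fuzziness detecting model exposes a standard classifier interface over 48*48 blocks; PyTorch is used only for illustration (the training in this document is described with the Caffe framework), so the tensor layout is an assumption.

```python
# A minimal sketch of Step 102: feed each block image into the fuzziness
# detecting model and read out the confidence degree per grade label.
import torch

def predict_confidences(model: torch.nn.Module, blocks: torch.Tensor) -> torch.Tensor:
    """blocks: (num_blocks, 3, 48, 48) -> confidences: (num_blocks, 6)."""
    model.eval()
    with torch.no_grad():
        scores = model(blocks)                # raw scores per grade label
        return torch.softmax(scores, dim=1)   # confidence degrees for labels 0..5
```
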
[0067] Step 103 - obtaining definition and fuzziness of each block image
according to the
confidence degree of each block image corresponding to each grade label in
plural grade
labels.
[0068] Specifically, with respect to each block image, the confidence degree
of the block image
corresponding to each grade label in plural grade labels is calculated to
obtain definition and
fuzziness of the block image. Confidence degrees of the block image
corresponding to all
definition grades can be directly accumulatively added to obtain the
definition of the block
image, and confidence degrees of the block image corresponding to all
fuzziness grades can
be directly accumulatively added to obtain the fuzziness of the block image,
and it is also
possible to employ other calculation modes to obtain the definition and
fuzziness of the block
image, while the embodiments of the present invention make no specific
definition thereto.
[0069] Exemplarily, suppose that the circumstance, in which confidence degrees
of a left eye
block image of a certain human face image correspond to the aforementioned six
types of
grade labels, is as follows: the probability corresponding to grade label "0"
is 0, the
probability corresponding to grade label "1" is 0.9, the probability
corresponding to grade
label "2" is 0.05, the probability corresponding to grade label "3" is 0.05,
and the probabilities
corresponding to grade label "4" and grade label "5" are both 0; the
confidence degrees of
the left eye block image corresponding to all definition grades are directly
accumulatively
added to derive the definition of the block image as 0.95, and the confidence
degrees of the
left eye block image corresponding to all fuzziness grades are accumulatively
added to derive
the fuzziness of the block image as 0.05.
[0070] Step 104 - calculating fuzziness of the human face image according to
definitions and
fuzziness of all the block images.
[0071] Specifically, the definitions of all the block images are
accumulatively added, and the
result is divided by the number of the entire block images to obtain the
definition of the
human face image, and the fuzziness of all the block images are accumulatively
added, and
the result is divided by the number of the entire block images to obtain the
fuzziness of the
human face image.
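
Steps 103 and 104 can be sketched directly from the accumulation mode described above: confidences for labels 0-2 (the definition grades) are summed into a block's definition, labels 3-5 (the fuzziness grades) into its fuzziness, and the block values are then averaged over all block images. The (num_blocks, 6) array layout is an assumption made for the example.

```python
# A sketch of Steps 103-104: per-block accumulation of label confidences,
# followed by averaging over all block images of the face.
import numpy as np

def face_fuzziness(confidences: np.ndarray) -> tuple:
    """confidences: (num_blocks, 6) confidence degrees per grade label."""
    block_definition = confidences[:, 0:3].sum(axis=1)   # labels 0, 1, 2
    block_fuzziness = confidences[:, 3:6].sum(axis=1)    # labels 3, 4, 5
    # Divide the accumulated values by the number of block images.
    return float(block_definition.mean()), float(block_fuzziness.mean())

# With the left-eye example above ([0, 0.9, 0.05, 0.05, 0, 0]), the block's
# definition is 0.95 and its fuzziness is 0.05.
face_definition, face_fuzz = face_fuzziness(
    np.array([[0.0, 0.9, 0.05, 0.05, 0.0, 0.0]]))
```
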
[0072] This embodiment of the present invention provides a human face
fuzziness detecting
method, by extracting, from a human face image, block images in which plural
human face
feature points respectively reside, thereafter employing a previously well-
trained fuzziness
detecting model to predict a confidence degree of each of the block images
corresponding to
each grade label in plural grade labels, and obtaining definition and
fuzziness of each block
image according to the confidence degree of each block image corresponding to
each grade
label in plural grade labels, fuzziness of the human face image is finally
calculated according
to definitions and fuzziness of all the block images; thusly, the plural block
images in the
human face image are respectively predicted as to fuzziness by means of a
block prediction
conception, and the prediction results are then combined together to predict
the fuzziness of
the entire human face image, whereby it is avoided, to a certain extent, that the
entire result is
wrongly made due to misjudgment of a certain block of the human face, so that
accuracy in
the detection of human face fuzziness is effectively enhanced; in addition,
the present
invention employs a previously well-trained fuzziness detecting model to
predict confidence
degrees of different block images in the human face image corresponding to
each grade label
in plural grade labels, and obtains fuzziness of each block image according to
the confidence
degree of each block image corresponding to each grade label in plural grade
labels, since
the plural grade labels include plural definition grades and plural fuzziness
grades, in
comparison with the binary-classification processing method in prior-art
technology in which
the deep learning method is employed to only differentiate human face block
images into the
two categories of being fuzzy and being definite, the present invention
converts the binary-
classification problem to a multi-classification problem for processing, and
thereafter
reconverts the problem back to binary-classification to obtain the fuzziness
result, whereby
it is made possible to effectively avoid the problem of misjudging a definite
image as a fuzzy
image, and to further enhance precision rate in the detection of image
fuzziness.
[0073] In a preferred embodiment, the process of extracting, from a human face
image, feature
block images in which plural human face feature points respectively reside can
include:
[0074] detecting the human face image, locating a human face region and plural
human face
feature points, adjusting a size of the human face region to a preset size,
and extracting a
block image in which each human face feature point resides from the adjusted
human face
region.
[0075] Specifically, a well-trained MTCNN (Multi-task convolutional neural
network) human
face detection model is employed to detect the human face image to locate a
human face
region and plural human face feature points, and the MTCNN human face
detection model
here includes P-Net, R-Net, and O-Net network layers that are respectively
responsible for
generating a detection frame, refining the detection frame, and locating human
face feature
points; the MTCNN human face detection model can be trained with reference to
prior-art
model training methods, to which no redundancy is made.
[0076] After the human face region and the plural human face feature points
have been located,
the size of the human face region is scaled to a preset size, coordinates of
the various human
face feature points are simultaneously converted from the human face image to
within a frame
of the size-adjusted human face region, pixel expansion is made all around
with the various
human face feature points as centers, plural rectangular block images are
obtained and cross
boundary processing is performed; in this embodiment, the preset size is
184*184, and 24
pixels are expanded all around with the various human face feature points as
centers to
respectively constitute block images sized 48*48.
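
The extraction just described can be sketched as follows. The facenet-pytorch MTCNN implementation stands in for the "well-trained MTCNN human face detection model" (the text does not name a specific implementation); the 184*184 face size, the 24-pixel expansion, and the resulting 48*48 blocks follow the text, while the clamping used for cross-boundary handling is a simple choice made for the example.

```python
# A sketch of landmark-centered block extraction, assuming facenet-pytorch's
# MTCNN as the face/landmark detector.
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN

def extract_blocks(image: Image.Image, preset: int = 184, half: int = 24):
    detector = MTCNN(keep_all=False)
    boxes, _, landmarks = detector.detect(image, landmarks=True)
    if boxes is None:
        raise ValueError("no face found")
    x1, y1, x2, y2 = [int(v) for v in boxes[0]]
    face = image.crop((x1, y1, x2, y2)).resize((preset, preset))
    face_arr = np.asarray(face)
    # Convert landmark coordinates from the original image into the 184x184 frame.
    scale_x = preset / (x2 - x1)
    scale_y = preset / (y2 - y1)
    blocks = []
    for lx, ly in landmarks[0]:  # eyes, nose tip, mouth corners
        cx = int((lx - x1) * scale_x)
        cy = int((ly - y1) * scale_y)
        # Expand 24 pixels around the feature point, clamped to stay in bounds.
        left = min(max(cx - half, 0), preset - 2 * half)
        top = min(max(cy - half, 0), preset - 2 * half)
        blocks.append(face_arr[top:top + 2 * half, left:left + 2 * half])
    return blocks  # five 48x48 block images
```
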
[0077] In a preferred embodiment, as shown in Fig. 2, the fuzziness detecting
model is trained
to be obtained by a method that comprises the following steps.
[0078] Step 201 - extracting, from a human face image sample, a block image
sample in which
each human face feature point resides, wherein the human face image sample
includes
definite human face image samples with different definition grades and fuzzy
human face
image samples with different fuzziness grades.
[0079] In this embodiment, human face image samples with three definition
grades and three
fuzziness grades are firstly obtained, and the human face image samples of
each grade reach
a certain number (200 for example). Human face regions are thereafter detected
from the
human face image samples, and block image samples in which human face feature
points
respectively reside are extracted from the human face regions, wherein a well-
trained
MTCNN human face detection model can be used to detect the human face regions
and to
locate the human face feature points. Since image sizes of the image samples
are inconsistent
with one another, and sizes of the detected human face regions are also
inconsistent with one
another, the human face regions as obtained are uniformly scaled to a preset
size, coordinates
of the various human face feature points are simultaneously converted from the
human face
image to within frames of the size-adjusted human face regions, pixel
expansion is made all
around with the various human face feature points as centers, plural
rectangular block images
are obtained and cross boundary processing is performed; in this embodiment,
the preset size
is 184*184, the left pupil, the right pupil, the nose tip, the left corner of
the mouth, and the
right corner of the mouth are selected to serve as human face feature points,
24 pixels are
expanded all around with the various human face feature points as centers to
respectively
constitute block image samples sized 48*48, and these are stored. Thusly, by
processing a small
number of human face image samples, fivefold block image samples can be generated for
generated for
use in model training.
[0080] Step 202 - marking a corresponding grade label on each block image
sample, and
classifying plural block image samples marked with grade labels into a
training set and a
verifying set.
[0081] In this embodiment, through the above Step 201 are obtained about 1000
block image
samples for human face image samples of each grade, in this Step 202, a
corresponding grade
label is firstly manually marked on each block image sample, that is, each
block image sample
is ascribed to the correct category according to definition degree and
fuzziness degree through
manual examination, the highly definite label being 0, the mediumly definite
label being 1,
the lightly definite label being 2, the lightly fuzzy label being 3, the
mediumly fuzzy label
being 4, and the highly fuzzy label being 5, and the block image samples
marked with grade
labels are thereafter classified into a training set and a verifying set in
accordance with a
preset proportion (9:1, for example), of which the training set is used for
training the
parametric model, and the verifying set is used for correcting the model
during the training
process.
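
Step 202 can be sketched with the grade-label numbering quoted above and a 9:1 split into a training set and a verifying set. scikit-learn's train_test_split is used here only as a convenient splitter; the text does not prescribe a tool, and the stratification option is an added assumption to keep every grade present in both sets.

```python
# A sketch of Step 202: map the manually assigned grades to labels 0-5 and
# split the marked block image samples 9:1 into training and verifying sets.
from sklearn.model_selection import train_test_split

GRADE_LABELS = {
    "highly definite": 0,
    "mediumly definite": 1,
    "lightly definite": 2,
    "lightly fuzzy": 3,
    "mediumly fuzzy": 4,
    "highly fuzzy": 5,
}

def split_samples(block_samples, grade_names, ratio=0.1, seed=0):
    """block_samples: list of block images; grade_names: manually assigned grades."""
    labels = [GRADE_LABELS[name] for name in grade_names]
    return train_test_split(block_samples, labels,
                            test_size=ratio,     # 9:1 training/verifying proportion
                            stratify=labels,     # keep every grade in both sets
                            random_state=seed)
```
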
[0082] Step 203 - iteratively training a preconstructed deep neural network
according to the
training set and the verifying set, and obtaining the fuzziness detecting
model.
[0083] Specifically, the preconstructed deep neural network is trained with
block image
samples in the training set as inputs and with grade labels to which the block
image samples
correspond as outputs, and the trained deep neural network is verified
according to the
verifying set; if the verifying result does not conform to an iteration ending
condition, the
deep neural network is continually iteratively trained and verified until the
verifying result
conforms to the iteration ending condition, whereupon a fuzziness detecting
model is
obtained.
[0084] During the process of specific implementation, before the model is
trained, the training
set and the verifying set are package-processed into data of the LMDB format,
the
preconstructed deep neural network structure is stored in a document of a
format with the
suffix ".prototxt", the batches by which the data is read can be set as a
reasonable numerical
value according to hardware performance, a hyperparameter is set at
"solver.prototxt", a
learning rate is set as 0.005, the maximum times of iterations are set as
4,000, times of
verifications and testing intervals are set as 50 and 100, and all of these
parameters are
adjustable. Training of the model ensues to obtain a model document with the
suffix
".caffemodel". What the present invention employs is a deep learning caffe
framework, and
the use of other deep learning frameworks is similar in principle.
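
A minimal training driver using the Python bindings of the Caffe framework named above might look as follows. The file names are illustrative; "solver.prototxt" is assumed to carry the hyperparameters quoted in the paragraph (base_lr: 0.005, max_iter: 4000, test_iter: 50, test_interval: 100) together with the paths of the LMDB training and verifying sets.

```python
# Hypothetical pycaffe training driver for the setup described above.
import caffe

caffe.set_mode_gpu()                         # or caffe.set_mode_cpu() without a GPU
solver = caffe.SGDSolver("solver.prototxt")  # assumed to hold the cited hyperparameters
solver.solve()                               # iterates to max_iter and snapshots a .caffemodel
```
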
[0085] Generally speaking, tens or even hundreds of thousands of training samples
are required to
train a deep learning model, but actual fuzzy samples are extremely limited in
number during
practical production, and Gaussian fuzzy samples or motion fuzzy samples
generated by
simulation by the mode of image processing differ apparently from actual
samples, whereas
the present invention collects definite human face image samples with
different definition
grades and fuzzy human face image samples with different fuzziness grades,
extracts, from
these image samples respectively, block image samples in which plural human
face feature
points respectively reside, marks corresponding grade labels thereon, and
thereafter makes
use of plural block image samples marked with grade labels to train the
constructed deep
neural network, whereby multi-fold actual training samples can be obtained by
merely using
a small quantity of human face image samples, so that performance of the model
can be
further guaranteed, and precision in the detection of image fuzziness can
hence be effectively
enhanced.
[0086] In addition, since being highly definite and being highly fuzzy are two
extremities in the
detection of fuzziness, they are relatively easily differentiated, while those
samples that are
mediumly definite, lightly definite, lightly fuzzy and mediumly fuzzy due to
influences by
illumination, jitter of the shooter or camera pixels are not easily
differentiated. In the process
of training the fuzziness detecting model in the present invention, the binary-
classification
problem is converted to multi-classification problem for processing, whereby
interferences
from samples of the two extremities can be greatly reduced, by paying full
attention to
samples that are difficult to be differentiated, a better detecting result is
achieved than the
method of direct binary-classification processing without differentiating
definition grades
and fuzziness grades, so that the problem of misjudging a definite image as a
fuzzy image
can be effectively avoided, and precision rate in the detection of image
fuzziness is further
enhanced.
[0087] In a preferred embodiment, the aforementioned deep neural network
includes a data
input layer, a feature extraction layer, a first full connection layer, an
activation function layer,
a Dropout layer, a second full connection layer, and a loss function layer
sequentially
connected in cascades, of which the feature extraction layer includes a
convolution layer, a
maximum pooling layer, a minimum pooling layer, and a concatenate layer, the
data input
layer, the maximum pooling layer, and the minimum pooling layer are
respectively connected
with the convolution layer, and the maximum pooling layer, the minimum pooling
layer, and
the first full connection layer are respectively connected with the
concatenate layer.
[0088] Refer to Fig. 3, which is a view schematically illustrating the
structure of a deep
neural network provided by an embodiment of the present invention. The first
is the data
input layer, and its function is to package data and thereafter input the
packaged data in the
network in small batches. The convolution layer follows. Then come separated
pooling layers:
one is maximum value pooling and the other is minimum value pooling, of which
the
maximum value pooling mode is to retain the most notable feature, and the
minimum value
pooling mode is to store the most easily neglectable feature, the combined use
of the two
pooling modes achieves excellent effect, and feature maps obtained by the two
types of
pooling are thereafter concatenated by the concatenate layer to serve together
as input to the
next layer. What follow are the full connection layer, the activation function
layer, and the
Dropout layer, of which the full connection layer is used to classify block
image features
input thereto, a Relu activation function in the activation function layer is
used to discard any
neuron whose output value is smaller than 0 to engender sparseness, and the
Dropout layer
is used to randomly drop a small portion of the parameters during each model training to increase the
generalization
capability of the model. What ensues is still a full connection layer that is
used to output score
values of each definition grade and each fuzziness grade. The last is a
normalization and loss
function layer that is used to map the result output from the previous full
connection layer to
a corresponding probability value, and thereafter employ a cross entropy loss
function to
progressively reduce the difference between these probabilities and the label; the
specific cross
entropy loss function formula can be inferred from prior-art technology, and
no redundancy
is made thereto in this context.
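
The topology just described can be sketched in code. The text does not give channel counts, kernel sizes, or the dropout rate, so every numeric choice below is an illustrative assumption; only the layer ordering (convolution, maximum and minimum pooling, concatenation, full connection, ReLU, Dropout, full connection, softmax cross-entropy loss) follows the description. PyTorch is used for illustration only, and minimum pooling is realized as negated maximum pooling.

```python
# A minimal sketch of the deep neural network topology of Fig. 3.
import torch
import torch.nn as nn

class BlockFuzzinessNet(nn.Module):
    def __init__(self, num_labels: int = 6):  # 3 definition + 3 fuzziness grades
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # shapes are assumptions
        self.max_pool = nn.MaxPool2d(kernel_size=2)
        self.fc1 = nn.Linear(2 * 16 * 24 * 24, 128)             # 48x48 input assumed
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)                        # rate assumed
        self.fc2 = nn.Linear(128, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x)
        p_max = self.max_pool(feat)        # retains the most notable features
        p_min = -self.max_pool(-feat)      # "minimum pooling" via negated max pooling
        merged = torch.cat([p_max, p_min], dim=1)   # concatenate layer
        merged = merged.flatten(start_dim=1)
        hidden = self.dropout(self.relu(self.fc1(merged)))
        return self.fc2(hidden)            # score values per grade label

# The loss function layer corresponds to nn.CrossEntropyLoss, which combines the
# softmax normalization and the cross entropy loss described above.
criterion = nn.CrossEntropyLoss()
```
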
[0089] In a preferred embodiment, after the step of iteratively training a
preconstructed deep
neural network according to the training set and the verifying set, and
obtaining the fuzziness
detecting model, the method can further comprise:
[0090] employing different testing sets to calculate an optimum threshold for
the fuzziness
detecting model according to an ROC curve.
[0091] The various testing sets include block image testing samples in which
the human face
feature points reside as extracted from the human face image testing sample,
the specific
extracting process can be inferred from Step 201, while no redundancy is made
thereto in this
context.
[0092] Specifically, fuzziness prediction is performed on each block image
testing sample in
each testing set on the basis of the fuzziness detecting model to obtain a
prediction result, an
ROC (receiver operating characteristic) curve to which each testing set
corresponds is drawn
according to the prediction result of each block image testing sample in each
testing set and
a preset threshold, the ROC curve to which each testing set corresponds is
analyzed, and an
optimum threshold is obtained.
[0093] In a practical application, 138669 definite human face images, 2334
semi-definite
human face images, 19050 definite human face images and 1446 fuzzy human face
images
of security thumbnails are collected to make up three image sets: definite
human face images
and fuzzy human face images, semi-definite human face images and fuzzy human
face
images, and definite human face images and fuzzy human face images of security
thumbnails,
block image testing samples in which human face feature points reside are
extracted from the
human face images in the three image sets respectively to form three testing
sets, the fuzziness
detecting model is thereafter employed to predict the various testing sets,
and ROC curves
are respectively drawn according to the prediction result of each block image
testing sample
in each testing set and the preset threshold, with reference to Figs. 4a-4c,
of which Fig. 4a
illustrates an ROC curve of the fuzziness detecting model formed by definite
and fuzzy
human face images on a testing set, Fig. 4b illustrates an ROC curve of the
fuzziness detecting
model formed by security definite thumbnails and fuzzy human face images on a
testing set,
and Fig. 4c illustrates an ROC curve of the fuzziness detecting model formed
by semi-definite
and fuzzy human face images on a testing set. In this embodiment, three levels
of preset
thresholds can be set through the expert experience method, the thresholds
are, in an
ascending order, 0.19, 0.39 and 0.79, respectively, and 0.39 is selected to
serve as the
optimum threshold through analyses of the ROC curves. 0.39 is selected to
perform tests on
the testing sets of definite and fuzzy human faces, and precision rate of the
testing results
reaches 99.3%.
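
In the text, the threshold 0.39 is chosen by drawing ROC curves for the three testing sets and analyzing them against expert-chosen candidate thresholds. The sketch below shows one common automatic alternative, computing the ROC curve with scikit-learn and taking the threshold that maximizes Youden's J statistic (TPR - FPR); it is a stand-in for the visual analysis, not the exact procedure described here.

```python
# A sketch of ROC-based threshold selection for the fuzziness detecting model.
import numpy as np
from sklearn.metrics import roc_curve

def optimum_threshold(y_true: np.ndarray, fuzziness_scores: np.ndarray) -> float:
    """y_true: 1 for fuzzy images, 0 for definite images; scores in [0, 1]."""
    fpr, tpr, thresholds = roc_curve(y_true, fuzziness_scores)
    return float(thresholds[np.argmax(tpr - fpr)])

# The chosen threshold then drives the final decision of [0095]:
# is_fuzzy = face_fuzziness_value > threshold
```
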
[0094] In a preferred embodiment, after the above step of calculating
fuzziness of the human
face image according to definitions and fuzziness of all the block images, the
method can
further comprise:
[0095] judging whether the fuzziness of the human face image obtained by
calculation is higher
than the optimum threshold; if yes, deciding the human face image to be a
fuzzy image, if
not, deciding the human face image to be a definite image.
[0096] In this embodiment, the optimum threshold is taken as standard to judge
whether the
human face image is a fuzzy image, when the fuzziness of the human face image
is higher
than the optimum threshold, it is decided that the human face image is a fuzzy
image, whereby
is achieved automatic detection of fuzzy images, and image quality is
enhanced.
[0097] Fig. 5 is a view illustrating the structure of a human face fuzziness
detecting device
provided by an embodiment of the present invention, as shown in Fig. 5, the
device comprises:
[0098] an extracting module 51, for extracting, from a human face image, block
images in
which plural human face feature points respectively reside;
[0099] a predicting module 52, for predicting each of the block images via a
previously well-
trained fuzziness detecting model, and obtaining a confidence degree of each
of the block
images corresponding to each grade label in plural grade labels, wherein the
plural grade
labels include plural definition grades and plural fuzziness grades;
[0100] an obtaining module 53, for calculating definition and fuzziness of
each of the block
images according to the confidence degree of each of the block images
corresponding to each
grade label in plural grade labels; and
[0101] a calculating module 54, for calculating fuzziness of the human face
image according to
definitions and fuzziness of all the block images.
[0102] In a preferred embodiment, the extracting module 51 is specifically
employed for:
[0103] detecting the human face image, and locating a human face region and
plural human
face feature points; and
[0104] adjusting a size of the human face region to a preset size, and
extracting a block image
in which each of the human face feature points resides from the adjusted human
face region.
[0105] In a preferred embodiment, the device further comprises a training
module 50 that is
specifically employed for:
[0106] extracting, from plural human face image samples, a block image sample
in which each
of the human face feature points resides, wherein the plural image samples
include definite
human face image samples and fuzzy human face image samples;
[0107] marking a corresponding grade label on each of the block image samples,
and
classifying the plural block image samples marked with grade labels into a
training set and a
verifying set; and
[0108] iteratively training a preconstructed deep neural network according to
the training set
and the verifying set, and obtaining the fuzziness detecting model.
[0109] In a preferred embodiment, the deep neural network includes a data
input layer, a feature
extraction layer, a first full connection layer, an activation function layer,
a Dropout layer, a
second full connection layer, and a loss function layer sequentially connected
in cascades,
wherein the feature extraction layer includes a convolution layer, a maximum
pooling layer,
a minimum pooling layer, and a concatenate layer, the data input layer, the
maximum pooling
layer, and the minimum pooling layer are respectively connected with the
convolution layer,
and the maximum pooling layer, the minimum pooling layer, and the first full
connection
layer are respectively connected with the concatenate layer.
[0110] In a preferred embodiment, the training module 50 is specifically
further employed for:
[0111] employing different testing sets to calculate an optimum threshold for
the fuzziness
detecting model according to an ROC curve.
[0112] In a preferred embodiment, the device further comprises a judging
module 55 that is
specifically employed for:
[0113] judging whether the fuzziness of the human face image obtained by
calculation is higher
than the optimum threshold;
[0114] if yes, deciding the human face image to be a fuzzy image, if not,
deciding the human
face image to be a definite image.
[0115] As should be noted, the human face fuzziness detecting device provided
by the
foregoing embodiment is merely exemplarily explained by being divided into the
aforementioned various functional modules, whereas it is possible, in actual
application, to
assign the above functions to different functional modules for completion
according to
requirements, that is to say, the internal structure of the device is
classified into different
functional modules to complete the entire or partial functions as described
above. In addition,
since the human face fuzziness detecting device provided by this embodiment
pertains to the
same conception as the human face fuzziness detecting method provided by the
previous
embodiment, the process of its specific implementation and the advantageous
effects
achieved thereby can be inferred from the embodiment of the human face
fuzziness detecting
method, while no repetition is redundantly made in this context.
[0116] Fig. 6 is a view illustrating the internal structure of a computer
equipment provided by
an embodiment of the present invention. The computer equipment can be a
server, and its
internal structure can be as shown in Fig. 6. The computer equipment comprises
a processor,
a memory, and a network interface connected to each other via a system bus.
The processor
of the computer equipment is employed to provide computing and controlling
capabilities.
The memory of the computer equipment includes a nonvolatile storage medium and
an
internal memory. The nonvolatile storage medium stores therein an operating
system, a
computer program and a database. The internal memory provides environment for
the
running of the operating system and the computer program in the nonvolatile
storage medium.
The network interface of the computer equipment is employed to connect to an
external
terminal via network for communication. The computer program realizes a human
face
fuzziness detecting method when it is executed by a processor.
[0117] As understandable to persons skilled in the art, the structure
illustrated in Fig. 6 is merely
a block diagram of partial structure relevant to the solution of the present
invention, and does
not constitute any restriction to the computer equipment on which the solution
of the present
invention is applied, as the specific computer equipment may comprise
component parts that
are more than or less than those illustrated in Fig. 6, or may combine certain
component parts,
or may have different layout of component parts.
[0118] In one embodiment, there is provided a computer equipment that
comprises a memory,
a processor and a computer program stored on the memory and operable on the
processor,
and the following steps are realized when the processor executes the computer
program:
[0119] extracting, from a human face image, block images in which plural human
face feature
points respectively reside;
[0120] predicting each of the block images via a previously well-trained
fuzziness detecting
model, and obtaining a confidence degree of each of the block images
corresponding to each
grade label in plural grade labels, wherein the plural grade labels include
plural definition
grades and plural fuzziness grades;
[0121] obtaining definition and fuzziness of each of the block images
according to the
confidence degree of each of the block images corresponding to each grade
label in plural
grade labels; and
[0122] calculating fuzziness of the human face image according to definitions
and fuzziness of
all the block images.
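Purely as an illustrative sketch of these four steps (not the claimed implementation), the code below uses placeholder names for the feature points and the trained model; the grade labels, the per-block definition/fuzziness computation and the final aggregation by averaging are assumptions, since the exact formulas are not reproduced in this passage.

```python
# Hedged end-to-end sketch of the four steps above; every concrete choice
# (labels, block size, aggregation rule) is an illustrative assumption.
from typing import List, Sequence, Tuple
import numpy as np

GRADE_LABELS = ["definition_1", "definition_2", "fuzziness_1", "fuzziness_2"]  # assumed labels


def extract_block_images(face_image: np.ndarray,
                         feature_points: Sequence[Tuple[int, int]],
                         size: int = 32) -> List[np.ndarray]:
    """Step 1: cut a block image around each human face feature point."""
    h, w = face_image.shape[:2]
    blocks = []
    for x, y in feature_points:
        x0 = min(max(0, x - size // 2), max(0, w - size))
        y0 = min(max(0, y - size // 2), max(0, h - size))
        blocks.append(face_image[y0:y0 + size, x0:x0 + size])
    return blocks


def face_image_fuzziness(face_image: np.ndarray,
                         feature_points: Sequence[Tuple[int, int]],
                         model) -> float:
    fuzz_per_block = []
    for block in extract_block_images(face_image, feature_points):
        conf = model.predict(block)  # Step 2: confidence for each grade label
        definition = sum(c for lbl, c in zip(GRADE_LABELS, conf) if lbl.startswith("definition"))
        fuzziness = sum(c for lbl, c in zip(GRADE_LABELS, conf) if lbl.startswith("fuzziness"))
        # Step 3 (assumed form): per-block fuzziness relative to definition.
        fuzz_per_block.append(fuzziness / (definition + fuzziness + 1e-8))
    # Step 4 (assumed form): aggregate over all block images by averaging.
    return float(np.mean(fuzz_per_block))


class DummyModel:
    """Stand-in for the previously well-trained fuzziness detecting model."""
    def predict(self, block: np.ndarray) -> np.ndarray:
        return np.array([0.1, 0.2, 0.3, 0.4])


points = [(40, 60), (88, 60), (64, 96)]  # e.g. eye and mouth corners (illustrative)
print(face_image_fuzziness(np.random.rand(128, 128), points, DummyModel()))  # ~0.7
```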
[0123] In one embodiment, there is provided a computer-readable storage medium
storing
thereon a computer program, and the following steps are realized when the
computer program
is executed by a processor:
[0124] extracting, from a human face image, block images in which plural human
face feature
points respectively reside;
[0125] predicting each of the block images via a previously well-trained
fuzziness detecting
model, and obtaining a confidence degree of each of the block images
corresponding to each
grade label in plural grade labels, wherein the plural grade labels include
plural definition
grades and plural fuzziness grades;
[0126] obtaining definition and fuzziness of each of the block images
according to the
confidence degree of each of the block images corresponding to each grade
label in plural
grade labels; and
[0127] calculating fuzziness of the human face image according to definitions
and fuzziness of
all the block images.
[0128] As comprehensible to persons ordinarily skilled in the art, all or part of the flows
in the methods according to the aforementioned embodiments can be completed via a
computer
program instructing relevant hardware, the computer program can be stored in a
nonvolatile
computer-readable storage medium, and the computer program can include the
flows as
embodied in the aforementioned various methods when executed. Any memory, storage,
database or other media referred to in the various embodiments provided by the
present application may include nonvolatile and/or volatile memory. The
nonvolatile memory can include a read-only memory (ROM), a programmable ROM
(PROM), an electrically programmable ROM (EPROM), an electrically erasable and
programmable ROM (EEPROM) or a flash memory. The volatile memory can include a
random access memory (RAM) or an external cache memory. By way of explanation rather
than restriction, the RAM is obtainable in many forms, such as static RAM (SRAM),
dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM),
enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory
bus (Rambus) direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and
Rambus dynamic RAM (RDRAM), etc.
[0129] The technical features of the aforementioned embodiments may be combined
arbitrarily. For the sake of brevity, not all possible combinations of the technical
features in the aforementioned embodiments have been described; nevertheless, as long
as such combinations of technical features are not mutually contradictory, they should
be considered to fall within the scope recorded in this Description.
[0130] The foregoing embodiments merely represent several modes of execution of the
present invention, and their descriptions are relatively specific and detailed, but they
should not therefore be construed as restricting the scope of the patent. As should be
pointed out, persons of ordinary skill in the art may further make various modifications
and improvements without departing from the conception of the present invention, and
all of these pertain to the protection scope of the present invention. Accordingly, the
patent protection scope of the present invention shall be determined by the appended
Claims.
Representative Drawing
A single figure which represents a drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the transition to the next-generation patents system (BNG), the Canadian Patents Database (CPD) now contains a more detailed Event History, which reproduces the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a better understanding of the status of the application/patent presented on this page, the Disclaimer section, as well as the descriptions for Patent, Event History, Maintenance Fees and Payment History, should be consulted.

Event History

Description Date
Examiner's Report 2024-03-05
Inactive: Report - No QC 2024-02-29
Amendment Received - Response to Examiner's Requisition 2024-02-02
Amendment Received - Voluntary Amendment 2024-02-02
Examiner's Report 2023-10-03
Inactive: Report - No QC 2023-09-29
Letter Sent 2023-09-27
Advanced Examination Determined Compliant - paragraph 84(1)(a) of the Patent Rules 2023-09-27
Amendment Received - Voluntary Amendment 2023-09-21
Inactive: Advanced Examination (SO) Fee Processed 2023-09-21
Inactive: Advancement of Examination (SO) 2023-09-21
Inactive: IPC assigned 2023-08-29
Inactive: IPC assigned 2023-08-29
Inactive: IPC assigned 2023-08-29
Inactive: IPC removed 2023-08-29
Inactive: First IPC assigned 2023-08-29
Inactive: IPC expired 2023-01-01
Inactive: IPC removed 2022-12-31
Letter Sent 2022-10-06
Priority Claim Requirements Determined Compliant 2022-10-05
Request for Priority Received 2022-10-05
Inactive: IPC assigned 2022-10-05
Inactive: IPC assigned 2022-10-05
Inactive: First IPC assigned 2022-10-05
Application Received - PCT 2022-10-05
Letter Sent 2022-10-05
Request for Examination Requirements Determined Compliant 2022-09-07
All Requirements for Examination Determined Compliant 2022-09-07
National Entry Requirements Determined Compliant 2022-09-07
Application Published (Open to Public Inspection) 2021-09-16

Abandonment History

There is no abandonment history.

Maintenance Fees

The last payment was received on 2023-12-15.

Note: If the full payment has not been received on or before the date indicated, a further fee may be imposed, namely one of the following:

  • reinstatement fee;
  • late payment fee; or
  • additional fee to reverse a deemed expiry.

Patent fees are adjusted on the 1st of January of each year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Due Date Paid Date
MF (application, 2nd anniv.) - standard 02 2022-06-20 2022-09-07
Request for examination - standard 2024-06-19 2022-09-07
Basic national fee - standard 2022-09-07 2022-09-07
MF (application, 3rd anniv.) - standard 03 2023-06-19 2022-12-15
Advancement of examination 2023-09-21 2023-09-21
MF (application, 4th anniv.) - standard 04 2024-06-19 2023-12-15
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
BENBEN ZHANG
XIN HANG
Past owners that do not appear in the "Owners on Record" list will appear in other documents on file.
Documents

List of published and unpublished patent-specific documents on the CPD.



Document Description  Date (yyyy-mm-dd)  Number of pages  Image size (KB)
Claims  2024-02-01  20  1,162
Representative drawing  2023-09-12  1  30
Claims  2023-09-20  20  1,147
Description  2022-09-06  24  1,154
Drawings  2022-09-06  4  107
Claims  2022-09-06  4  140
Abstract  2022-09-06  1  23
Amendment / response to report  2024-02-01  46  1,927
Examiner requisition  2024-03-04  4  198
Courtesy - Letter confirming entry into national phase under the PCT  2022-10-05  1  594
Courtesy - Acknowledgement of request for examination  2022-10-04  1  423
Advancement of examination (SO) / Amendment / response to report  2023-09-20  26  960
Courtesy - Request to advance examination - Compliant (SO)  2023-09-26  1  184
Examiner requisition  2023-10-02  4  241
National entry request  2022-09-06  13  1,263
International search report  2022-09-06  13  498
Amendment - Abstract  2022-09-06  2  108