Patent 3153067 Summary


Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 3153067
(54) English Title: PICTURE-DETECTING METHOD AND APPARATUS
(54) French Title: PROCEDE ET DISPOSITIF DE TEST D'IMAGE
Status: Granted and Issued
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06T 07/00 (2017.01)
  • G06T 01/40 (2006.01)
  • G06T 05/30 (2006.01)
  • G06T 05/70 (2024.01)
  • G06T 07/10 (2017.01)
  • G06T 07/194 (2017.01)
  • G06T 07/70 (2017.01)
  • G06T 07/90 (2017.01)
(72) Inventors :
  • MU, CHONG (China)
  • ZHOU, XUYANG (China)
  • LIU, ERLONG (China)
  • HAN, MINGXIU (China)
(73) Owners :
  • 10353744 CANADA LTD.
(71) Applicants :
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Associate agent:
(45) Issued: 2024-03-19
(86) PCT Filing Date: 2020-06-24
(87) Open to Public Inspection: 2021-03-11
Examination requested: 2022-09-16
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/CN2020/097857
(87) International Publication Number: CN2020097857
(85) National Entry: 2022-03-02

(30) Application Priority Data:
Application No. Country/Territory Date
201910826006.7 (China) 2019-09-02

Abstracts

English Abstract

The present invention relates to the technical field of picture recognition. Disclosed are a picture test method and device, which improve the quality of an uploaded picture by adding compliance test items, i.e. a picture background purity and a main body position. Said method comprises: acquiring a denoised picture to be tested, performing pixel-level semantic segmentation processing on same, and then recognizing a main body area image and a background area image; performing color space conversion on said picture, and outputting hue space data and lightness space data of the images; expanding the main body area image and then fusing same with the hue space data, and extracting background purity values corresponding to pixels in the background area image, so as to determine whether the background purity of said picture is compliant; processing the lightness space data by means of a plurality of binarization methods, and outputting a plurality of binarization results; and fusing the main body area image with the plurality of binarization results, respectively, extracting coordinate values and a corresponding background purity value of each pixel in the fused main body area image, so as to determine whether the main body position of said picture is compliant.
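The pipeline described in the abstract can be sketched end-to-end on a toy grayscale picture. This is a minimal illustration under loose assumptions: trivial thresholding stands in for pixel-level semantic segmentation, and the two compliance checks are simplified. All function names and threshold values below are hypothetical, not the patent's reference implementation.

```python
# Hedged sketch of the claimed detection pipeline on a toy grayscale
# "picture" (list of rows of 0-255 values). Helper names and
# thresholds are illustrative assumptions only.

def segment(picture, subject_level=128):
    """Crude stand-in for pixel-level semantic segmentation:
    pixels at or above subject_level are treated as subject."""
    return [[v >= subject_level for v in row] for row in picture]

def background_purity_compliant(picture, subject_mask, purity_threshold=32):
    """Background is compliant when every non-subject pixel value
    stays below the purity threshold (a near-uniform backdrop)."""
    return all(v < purity_threshold
               for row, mrow in zip(picture, subject_mask)
               for v, m in zip(row, mrow) if not m)

def subject_centered(subject_mask, tolerance=1.0):
    """Subject location is compliant when its centroid lies within
    `tolerance` pixels of the picture centre."""
    coords = [(i, j) for i, row in enumerate(subject_mask)
              for j, m in enumerate(row) if m]
    ci = sum(i for i, _ in coords) / len(coords)
    cj = sum(j for _, j in coords) / len(coords)
    h, w = len(subject_mask), len(subject_mask[0])
    return (abs(ci - (h - 1) / 2) <= tolerance
            and abs(cj - (w - 1) / 2) <= tolerance)

def picture_compliant(picture):
    mask = segment(picture)
    return background_purity_compliant(picture, mask) and subject_centered(mask)

# A 5x5 picture: bright centred subject on a dark, uniform background.
good = [[0]*5, [0]*5, [0, 0, 200, 0, 0], [0]*5, [0]*5]
# Same subject, but the top row of the background is noisy.
bad = [[90]*5, [0]*5, [0, 0, 200, 0, 0], [0]*5, [0]*5]
```

Both checks mirror the abstract's two test items (background purity and main-body position); a real implementation would replace `segment` with the segmentation network the claims describe.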


French Abstract

L'invention se rapporte au domaine technique de la reconnaissance d'image. L'invention concerne un procédé et un dispositif de test d'image, qui améliorent la qualité d'une image téléchargée par ajout d'éléments de test de conformité, à savoir une pureté d'arrière-plan d'image et une position de corps principal. Ledit procédé consiste : à acquérir une image débruitée à tester, à effectuer un traitement de segmentation sémantique au niveau des pixels sur ladite image, puis à reconnaître une image de zone de corps principal et une image de zone d'arrière-plan ; à effectuer une conversion d'espace de couleur sur ladite image, et à fournir des données d'espace de teinte et des données d'espace de clarté des images ; à agrandir l'image de zone de corps principal, puis à la fusionner avec les données d'espace de teinte, et à extraire des valeurs de pureté d'arrière-plan correspondant à des pixels dans l'image de zone d'arrière-plan, de façon à déterminer si la pureté d'arrière-plan de ladite image est conforme ; à traiter les données d'espace de clarté au moyen d'une pluralité de procédés de binarisation, et à fournir une pluralité de résultats de binarisation ; et à fusionner l'image de zone de corps principal avec la pluralité de résultats de binarisation, respectivement, à extraire des valeurs de coordonnées et une valeur de pureté d'arrière-plan correspondante de chaque pixel dans l'image de zone de corps principal fusionnée, de façon à déterminer si la position de corps principal de ladite image est conforme.

Claims

Note: Claims are shown in the official language in which they were submitted.


Claims:
1. A picture-detecting apparatus, the apparatus comprising: a pixel-processing unit, for acquiring a to-be-detected picture that has been denoised, performing pixel-level semantic segmentation on the denoised to-be-detected picture, and recognizing a subject region image and a background region image; a hue-space-converting unit, for performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; and a first determining unit, for fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centered image subjects.
2. The apparatus of claim 1, the apparatus further comprising: a binarization-processing unit, for processing the brightness space data by means of plural binarization apparatus, so as to output plural binarization results correspondingly.
3. The apparatus of claim 2, the apparatus further comprising: a second determining unit, for fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
4. The apparatus of claim 3, wherein between the binarization-processing unit and the second determining unit, the apparatus further comprises: performing non-coherence region suppression on a first binarization result and a second binarization result, respectively, by means of a non-maximum suppression apparatus.
Date Reçue/Date Received 2023-12-18

5. The apparatus of claim 4, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering apparatus; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.
6. The apparatus of claim 5, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises: using HSV hue space conversion apparatus to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion apparatus to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
7. The apparatus of claim 6, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
8. The apparatus of claim 7, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.

9. The apparatus of claim 8, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.
10. The apparatus of claim 9, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
11. The apparatus of claim 10, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
12. The apparatus of claim 11, processing the brightness space data by means of the plural binarization apparatus to output the plural binarization results correspondingly comprises: processing data of the brightness space channel L by means of a fixed-threshold binarization apparatus, so as to obtain the first binarization result.
13. The apparatus of claim 12, processing the brightness space data by means of the plural binarization apparatus to output the plural binarization results correspondingly comprises: processing the data of the brightness space channel L by means of a Gaussian-window binarization apparatus, so as to obtain the second binarization result.

14. The apparatus of claim 13, the apparatus further comprising: performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression apparatus.
15. The apparatus of claim 14, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
16. The apparatus of claim 15, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from fusing results and their corresponding background purity values.
17. The apparatus of claim 16, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from fusing results and their corresponding background purity values.
18. The apparatus of claim 17, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

19. The apparatus of claim 18, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: determining that the location of the subject in the to-be-detected picture is compliant.
20. The apparatus of claim 19, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: determining that the location of the subject in the to-be-detected picture is non-compliant; and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
21. The apparatus of claim 20, the apparatus further comprising: identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
22. The apparatus of claim 21, the apparatus further comprising: dilating the subject region image to dilate the range of edge pixels of the subject region image in order to ensure complete coverage over the subject region image.
23. The apparatus of claim 22, the apparatus further comprising: the dilated subject region image is fused with the hue space data.
24. The apparatus of claim 23, the apparatus further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization apparatus so as to generate the plural binarization results.
25. The apparatus of claim 24, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.

26. The apparatus of claim 25, wherein based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
27. The apparatus of claim 26, the apparatus further comprising: employing HSV (hue, saturation, value) hue space conversion apparatus to convert the to-be-detected picture in the RGB (red, green and blue) color space into the HSV color space that is closer to human visual perceptual characteristics.
28. The apparatus of claim 27, wherein the conversion apparatus includes:

V = max(R, G, B)
S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)
H = 60 × (G − B) / (S × V),       if max(R, G, B) = R
H = 60 × (2 + (B − R) / (S × V)), if max(R, G, B) = G
H = 60 × (4 + (R − G) / (S × V)), if max(R, G, B) = B
H = H + 360,                      if H < 0
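The RGB-to-HSV conversion recited in claim 28 is the standard one, so it can be checked per-pixel against the Python standard library. A small sketch (function name and value ranges are our choices, not the patent's):

```python
# Per-pixel RGB -> HSV following claim 28's formulas, checked against
# the stdlib. Note S*V equals the chroma C = max - min, so dividing
# by (S*V) matches the usual chroma-based hue formula.
import colorsys

def rgb_to_hsv(r, g, b):
    """r, g, b in [0, 1]. Returns H in degrees, S and V in [0, 1]."""
    v = max(r, g, b)
    c = v - min(r, g, b)          # chroma, equal to S * V
    s = 0.0 if v == 0 else c / v
    if c == 0:
        h = 0.0                   # hue undefined for grays; pick 0
    elif v == r:
        h = 60 * (g - b) / c
    elif v == g:
        h = 60 * (2 + (b - r) / c)
    else:
        h = 60 * (4 + (r - g) / c)
    if h < 0:                     # claim 28's wrap-around: H = H + 360
        h += 360
    return h, s, v
```

For example, `rgb_to_hsv(0.5, 0.2, 0.8)` agrees with `colorsys.rgb_to_hsv` once the stdlib's fractional hue is scaled by 360.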
29. The apparatus of claim 28, wherein the features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception.
30. The apparatus of claim 29, wherein the implementation converts brightness space data of the picture to be examined through: converting the to-be-detected picture from RGB space data into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness space data using the following conversion equation:

[X]   [0.412453 0.357580 0.180423] [R]
[Y] = [0.212671 0.715160 0.072169] [G]
[Z]   [0.019334 0.119193 0.950227] [B]

L = (29/3)³ × (Y/Yn),        if Y/Yn ≤ (6/29)³
L = 116 × (Y/Yn)^(1/3) − 16, if Y/Yn > (6/29)³
u = 13 × L × (u′ − u′n)
v = 13 × L × (v′ − v′n)

wherein u′n and v′n are light source constants, and Yn is a preset fixed value; and wherein

u′ = 4X / (X + 15Y + 3Z)
v′ = 9Y / (X + 15Y + 3Z)
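The RGB→XYZ→LUV path in claim 30 uses the standard CIE matrix and the standard CIELUV lightness formula, so it can be verified with a few lines of pure Python. A hedged sketch (function names are ours; the white point Yn = 1.0 is an assumption for linear RGB in [0, 1]):

```python
# RGB -> XYZ via the matrix recited in claim 30, then the standard
# CIELUV lightness L* and chromaticity coordinates u', v'.

def rgb_to_xyz(r, g, b):
    """Linear RGB in [0, 1] -> CIE XYZ using claim 30's matrix."""
    x = 0.412453 * r + 0.357580 * g + 0.180423 * b
    y = 0.212671 * r + 0.715160 * g + 0.072169 * b
    z = 0.019334 * r + 0.119193 * g + 0.950227 * b
    return x, y, z

def luv_lightness(y, yn=1.0):
    """CIELUV L*: linear segment near black, cube root above it."""
    t = y / yn
    if t <= (6 / 29) ** 3:
        return (29 / 3) ** 3 * t
    return 116 * t ** (1 / 3) - 16

def luv_chromaticity(x, y, z):
    """The u', v' coordinates that feed u* = 13 L (u' - u'n) etc."""
    d = x + 15 * y + 3 * z
    return 4 * x / d, 9 * y / d
```

A sanity check: the matrix's middle row sums to 1, so white (1, 1, 1) has Y = 1 and thus L* = 100, and its (u′, v′) lands near the usual D65 white point (≈ 0.198, 0.468).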
31. The apparatus of claim 30, wherein a round filter kernel k is used for filtering pixels of the subject region image.
32. The apparatus of claim 31, wherein when a round filter kernel has a diameter of 4 represented by:

            1 1 1
        1 1 1 1 1
      1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1
k = 1 1 1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1
      1 1 1 1 1 1 1
        1 1 1 1 1
            1 1 1

33. The apparatus of claim 32, wherein the filtering equation is:

B = { P ∪ Z̊u(k) | P(i, j) ∈ P }

wherein (i, j) represents the pixel coordinates, P represents the subject region image, Zy(k) represents the background purity value corresponding to the pixel, Z̊u represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image.
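The dilation in claims 31-33 can be illustrated on a toy binary mask: grow the subject mask with a round (disc-shaped) neighborhood, then everything outside the dilated subject becomes the updated background region D. This is a pure-Python sketch under our own assumptions (a radius-1 disc, boundary pixels clipped), not the patent's kernel:

```python
# Toy version of claims 31-33: dilate the subject mask with a round
# structuring element, then take the complement as background D.

def dilate(mask, radius=1):
    """Set a pixel when any mask pixel lies within Euclidean `radius`."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = any(
                mask[i + di][j + dj]
                for di in range(-radius, radius + 1)
                for dj in range(-radius, radius + 1)
                if di * di + dj * dj <= radius * radius
                and 0 <= i + di < h and 0 <= j + dj < w)
    return out

subject = [[False] * 5 for _ in range(5)]
subject[2][2] = True                    # single subject pixel
dilated = dilate(subject, radius=1)     # grows into a plus shape
background = [[not v for v in row] for row in dilated]   # region D
```

With radius 1 the disc is a plus-shaped neighborhood, so the single subject pixel grows to five pixels; larger radii reproduce the rounder kernel sketched in claim 32.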
34. The apparatus of claim 33, wherein the background region image is fused with data of the hue space component H to generate a result C.
35. The apparatus of claim 34, wherein the fusion equation is as below:

C = { c(i, j) | c(i, j) = H(i, j), (i, j) ∈ D; c(i, j) = 0, (i, j) ∉ D }

36. The apparatus of claim 35, wherein (i, j) represents pixel coordinates, and H(i, j) represents the background purity value corresponding to the pixel in the hue space component H; and wherein when a pixel located at coordinates (i, j) belongs to the dilated background region image D, the background purity of the pixel in the hue space component H is evaluated.
37. The apparatus of claim 36, wherein when the pixel located at coordinates (i, j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is evaluated as zero.
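The fusion in claims 34-37 is a masked selection: background pixels keep their hue-component value H(i, j), all other pixels become zero, and the result is then compared against the first background purity threshold (claims 38-40). A minimal sketch with hypothetical names and a toy 2×2 picture:

```python
# Claims 34-37 as a masked selection: inside the background region D,
# c(i, j) = H(i, j); elsewhere c(i, j) = 0.

def fuse_background_with_hue(hue, background_mask):
    return [[h if m else 0 for h, m in zip(hrow, mrow)]
            for hrow, mrow in zip(hue, background_mask)]

def purity_compliant(c, background_mask, threshold):
    """Claims 38-40 style check: compliant when every background
    pixel's purity value stays below the first purity threshold."""
    return all(v < threshold
               for crow, mrow in zip(c, background_mask)
               for v, m in zip(crow, mrow) if m)

hue = [[10, 20], [30, 40]]           # toy hue component H
d = [[True, False], [False, True]]   # toy background region D
c = fuse_background_with_hue(hue, d)
```

Here the fused result keeps 10 and 40 (the two background pixels) and zeroes the rest, so compliance depends only on background hue values, as the claims intend.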
38. The apparatus of claim 37, wherein the coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values.
39. The apparatus of claim 38, wherein the predetermined first threshold is compared to the array C.

40. The apparatus of claim 39, wherein when all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
41. The apparatus of claim 40, the apparatus further comprising: processing data of the brightness space channel L by means of a fixed-threshold binarization apparatus, so as to obtain a first binarization result T.
42. The apparatus of claim 41, the apparatus further comprising: processing the data of the brightness space channel L by means of a Gaussian-window binarization apparatus, so as to obtain a second binarization result G.
43. The apparatus of claim 42, the apparatus further comprising: non-coherence region suppression is performed on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression apparatus to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
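Non-maximum suppression, which claim 43 applies to the two binarization results, keeps a response only where it dominates its immediate neighborhood. A one-dimensional toy illustration of the principle (the claimed apparatus operates on 2-D regions; this sketch and its names are ours):

```python
# 1-D illustration of non-maximum suppression: keep a value only
# where it is a local maximum of its immediate neighbourhood,
# otherwise suppress it to zero.

def nms_1d(values):
    kept = []
    for i, v in enumerate(values):
        left = values[i - 1] if i > 0 else float("-inf")
        right = values[i + 1] if i < len(values) - 1 else float("-inf")
        kept.append(v if v >= left and v >= right else 0)
    return kept
```

Applied to a noisy response like `[1, 3, 2, 5, 4]`, only the two local peaks survive, which is how scattered non-coherent responses from a busy background get discarded.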
44. The apparatus of claim 43, the apparatus further comprising: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
45. An electronic system comprising:
at least one processor;
a memory, connected with the at least one processor;
wherein the memory stores an instruction executable by the at least one processor configured to:
acquire a to-be-detected picture that has been denoised, perform pixel-level semantic segmentation on the denoised to-be-detected picture, and recognize a subject region image and a background region image;

perform hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; and
fuse the subject region image after dilation processing with the hue space data, extract a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determine whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centered image subjects.
46. The system of claim 45, the system further comprising: processing the brightness space data by means of plural binarization systems, so as to output plural binarization results correspondingly.
47. The system of claim 46, the system further comprising: fusing the subject region image with the plural binarization results.
48. The system of claim 47, the system further comprising: extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
49. The system of claim 48, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering system; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.

50. The system of claim 49, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises: using HSV hue space conversion system to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion system to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
51. The system of claim 50, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
52. The system of claim 51, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.
53. The system of claim 52, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.

54. The system of claim 53, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
55. The system of claim 54, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
56. The system of claim 55, processing the brightness space data by means of the plural binarization systems to output the plural binarization results correspondingly comprises: processing data of the brightness space channel L by means of a fixed-threshold binarization system, so as to obtain a first binarization result.
57. The system of claim 56, processing the brightness space data by means of the plural binarization systems to output the plural binarization results correspondingly comprises: processing the data of the brightness space channel L by means of a Gaussian-window binarization system, so as to obtain a second binarization result.
58. The system of claim 57, the system further comprising: performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression system.

59. The system of claim 58, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
60. The system of claim 59, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from fusing results and their corresponding background purity values.
61. The system of claim 60, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from fusing results and their corresponding background purity values.
62. The system of claim 61, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

63. The system of claim 62, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: determining that the location of the subject in the to-be-detected picture is compliant.
64. The system of claim 63, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: determining that the location of the subject in the to-be-detected picture is non-compliant; and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
65. The system of claim 64, the system further comprising: identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
66. The system of claim 65, the system further comprising: dilating the subject region image to dilate the range of edge pixels of the subject region image in order to ensure complete coverage over the subject region image.
67. The system of claim 66, the system further comprising: the dilated subject region image is fused with the hue space data.
68. The system of claim 67, the system further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization systems so as to generate the plural binarization results.
69. The system of claim 68, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.
Date Recue/Date Received 2023-12-18
70. The system of the claim 69, wherein based on the coordinate value of every
pixel in the
fused subject region image and its corresponding background purity value,
whether the
location of the subject in the to-be-detected picture is compliant can be
determined.
71. The system of the claim 70, the system further comprising: employing HSV
(hue, saturation,
value) hue space conversion system to convert the to-be-detected picture in
the RGB (red,
green and blue) color space into the HSV color space that is closer to human
visual
perceptual characteristics.
72. The system of the claim 71, wherein the conversion system includes:
V = max(R, G, B)
S = (max(R, G, B) - min(R, G, B)) / max(R, G, B)
H = 60 × (G - B) / (S × V),        if max(R, G, B) = R
H = 60 × (2 + (B - R) / (S × V)),  if max(R, G, B) = G
H = 60 × (4 + (R - G) / (S × V)),  if max(R, G, B) = B
H = H + 360,                       if H < 0
73. The system of the claim 72, wherein the features of brightness space data
include extensive
color gamut coverage, high visual consistency, and good capability of
expressing color
perception.
74. The system of the claim 73, wherein the implementation converts the to-be-detected picture
into brightness space data by: converting the to-be-detected picture from RGB space data
into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness
space data using the following conversion equation:
[X]   [0.412453  0.357580  0.180423] [R]
[Y] = [0.212671  0.715160  0.072169] [G]
[Z]   [0.019334  0.119193  0.950227] [B]
L = 116 × (Y / Yn)^(1/3) - 16,   if Y / Yn > (6/29)^3
L = (29/3)^3 × (Y / Yn),         otherwise
u = 13 × L × (u' - un)
v = 13 × L × (v' - vn)
wherein un and vn are light source constants, and Yn is a preset fixed value; and
wherein
u' = 4X / (X + 15Y + 3Z)
v' = 9Y / (X + 15Y + 3Z)
75. The system of the claim 74, wherein a round filter kernel k is used for
filtering pixels of the
subject region image.
76. The system of the claim 75, wherein the round filter kernel k, having a diameter of 4, is
represented by:

              1 1 1
            1 1 1 1 1
          1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1 1
  k =   1 1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1 1
          1 1 1 1 1 1 1
            1 1 1 1 1
              1 1 1
77. The system of the claim 76, wherein the filtering equation is:

    B = P ⊕ k = Z_u(k) ∪ P, (i, j) ∈ P

    wherein (i, j) represents the pixel coordinates, P represents the subject region image,
    Z_u(k) represents the punctured neighborhood region corresponding to each pixel,
    obtained using the round filter kernel k as the mask, B represents the dilated subject
    region image, wherein the background region image is updated when the subject region
    image is dilated, and D represents the updated background region image.
78. The system of the claim 77, wherein the background region image is fused
with data of the
hue space component H to generate a result C.
79. The system of the claim 78, wherein the fusion equation is as below:
    C = { c(i, j) },  c(i, j) = H(i, j), (i, j) ∈ D
                      c(i, j) = 0,       (i, j) ∉ D
80. The system of the claim 79, wherein (i,j) represents pixel coordinates,
H(i,j) represents the
background purity value corresponding to the pixel in the hue space component H; and
wherein when a pixel located at coordinates (i, j) belongs to the dilated background region
image D, the background purity of the pixel in the hue space component H is valuated.
81. The system of the claim 80, wherein when the pixel located at coordinates
(i,j) does not
belong to the dilated background region image D, the background purity value of the pixel
in the hue space component H is valuated as zero.
82. The system of the claim 81, wherein the coordinates of the pixels and
their corresponding
background purity values are gathered to form an array C, which is the array
composed of
the location coordinates of the pixels in the background region image D and
the
correspondingly converted background purity values.
83. The system of the claim 82, wherein the predetermined first threshold is
compared to the
array C.
84. The system of the claim 83, wherein when all the background purity values
corresponding
to the individual pixels in the background region image D are smaller than the first
first
background purity threshold, it is determined that the background purity of
the to-be-
detected picture is compliant.
85. The system of the claim 84, the system further comprising: processing data
of the brightness
space channel L by means of a fixed-threshold binarization system, so as to
obtain a first
binarization result T.
86. The system of the claim 85, the system further comprising: processing the
data of the
brightness space channel L by means of a Gaussian-window binarization system,
so as to
obtain a second binarization result G.
87. The system of the claim 86, the system further comprising: performing non-coherence
region suppression on the first binarization result T and the second binarization result G,
respectively, by means of the non-maximum suppression system, to nullify the impact of
non-coherence regions caused by a complicated background on the detection results,
thereby further improving detection precision.
88. The system of the claim 87, the system further comprising: fusing the
subject region image
recognized through pixel-level semantic segmentation with the first
binarization result and
the second binarization result, respectively.
89. A computer readable physical memory having stored thereon a computer
program executed
by a computer configured to:
acquire a to-be-detected picture that has been denoised, and perform pixel-
level semantic
segmentation on a denoised to-be-detected picture, recognizing a subject
region image
and a background region image;
perform hue space conversion on the to-be-detected picture to output hue space
data and
brightness space data of the picture; and
fuse the subject region image after dilation processing with the hue space
data, extract a
background purity value corresponding to every pixel in the background region
image
formed after dilation processing, and determine whether background purity of
the to-be-
detected picture is compliant, wherein a compliant picture comprises any one
or more of
non-violent, non-pornographic, blank, and aesthetic features, and wherein the
aesthetic
features include centred image subjects.
90. The memory of claim 89, the memory further comprising: processing the
brightness space
data by means of plural binarization memories, so as to output plural
binarization results
correspondingly.
91. The memory of the claim 90, the memory further comprising: fusing the
subject region
image with the plural binarization results.
92. The memory of the claim 91, the memory further comprising: extracting a
coordinate value
of every pixel in the fused subject region image and its corresponding
background purity
value, and determining whether a location of a subject in the to-be-detected
picture is
compliant.
93. The memory of the claim 92, wherein acquiring a to-be-detected picture
that has been
denoised, and after pixel-level semantic segmentation, recognizing a subject
region image
and a background region image comprises: denoising the to-be-detected picture
by means of
a nonlinear filtering memory; and performing pixel-level semantic segmentation
on the
denoised to-be-detected picture through a multi-channel deep residual fully
convolutional
network model, so as to recognize the subject region image and the background
region
image.
94. The memory of the claim 93, wherein performing hue space conversion on the
to-be-
detected picture to output hue space data and brightness space data of the
picture comprises:
using HSV hue space conversion memory to convert the to-be-detected picture
and output
the hue space data of the picture, in which the hue space data include a hue
space
component H; and using LUV hue space conversion memory to convert the to-be-
detected
picture and output the brightness space data of the picture, in which the
brightness space
data include a brightness space channel L.
95. The memory of the claim 94, wherein fusing the subject region image after
dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: filtering edge pixels of the subject region image by means of a
filter kernel, so as
to dilate the subject region image.
96. The memory of the claim 95, wherein fusing the subject region image after
dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: updating the part other than the dilated subject region image in
the to-be-
detected picture as the background region image.
97. The memory of the claim 96, wherein fusing the subject region image after
dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: fusing the updated background region image with data of the hue
space
component H, and determining whether the background purity value corresponding
to every
pixel in the updated background region image is compliant to a first
threshold.
98. The memory of the claim 97, wherein fusing the subject region image after
dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is compliant
comprises: determining that the background purity of the to-be-detected
picture is
compliant.
99. The memory of the claim 98, wherein fusing the subject region image after
dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: determining that the background purity of the to-be-detected
picture is non-
compliant, and wherein the first threshold includes a first background purity
threshold.
100. The memory of the claim 99, processing the brightness space data by means
of the plural
binarization memories to output the plural binarization results
correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-
threshold binarization
memory, so as to obtain a first binarization result.
101. The memory of the claim 100, processing the brightness space data by
means of the plural
binarization memories to output the plural binarization results
correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-
window
binarization memory, so as to obtain a second binarization result.
102. The memory of the claim 101, the memory further comprising: performing
non-coherence
region suppression on the first binarization result and the second binarization result,
respectively, by means of a non-maximum suppression memory.
103. The memory of the claim 102, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises: fusing
the subject region image recognized through pixel-level semantic segmentation
with the first
binarization result and the second binarization result, respectively.
104. The memory of the claim 103, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
extracting coordinate values of the pixels belonging to the subject region
image and the first
binarization result from fusing results and their corresponding background
purity values.
105. The memory of the claim 104, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
extracting coordinate values of the pixels belonging to the subject region
image and the
second binarization result from fusing results and their corresponding
background purity
values.
106. The memory of the claim 105, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
summarizing and extracting the coordinate values of the pixels and their
corresponding
background purity values, and determining whether both the coordinate value of
each pixel
and its corresponding background purity value are compliant to a second
threshold.
107. The memory of the claim 106, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
determining that the location of the subject in the to-be-detected picture is
compliant.
108. The memory of the claim 107, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
determining that the location of the subject in the to-be-detected picture is
non-compliant;
and wherein the second threshold includes a second background purity threshold
and a
location coordinate interval threshold.
109. The memory of the claim 108, the memory further comprising: identifying a
subject region
image and a background region image in a denoised to-be-detected picture
through pixel-
level semantic segmentation, and performing hue space conversion on the
picture to output
hue space data and brightness space data of the picture.
110. The memory of the claim 109, the memory further comprising: dilating the
subject region
image to dilate the range of edge pixels of the subject region image in order
to ensure
complete coverage over the subject region image.
111. The memory of the claim 110, the memory further comprising: fusing the dilated subject
region image with the hue space data.
112. The memory of the claim 111, the memory further comprising: to detect the
location of the
subject in the picture, the brightness space data are first processed by means
of the plural
binarization memories so as to generate the plural binarization results.
113. The memory of the claim 112, wherein the subject region image identified
through pixel-
level semantic segmentation is then fused with the plural binarization
results, respectively.
114. The memory of the claim 113, wherein based on the coordinate value of
every pixel in the
fused subject region image and its corresponding background purity value,
whether the
location of the subject in the to-be-detected picture is compliant can be
determined.
115. The memory of the claim 114, the memory further comprising: employing HSV
(hue,
saturation, value) hue space conversion memory to convert the to-be-detected
picture in the
RGB (red, green and blue) color space into the HSV color space that is closer
to human
visual perceptual characteristics.
116. The memory of the claim 115, wherein the conversion memory includes:
V = max(R, G, B)
S = (max(R, G, B) - min(R, G, B)) / max(R, G, B)
H = 60 × (G - B) / (S × V),        if max(R, G, B) = R
H = 60 × (2 + (B - R) / (S × V)),  if max(R, G, B) = G
H = 60 × (4 + (R - G) / (S × V)),  if max(R, G, B) = B
H = H + 360,                       if H < 0
117. The memory of the claim 116, wherein the features of brightness space
data include
extensive color gamut coverage, high visual consistency, and good capability
of expressing
color perception.
118. The memory of the claim 117, wherein the implementation converts the to-be-detected
picture into brightness space data by: converting the to-be-detected picture from RGB space
data into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness
space data using the following conversion equation:
[X]   [0.412453  0.357580  0.180423] [R]
[Y] = [0.212671  0.715160  0.072169] [G]
[Z]   [0.019334  0.119193  0.950227] [B]
L = 116 × (Y / Yn)^(1/3) - 16,   if Y / Yn > (6/29)^3
L = (29/3)^3 × (Y / Yn),         otherwise
u = 13 × L × (u' - un)
v = 13 × L × (v' - vn)
wherein un and vn are light source constants, and Yn is a preset fixed value; and
wherein
u' = 4X / (X + 15Y + 3Z)
v' = 9Y / (X + 15Y + 3Z)
119. The memory of the claim 118, wherein a round filter kernel k is used for
filtering pixels of
the subject region image.
120. The memory of the claim 119, wherein the round filter kernel k, having a diameter of 4, is
represented by:

              1 1 1
            1 1 1 1 1
          1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1 1
  k =   1 1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1 1
          1 1 1 1 1 1 1
            1 1 1 1 1
              1 1 1
121. The memory of the claim 120, wherein the filtering equation is:

    B = P ⊕ k = Z_u(k) ∪ P, (i, j) ∈ P

    wherein (i, j) represents the pixel coordinates, P represents the subject region image,
    Z_u(k) represents the punctured neighborhood region corresponding to each pixel,
    obtained using the round filter kernel k as the mask, B represents the dilated subject
    region image, wherein the background region image is updated when the subject region
    image is dilated, and D represents the updated background region image.
122. The memory of the claim 121, wherein the background region image is fused
with data of
the hue space component H to generate a result C.
123. The memory of the claim 122, wherein the fusion equation is as below:
    C = { c(i, j) },  c(i, j) = H(i, j), (i, j) ∈ D
                      c(i, j) = 0,       (i, j) ∉ D
124. The memory of the claim 123, wherein (i, j) represents pixel coordinates, H(i, j) represents
the background purity value corresponding to the pixel in the hue space component H; and
wherein when a pixel located at coordinates (i, j) belongs to the dilated background region
image D, the background purity of the pixel in the hue space component H is valuated.
125. The memory of the claim 124, wherein when the pixel located at
coordinates (i,j) does not
belong to the dilated background region image D, the background purity value of the pixel
in the hue space component H is valuated as zero.
126. The memory of the claim 125, wherein the coordinates of the pixels and
their corresponding
background purity values are gathered to form an array C, which is the array
composed of
the location coordinates of the pixels in the background region image D and
the
correspondingly converted background purity values.
127. The memory of the claim 126, wherein the predetermined first threshold is
compared to the
array C.
128. The memory of the claim 127, wherein when all the background purity
values
corresponding to the individual pixels in the background region image D are
smaller than the
first background purity threshold, it is determined that the background purity
of the to-be-
detected picture is compliant.
129. The memory of the claim 128, the memory further comprising: processing
data of the
brightness space channel L by means of a fixed-threshold binarization memory,
so as to
obtain a first binarization result T.
130. The memory of the claim 129, the memory further comprising: processing
the data of the
brightness space channel L by means of a Gaussian-window binarization memory,
so as to
obtain a second binarization result G.
131. The memory of the claim 130, the memory further comprising: performing non-coherence
region suppression on the first binarization result T and the second binarization result G,
respectively, by means of the non-maximum suppression memory, to nullify the impact of
non-coherence regions caused by a complicated background on the detection results,
thereby further improving detection precision.
132. The memory of the claim 131, the memory further comprising: fusing the
subject region
image recognized through pixel-level semantic segmentation with the first
binarization result and the second binarization result, respectively.
133. A picture-detecting method, the method comprising:
acquiring a to-be-detected picture that has been denoised, perform pixel-level
semantic
segmentation on a denoised to-be-detected picture, and recognizing a subject
region image
and a background region image;
performing hue space conversion on the to-be-detected picture, so as to output
hue space data
and brightness space data of the picture; and
fusing the subject region image after dilation processing with the hue space
data, extracting a
background purity value corresponding to every pixel in the background region
image formed
after dilation processing, and determining whether background purity of the to-
be-detected
picture is compliant, wherein a compliant picture comprises any one or more of
non-violent,
non-pornographic, blank, and aesthetic features, and wherein the aesthetic
features include
centred image subjects.
134. The method of claim 133, the method further comprising: processing the
brightness space
data by means of plural binarization methods, so as to output plural
binarization results
correspondingly.
135. The method of the claim 134, the method further comprising: fusing the
subject region
image with the plural binarization results.
136. The method of the claim 135, the method further comprising: extracting a
coordinate value
of every pixel in the fused subject region image and its corresponding
background purity
value, and determining whether a location of a subject in the to-be-detected
picture is
compliant.
137. The method of the claim 136, wherein acquiring a to-be-detected picture
that has been
denoised, and after pixel-level semantic segmentation, recognizing a subject
region image
and a background region image comprises: denoising the to-be-detected picture
by means of
a nonlinear filtering method; and performing pixel-level semantic segmentation
on the
denoised to-be-detected picture through a multi-channel deep residual fully
convolutional
network model, so as to recognize the subject region image and the background
region
image.
138. The method of the claim 137, wherein performing hue space conversion on
the to-be-
detected picture to output hue space data and brightness space data of the
picture comprises:
using HSV hue space conversion method to convert the to-be-detected picture
and output
the hue space data of the picture, in which the hue space data include a hue
space
component H; and using LUV hue space conversion method to convert the to-be-
detected
picture and output the brightness space data of the picture, in which the
brightness space
data include a brightness space channel L.
139. The method of the claim 138, wherein fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: filtering edge pixels of the subject region image by means of a
filter kernel, so as
to dilate the subject region image.
140. The method of the claim 139, wherein fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: updating the part other than the dilated subject region image in
the to-be-
detected picture as the background region image.
141. The method of the claim 140, wherein fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: fusing the updated background region image with data of the hue
space
component H, and determining whether the background purity value corresponding
to every
pixel in the updated background region image is compliant to a first
threshold.
142. The method of the claim 141, wherein fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is compliant
comprises: determining that the background purity of the to-be-detected
picture is
compliant.
143. The method of the claim 142, wherein fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding to
every pixel in the background region image formed after dilation processing,
and
determining whether background purity of the to-be-detected picture is
compliant
comprises: determining that the background purity of the to-be-detected
picture is non-
compliant, and wherein the first threshold includes a first background purity
threshold.
144. The method of the claim 143, processing the brightness space data by
means of the plural
binarization methods to output the plural binarization results
correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-
threshold binarization
method, so as to obtain a first binarization result.
145. The method of the claim 144, processing the brightness space data by
means of the plural
binarization methods to output the plural binarization results
correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-
window
binarization method, so as to obtain a second binarization result.
146. The method of the claim 145, the method further comprising: performing
non-coherence
region suppression on the first binarization result and the second binarization result,
respectively, by means of a non-maximum suppression method.
147. The method of the claim 146, wherein fusing the subject region image with the plural
binarization results, respectively, extracting a coordinate value of every pixel in the fused
subject region image and its corresponding background purity value, and determining
whether a location of a subject in the to-be-detected picture is compliant comprises: fusing
the subject region image recognized through pixel-level semantic segmentation with the first
binarization result and the second binarization result, respectively.
148. The method of the claim 147, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
extracting coordinate values of the pixels belonging to the subject region
image and the first
binarization result from fusing results and their corresponding background
purity values.
149. The method of the claim 148, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
extracting coordinate values of the pixels belonging to the subject region
image and the
second binarization result from fusing results and their corresponding
background purity
values.
150. The method of the claim 149, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
summarizing and extracting the coordinate values of the pixels and their
corresponding
background purity values, and determining whether both the coordinate value of
each pixel
and its corresponding background purity value are compliant to a second
threshold.
151. The method of the claim 150, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
determining that the location of the subject in the to-be-detected picture is
compliant.
152. The method of the claim 151, wherein fusing the subject region image with
the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
determining that the location of the subject in the to-be-detected picture is
non-compliant;
and wherein the second threshold includes a second background purity threshold
and a
location coordinate interval threshold.
153. The method of the claim 152, the method further comprising: identifying a
subject region
image and a background region image in a denoised to-be-detected picture
through pixel-
level semantic segmentation, and performing hue space conversion on the
picture to output
hue space data and brightness space data of the picture.
154. The method of the claim 153, the method further comprising: dilating the
subject region
image to dilate the range of edge pixels of the subject region image in order
to ensure
complete coverage over the subject region image.
155. The method of the claim 154, the method further comprising: fusing the dilated subject
region image with the hue space data.
156. The method of the claim 155, the method further comprising: to detect the
location of the
subject in the picture, the brightness space data are first processed by means
of the plural
binarization methods so as to generate the plural binarization results.
157. The method of the claim 156, wherein the subject region image identified
through pixel-
level semantic segmentation is then fused with the plural binarization
results, respectively.
Date Reçue/Date Received 2023-12-18

158. The method of claim 157, wherein based on the coordinate value of
every pixel in the
fused subject region image and its corresponding background purity value,
whether the
location of the subject in the to-be-detected picture is compliant can be
determined.
159. The method of claim 158, the method further comprising: employing an HSV
(hue,
saturation, value) hue space conversion method to convert the to-be-detected
picture in the
RGB (red, green and blue) color space into the HSV color space that is closer
to human
visual perceptual characteristics.
160. The method of claim 159, wherein the conversion method includes:
V = max(R, G, B)
S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)
H = 60 × (G − B) / (S × V), if S ≠ 0 and max(R, G, B) = R
H = 60 × (2 + (B − R) / (S × V)), if S ≠ 0 and max(R, G, B) = G
H = 60 × (4 + (R − G) / (S × V)), if S ≠ 0 and max(R, G, B) = B
H = H + 360, if H < 0
161. The method of claim 160, wherein the features of brightness space
data include
extensive color gamut coverage, high visual consistency, and good capability
of expressing
color perception.
162. The method of claim 161, wherein converting the brightness space data of the to-be-detected
picture comprises: converting the to-be-detected picture from RGB space data into CIE XYZ
space data; and converting the CIE XYZ space data into LUV brightness space data using the
following conversion equations:
⎡X⎤   ⎡0.412453 0.357580 0.180423⎤⎡R⎤
⎢Y⎥ = ⎢0.212671 0.715160 0.072169⎥⎢G⎥
⎣Z⎦   ⎣0.019334 0.119193 0.950227⎦⎣B⎦
L = 116 × (Y/Yn)^(1/3) − 16, if Y/Yn > (6/29)^3
L = (29/3)^3 × (Y/Yn), if Y/Yn ≤ (6/29)^3
u = 13 × L × (u′ − u′n)
v = 13 × L × (v′ − v′n)
wherein u′n and v′n are light source constants, and Yn is a preset fixed value; and
wherein
u′ = 4X / (X + 15Y + 3Z)
v′ = 9Y / (X + 15Y + 3Z)
163. The method of claim 162, wherein a round filter kernel k is used for
filtering pixels of
the subject region image.
164. The method of claim 163, wherein a round filter kernel having a diameter of 4 is
represented by:
          1 1 1
        1 1 1 1 1
      1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1
k = 1 1 1 1 × 1 1 1 1
    1 1 1 1 1 1 1 1 1
      1 1 1 1 1 1 1
        1 1 1 1 1
          1 1 1
165. The method of claim 164, wherein the filtering equation is:
B = P ⊕ k = {(i, j) | Z̊ᵢⱼ(k) ∩ P ≠ ∅}
wherein (i, j) represents the pixel coordinates, P represents the subject region image, Zᵢⱼ
represents the background purity value corresponding to the pixel, Z̊ᵢⱼ(k) represents the
punctured neighborhood region corresponding to each pixel obtained using the round
filter kernel k as the mask, B represents the dilated subject region image, wherein the
background region image is updated when the subject region image is dilated, and D
represents the updated background region image.
166. The method of claim 165, wherein the background region image is fused
with data of the
hue space component H to generate a result C.
167. The method of claim 166, wherein the fusion equation is as below:
C = {c(i, j)}, where c(i, j) = H(i, j) for (i, j) ∈ D, and c(i, j) = 0 for (i, j) ∉ D
168. The method of claim 166, wherein (i, j) represents pixel coordinates, and H(i, j) represents
the background purity value corresponding to the pixel in the hue space component H; and
wherein when a pixel located at coordinates (i, j) belongs to the dilated background region
image D, the background purity of the pixel in the hue space component H is taken as its
value.
169. The method of claim 168, wherein when the pixel located at coordinates (i, j) does not
belong to the dilated background region image D, the background purity value of the pixel
in the hue space component H is set to zero.
170. The method of claim 169, wherein the coordinates of the pixels and
their corresponding
background purity values are gathered to form an array C, which is the array
composed of
the location coordinates of the pixels in the background region image D and
the
correspondingly converted background purity values.
171. The method of claim 170, wherein the predetermined first threshold is
compared to the
array C.
172. The method of claim 170, wherein when all the background purity values corresponding
to the individual pixels in the background region image D are smaller than the
first
background purity threshold, it is determined that the background purity of
the to-be-
detected picture is compliant.
173. The method of claim 172, the method further comprising: processing
data of the
brightness space channel L by means of a fixed-threshold binarization method,
so as to
obtain a first binarization result T.
174. The method of claim 173, the method further comprising: processing the data of the
brightness space channel L by means of a Gaussian-window binarization method,
so as to
obtain a second binarization result G.
175. The method of claim 174, the method further comprising: performing non-coherence region
suppression on the first binarization result T and the second binarization result G,
respectively, by means of the non-maximum suppression method to nullify the impact of
non-coherence regions caused by a complicated background on the detection results, thereby
further improving detection precision.
176. The method of claim 175, the method further comprising: fusing the subject region image
recognized through pixel-level semantic segmentation with the first binarization result and
the second binarization result, respectively.
Description



CA 03153067 2022-03-02
PICTURE-DETECTING METHOD AND APPARATUS
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the technical field of picture
recognition, and more
particularly to a picture-detecting method and a picture-detecting apparatus.
Description of Related Art
[0002] With the popularization of the Internet, more and more web-based
platforms allow users
to upload pictures. Particularly, in leading e-commerce platforms, hundreds of
millions
of pictures are uploaded by vendors and users every day, among which there are
always
some non-compliant or even illegal pictures. For preventing such improper
uploading,
examination of pictures is conventionally conducted as a combination of machine work
and human work.
[0003] The existing examination technologies, which target only a single type of illegal picture
content such as violence, terrorism, or pornography, are not satisfying to modern
e-commerce platforms. For improving the quality of uploaded pictures, it is desirable
that a picture examination technology has the ability to determine compliance of pictures
in addition to the ability to detect contents related to violence, terrorism, and
pornography, so as to filter out uploaded pictures that contain inaesthetic layouts such
as non-centered subjects, busy backgrounds, and too much blank space.
SUMMARY OF THE INVENTION
[0004] The objective of the present invention is to provide a picture-
detecting method and a
picture-detecting apparatus, which ensure quality of uploaded pictures by
adding
detection items about picture background purity and subject location
compliance.
[0005] To achieve the foregoing objective, in a first aspect the present
invention provides a
picture-detecting method. The picture-detecting method comprises:
Date Recue/Date Received 2022-03-02

[0006] acquiring a to-be-detected picture that has been denoised, and after
pixel-level semantic
segmentation, recognizing a subject region image and a background region
image;
[0007] performing hue space conversion on the to-be-detected picture, so as to
output hue space
data and brightness space data of the picture;
[0008] fusing the subject region image after dilation processing with the hue
space data,
extracting a background purity value corresponding to every pixel in the
background
region image formed after dilation processing, and determining whether
background
purity of the to-be-detected picture is compliant;
[0009] processing the brightness space data by means of plural binarization
methods, so as to
output plural binarization results correspondingly; and
[0010] fusing the subject region image with the plural binarization results,
respectively,
extracting a coordinate value of every pixel in the fused subject region image
and its
corresponding background purity value, and determining whether a location of a
subject
in the to-be-detected picture is compliant.
[0011] Preferably, the step of acquiring a to-be-detected picture that has
been denoised, and after
pixel-level semantic segmentation, recognizing a subject region image and a
background
region image comprises:
[0012] denoising the to-be-detected picture by means of a nonlinear filtering
method; and
[0013] performing pixel-level semantic segmentation on the denoised to-be-
detected picture
through a multi-channel deep residual fully convolutional network model, so as
to
recognize the subject region image and the background region image.
[0014] Preferably, the step of performing hue space conversion on the to-be-
detected picture, so
as to output hue space data and brightness space data of the picture
comprises:
[0015] using HSV hue space conversion method to convert the to-be-detected
picture and output
the hue space data of the picture, in which the hue space data include a hue
space
component H; and
[0016] using LUV hue space conversion method to convert the to-be-detected
picture and output
the brightness space data of the picture, in which the brightness space data
include a
brightness space channel L.
[0017] More preferably, the step of fusing the subject region image after
dilation processing with
the hue space data, extracting a background purity value corresponding to
every pixel in
the background region image formed after dilation processing, and determining
whether
background purity of the to-be-detected picture is compliant comprises:
[0018] filtering edge pixels of the subject region image by means of a filter
kernel, so as to dilate
the subject region image;
[0019] updating the part other than the dilated subject region image in the to-
be-detected picture
as the background region image;
[0020] fusing the updated background region image with data of the hue space
component H,
and determining whether the background purity value corresponding to every
pixel in the
updated background region image is compliant to a first threshold, and if yes,
determining
that the background purity of the to-be-detected picture is compliant, or if
not,
determining that the background purity of the to-be-detected picture is non-
compliant;
and
[0021] wherein the first threshold includes a first background purity
threshold.
[0022] More preferably, the step of processing the brightness space data by
means of plural
binarization methods, so as to output plural binarization results
correspondingly
comprises:
[0023] processing data of the brightness space channel L by means of a fixed-
threshold
binarization method, so as to obtain a first binarization result; and
[0024] processing the data of the brightness space channel L by means of a
Gaussian-window
binarization method, so as to obtain a second binarization result.
[0025] Further, after the step of so as to output plural binarization results
correspondingly, the
method further comprises:
[0026] performing non-coherence region suppression on the first binarization
result and the
second binarization result, respectively, by means of a non-maximum
suppression
method.
[0027] Further, the step of fusing the subject region image with the plural
binarization results,
respectively, extracting a coordinate value of every pixel in the fused
subject region
image and its corresponding background purity value, and determining whether a
location
of a subject in the to-be-detected picture is compliant comprises:
[0028] fusing the subject region image recognized through pixel-level semantic
segmentation
with the first binarization result and the second binarization result,
respectively;
[0029] extracting coordinate values of the pixels belonging to the subject
region image and the
first binarization result from fusing results and their corresponding
background purity
values, and extracting coordinate values of the pixels belonging to the
subject region
image and the second binarization result from fusing results and their
corresponding
background purity values;
[0030] summarizing and extracting the coordinate values of the pixels and
their corresponding
background purity values, and determining whether both the coordinate value of
each
pixel and its corresponding background purity value are compliant to a second
threshold,
and if yes, determining that the location of the subject in the to-be-detected
picture is
compliant, or if not, determining that the location of the subject in the to-
be-detected
picture is non-compliant; and
[0031] wherein the second threshold includes a second background purity
threshold and a
location coordinate interval threshold.
[0032] As compared to the prior art, the picture-detecting method of the
present invention has
the following beneficial effects:
[0033] The picture-detecting method provided by the present invention first
identifies a subject
region image and a background region image in a denoised to-be-detected
picture through
pixel-level semantic segmentation, and then performs hue space conversion on
the picture,
so as to output hue space data and brightness space data of the picture.
During detection
of background purity, since pixel-level semantic segmentation processes edge
pixels of
the subject region image and of the background region image in a relatively
rough manner,
the present invention dilates the subject region image to dilate the range of
edge pixels of
the subject region image in order to ensure complete coverage over the subject
region
image. Afterward, the dilated subject region image is fused with hue space
data.
Afterward, background purity values corresponding to individual pixels in the
background region image as the final result of said fusing are extracted, and
whether the
background purity of the to-be-detected picture is compliant can be
determined. To detect
the location of the subject in the picture, the brightness space data are
first processed by
means of plural binarization methods so as to generate plural corresponding
binarization
results. The subject region image identified through pixel-level semantic
segmentation is
then fused with the plural binarization results, respectively. At last, based
on the coordinate
value of every pixel in the fused subject region image and its corresponding
background
purity value, whether the location of the subject in the to-be-detected
picture is compliant
can be determined.
[0034] It is thus clear that the present invention detects background purity
of an uploaded picture
and determines the location of the subject with significantly improved
efficiency as
compared to the conventional human examination.
[0035] In another aspect, the present invention provides a picture-detecting
apparatus, which is
applied to the method for picture-detecting method as recited in the foregoing
technical
scheme. The apparatus comprises:
[0036] a pixel-processing unit, for acquiring a to-be-detected picture that
has been denoised, and
after pixel-level semantic segmentation, recognizing a subject region image
and a
background region image;
[0037] a hue-space-converting unit, for performing hue space conversion on the
to-be-detected
picture, so as to output hue space data and brightness space data of the
picture;
[0038] a first determining unit, for fusing the subject region image after
dilation processing with
the hue space data, extracting a background purity value corresponding to
every pixel in
the background region image formed after dilation processing, and determining
whether
background purity of the to-be-detected picture is compliant;
[0039] a binarization-processing unit, for processing the brightness space
data by means of plural
binarization methods, so as to output plural binarization results
correspondingly; and
[0040] a second determining unit, for fusing the subject region image with the
plural binarization
results, respectively, extracting a coordinate value of every pixel in the
fused subject
region image and its corresponding background purity value, and determining
whether a
location of a subject in the to-be-detected picture is compliant.
[0041] Preferably, between the binarization-processing unit and the second
determining unit, the
apparatus further comprises:
[0042] performing non-coherence region suppression on the first binarization
result and the
second binarization result, respectively, by means of a non-maximum
suppression
method.
[0043] As compared to the prior art, the disclosed picture-detecting apparatus
provides beneficial
effects that are similar to those provided by the disclosed picture-detecting
method as
enumerated above, and thus no repetitions are made herein.
[0044] In a third aspect the present invention provides a computer readable
storage medium,
storing thereon a computer program. When the computer program is executed by a
processor, it implements the steps of the picture-detecting method as
described previously.
[0045] As compared to the prior art, the disclosed computer-readable storage
medium provides
beneficial effects that are similar to those provided by the disclosed picture-
detecting
method as enumerated above, and thus no repetitions are made herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] The accompanying drawings are provided herein for better understanding
of the present
invention and form a part of this disclosure. The illustrative embodiments and
their
descriptions are for explaining the present invention and by no means form any
improper
limitation to the present invention, wherein:
[0047] FIG. 1 is a flowchart of the picture-detecting method according to one
embodiment of the
present invention; and
[0048] FIG. 2 is another flowchart of the picture-detecting method according
to the embodiment
of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0049] To make the foregoing objectives, features, and advantages of the
present invention
clearer and more understandable, the following description will be directed to
some
embodiments as depicted in the accompanying drawings to detail the technical
schemes
disclosed in these embodiments. It is, however, to be understood that the
embodiments
referred to herein are only a part of all possible embodiments and thus not
exhaustive. Based
on the embodiments of the present invention, all the other embodiments can be
conceived
without creative labor by people of ordinary skill in the art, and all these
and other
embodiments shall be embraced in the scope of the present invention.
Embodiment 1
[0050] Referring to FIG. 1 and FIG. 2, the present embodiment provides a
picture-detecting
method, which comprises:
[0051] acquiring a to-be-detected picture that has been denoised, and after
pixel-level semantic
segmentation, recognizing a subject region image and a background region
image;
performing hue space conversion on the to-be-detected picture, so as to output
hue space
data and brightness space data of the picture; fusing the subject region image
after dilation
processing with the hue space data, extracting a background purity value
corresponding
to every pixel in the background region image formed after dilation
processing, and
determining whether background purity of the to-be-detected picture is
compliant;
processing the brightness space data by means of plural binarization methods,
so as to
output plural binarization results correspondingly; and fusing the subject
region image
with the plural binarization results, respectively, extracting a coordinate
value of every
pixel in the fused subject region image and its corresponding background
purity value,
and determining whether a location of a subject in the to-be-detected picture
is compliant.
[0052] The picture-detecting method provided by the present embodiment first
identifies a
subject region image and a background region image in a denoised to-be-
detected picture
through pixel-level semantic segmentation, and then performs hue space
conversion on
the picture, so as to output hue space data and brightness space data of the
picture. During
detection of background purity, since pixel-level semantic segmentation
processes edge
pixels of the subject region image and of the background region image in a
relatively
rough manner, the present invention dilates the subject region image to dilate
the range
of edge pixels of the subject region image in order to ensure complete
coverage over the
subject region image. Afterward, the dilated subject region image is fused
with hue space
data. Afterward, background purity values corresponding to individual pixels
in the
background region image as the final result of said fusing are extracted, and
whether the
background purity of the to-be-detected picture is compliant can be
determined. To detect
the location of the subject in the picture, the brightness space data are
first processed by
means of plural binarization methods so as to generate plural corresponding
binarization
results. The subject region image identified through pixel-level semantic
segmentation is
then fused with the plural binarization results, respectively. At last, based
on the coordinate
value of every pixel in the fused subject region image and its corresponding
background
purity value, whether the location of the subject in the to-be-detected
picture is compliant
can be determined.
[0053] It is thus clear that the present embodiment detects background purity
of an uploaded
picture and determines the location of the subject with significantly improved
efficiency
as compared to the conventional human examination.
[0054] In the foregoing embodiment, the step of acquiring a to-be-detected
picture that has been
denoised, and after pixel-level semantic segmentation, recognizing a subject
region image
and a background region image comprises:
[0055] denoising the to-be-detected picture by means of a nonlinear filtering
method; performing
pixel-level semantic segmentation on the denoised to-be-detected picture
through a multi-
channel deep residual fully convolutional network model, so as to recognize
the subject
region image and the background region image. Exemplarily, the nonlinear
filtering is a
median filtering denoising algorithm.
[0056] Specifically, in the foregoing embodiment, the step of performing hue
space conversion
on the to-be-detected picture, so as to output hue space data and brightness
space data of
the picture comprises:
[0057] using the HSV hue space conversion method to convert the to-be-detected
picture so as to output the hue space data of the picture, in which the hue space
data include a hue space component
H; using LUV hue space conversion method to convert the to-be-detected picture
and
output the brightness space data of the picture, in which the brightness space
data include
a brightness space channel L.
[0058] For conversion of the hue space data, since the commonly used RGB color
space method
is based on computer hardware and its color space is not suitable for
characterizing color
purity, in particular implementations, the present invention employs the HSV hue space
conversion
method to convert the to-be-detected picture in the RGB color space into the
HSV color
space that is closer to human visual perceptual characteristics, so as to
better characterize
color purity and in turn improve detection precision of the background purity.
The
conversion equation is as below:
V = max(R, G, B)
S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)
H = 60 × (G − B) / (S × V), if S ≠ 0 and max(R, G, B) = R
H = 60 × (2 + (B − R) / (S × V)), if S ≠ 0 and max(R, G, B) = G
H = 60 × (4 + (R − G) / (S × V)), if S ≠ 0 and max(R, G, B) = B
H = H + 360, if H < 0
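A minimal sketch of the RGB-to-HSV conversion described above, assuming RGB components normalized to [0, 1] and the common convention that H = 0 for grey pixels (where S = 0 and the hue is undefined):

```python
def rgb_to_hsv(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to (H, S, V).

    Follows the piecewise equations above; H is in degrees [0, 360).
    """
    v = max(r, g, b)
    mn = min(r, g, b)
    s = 0.0 if v == 0 else (v - mn) / v
    if s == 0:
        h = 0.0  # hue undefined for grey pixels; 0 by convention
    elif v == r:
        h = 60 * (g - b) / (s * v)
    elif v == g:
        h = 60 * (2 + (b - r) / (s * v))
    else:  # v == b
        h = 60 * (4 + (r - g) / (s * v))
    if h < 0:
        h += 360
    return h, s, v
```

Note that S × V equals max(R, G, B) − min(R, G, B), so each hue branch normalizes the channel difference by the chroma, giving pure red H = 0, green H = 120, and blue H = 240.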
[0059] Features of brightness space data include extensive color gamut
coverage, high visual
consistency, and good capability of expressing color perception. Therefore,
the present implementation converts the brightness space data of the to-be-detected
picture by: first
converting the to-be-detected picture from RGB space data into CIE XYZ space
data, and
then converting the CIE XYZ space data into LUV brightness space data using
the
following conversion equation:
⎡X⎤   ⎡0.412453 0.357580 0.180423⎤⎡R⎤
⎢Y⎥ = ⎢0.212671 0.715160 0.072169⎥⎢G⎥
⎣Z⎦   ⎣0.019334 0.119193 0.950227⎦⎣B⎦
L = 116 × (Y/Yn)^(1/3) − 16, if Y/Yn > (6/29)^3
L = (29/3)^3 × (Y/Yn), if Y/Yn ≤ (6/29)^3
u = 13 × L × (u′ − u′n)
v = 13 × L × (v′ − v′n)
where u′n and v′n are light source constants, and Yn is a preset fixed value; and
u′ = 4X / (X + 15Y + 3Z)
v′ = 9Y / (X + 15Y + 3Z)
[0060] Specifically, in the foregoing embodiment, the step of fusing the
subject region image
after dilation processing with the hue space data, extracting a background
purity value
corresponding to every pixel in the background region image formed after
dilation
processing, and determining whether background purity of the to-be-detected
picture is
compliant comprises:
[0061] filtering edge pixels of the subject region image by means of a filter
kernel, so as to dilate
the subject region image; updating the part other than the dilated subject
region image in
the to-be-detected picture as the background region image; fusing the updated
background region image with data of the hue space component H, and
determining
whether the background purity value corresponding to every pixel in the
updated
background region image is compliant to a first threshold, and if yes,
determining that the
background purity of the to-be-detected picture is compliant, or if not,
determining that
the background purity of the to-be-detected picture is non-compliant; and
wherein the
first threshold includes a first background purity threshold.
[0062] In particular implementations, a round filter kernel k is used for
filtering pixels of the
subject region image. Taking a round filter kernel having a diameter of 4 for
example:
          1 1 1
        1 1 1 1 1
      1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1
k = 1 1 1 1 × 1 1 1 1
    1 1 1 1 1 1 1 1 1
      1 1 1 1 1 1 1
        1 1 1 1 1
          1 1 1
[0063] The filtering equation is:
B = P ⊕ k = {(i, j) | Z̊ᵢⱼ(k) ∩ P ≠ ∅}
[0064] In the above equation, (i, j) represents the pixel coordinates, P represents the subject
region image, Zᵢⱼ represents the background purity value corresponding to the pixel,
Z̊ᵢⱼ(k) represents the punctured neighborhood region corresponding to each pixel
obtained using the round filter kernel k as the mask, B represents the dilated
subject region
image, wherein the background region image is updated when the subject region
image
is dilated, and D represents the updated background region image. Then the
updated
background region image is fused with data of the hue space component H so as
to
generate a result C, wherein the fusion equation is as below:
C = {c(i, j)}, where c(i, j) = H(i, j) for (i, j) ∈ D, and c(i, j) = 0 for (i, j) ∉ D
where (i, j) represents pixel coordinates, and H(i, j) represents the background purity value
corresponding to the pixel in the hue space component H. When a pixel located at
coordinates (i, j) belongs to the dilated background region image D, the background purity
of the pixel in the hue space component H is taken as its value. When the pixel located at
coordinates (i, j) does not belong to the dilated background region image D, the
background purity value of the pixel in the hue space component H is set to zero.
The coordinates of the pixels and their corresponding background purity values
are
gathered to form an array C, which is the array composed of the location
coordinates of
the pixels in the background region image D and the correspondingly converted
background purity values. Then the predetermined first threshold is compared
to the array
C. If all the background purity values corresponding to the individual pixels in
the
background region image D are smaller than the first background purity
threshold, it is
determined that the background purity of the to-be-detected picture is
compliant.
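The dilation and background-purity check described above can be outlined as follows. This is an illustrative sketch, not the patented implementation; the function and parameter names are assumptions, and the subject region is represented as a binary mask:

```python
def round_kernel(radius):
    """Offsets (di, dj) falling inside a round filter kernel."""
    return [(di, dj)
            for di in range(-radius, radius + 1)
            for dj in range(-radius, radius + 1)
            if di * di + dj * dj <= radius * radius]

def dilate(mask, radius):
    """Binary dilation B = P (+) k: a pixel joins the dilated subject
    region B when its round neighborhood intersects the subject region P
    (pixels where mask == 1)."""
    h, w = len(mask), len(mask[0])
    offsets = round_kernel(radius)
    return [[1 if any(0 <= i + di < h and 0 <= j + dj < w
                      and mask[i + di][j + dj]
                      for di, dj in offsets) else 0
             for j in range(w)]
            for i in range(h)]

def background_purity_compliant(hue, dilated_subject, threshold):
    """Fuse the updated background region D (pixels outside the dilated
    subject B) with the hue component H, gather array C of coordinates
    and purity values, and report compliance when every purity value is
    below the first background purity threshold."""
    c = [((i, j), h)
         for i, row in enumerate(hue)
         for j, h in enumerate(row)
         if not dilated_subject[i][j]]
    return all(purity < threshold for _, purity in c)
```

For example, dilating a single-pixel subject with radius 1 grows it into a five-pixel cross, after which only the remaining background pixels are tested against the threshold.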
[0065] Preferably, in the foregoing embodiment, the step of processing the
brightness space data
by means of plural binarization methods, so as to output plural binarization
results
correspondingly comprises:
[0066] processing data of the brightness space channel L by means of a fixed-
threshold
binarization method, so as to obtain a first binarization result T; processing
the data of the
brightness space channel L by means of a Gaussian-window binarization method,
so as
to obtain a second binarization result G. Then non-coherence region
suppression is
performed on the first binarization result T and the second binarization
result G,
respectively by means of the non-maximum suppression method, so as to nullify
the
impact of non-coherence regions caused by a complicated background on the
detection
results, thereby further improving detection precision.
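The two binarizations of the L channel can be sketched as below. This is illustrative only: the hard-coded 3x3 Gaussian window, the skipped borders, and the function names are simplifying assumptions, not the patent's implementation:

```python
def fixed_threshold(img, t):
    """First binarization result T: 1 where L exceeds a fixed threshold."""
    return [[1 if v > t else 0 for v in row] for row in img]

def gaussian_window_threshold(img, bias=0):
    """Second binarization result G: each pixel is compared against a
    Gaussian-weighted mean of its 3x3 neighborhood, making the threshold
    adapt to local brightness. Border pixels are left at 0."""
    w = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # 3x3 Gaussian weights, sum 16
    h, wd = len(img), len(img[0])
    out = [[0] * wd for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, wd - 1):
            mean = sum(w[a][b] * img[i - 1 + a][j - 1 + b]
                       for a in range(3) for b in range(3)) / 16
            out[i][j] = 1 if img[i][j] > mean + bias else 0
    return out
```

The fixed threshold captures globally bright regions, while the Gaussian window responds to local contrast, which is why the two results are fused with the subject mask separately.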
[0067] In the foregoing embodiment, the step of fusing the subject region
image with the plural
binarization results, respectively, extracting a coordinate value of every
pixel in the fused
subject region image and its corresponding background purity value, and
determining
whether a location of a subject in the to-be-detected picture is compliant
comprises:
[0068] fusing the subject region image recognized through pixel-level semantic
segmentation
with the first binarization result and the second binarization result,
respectively;
extracting coordinate values of the pixels belonging to the subject region
image and the
first binarization result from fusing results and their corresponding
background purity
values; and/or extracting coordinate values of the pixels belonging to the
subject region
image and the second binarization result from fusing results and their
corresponding
background purity values; summarizing and extracting the coordinate values of
the pixels
and their corresponding background purity values, and determining whether both
the
coordinate value of each pixel and its corresponding background purity value
are
compliant to a second threshold, and if yes, determining that the location of
the subject
in the to-be-detected picture is compliant, or if not, determining that the
location of the
subject in the to-be-detected picture is non-compliant; and wherein the second
threshold
includes a second background purity threshold and a location coordinate
interval
threshold.
[0069] In particular implementations, the fusing process is well known in the art, and is described
herein by example. The fusion equation is:
F = {f(i, j) | f(i, j) ≠ 0}, where f(i, j) is taken for (i, j) ∈ T ∩ P or (i, j) ∈ G ∩ P
When the background purity value of the pixel (i, j) is in the
(f (mi, ni)eGnP
intersection between the first binarization result T and the subject region
image, the
coordinates of the pixel and its corresponding background purity value are
taken.
Alternatively, when the background purity value of the pixel (i,j) is in the
intersection
between the second binarization result G and the subject region image, the
coordinates of
the pixel and its corresponding background purity value are taken. Finally, the pixel
coordinates and their corresponding background purity values are assembled into an
array F, composed of the location coordinates of the pixels from T ∩ P and G ∩ P and
their corresponding background purity values. A preset second threshold array is then
compared with the array F. If the pixel coordinates are all
within the location coordinate interval threshold, and the background purity
values are all
within the second background purity threshold, it is determined that the
location of the
subject in the to-be-detected picture is compliant.
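The comparison against the second threshold array described above can be sketched as follows. This is a minimal illustration only, assuming the location coordinate interval threshold is a (row, column) bounding box and the second background purity threshold is an interval; the function and parameter names are hypothetical and do not appear in the patent.

```python
import numpy as np

def location_compliant(coords, purities, coord_interval, purity_interval):
    """Sketch of the second-threshold check (hypothetical names).

    coords         : (N, 2) pixel (row, col) coordinates taken from the fused
                     T ∩ P and G ∩ P results (the array F).
    purities       : (N,) corresponding background purity values.
    coord_interval : ((row_min, row_max), (col_min, col_max)) location
                     coordinate interval threshold.
    purity_interval: (purity_min, purity_max) second background purity threshold.
    """
    coords = np.asarray(coords)
    purities = np.asarray(purities)
    (r0, r1), (c0, c1) = coord_interval
    p0, p1 = purity_interval
    # Every pixel coordinate must fall inside the location interval ...
    in_box = (coords[:, 0] >= r0) & (coords[:, 0] <= r1) & \
             (coords[:, 1] >= c0) & (coords[:, 1] <= c1)
    # ... and every purity value inside the purity threshold interval.
    in_purity = (purities >= p0) & (purities <= p1)
    return bool(np.all(in_box & in_purity))
```

The subject location is deemed compliant only when every pixel passes both checks, matching the "all within" condition stated above.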
Embodiment 2
[0070] The present embodiment provides a picture-detecting apparatus, which
comprises:
[0071] a pixel-processing unit, for acquiring a to-be-detected picture that
has been denoised, and
after pixel-level semantic segmentation, recognizing a subject region image
and a
background region image;
[0072] a hue-space-converting unit, for performing hue space conversion on the
to-be-detected
picture, so as to output hue space data and brightness space data of the
picture;
[0073] a first determining unit, for fusing the subject region image after
dilation processing with
the hue space data, extracting a background purity value corresponding to
every pixel in
the background region image formed after dilation processing, and determining
whether
background purity of the to-be-detected picture is compliant;
[0074] a binarization-processing unit, for processing the brightness space
data by means of plural
binarization methods, so as to output plural binarization results
correspondingly; and
[0075] a second determining unit, for fusing the subject region image with the
plural binarization
results, respectively, extracting a coordinate value of every pixel in the
fused subject
region image and its corresponding background purity value, and determining
whether a
location of a subject in the to-be-detected picture is compliant.
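The binarization-processing and second-determining units can be illustrated with a minimal sketch. The patent does not name the plural binarization methods; a global-mean threshold and an Otsu-style threshold are assumed here purely for illustration, and all function names are hypothetical.

```python
import numpy as np

def binarize_mean(brightness):
    # One plausible first binarization method: global mean threshold.
    return (brightness > brightness.mean()).astype(np.uint8)

def binarize_otsu(brightness):
    # One plausible second binarization method: Otsu's threshold,
    # found by exhaustive search over 256 gray levels.
    hist, _ = np.histogram(brightness, bins=256, range=(0, 256))
    total = brightness.size
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # weight of the background class
        w1 = 1.0 - w0                        # weight of the foreground class
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * levels[:t]).sum() / hist[:t].sum()
        m1 = (hist[t:] * levels[t:]).sum() / hist[t:].sum()
        var = w0 * w1 * (m0 - m1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return (brightness > best_t).astype(np.uint8)

def fuse_with_subject(subject_mask, binarization):
    # Fuse the subject region image with one binarization result and
    # return the coordinates of pixels in their intersection.
    inter = (subject_mask > 0) & (binarization > 0)
    return np.argwhere(inter)
```

The coordinates returned by `fuse_with_subject`, paired with the background purity values computed by the first determining unit, form the input to the location-compliance check.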
[0076] Preferably, between the binarization-processing unit and the second
determining unit, the
apparatus further comprises:
a suppression unit, for performing non-coherence region suppression on the first binarization result and the
result and the
second binarization result, respectively, by means of a non-maximum
suppression
method.
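One plausible reading of this suppression step (the patent does not spell out the exact procedure) is to keep only the largest connected foreground region of each binarization result, suppressing isolated non-coherent regions. The sketch below assumes 4-connectivity; the function name is hypothetical.

```python
import numpy as np
from collections import deque

def suppress_noncoherent_regions(mask):
    """Keep only the largest 4-connected foreground region of a binary mask,
    suppressing smaller, non-coherent regions (an illustrative interpretation)."""
    mask = np.asarray(mask)
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                # Breadth-first flood fill to collect one connected region.
                region, q = [], deque([(sr, sc)])
                seen[sr, sc] = True
                while q:
                    r, c = q.popleft()
                    region.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            q.append((nr, nc))
                if len(region) > len(best):
                    best = region
    out = np.zeros_like(mask)
    for r, c in best:
        out[r, c] = 1
    return out
```

Applied to the first and second binarization results in turn, this removes stray foreground pixels before the second determining unit fuses them with the subject region image.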
[0078] As compared to the prior art, the picture-detecting apparatus of the
present embodiment
provides beneficial effects similar to those provided by the picture-detecting method
based on a convolutional neural network as enumerated in the previous embodiment, and
thus no repetitions are made herein.
Embodiment 3
[0079] The present embodiment provides a computer-readable storage medium,
storing thereon
a computer program. When the computer program is executed by a processor, it
implements the steps of the picture-detecting method as described previously.
[0080] As compared to the prior art, the computer-readable storage medium of
the present
embodiment provides beneficial effects that are similar to those provided by
the picture-
detecting method as enumerated in the previous embodiment, and thus no
repetitions are
made herein.
[0081] As will be appreciated by people of ordinary skill in the art,
implementation of all or a
part of the steps of the method of the present invention as described
previously may be
realized by having a program instruct related hardware components. The program
may
be stored in a computer-readable storage medium, and when executed it performs
the individual steps of the methods described in the foregoing embodiments.
The storage
medium may be a ROM/RAM, a hard drive, an optical disk, a memory card or the
like.
[0082] The present invention has been described with reference to the
preferred embodiments
and it is understood that the embodiments are not intended to limit the scope
of the present
invention. Moreover, as the contents disclosed herein should be readily
understood and
can be implemented by a person skilled in the art, all equivalent changes or
modifications
which do not depart from the concept of the present invention should be
encompassed by
the appended claims. Hence, the scope of the present invention shall only be
defined by
the appended claims.
Date Recue/Date Received 2022-03-02

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

Note: "Inactive:" events refer to events no longer in use in CIPO's new back-office solution.

Event History

Description Date
Inactive: Grant downloaded 2024-03-22
Letter Sent 2024-03-19
Grant by Issuance 2024-03-19
Inactive: Cover page published 2024-03-18
Pre-grant 2024-02-05
Inactive: Final fee received 2024-02-05
Letter Sent 2024-01-24
Notice of Allowance is Issued 2024-01-24
Inactive: Approved for allowance (AFA) 2024-01-22
Inactive: QS passed 2024-01-22
Inactive: IPC assigned 2024-01-16
Inactive: First IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC assigned 2024-01-16
Inactive: IPC expired 2024-01-01
Inactive: IPC removed 2023-12-31
Amendment Received - Voluntary Amendment 2023-12-18
Amendment Received - Response to Examiner's Requisition 2023-12-18
Examiner's Report 2023-08-17
Inactive: Report - QC passed 2023-08-15
Letter sent 2023-06-01
Advanced Examination Determined Compliant - paragraph 84(1)(a) of the Patent Rules 2023-06-01
Inactive: Advanced examination (SO) 2023-05-08
Amendment Received - Voluntary Amendment 2023-05-08
Inactive: Advanced examination (SO) fee processed 2023-05-08
Amendment Received - Voluntary Amendment 2023-05-08
Early Laid Open Requested 2023-05-08
Letter Sent 2023-02-03
Inactive: Correspondence - PAPS 2022-12-23
All Requirements for Examination Determined Compliant 2022-09-16
Request for Examination Requirements Determined Compliant 2022-09-16
Request for Examination Received 2022-09-16
Inactive: Cover page published 2022-05-30
Letter sent 2022-03-31
Inactive: First IPC assigned 2022-03-30
Priority Claim Requirements Determined Compliant 2022-03-30
Request for Priority Received 2022-03-30
Inactive: IPC assigned 2022-03-30
Application Received - PCT 2022-03-30
National Entry Requirements Determined Compliant 2022-03-02
Application Published (Open to Public Inspection) 2021-03-11

Abandonment History

There is no abandonment history.

Maintenance Fee

The last payment was received on 2023-12-15


Fee History

Fee Type Anniversary Year Due Date Paid Date
MF (application, 2nd anniv.) - standard 02 2022-06-27 2022-03-02
Basic national fee - standard 2022-03-02 2022-03-02
Request for examination - standard 2024-06-25 2022-09-16
MF (application, 3rd anniv.) - standard 03 2023-06-27 2022-12-15
Advanced Examination 2023-05-08 2023-05-08
MF (application, 4th anniv.) - standard 04 2024-06-25 2023-12-15
Final fee - standard 2024-02-05
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
10353744 CANADA LTD.
Past Owners on Record
CHONG MU
ERLONG LIU
MINGXIU HAN
XUYANG ZHOU
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative drawing 2024-02-15 1 39
Claims 2023-12-17 36 2,115
Abstract 2022-03-01 1 25
Description 2022-03-01 15 705
Claims 2022-03-01 4 178
Drawings 2022-03-01 2 105
Representative drawing 2022-05-29 1 50
Claims 2023-05-07 36 2,167
Final fee 2024-02-04 3 59
Electronic Grant Certificate 2024-03-18 1 2,527
Courtesy - Letter Acknowledging PCT National Phase Entry 2022-03-30 1 588
Courtesy - Acknowledgement of Request for Examination 2023-02-02 1 423
Commissioner's Notice - Application Found Allowable 2024-01-23 1 580
Examiner requisition 2023-08-16 3 165
Amendment / response to report 2023-12-17 79 3,732
National entry request 2022-03-01 13 1,124
International search report 2022-03-01 4 133
Amendment - Abstract 2022-03-01 2 116
Request for examination 2022-09-15 8 296
Correspondence for the PAPS 2022-12-22 4 149
Advanced examination (SO) / Amendment / response to report 2023-05-07 42 1,731
Early lay-open request 2023-05-07 6 190
Courtesy - Advanced Examination Request - Compliant (SO) 2023-05-31 1 178