Patent Summary 3135471

(12) Patent: (11) CA 3135471
(54) French Title: METHODE ET DISPOSITIF DE VERIFICATION DE CONNEXION A UNE APPLICATION ET SUPPORT DE STOCKAGE INFORMATIQUE
(54) English Title: APP LOGIN VERIFICATION METHOD AND DEVICE AND COMPUTER READABLE STORAGE MEDIUM
Status: Granted and Issued
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 09/32 (2006.01)
  • G06F 21/32 (2013.01)
  • G06V 40/16 (2022.01)
  • G06V 40/18 (2022.01)
  • G06V 40/40 (2022.01)
  • G10L 17/24 (2013.01)
(72) Inventors:
  • DING, JINFEI (China)
(73) Owners:
  • 10353744 CANADA LTD.
(71) Applicants:
  • 10353744 CANADA LTD. (Canada)
(74) Agent: HINTON, JAMES W.
(74) Co-Agent:
(45) Issued: 2024-05-14
(22) Filed: 2021-09-30
(41) Open to Public Inspection: 2022-03-30
Examination Requested: 2022-03-15
Availability of Licence: N/A
Dedicated to the Public: N/A
(25) Language of Filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application Number      Country / Territory      Date
202011056334.2          (China)                  2020-09-30

Abstract


Disclosed in the present invention is an App login verification method, device, and computer readable medium. The described method comprises: acquiring a primary real-time face image and generating display position information based on the described primary real-time face image; receiving prompting messages from a server and displaying the described prompting message based on the described display position information; acquiring a secondary real-time face image based on the described prompting message information for live face detection; and, where the described live face detection passes, passing the login verification. The present invention identifies the position of the terminal user's eye focus from the real-time face images captured on the terminal, so that the prompting message for the live face detection can be displayed at the focus of the terminal user's eyes, to avoid time wasted on finding prompting messages and to improve the speed of risk identification.

Claims

Note: The claims are shown in the official language in which they were submitted.


Claims:
1. A device comprising:
a receiving module, configured to receive prompting messages from a server;
and
a processing module configured to:
acquire a primary real-time face image;
generate display position information based on the primary real-time face
image;
acquire a secondary face image based on the prompting message information
for live face detection; and
wherein the processing module comprises an image identification unit to
identify the
position range of eye focus in the primary real-time face image based on image
identification techniques to obtain positions of front face of the primary
real-time face
images, position of pupils in eyes, and visual shape of the pupils based on
the image
identification technique.
2. The device of claim 1, wherein the processing module further comprises:
a capturing unit configured to acquire the primary real-time face image and
the secondary
face image;
a processing unit configured to:
perform live face detection based on the prompting message information by the
primary real-time face image, and the secondary face image; and
to generate the display position information based on the position range of
the
eye focus; and
a displaying unit configured to display the prompting information based on the
displaying
position information.
3. The device of any one of claims 1 to 2, wherein the processing module
further comprises:
the image identification unit to identify the position range of the eye focus
in the primary
real-time face image based on image identification techniques configured to:
obtain the angle between the front face of the primary real-time face image
and
the terminal;
a computation unit configured to calculate the eye focus position based on the
position of
pupils in eyes, the visual shape of the pupils, the primary real-time face
image and the
angle of terminal; and
a voice recording unit configured to acquire voice answer information.
4. The device of any one of claims 1 to 3, further comprising:
a sending module configured to:
send a login request to the server;
send the secondary real-time face image to the server so as for the face
recognizing comparison in the server;
send the voice answer information to the server for determining the voice
answer information matches with the user history activity data; and
send the app account number and password information to the server for
account and password verification by the server.
5. The device of any one of claims 1 to 4, wherein the prompting message is
displayed as lined
texts, wherein the vertical position y of the prompting message on the
terminal screen is
calculated, and wherein, for the horizontal position, the texts are centered, by
simulation trainings,
when the pupil movement is less than a certain distance and the texts are
aligned right when
the pupil movement is greater than a certain distance to the right.
6. The device of any one of claims 1 to 5, wherein terminal user eyes are
assumed symmetric
and equal sized, when the face in the primary real-time face image sent by the
terminal is
relatively to the right of a screen, and the eyes are within range of the
screen with left eyeball
centered as a circle, the left eye in the primary real-time face image is
concluded to be
parallel to the phone screen, wherein length of the left eye is noted as X1
and right eye length
is noted as zl, wherein angle A is calculated by cosA = zl/X1, wherein maximum
value of
the angle A (maxA) is simulated while eyes remain on the screen wherein when
angle A is
smaller than maxA, the angle is identified as an effective angle, and wherein
facing to the
left, the value notation is reversed with the same calculation.
7. The device of any one of claims 1 to 6, wherein human faces can be
determined as facing
upwards or downwards with model identification, wherein when the face in the
primary real-
time face image is relatively upward to the screen, with two eyes within the
range of the
screen and eyeballs centered as a circle, the eye focus of the primary real-
time face image is
located at the top of the screen, wherein when the eyes are relatively
downward, the
conclusion is reversed.
8. The device of any one of claims 1 to 7, wherein when the lengths of the
left and right eyes
are identical, or when the left or right angles are within detectable ranges,
the position of the
pupils on the eyeballs are calculated, by simulation trainings, with maximum
angle (LmaxB),
the focus leaves from the screen.
9. The device of any one of claims 1 to 8, wherein based on the imaging
changes of pupils in
the screen, front face-screen angle in the primary real-time face image is
calculated, wherein
the y positions of the eyes in the screen are calculated according to the
front face-screen
angle in the primary real-time face image, wherein in the screen, the middle
point of the eye
imaging moves upwards by yl, wherein the shift yl is the position of the y
location for text
display, wherein upward angle LmaxB, wherein cosB = zl/X1, wherein y1 = maxY /
LmaxB * LB, and wherein the prompting message display position on the screen is
calculated based
on the shift algorithm.
10. The device of any one of claims 1 to 9, wherein when the eye focus
position on the screen in
the primary real-time face image is detected to change, the display location
of user online
activity question texts is re-calculated based on the shift algorithm.
11. The device of any one of claims 1 to 10, wherein the user makes
corresponding face gestures
for the secondary real-time face images capturing based on the displayed
prompting
information.
12. The device of any one of claims 1 to 11, wherein terminal history activity
data is stored in the
database, wherein each terminal has a terminal identification, wherein the
server searches the
user history activity data from the database according to the terminal
identification,
including any one or more of the item purchase name from a recent online
order, name of a
service requested, and title key words of article and news messages.
13. The device of any one of claims 1 to 12, wherein the terminal displays the
prompting
messages including any one or more of user history activity operation
questions at the
location of the user eye focus, wherein the user would not need to take extra
time looking for
prompting messages.
14. The device of any one of claims 1 to 13, wherein the prompting message
further includes a
microphone icon to remind the user to use voice for answering the user history
activity
operation questions.
15. The device of any one of claims 1 to 14, wherein the secondary face image
includes user
voice answers to the user history activity operation questions, acquiring the
secondary face
image based on the prompting message information for live face detection
further includes
acquiring voice answer information, wherein the voice answer information
includes the voice
answer information of the user voice answers to the user history activity
operation questions,
wherein, when a built-in camera is turned on by the terminal, the image
captured when
the user voice answers the user history activity operation questions is
identified as the
secondary face image, and wherein during the live face detection after
acquiring the
secondary face images the user voice answer to user history activity operation
questions are
checked and returned, to ensure the login verification performance while
preventing the
processing time from being extended.
16. The device of any one of claims 1 to 14, wherein the secondary real-time
face image is sent
to the server so as for the face recognizing comparison in the server based on
pre-set filtering
conditions, wherein the best frame in the secondary real-time face image is
sent to the server
for the face recognizing comparison, wherein the frame with the best quality
and eyes facing
straightly to the screen is selected from the secondary real-time face image
and sent to the
server for the face recognizing comparison.
17. The device of any one of claims 1 to 16, wherein the order of performing
face recognition
and the determination of whether the voice answer information matches with the
described user
history activity data is not restricted.
18. The device of any one of claims 1 to 17, wherein the terminal opens the
built-in camera to
capture images of the user as the primary real-time face image and analyzes
the primary real-
time face image to generate display position information.
19. A method comprising:
acquiring a primary real-time face image and generating display position
information
based on the primary real-time face image including identification of position
range of
eye focus in the primary real-time face image based on image identification
techniques
by obtaining positions of a front face of the primary real-time face images,
position of
pupils in eyes, and visual shape of pupils based on the image identification
technique;
receiving prompting messages from a server and displaying the prompting
messages
based on the display position information;
acquiring a secondary face image based on prompting message information for
live face
detection; and
wherein the live face detection passes, passing the login verification.
20. The method of claim 19, wherein the acquisition of the primary real-time
face image and the
generation of display position information based on the primary real-time face
image
comprises:
capturing the primary real-time face image;
based on image identification techniques, identifying position range of eye
focus in the
primary real-time face image; and
generating the display position information based on the position range of the
eye focus.
21. The method of claim 20, wherein identification of the position range of
the eye focus in the
primary real-time face image based on the image identification techniques
comprises:
obtaining an angle between the front face of the primary real-time face image
and
terminal; and
calculating eye focus position based on the position of pupils in eyes, the
visual shape of
the pupils, the primary real-time face image and angle of the terminal.
22. The method of claim 19, wherein before the acquisition of a primary real-
time face image
and the generation of display position information based on the primary real-
time face image,
the method further comprises:
sending a login request to the server, wherein the login request includes a
terminal
identification; and
the prompting message sent to the terminal, acquired by:
searching user history activity data by the server from a database according
to
the terminal identification; and
generating user history activity operation questions by the server based on
the
user history activity data, wherein the prompting message sent by the server
includes the user history activity operation questions.
23. The method of claim 22, wherein the secondary real-time face image
includes the real-time
face image when a user answers the user history activity operation questions
in voice,
comprises:
acquiring voice answer information;
sending the secondary real-time face image to the server for face recognizing
comparison
in the server, wherein the face detection is passed;
sending the voice answer information to the server for determining that the
voice answer
information matches with the user history activity data, wherein the face
detection is
passed; and
wherein the face recognition passes and the voice answer information matches
with the
user history activity data, the login verification is passed.
24. The method of claim 23, wherein the determination by the server that the voice
answer
information matches with the user history activity data comprises:
converting the voice answer information into text answer information; and
performing fuzzy matching of the text answer information with the user history
activity
data by the server.
25. The method of claim 19, further comprising:
sending an app account number and password information to the server for
account and
password verification by the server, comprises:
receiving an account password verification command from the server based on the
login request;
sending the app account number and the password information to the server,
wherein the server compares the account password with the password
information obtained from a database for the app account by the server,
wherein the password information obtained from the database for the app
account by the server matches with the account password sent to the server
from the terminal, the app account password verification passes; and
wherein the live face detection passes and the app account password
verification passes, the login verification is passed.
26. The method of any one of claims 19 to 25, wherein the prompting message is
displayed as
lined texts, wherein the vertical position y of the prompting message on the
terminal screen is
calculated, and wherein, for the horizontal position, the texts are centered, by
simulation trainings,
when the pupil movement is less than a certain distance and the texts are
aligned right when
the pupil movement is greater than a certain distance to the right.
27. The method of any one of claims 19 to 26, wherein terminal user eyes are
assumed
symmetric and equal sized, when the face in the primary real-time face image
sent by the
terminal is relatively to the right of a screen, and the eyes are within range
of the screen with
left eyeball centered as a circle, the left eye in the primary real-time face
image is concluded
to be parallel to the phone screen, wherein length of the left eye is noted as
X1 and right eye
length is noted as zl, wherein angle A is calculated by cosA = zl/X1, wherein
maximum
value of the angle A (maxA) is simulated while eyes remain on the screen
wherein when
angle A is smaller than maxA, the angle is identified as an effective angle,
and wherein
facing to the left, the value notation is reversed with the same calculation.
28. The method of any one of claims 19 to 27, wherein human faces can be
determined as facing
upwards or downwards with model identification, wherein when the face in the
primary real-
time face image is relatively upward to the screen, with two eyes within the
range of the
screen and eyeballs centered as a circle, the eye focus of the primary real-
time face image is
located at the top of the screen, wherein when the eyes are relatively
downward, the
conclusion is reversed.
29. The method of any one of claims 19 to 28, wherein when the lengths of the
left and right eyes
are identical, or when the left or right angles are within detectable ranges,
the position of the
pupils on the eyeballs are calculated, by simulation trainings, with maximum
angle (LmaxB),
the focus leaves from the screen.
30. The method of any one of claims 19 to 29, wherein based on the imaging
changes of pupils
in the screen, front face-screen angle in the primary real-time face image is
calculated,
wherein the y positions of the eyes in the screen are calculated according to
the front face-
screen angle in the primary real-time face image, wherein in the screen, the
middle point of
the eye imaging moves upwards by yl, wherein the shift yl is the position of
the y location
for text display, wherein upward angle LmaxB, wherein cosB = zl/X1, wherein yl
= maxY /
LmaxB * LB, and wherein, the prompting message display position on the screen
is
calculated based on the shift algorithm.
31. The method of any one of claims 19 to 30, wherein when the eye focus
position on the screen
in the primary real-time face image is detected to change, the display
location of user online
activity question texts is re-calculated based on the shift algorithm.
32. The method of any one of claims 19 to 31, wherein the user makes
corresponding face
gestures for the secondary real-time face images capturing based on the
displayed prompting
information.
33. The method of any one of claims 19 to 32, wherein terminal history
activity data is stored in
the database, wherein each terminal has a terminal identification, wherein the
server searches
the user history activity data from the database according to the
terminal identification,
including any one or more of the item purchase name from a recent online
order, name of a
service requested, and title key words of article and news messages.
34. The method of any one of claims 19 to 33, wherein the terminal displays
the prompting
messages including any one or more of user history activity operation
questions at the
location of the user eye focus, wherein the user would not need to take extra
time looking for
prompting messages.
35. The method of any one of claims 19 to 34, wherein the prompting message
further includes a
microphone icon to remind the user to use voice for answering the user history
activity
operation questions.
36. The method of any one of claims 19 to 35, wherein the secondary face image
includes user
voice answers to the user history activity operation questions, acquiring the
secondary face
image based on the prompting message information for live face detection
further includes
acquiring voice answer information, wherein the voice answer information
includes the voice
answer information of the user voice answers to the user history activity
operation questions,
wherein, when a built-in camera is turned on by the terminal, the image
captured when
the user voice answers the user history activity operation questions is
identified as the
secondary face image, and wherein during the live face detection after
acquiring the
secondary face images the user voice answer to user history activity operation
questions are
checked and returned, to ensure the login verification performance while
preventing the
processing time from being extended.
37. The method of any one of claims 19 to 36, wherein the secondary real-time
face image is sent
to the server so as for the face recognizing comparison in the server based on
pre-set filtering
conditions, wherein the best frame in the secondary real-time face image is
sent to the server
for the face recognizing comparison, wherein the frame with the best quality
and eyes facing
straightly to the screen is selected from the secondary real-time face image
and sent to the
server for the face recognizing comparison.
38. The method of any one of claims 19 to 37, wherein the order of performing
face recognition
and the determination of whether the voice answer information matches with the
described user
history activity data is not restricted.
39. The method of any one of claims 19 to 38, wherein the terminal opens the
built-in camera to
capture images of the user as the primary real-time face image and analyzes
the primary real-
time face image to generate display position information.
40. A computer readable physical memory having stored thereon a computer
program executed
by a computer configured to:
acquire a primary real-time face image and generate display position
information based
on the primary real-time face image including identification of position range
of eye
focus in the primary real-time face image based on image identification
techniques by
obtaining positions of a front face of the primary real-time face images,
position of pupils
in eyes, and visual shape of pupils based on the image identification
technique;
receive prompting messages from a server and display the prompting messages
based
on the display position information;
acquire a secondary face image based on prompting message information for live
face
detection; and
wherein the live face detection passes, passing the login verification.
41. The memory of claim 40, wherein the acquisition of the primary real-time
face image and the
generation of display position information based on the primary real-time face
image
comprises:
capturing the primary real-time face image;
based on image identification techniques, identifying position range of eye
focus in the
primary real-time face image; and
generating the display position information based on the position range of the
eye focus.
42. The memory of claim 41, wherein identification of the position range of
the eye focus in the
primary real-time face image based on the image identification techniques
comprises:
obtaining an angle between the front face of the primary real-time face image
and
terminal; and
calculating eye focus position based on the position of pupils in eyes, the
visual shape of
the pupils, the primary real-time face image and angle of the terminal.
43. The memory of claim 40, wherein before the acquisition of a primary real-
time face image
and the generation of display position information based on the primary real-
time face image,
the memory further comprises:
sending a login request to the server, wherein the login request includes a
terminal
identification; and
the prompting message sent to the terminal, acquired by:
searching user history activity data by the server from a database according
to
the terminal identification; and
generating user history activity operation questions by the server based on
the
user history activity data, wherein the prompting message sent by the server
includes the user history activity operation questions.
44. The memory of claim 43, wherein the secondary real-time face image
includes the real-time
face image when a user answers the user history activity operation questions
in voice,
comprises:
acquiring voice answer information;
sending the secondary real-time face image to the server for face recognizing
comparison
in the server, wherein the face detection is passed;
sending the voice answer information to the server for determining that the
voice answer
information matches with the user history activity data, wherein the face
detection is
passed; and
wherein the face recognition passes and the voice answer information matches
with the
user history activity data, the login verification is passed.
45. The memory of claim 44, wherein the determination by the server that the voice
answer
information matches with the user history activity data comprises:
converting the voice answer information into text answer information; and
performing fuzzy matching of the text answer information with the user history
activity
data by the server.
46. The memory of claim 40, further configured to:
send an app account number and password information to the server for account
and
password verification by the server, comprising:
receiving an account password verification command from the server based on the
login request;
sending the app account number and the password information to the server,
wherein the server compares the account password with the password
information obtained from a database for the app account by the server,
wherein the password information obtained from the database for the app
account by the server matches with the account password sent to the server
from the terminal, the app account password verification passes; and
wherein the live face detection passes and the app account password
verification passes, the login verification is passed.
47. The memory of any one of claims 40 to 46, wherein the prompting message is
displayed as
lined texts, wherein the vertical position y of the prompting message on the
terminal screen is
calculated, and wherein, for the horizontal position, the texts are centered, by
simulation trainings,
when the pupil movement is less than a certain distance and the texts are
aligned right when
the pupil movement is greater than a certain distance to the right.
48. The memory of any one of claims 40 to 47, wherein terminal user eyes are
assumed
symmetric and equal sized, when the face in the primary real-time face image
sent by the
terminal is relatively to the right of a screen, and the eyes are within range
of the screen with
left eyeball centered as a circle, the left eye in the primary real-time face
image is concluded
to be parallel to the phone screen, wherein length of the left eye is noted as
X1 and right eye
length is noted as zl, wherein angle A is calculated by cosA = zl/X1, wherein
maximum
value of the angle A (maxA) is simulated while eyes remain on the screen
wherein when
angle A is smaller than maxA, the angle is identified as an effective angle,
and wherein
facing to the left, the value notation is reversed with the same calculation.
49. The memory of any one of claims 40 to 48, wherein human faces can be
determined as facing
upwards or downwards with model identification, wherein when the face in the
primary real-
time face image is relatively upward to the screen, with two eyes within the
range of the
screen and eyeballs centered as a circle, the eye focus of the primary real-
time face image is
located at the top of the screen, wherein when the eyes are relatively
downward, the
conclusion is reversed.
50. The memory of any one of claims 40 to 49, wherein when the lengths of the
left and right
eyes are identical, or when the left or right angles are within detectable
ranges, the position of
the pupils on the eyeballs are calculated, by simulation trainings, with
maximum angle
(LmaxB), the focus leaves from the screen.
51. The memory of any one of claims 40 to 50, wherein based on the imaging
changes of pupils
in the screen, front face-screen angle in the primary real-time face image is
calculated,
wherein the y positions of the eyes in the screen are calculated according to
the front face-
screen angle in the primary real-time face image, wherein in the screen, the
middle point of
the eye imaging moves upwards by yl, wherein the shift yl is the position of
the y location
for text display, wherein upward angle LmaxB, wherein cosB = zl/X1, wherein yl
= maxY /
LmaxB * LB, and wherein, the prompting message display position on the screen
is
calculated based on the shift algorithm.
52. The memory of any one of claims 40 to 51, wherein when the eye focus
position on the
screen in the primary real-time face image is detected to change, the display
location of user
online activity question texts are re-calculated based on the shift algorithm.
53. The memory of any one of claims 40 to 52, wherein the user makes
corresponding face
gestures for the secondary real-time face images capturing based on the
displayed prompting
information.
54. The memory of any one of claims 40 to 53, wherein terminal history
activity data is stored in
the database, wherein each terminal has a terminal identification, wherein the
server searches
the user history activity data from the database according to the
terminal identification,
including any one or more of the item purchase name from a recent online
order, name of a
service requested, and title key words of article and news messages.
55. The memory of any one of claims 40 to 54, wherein the terminal displays
the prompting
messages including any one or more of user history activity operation
questions at the
location of the user eye focus, wherein the user would not need to take extra
time looking for
prompting messages.
56. The memory of any one of claims 40 to 55, wherein the prompting message
further includes
a microphone icon to remind the user to use voice for answering the user
history activity
operation questions.
57. The memory of any one of claims 40 to 56, wherein the secondary face image
includes user
voice answers to the user history activity operation questions, acquiring the
secondary face
image based on the prompting message information for live face detection
further includes
acquiring voice answer information, wherein the voice answer information
includes the voice
answer information of the user voice answers to the user history activity
operation questions,
wherein, when a built-in camera is turned on by the terminal, the image
captured when
the user voice answers the user history activity operation questions is
identified as the
secondary face image, and wherein during the live face detection after
acquiring the
secondary face images the user voice answer to user history activity operation
questions are
checked and returned, to ensure the login verification performance while
preventing the
processing time from being extended.
58. The memory of any one of claims 40 to 57, wherein the secondary real-time
face image is
sent to the server so as for the face recognizing comparison in the server
based on pre-set
filtering conditions, wherein the best frame in the secondary real-time face
image is sent to
the server for the face recognizing comparison, wherein the frame with the
best quality and
eyes facing straightly to the screen is selected from the secondary real-time
face image and
sent to the server for the face recognizing comparison.
59. The memory of any one of claims 40 to 58, wherein the order of performing
face recognition
and the determination of whether the voice answer information matches with the
described user
history activity data is not restricted.
60. The memory of any one of claims 40 to 59, wherein the terminal opens the
built-in camera to
capture images of the user as the primary real-time face image and analyzes
the primary real-
time face image to generate display position information.
Description

Note: The descriptions are shown in the official language in which they were submitted.


APP LOGIN VERIFICATION METHOD AND DEVICE AND COMPUTER READABLE
STORAGE MEDIUM
Technical Field
[0001] The present invention relates to the field of mobile terminal security,
in particular, to a method, a
device, and a storage medium for App login verification.
Background
[0002] With the development of the internet and the popularity of mobile phones, national regulations
strictly monitor the security and personal information handling of intelligent terminal Apps, and identity
verification is required when users log in on their mobile terminals. For example, account logins via a new
smart phone or from a non-residence address generally use the popular face detection for identity
verification. With commands for specified gestures such as eye blinking, head shaking, and mouth opening,
users are required to change face gestures according to the commands for live face detection. The live face
detection is used for user identity verification.
[0003] The aforementioned method has the following drawbacks and limitations. The live face detection
relies on user face gestures made according to commands such as eye blinking, head shaking, and mouth
opening. However, different Apps display prompting messages at different locations, higher or lower on the
screen. The user first needs to find the message location and then complete the face gestures according to
several commands in complicated orders. Consequently, the live face detection requires a long processing
time or multiple detections, leading to a slowed risk identification process.
Summary
[0004] The aim of the present invention is to provide an App login
verification method, device, and
computer readable storage medium, to improve the login risk identification
speed.
[0005] The technical proposal of the present invention is as follows. From the first perspective, an App
login verification method is provided, comprising:
acquiring a primary real-time face image and generating display position
information based on
the described primary real-time face image;
receiving prompting messages from a server and displaying the described
prompting message
based on the described display position information;
acquiring a secondary face image based on the described prompting message
information for live
face detection; and
where if the described live face detection passes, passing the login
verification.
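As an informal illustration of the flow listed above, the following Python sketch strings the four steps together. All of the collaborating functions (capture, estimate_focus, get_prompt, show, detect_live) are hypothetical placeholders supplied by the caller; they are not part of the disclosed implementation.

    def verify_login(capture, estimate_focus, get_prompt, show, detect_live):
        """Sketch of the App login verification flow described in [0005]."""
        primary = capture()                      # primary real-time face image
        position = estimate_focus(primary)       # display position information
        prompt = get_prompt()                    # prompting message from the server
        show(prompt, position)                   # display the prompt at the eye focus
        secondary = capture()                    # secondary real-time face image
        return detect_live(primary, secondary, prompt)  # True means the login verification passes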
[0006] In some preferred embodiments, the described acquisition of a primary
real-time face image and
35 the described generation of display position information based on the
described primary real-time face
image, particularly comprise:
capturing a primary real-time face image;
based on image identification techniques, identifying the position range of
the eye focus in the
described primary real-time face image; and
40 generating the display position information based on the described
position range of the eye
focus.
[0007] In some preferred embodiments, the described identification of the
position range of the eye focus
in the described primary real-time face image based on image identification
techniques, particularly
include:
45 obtaining the positions of the front face of the primary real-time
face images, position of pupils in
eyes, and the visual shape of the pupils based on the image identification
technique;
obtaining the angle between the front face of the described primary real-time
face image and the
terminal; and
calculating the eye focus position based on the position of pupils in eyes,
the visual shape of the
50 pupils, the primary real-time face image and the angle of terminal.
[0008] In some preferred embodiments, before the described acquisition of a
primary real-time face
image and the described generation of display position information based on
the described primary real-
time face image, the described method further includes:
sending a login request to the server, wherein the described login request at
least includes a terminal
55 identification; and
the described prompting message sent to the terminal, acquired by:
searching user history activity data by the server from the database according
to the described
terminal identification; and
generating user history activity operation questions by the server based on
the described user history
60 activity data, wherein
the described prompting message sent by the described server includes at least
one of the user history
activity operation questions.
[0009] In some preferred embodiments, the described secondary real-time face
image, including the real-
time face image when the user answers the described user history activity
operation questions in voice;
65 and
the described method further includes:
acquiring voice answer information;
where if the described face detection is passed, sending the described
secondary real-time face image
to the server so as for the face recognizing comparison in the server, and
sending the described voice
70 answer information to the server so as for determining whether the voice
answer information matches with
the described user history activity data; and
where if the described face recognition passes and the voice answer
information matches with the
described user history activity data, the login verification is passed.
[0010] In some preferred embodiments, the described determination by the
server of whether the voice
75 answer information matches with the described user history activity
data, particularly includes:
converting the described voice answer information into text answer
information; and
performing fuzzy matching of the described text answer information with the
described user history
activity data by the server.
[0011] In some preferred embodiments, the described method further includes
80 sending the App account number and password information to the server
for account and password
verification by the server, in particular including:
sending the App account number and password information to the server, so as
to compare the
described account password with the password information obtained from the
database for the App
account by the server; and
85 where if the described live face detection passes and the server
concludes the matched account
password with the password information obtained from the database for the App
account by the server,
the login verification is passed.
[0012] From the second perspective, an App login verification device is
provided, at least comprising:
a receiving module, configured to receive prompting messages from a server;
and
90 a processing module, configured to acquire a primary real-time face
image and generate display
position information based on the described primary real-time face image; then
acquire a secondary face
image based on the described prompting message information for live face
detection.
[0013] In some preferred embodiments, the described processing module comprises:
a capturing unit, configured to acquire the primary real-time face image and
the secondary face
95 image;
a processing unit, configured to perform live face detection based on the
described prompting
message information by the primary real-time face image, and the described
secondary face image; and
a displaying unit, configured to display the described prompting information
based on the
described displaying position information.
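The module split in [0012] and [0013] can be pictured with the minimal Python sketch below; the class and method names, and the injected callables, are illustrative assumptions rather than the actual device structure.

    class ReceivingModule:
        """Receiving module: receives prompting messages from the server."""
        def __init__(self, fetch_prompt):
            self.fetch_prompt = fetch_prompt          # injected network call (placeholder)

        def receive(self):
            return self.fetch_prompt()

    class ProcessingModule:
        """Processing module with capturing, processing, and displaying units."""
        def __init__(self, capture, estimate_focus, detect_live, display):
            self.capture = capture                    # capturing unit
            self.estimate_focus = estimate_focus      # processing unit: eye focus / display position
            self.detect_live = detect_live            # processing unit: live face detection
            self.display = display                    # displaying unit

        def run(self, prompt):
            primary = self.capture()                  # primary real-time face image
            position = self.estimate_focus(primary)   # display position information
            self.display(prompt, position)
            secondary = self.capture()                # secondary face image
            return self.detect_live(primary, secondary, prompt)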
100 [0014] From the third perspective, a readable computer storage medium
is provided with computer
programs stored, wherein any of the procedures in the aforementioned methods
are performed when the
described computer programs are executed on the described processor.
[0015] Compared with the current technologies, the benefits of the present
invention include that the
terminal user eye focus location information is determined by real-time face image recognition
105 by terminals, to identify a range of focusing position on the terminal
screen by user eyes. Therefore, the
prompting messages of the live face detection are displayed at the user eye
focus, to save time from
finding prompting messages and help users for in-time responses. Therefore,
the risk identification speed
is improved, to prevent users from spending too much time on finding reminders
and multiple commands
with consequent extended risk identification time.
110
Brief descriptions of the drawings
[0016] For better explanation of the technical proposal of embodiments in the
present invention, the
accompanying drawings are briefly introduced in the following. Obviously, the
following drawings
represent only a portion of embodiments of the present invention. Those
skilled in the art are able to
115 create other drawings according to the accompanying drawings without
making creative efforts.
[0017] Fig. 1 is a flow diagram of an App login verification method provided
in the embodiment 1 of the
present invention;
Fig. 2 is a schematic diagram of the algorithm for face-screen angles in the
embodiment 1 of the
present invention;
120 Fig. 3 is a schematic diagram of the maximum upward offset angle of
simulated human eyes in
the embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the imaging formation of human eyes in the
screen when human
eyes are in the front of the screen in the embodiment 1 of the present
invention;
Fig. 5 is a schematic diagram of the imaging formation of human eyes in the
screen when human
125 eyes are looking upwards while keeping the face static in the embodiment 1 of
the present invention;
Fig. 6 is a schematic diagram of calculating the front face-terminal angle in
the primary real-time
face image in the embodiment 1 of the present invention;
Fig. 7 is a flow diagram of an App login verification method provided in the
embodiment 2 of the
present invention;
130 Fig. 8 is a flow diagram of an App login verification method
provided in the embodiment 3 of the
present invention;
Fig. 9 is a flow diagram of an App login verification method provided in the
embodiment 4 of the
present invention; and
Fig. 10 is a structure diagram of an App login verification device provided in
the embodiment 5
135 of the present invention.
Detailed descriptions
[0018] With the drawings of embodiments in the present invention, the
technical proposals are explained
precisely and completely. Obviously, the embodiments described below are only
a portion of embodiments
140 of the present invention and cannot represent all possible embodiments.
Based on the embodiments in the
present invention, the other applications by those skilled in the art without
any creative works fall
within the scope of the present invention.
[0019] For App logins, especially for finance-related Apps, the user
identities on the mobile terminals
should be verified for risk identifications. Currently, the live face
detection is generally adopted for
145 recognizing the user identity. The current methods include sending face
detection commands to the terminal,
and displaying command messages such as eye blinking, head shaking, and mouth
opening, and the
performing live face detection of the face gestures made by users according to
the commands. With the
described method, users need to find command messages on terminals. With
different message displaying
positions for different Apps, users need extra time to find the command. When
a user does not find the
150 command in time without in-time response of corresponding face
gestures, the live face detection fails
and is restarted, leading to an extended identity verification time and slowed risk
identification speed.
[0020] Embodiment 1, an App login verification method as shown in Fig. 1 is
provided, comprising:
S1-1, acquiring a primary real-time face image and generating display position
information based
on the described primary real-time face image.
155 [0021] The terminal opens a built-in camera to capture images of the
current user as the primary real-time
face image, and analyzes the primary real-time face image to generate display
position information.
[0022] In detail, the present step comprises the following sub-steps:
S1-1a, capturing a primary real-time face image; and
S1-1b, based on image identification techniques, identifying the position
range of the eye focus in
160 the described primary real-time face image.
[0023] In detail, the step S1-1b comprises the following steps:
S1-1b1, obtaining the positions of the front face of the primary real-time
face images, position of
pupils in eyes, and the visual shape of the pupils based on the image
identification technique;
S1-1b2, obtaining the angle between the front face of the described primary
real-time face image
165 and the terminal; and
S1-1b3, calculating the eye focus position based on the position of pupils in
the eyes, the visual
shape of the pupils, the primary real-time face image and the angle of the
terminal.
[0024] S1-1c, generating the display position information based on the
described position range of the
eye focus.
170 [0025] In detail, if the prompting message is displayed as lined texts,
the vertical position y of the
prompting message on the terminal screen is calculated. For the horizontal
position, by simulation
trainings, when the pupil movement is less than a certain distance, the texts
are centered, and when the
pupil movement is greater than a certain distance to the right, the texts are
aligned right.
[0026] As shown in Fig. 2, with assuming terminal user eyes are symmetric and
equal sized, when the
175 face in the primary real-time face image sent by the terminal is
relatively to the right of the screen, and
the eyes are within the range of the screen with the left eyeball centered as
a circle, the left eye in the
primary real-time face image is concluded to be parallel to the phone screen.
The length of the left eye is
noted as X1 and the right eye length is noted as zl. The angle A is calculated
by cosA = zl/X1. With big
data trainings, the maximum value of the angle A (maxA) is simulated while
eyes remain on the screen.
180 When A is smaller than maxA, the angle is identified as an effective
angle. For facing to the left, the
value notation is reversed with the same calculation.
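A small numerical sketch of the calculation in the preceding paragraph is given below; the variable names follow the text (X1 for the left-eye length, zl for the right-eye length), and the maxA threshold is a placeholder for the value that the text says is obtained by big data training.

    import math

    def face_screen_angle(x1, zl):
        """Angle A between the front face and the screen, from cosA = zl/X1."""
        ratio = max(-1.0, min(1.0, zl / x1))   # clamp to a valid cosine range
        return math.degrees(math.acos(ratio))

    def is_effective_angle(x1, zl, max_a_degrees=30.0):
        """True when A is smaller than the trained maximum maxA (placeholder value)."""
        return face_screen_angle(x1, zl) < max_a_degrees

    # Example: the right eye appears 80% as long as the left eye in the image.
    # print(face_screen_angle(1.0, 0.8))  # roughly 36.9 degrees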
[0027] With the model identifications, the human faces can be determined as
facing upwards or
downwards. When the face in the primary real-time face image is relatively
upward to the screen, with
two eyes within the range of the screen and eyeballs centered as a circle, the
eye focus of the described
185 primary real-time face image is located at the top of the screen. When
the eyes are relatively downward,
the conclusion is reversed.
[0028] When the lengths of the left and right eyes are identical, or, when the
left or right angles are
within detectable ranges, the position of the pupils on the eyeballs are
calculated. By simulation trainings,
with the maximum angle (LmaxB), the focus leaves from the screen as shown in
Fig. 3.
190 [0029] When human eyes are facing straight to the screen, the imaging
formed in the screen of the eyes
are shown in Fig. 4. When the face is static with eyes looking upwards, the
imaging formed in the screen
of the eyes are shown in Fig. 5.
[0030] As shown in Fig. 6, based on the imaging changes of pupils in the
screen, the front face-screen
angle in the primary real-time face image is calculated. According to the
front face-screen angle in the
195 primary real-time face image, the y positions of the eyes in the screen
are calculated. In the screen, the
middle point of the eye imaging moves upwards by yl, wherein the shift yl is
the position of the y
location for text display (as shown in Fig. 3);
upward angle LmaxB, cosB = zl/X1;
yl = maxY / LmaxB * LB; and
200 based on the shift algorithm, the prompting message display
position on the screen is calculated
(as shown in Fig. 7).
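Read literally, the formula above scales a trained maximum vertical shift maxY by the ratio of the current upward angle B to the maximum angle maxB. The sketch below follows that reading; maxY and maxB stand in for values the text obtains by simulation training, and the horizontal rule from [0025] is included as a simple threshold check. All constants are illustrative only.

    import math

    def vertical_text_position(x1, zl, max_y=600.0, max_b_degrees=25.0):
        """Vertical display position y1 = maxY / maxB * B, with cosB = zl/X1."""
        b = math.degrees(math.acos(max(-1.0, min(1.0, zl / x1))))
        b = min(b, max_b_degrees)              # the focus leaves the screen beyond maxB
        return max_y / max_b_degrees * b       # pixel shift of the text display

    def horizontal_alignment(pupil_shift_px, threshold_px=40.0):
        """Centre the text for small pupil movement, right-align for large rightward movement."""
        return "right" if pupil_shift_px > threshold_px else "center"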
[0031] When the eye focus position on the screen in the primary real-time face
image is detected to change,
based on the shift algorithm, the display location of user online activity
question texts are re-calculated.
[0032] S1-2, receiving prompting messages from a server and displaying the
described prompting message
205 based on the described display position information.
[0033] S1-3, acquiring a secondary face image based on the described prompting
message information for
live face detection.
[0034] In detail, the terminal user makes corresponding face gestures for the
secondary real-time face
images capturing based on the displayed prompting information. Where if the
live face detection passes,
210 the login verification is passed.
[0035] An App login verification method is provided in embodiments of the
present invention, wherein
the real-time face images are captured by a terminal to identify the eye focus
location information of the
terminal user, for determining the range of user eye focus location on the
terminal screen. Therefore, the
prompting messages of the live face detection are displayed at the user eye
focus, to save time from
215 finding prompting messages and help users for in-time responses.
Therefore, the risk identification speed
is improved, to prevent users from spending too much time on finding reminders
and multiple commands
with consequent extended risk identification time.
[0036] Embodiment 2, an App login verification method is provided in the
present invention, as shown in
Fig.7, comprising:
220 S2-1, sending a login request to the server, wherein the described
login request at least includes a
terminal identification.
[0037] S2-2, acquiring a primary real-time face image and generating display
position information based
on the described primary real-time face image. In detail, the present step
includes the following sub-steps:
S2-2a, capturing a primary real-time face image.
225 [0038] S2-2b, based on image identification techniques, identifying the
position range of the eye focus in
the described primary real-time face image.
[0039] In detail, the step S2-2b includes the following sub-steps:
S2-2b1, obtaining the positions of the front face of the primary real-time
face images, position of
pupils in eyes, and the visual shape of the pupils based on the image
identification technique.
230 [0040] S2-2b2, obtaining the angle between the front face of the
described primary real-time face image
and the terminal.
[0041] S2-2b3, calculating the eye focus position based on the position of
pupils in eyes, the visual shape
of the pupils, the primary real-time face image and the angle of terminal.
[0042] S2-2c, generating the display position information based on the
described position range of the
235 eye focus.
[0043] In detail, if the prompting message is displayed as lined texts, the
vertical position y of the
prompting message on the terminal screen is calculated. For the horizontal
position, by simulation
trainings, when the pupil movement is less than a certain distance, the texts
are centered, and when the
pupil movement is greater than a certain distance to the right, the texts are
aligned right.
240 [0044] As shown in Fig. 2, with assuming terminal user eyes are
symmetric and equal sized, when the
face in the primary real-time face image sent by the terminal is relatively to
the right of the screen, and
the eyes are within the range of the screen with the left eyeball centered as
a circle, the left eye in the
primary real-time face image is concluded to be parallel to the phone screen.
The length of the left eye is
noted as X1 and the right eye length is noted as zl. The angle A is calculated
by cosA = zl/X1. With big
245 data trainings, the maximum value of the angle A (maxA) is simulated
while eyes remain on the screen.
When A is smaller than maxA, the angle is identified as an effective angle.
For facing to the left, the
value notation is reversed with the same calculation.
[0045] With the model identifications, the human faces can be determined as
facing upwards or
downwards. When the face in the primary real-time face image is relatively
upward to the screen, with
250 two eyes within the range of the screen and eyeballs centered as a
circle, the eye focus of the described
primary real-time face image is located at the top of the screen. When the
eyes are relatively downward,
the conclusion is reversed.
[0046] When the lengths of the left and right eyes are identical, or, when the
left or right angles are
within detectable ranges, the position of the pupils on the eyeballs are
calculated. By simulation trainings,
255 with the maximum angle (LmaxB), the focus leaves from the screen as
shown in Fig. 3.
[0047] When human eyes are facing straight to the screen, the imaging formed
in the screen of the eyes
are shown in Fig. 4. When the face is static with eyes looking upwards, the
imaging formed in the screen
of the eyes are shown in Fig. 5.
[0048] As shown in Fig. 6, based on the imaging changes of pupils in the
screen, the front face-screen
260 angle in the primary real-time face image is calculated. According to
the front face-screen angle in the
primary real-time face image, the y positions of the eyes in the screen are
calculated. In the screen, the
middle point of the eye imaging moves upwards by yl, wherein the shift yl is
the position of the y
location for text display (as shown in Fig. 3);
upward angle LmaxB, cosB = zl/X1;
265 yl = maxY / LmaxB * LB; and
based on the shift algorithm, the prompting message display position on the
screen is calculated
(as shown in Fig. 7).
[0049] S2-3, receiving prompting messages from a server and displaying the
described prompting message
based on the described display position information.
270 [0050] In detail, the prompting information is sent by the server via
the following procedure:
[0051] S2-3a, searching user history activity data by the server from the
database.
[0052] The terminal history activity data is stored in the database, wherein
each terminal has a terminal
identification. The server searches user history activity data from the
database according to the
described terminal identification, such as the item purchase name from a
recent online order, name of a
275 service requested, and title key words of article or news messages.
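As an illustration of this lookup, the sketch below keys a small in-memory store on the terminal identification and turns one stored activity item into an operation question; the store, the field names, and the question wording are invented for the example and are not part of the disclosed server.

    import random

    HISTORY_DB = {
        # terminal identification -> recent user history activity data (example data only)
        "terminal-001": {
            "recent purchase": "wireless earphones",
            "service requested": "phone bill payment",
            "article keywords": "electric vehicles",
        },
    }

    def build_operation_question(terminal_id):
        """Pick one history activity item and phrase it as a prompting question."""
        history = HISTORY_DB.get(terminal_id, {})
        if not history:
            return None
        field = random.choice(list(history))
        return f"Please say the {field} from your recent activity."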
[0053] The terminal displays the prompting messages, including at least one user history activity operation question, at the location of the user eye focus, so that the user does not need to take extra time looking for the prompting messages and, consequently, less time is required for login verification.
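A minimal server-side sketch of step S2-3a and the question selection is given below; the table name, the column names and the question templates are illustrative assumptions, since the text only states that recent purchases, services or article titles are turned into verification questions.

import sqlite3

def build_history_question(db_path, terminal_id):
    """Look up the most recent activity recorded for this terminal and turn it
    into a prompting question to be displayed at the user's eye focus."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT activity_type FROM user_history "
            "WHERE terminal_id = ? ORDER BY created_at DESC LIMIT 1",
            (terminal_id,),
        ).fetchone()
    templates = {
        "purchase": "What item did you buy in your most recent order?",
        "service": "What service did you request most recently?",
        "article": "What was the title of the last article you read?",
    }
    if row is None:
        return "Describe your most recent activity in this App."
    return templates.get(row[0], "Describe your most recent activity in this App.")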
[0054] Further in detail, as a preferred application, the prompting message further includes a microphone icon to remind the current terminal user to answer the user history activity operation questions by voice.
[0055] S2-4, acquiring a secondary face image based on the described prompting
message information for
live face detection.
[0056] In detail, the secondary face image includes the user's voice answers to the user history activity operation questions. The present step further includes: acquiring voice answer information. In detail, the described voice answer information includes the voice answers given by the user to the user history activity operation questions. The camera is kept on by the terminal, and the image captured while the user answers the user history activity operation questions by voice is identified as the secondary face image. During the live face detection, after the secondary face images are acquired, the user's voice answers to the user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
[0057] Where the live face detection passes, the method proceeds to the next step S2-5.
[0058] S2-5, sending the described secondary real-time face image to the server for face recognition comparison in the server, and sending the described voice answer information to the server for determining whether the voice answer information matches the described user history activity data.
[0059] As a preferred application, the secondary real-time face image is sent to the server for face recognition comparison in the server by the following means:
based on pre-set filtering conditions, the best frame in the secondary real-time face image is sent to the server for the face recognition comparison. For example, the frame with the best quality and the eyes facing straight at the screen is selected from the secondary real-time face image and sent to the server for the face recognition comparison.
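A minimal sketch of this "best frame" filtering is shown below; frame_quality and gaze_straightness are hypothetical scoring callbacks standing in for whatever sharpness and gaze metrics an implementation would use.

def select_best_frame(frames, frame_quality, gaze_straightness):
    """Pick the single frame with the best combined image quality and most
    frontal gaze, to be uploaded for the face recognition comparison."""
    return max(frames, key=lambda f: frame_quality(f) + gaze_straightness(f))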
[0060] In detail, the server determines whether the voice answer information matches the described user history activity data by means of:
S2-5a, converting the described voice answer information into text answer
information.
[0061] S2-5b, performing fuzzy matching of the described text answer information with the described user history activity data by the server.
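A minimal sketch of steps S2-5a and S2-5b is given below: the voice answer is assumed to have been converted to text already (by any speech-to-text service), and difflib is used here as one possible fuzzy matcher; the 0.6 threshold is an assumption.

import difflib

def answer_matches_history(text_answer, history_entry, threshold=0.6):
    """Return True when the transcribed answer is similar enough to the
    stored user history activity data."""
    ratio = difflib.SequenceMatcher(
        None, text_answer.strip().lower(), history_entry.strip().lower()
    ).ratio()
    return ratio >= threshold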
[0062] Where the described face recognition passes and the voice answer information matches the described user history activity data, the login verification is passed.
[0063] The present embodiment does not restrict the order of performing the face recognition and the determination of whether the voice answer information matches the described user history activity data.
[0064] An App login method is provided in the present invention. During the account login, the terminal built-in camera is always on. By image recognition, the front face, the pupil locations in the user's eyes, the pupil shapes and the front face-screen angle of the current terminal user are determined, to identify the location range of the eye focus on the screen. The user history activity operation questions are displayed directly at the current user's eye focus. As a result, the user does not need extra time to look for the texts. The voice answer information given by the user for the history activity data is collected. The user voice answer and the face gesture are compared with the stored user history. Compared with current login verification detection methods, the problems of locating the message and of long live face detection times with complicated instructions are solved. In the meanwhile, the user history activity verification is added to the live face detection to improve account security and prevent the account or funds from being stolen.
[0065] Embodiment 3, an App login verification method is provided, as shown in Fig. 8, comprising:
S3-1, acquiring a primary real-time face image and generating display position
information based
on the described primary real-time face image.
[0066] The terminal opens a built-in camera to capture images of the current user as the primary real-time face image, and analyzes the primary real-time face image to generate the display position information.
[0067] In detail, the present step comprises the following sub-steps:
S3-1a, capturing a primary real-time face image; and
S3-1b, based on image identification techniques, identifying the position
range of the eye focus in
the described primary real-time face image.
[0068] In detail, the step S3-1b comprises the following steps:
S3-1b1, obtaining the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique.
[0069] S3-1b2, obtaining the angle between the front face of the described primary real-time face image and the terminal; and
[0070] S3-1b3, calculating the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0071] S3-1c, generating the display position information based on the
described position range of the
eye focus.
[0072] In detail, if the prompting message is displayed as lined texts, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation trainings, when the pupil movement is less than a certain distance, the texts are centered, and when the pupil movement is greater than a certain distance to the right, the texts are aligned right.
[0073] As shown in Fig. 2, assuming the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned to the right relative to the screen, and the eyes remain within the range of the screen with the left eyeball taken as the center of a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen. The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cosA = z1/X1. Through big data training, the maximum value of the angle A (maxA) while the eyes remain on the screen is simulated. When A is smaller than maxA, the angle is identified as an effective angle. When the face is turned to the left, the notation is reversed and the same calculation applies.
[0074] With the model identifications, the human face can be determined to be facing upwards or downwards. When the face in the primary real-time face image is turned upwards relative to the screen, with the two eyes within the range of the screen and the eyeballs taken as circle centers, the eye focus of the described primary real-time face image is located at the top of the screen. When the eyes are turned relatively downwards, the conclusion is reversed.
[0075] When the lengths of the left and right eyes are identical, or when the left or right angles are within the detectable ranges, the positions of the pupils on the eyeballs are calculated. Through simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0076] When the human eyes face the screen straight on, the image of the eyes formed on the screen is as shown in Fig. 4. When the face is static with the eyes looking upwards, the image of the eyes formed on the screen is as shown in Fig. 5.
[0077] As shown in Fig. 6, based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y positions of the eyes on the screen are calculated. On the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y position for text display (as shown in Fig. 3):
for the upward angle ∠B, with the trained maximum upward angle ∠maxB, cosB = z1/X1;
y1 = maxY / ∠maxB * ∠B; and
based on this shift algorithm, the prompting message display position on the screen is calculated (as shown in Fig. 7).
[0078] When the eye focus position on the screen in the primary real-time face image is detected to change, the display location of the user online activity question texts is re-calculated based on the shift algorithm.
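One possible way to realize this re-calculation is a simple polling loop, sketched below; get_focus_position, move_prompt, should_stop and the timing and shift values are hypothetical and only illustrate the idea of repositioning the text when the focus moves.

import time

def track_and_reposition(get_focus_position, move_prompt, should_stop,
                         poll_interval=0.2, min_shift=10.0):
    """Re-estimate the eye focus periodically and move the prompting text
    whenever the focus has shifted noticeably."""
    last = get_focus_position()
    while not should_stop():
        time.sleep(poll_interval)
        current = get_focus_position()
        if abs(current - last) >= min_shift:  # focus moved noticeably
            move_prompt(current)              # re-calculate and re-draw the text
            last = current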
[0079] S3-2, receiving prompting messages from a server and displaying the
described prompting message
based on the described display position information.
[0080] S3-3, acquiring a secondary face image based on the described prompting message information for live face detection.
[0081] In detail, the user makes the corresponding face gestures, based on the displayed prompting information, for the capturing of the secondary real-time face images.
[0082] As a preferred application, where the live face detection passes, the method proceeds to the next step S3-4.
[0083] S3-4, sending the App account number and password information to the server for account and password verification by the server.
[0084] In particular, the step comprises:
S3-4a, receiving the account password verification command from the server based on the login request.
[0085] S3-4b, sending the App account number and password information to the server, so that the server compares the described account password with the password information obtained from the database for the App account.
[0086] Where the password information obtained by the server from the database for the App account matches the account password sent to the server from the terminal, the App account password verification passes.
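A minimal server-side sketch of steps S3-4a and S3-4b is shown below; the table layout and the salted-hash comparison are illustrative assumptions, as the text only states that the stored and submitted password information are compared.

import hashlib
import hmac
import sqlite3

def verify_account_password(db_path, account, password):
    """Compare the submitted App account password against the password
    information stored in the database for that account."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT salt, password_hash FROM app_accounts WHERE account = ?",
            (account,),
        ).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)  # constant-time compare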
[0087] Where both the live face detection and the App account password verification pass, the login verification is passed.
[0088] An App login method is provided in the present invention. During the account login, the terminal built-in camera is always on. By image recognition, the front face, the pupil locations in the user's eyes, the pupil shapes and the front face-screen angle of the current terminal user are determined, to identify the location range of the eye focus on the screen. The user history activity operation questions are displayed directly at the current user's eye focus. As a result, the user does not need extra time to look for the texts. The voice answer information given by the user for the history activity data is collected. The user voice answer and the face gesture are compared with the stored user history. Compared with current login verification detection methods, the problems of locating the message and of long live face detection times with complicated instructions are solved. In the meanwhile, the user history activity verification is added to the live face detection to improve account security and prevent the account or funds from being stolen.
[0089] Embodiment 4, an App login verification method is provided in the present embodiment, as shown in Fig. 9, wherein the difference from embodiment 3 is that the App account password information is sent to the server for the account password verification before performing the live face detection. The present embodiment provides the same technical benefits as embodiment 3, and is not further explained in detail.
[0090] Embodiment 5, an App login verification device is provided in the present embodiment, as shown in Fig. 10, at least comprising:
a receiving module 51, configured to receive prompting messages from a server;
[0091] a processing module 52, configured to acquire a primary real-time face image and generate display position information based on the described primary real-time face image, and then to acquire a secondary face image based on the described prompting message information for live face detection.
[0092] In some preferred embodiments, the processing module 52 particularly comprises:
a capturing unit, configured to acquire the primary real-time face image and the secondary face image;
a processing unit, configured to perform the live face detection based on the described prompting message information, the primary real-time face image, and the described secondary face image; and
a displaying unit, configured to display the described prompting information based on the described display position information.
[0093] In some preferred embodiments, the processing module 52 further comprises:
an image identification unit, configured to identify the position range of the eye focus in the described primary real-time face image based on image identification techniques; in detail, to obtain the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique; and to obtain the angle between the front face of the described primary real-time face image and the terminal.
[0094] a computation unit, configured to calculate the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0095] The processing unit is further configured to generate the display
position information based on the
described position range of the eye focus.
[0096] In some preferred embodiments, the described device further comprises:
a sending module, configured to send a login request to the server, send the described secondary real-time face image to the server for face recognition comparison in the server, and send the described voice answer information to the server for determining whether the voice answer information matches the described user history activity data.
[0097] In the meanwhile, the processing module 52 further includes
a voice recording unit, configured to acquire voice answer information.
[0098] In some preferred embodiments, the sending module is further configured to send the App account number and password information to the server for account and password verification by the server.
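The module split described above can be summarized in the following Python skeleton; the method bodies are placeholders, and only the structure (modules 51 and 52, their units, and the sending module) follows the text.

class ReceivingModule:                                   # module 51
    def receive_prompting_message(self): ...

class ProcessingModule:                                  # module 52
    def capture_primary_face_image(self): ...            # capturing unit
    def capture_secondary_face_image(self): ...          # capturing unit
    def identify_eye_focus_range(self, image): ...       # image identification unit
    def compute_eye_focus_position(self, pupils, shape, angle): ...  # computation unit
    def generate_display_position(self, focus_range): ...            # processing unit
    def run_live_face_detection(self, prompt, images): ...           # processing unit
    def display_prompt(self, message, position): ...     # displaying unit
    def record_voice_answer(self): ...                   # voice recording unit

class SendingModule:
    def send_login_request(self): ...
    def send_secondary_face_image(self, image): ...
    def send_voice_answer(self, answer): ...
    def send_account_password(self, account, password): ...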
[0099] To clarify, when the App login verification method is invoked in the App login verification device of the aforementioned embodiments, the described division into functional modules is used for illustration only. In practical applications, the described functions can be assigned to different functional modules according to practical demands, wherein the internal structure of the device is divided into different functional modules to perform all or a portion of the described functions. Besides, the aforementioned App login verification device embodiment adopts the same concepts as the described App login verification method embodiments. The described device is based on the implementation of the App login verification method; for the detailed procedures, reference can be made to the method embodiments, which are not explained in further detail here.
[0100] Embodiment 6, a readable computer storage medium with computer programs stored thereon is provided in the present embodiment, wherein the App login verification methods in any of embodiments 1 to 4 are performed when the described computer programs are executed by a processor.
[0101] The readable computer storage medium provided in the present embodiment is used to perform the App login verification method of embodiments 1 to 4, provides the same benefits as the App login verification method of embodiments 1 to 4, and is not further explained in detail.
[0102] Those skilled in the art can understand that all or a portion of the aforementioned embodiments can be achieved by hardware, or by hardware driven by programs stored on a readable computer storage medium. The aforementioned storage medium can be, but is not limited to, a memory, a diskette, or a disc.
[0103] The aforementioned technical proposals can be achieved by any combinations of the embodiments in the present invention. In other words, the embodiments can be combined to meet the requirements of different application scenarios, wherein all possible combinations fall within the scope of the present invention and are not explained in further detail.
[0104] Obviously, the aforementioned embodiments are presented to illustrate the technical concept and features of the present invention and to provide explanations to those skilled in the art for further applications, and shall not limit the protection scope of the present invention. Therefore, all alterations, modifications, equivalents and improvements based on the present invention fall within the scope of the present invention.
Dessin représentatif
Une figure unique qui représente un dessin illustrant l'invention.
États administratifs


Historique d'événement

Description Date
Inactive : Octroit téléchargé 2024-06-18
Inactive : Octroit téléchargé 2024-06-18
Lettre envoyée 2024-05-14
Accordé par délivrance 2024-05-14
Inactive : Page couverture publiée 2024-05-13
Préoctroi 2024-04-05
Inactive : Taxe finale reçue 2024-04-05
Lettre envoyée 2024-03-14
Un avis d'acceptation est envoyé 2024-03-14
Inactive : Approuvée aux fins d'acceptation (AFA) 2024-03-12
Inactive : Q2 réussi 2024-03-12
Modification reçue - modification volontaire 2024-03-01
Modification reçue - réponse à une demande de l'examinateur 2024-03-01
Rapport d'examen 2024-02-27
Inactive : Rapport - CQ échoué - Mineur 2024-02-26
Remise non refusée 2023-07-28
Modification reçue - modification volontaire 2023-06-28
Modification reçue - réponse à une demande de l'examinateur 2023-06-28
Remise non refusée 2023-05-30
Lettre envoyée 2023-03-28
Offre de remise 2023-03-28
Rapport d'examen 2023-02-28
Inactive : Rapport - Aucun CQ 2023-02-24
Inactive : CIB attribuée 2022-04-22
Inactive : CIB attribuée 2022-04-22
Inactive : CIB attribuée 2022-04-22
Inactive : CIB attribuée 2022-04-22
Inactive : CIB enlevée 2022-04-22
Inactive : CIB attribuée 2022-04-22
Lettre envoyée 2022-04-21
Avancement de l'examen jugé conforme - alinéa 84(1)a) des Règles sur les brevets 2022-04-21
Lettre envoyée 2022-04-19
Inactive : Page couverture publiée 2022-04-06
Inactive : CIB attribuée 2022-04-04
Inactive : CIB en 1re position 2022-04-04
Inactive : CIB attribuée 2022-04-04
Demande publiée (accessible au public) 2022-03-30
Inactive : Taxe de devanc. d'examen (OS) traitée 2022-03-15
Exigences pour une requête d'examen - jugée conforme 2022-03-15
Inactive : Avancement d'examen (OS) 2022-03-15
Toutes les exigences pour l'examen - jugée conforme 2022-03-15
Modification reçue - modification volontaire 2022-03-15
Requête d'examen reçue 2022-03-15
Réponse concernant un document de priorité/document en suspens reçu 2021-11-30
Inactive : Rép reçue: Traduct de priorité exigée 2021-11-30
Lettre envoyée 2021-10-29
Exigences de dépôt - jugé conforme 2021-10-29
Demande de priorité reçue 2021-10-28
Exigences applicables à la revendication de priorité - jugée conforme 2021-10-28
Lettre envoyée 2021-10-28
Inactive : Pré-classement 2021-09-30
Inactive : CQ images - Numérisation 2021-09-30
Demande reçue - nationale ordinaire 2021-09-30

Historique d'abandonnement

Il n'y a pas d'historique d'abandonnement

Taxes périodiques

Le dernier paiement a été reçu le 2023-12-15

Avis : Si le paiement en totalité n'a pas été reçu au plus tard à la date indiquée, une taxe supplémentaire peut être imposée, soit une des taxes suivantes :

  • taxe de rétablissement ;
  • taxe pour paiement en souffrance ; ou
  • taxe additionnelle pour le renversement d'une péremption réputée.

Les taxes sur les brevets sont ajustées au 1er janvier de chaque année. Les montants ci-dessus sont les montants actuels s'ils sont reçus au plus tard le 31 décembre de l'année en cours.
Veuillez vous référer à la page web des taxes sur les brevets de l'OPIC pour voir tous les montants actuels des taxes.

Historique des taxes

Type de taxes Anniversaire Échéance Date payée
Taxe pour le dépôt - générale 2021-10-01 2021-09-30
Requête d'examen - générale 2025-10-01 2022-03-15
Avancement de l'examen 2022-03-15 2022-03-15
TM (demande, 2e anniv.) - générale 02 2023-10-03 2023-06-15
TM (demande, 3e anniv.) - générale 03 2024-10-01 2023-12-15
Taxe finale - générale 2021-10-01 2024-04-05
Titulaires au dossier

Les titulaires actuels et antérieures au dossier sont affichés en ordre alphabétique.

Titulaires actuels au dossier
10353744 CANADA LTD.
Titulaires antérieures au dossier
JINFEI DING
Les propriétaires antérieurs qui ne figurent pas dans la liste des « Propriétaires au dossier » apparaîtront dans d'autres documents au dossier.
Documents



Description du
Document 
Date
(aaaa-mm-jj) 
Nombre de pages   Taille de l'image (Ko) 
Revendications 2024-02-29 17 950
Dessin représentatif 2024-04-16 1 12
Revendications 2023-06-27 17 951
Description 2021-09-29 12 744
Dessins 2021-09-29 7 269
Description 2021-11-29 14 851
Revendications 2021-11-29 4 134
Dessins 2021-11-29 7 89
Abrégé 2021-11-29 1 23
Dessin représentatif 2022-04-05 1 8
Revendications 2022-03-14 17 663
Demande de l'examinateur 2024-02-26 4 151
Modification / réponse à un rapport 2024-02-29 22 829
Taxe finale 2024-04-04 3 63
Certificat électronique d'octroi 2024-05-13 1 2 527
Courtoisie - Certificat de dépôt 2021-10-28 1 565
Courtoisie - Réception de la requête d'examen 2022-04-18 1 423
Avis du commissaire - Demande jugée acceptable 2024-03-13 1 578
Modification / réponse à un rapport 2023-06-27 41 1 664
Nouvelle demande 2021-09-29 6 209
Avis du commissaire - Traduction requise 2021-10-27 2 199
Traduction reçue / Document de priorité 2021-11-29 31 1 234
Requête d'examen / Modification / réponse à un rapport / Avancement d'examen (OS) 2022-03-14 22 861
Courtoisie - Requête pour avancer l’examen - Conforme (OS) 2022-04-20 1 173
Demande de l'examinateur 2023-02-27 5 174
Courtoisie - Lettre de remise 2023-03-27 2 202