Note: The descriptions are presented in the official language in which they were submitted.
APP LOGIN VERIFICATION METHOD AND DEVICE AND COMPUTER READABLE
STORAGE MEDIUM
Technical Field
[0001] The present invention relates to the field of mobile terminal security,
in particular, to a method, a
device, and a storage medium for App login verification.
Background
[0002] With the development of the internet and the popularity of mobile phones, national regulations strictly monitor the security and the handling of personal information by smart-terminal Apps, and identity verification is required when users log in on their mobile terminals. For example, account logins via a new smart phone or from a non-residence address generally use the popular face detection for identity verification. Prompted by commands for fixed gestures such as eye blinking, head shaking, and mouth opening, users are required to change their face gestures according to the commands for live face detection. The live face detection is used for user identity verification.
[0003] The aforementioned method has the following drawbacks and limitations. The live face detection requires user face gestures in response to commands such as eye blinking, head shaking, and mouth opening. However, different Apps display the prompting messages at different locations, higher or lower on the screen. The user first needs to find the message location and then complete the face gestures according to several commands in a complicated order. Consequently, the live face detection requires a long processing time or multiple detections, slowing the risk identification process.
Summary
[0004] The aim of the present invention is to provide an App login
verification method, device, and
computer readable storage medium, to improve the login risk identification
speed.
[0005] The technical solution of the present invention is as follows. From a first aspect, an App login verification method is provided, comprising:
acquiring a primary real-time face image and generating display position information based on the primary real-time face image;
receiving a prompting message from a server and displaying the prompting message based on the display position information;
acquiring a secondary face image based on the prompting message for live face detection; and
where the live face detection passes, passing the login verification.
Date recue / Date received 2021-11-30
[0006] In some preferred embodiments, the acquisition of a primary real-time face image and the generation of display position information based on the primary real-time face image particularly comprise:
capturing a primary real-time face image;
based on image identification techniques, identifying the position range of the eye focus in the primary real-time face image; and
generating the display position information based on the position range of the eye focus.
[0007] In some preferred embodiments, the identification of the position range of the eye focus in the primary real-time face image based on image identification techniques particularly includes:
obtaining the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique;
obtaining the angle between the front face in the primary real-time face image and the terminal; and
calculating the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0008] In some preferred embodiments, before the acquisition of a primary real-time face image and the generation of display position information based on the primary real-time face image, the method further includes:
sending a login request to the server, wherein the login request at least includes a terminal identification; and
the prompting message sent by the server is acquired by:
searching user history activity data from the database by the server according to the terminal identification; and
generating user history activity operation questions by the server based on the user history activity data,
wherein the prompting message sent by the server includes at least one of the user history activity operation questions.
[0009] In some preferred embodiments, the secondary real-time face image includes the real-time face image captured when the user answers the user history activity operation questions by voice; and
the method further includes:
acquiring voice answer information;
where the live face detection passes, sending the secondary real-time face image to the server for face recognition comparison in the server, and sending the voice answer information to the server for determining whether the voice answer information matches the user history activity data; and
where the face recognition passes and the voice answer information matches the user history activity data, the login verification is passed.
[0010] In some preferred embodiments, the determination by the server of whether the voice answer information matches the user history activity data particularly includes:
converting the voice answer information into text answer information; and
performing fuzzy matching of the text answer information with the user history activity data by the server.
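The two steps above can be sketched as follows; the use of Python's difflib and the 0.6 similarity threshold are illustrative assumptions, as the embodiment does not name a particular fuzzy-matching algorithm:

```python
import difflib

def fuzzy_match(text_answer: str, history_items: list[str], threshold: float = 0.6) -> bool:
    """Return True if the text answer is close enough to any history activity item.

    The similarity ratio and the 0.6 threshold are illustrative choices; the
    embodiment only requires fuzzy matching of the converted text answer
    against the stored user history activity data.
    """
    normalized = text_answer.strip().lower()
    for item in history_items:
        ratio = difflib.SequenceMatcher(None, normalized, item.strip().lower()).ratio()
        if ratio >= threshold:
            return True
    return False

# An answer that loosely names a recently purchased item passes the match.
print(fuzzy_match("wireless earbuds", ["Wireless Earbuds Pro", "phone case"]))  # True
```

In practice the voice-to-text conversion in the preceding step would feed its output directly into such a matcher.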
[0011] In some preferred embodiments, the method further includes:
sending the App account number and password information to the server for account and password verification by the server, in particular including:
sending the App account number and password information to the server, so that the server compares the received account password with the password information obtained from the database for the App account; and
where the live face detection passes and the server finds that the received account password matches the password information obtained from the database for the App account, the login verification is passed.
[0012] From a second aspect, an App login verification device is provided, at least comprising:
a receiving module, configured to receive prompting messages from a server; and
a processing module, configured to acquire a primary real-time face image and generate display position information based on the primary real-time face image, and then acquire a secondary face image based on the prompting message for live face detection.
[0013] In some preferred embodiments, the processing module comprises:
a capturing unit, configured to acquire the primary real-time face image and the secondary face image;
a processing unit, configured to perform live face detection on the primary real-time face image and the secondary face image based on the prompting message; and
a displaying unit, configured to display the prompting information based on the display position information.
[0014] From a third aspect, a computer readable storage medium is provided, with computer programs stored thereon, wherein any of the aforementioned methods is performed when the computer programs are executed by a processor.
[0015] Compared with current technologies, the benefits of the present invention include: the terminal captures real-time face images and uses image recognition to determine the eye focus location information of the terminal user, thereby identifying the range of the position on the terminal screen on which the user's eyes are focused. Therefore, the prompting messages of the live face detection are displayed at the user's eye focus, saving the time needed to find the prompting messages and helping users respond in time. The risk identification speed is thus improved, preventing users from spending too much time finding reminders and following multiple commands, with the consequent extended risk identification time.
Brief descriptions of the drawings
[0016] For better explanation of the technical solutions of the embodiments of the present invention, the accompanying drawings are briefly introduced in the following. Obviously, the following drawings represent only a portion of the embodiments of the present invention. Those skilled in the art are able to create other drawings according to the accompanying drawings without making creative efforts.
[0017] Fig. 1 is a flow diagram of an App login verification method provided in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the algorithm for face-screen angles in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the maximum upward offset angle of simulated human eyes in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the imaging of human eyes on the screen when the eyes face the screen directly in Embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of the imaging of human eyes on the screen when the eyes look upwards while the face is kept static in Embodiment 1 of the present invention;
Fig. 6 is a schematic diagram of calculating the front face-terminal angle in the primary real-time face image in Embodiment 1 of the present invention;
Fig. 7 is a flow diagram of an App login verification method provided in Embodiment 2 of the present invention;
Fig. 8 is a flow diagram of an App login verification method provided in Embodiment 3 of the present invention;
Fig. 9 is a flow diagram of an App login verification method provided in Embodiment 4 of the present invention; and
Fig. 10 is a structure diagram of an App login verification device provided in Embodiment 5 of the present invention.
Detailed descriptions
[0018] With reference to the drawings of the embodiments of the present invention, the technical solutions are explained precisely and completely below. Obviously, the embodiments described below are only a portion of the embodiments of the present invention and cannot represent all possible embodiments. Based on the embodiments of the present invention, all other applications made by those skilled in the art without any creative work fall within the scope of the present invention.
[0019] For App logins, especially for finance-related Apps, the user identities on the mobile terminals should be verified for risk identification. Currently, live face detection is generally adopted for recognizing the user identity. The current methods include sending face detection commands to the terminal, displaying command messages such as eye blinking, head shaking, and mouth opening, and then performing live face detection of the face gestures made by users according to the commands. With this method, users need to find the command messages on the terminal. With different message display positions for different Apps, users need extra time to find the command. When a user does not find the command in time and fails to respond with the corresponding face gestures, the live face detection fails and is restarted, leading to extended identity verification time and slowed risk identification speed.
[0020] Embodiment 1: an App login verification method as shown in Fig. 1 is provided, comprising:
S1-1, acquiring a primary real-time face image and generating display position information based on the primary real-time face image.
[0021] The terminal opens a built-in camera to capture images of the current user as the primary real-time face image, and analyzes the primary real-time face image to generate display position information.
[0022] In detail, the present step comprises the following sub-steps:
S1-1a, capturing a primary real-time face image; and
S1-1b, based on image identification techniques, identifying the position range of the eye focus in the primary real-time face image.
[0023] In detail, the step S1-1b comprises the following steps:
S1-1b1, obtaining the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique;
S1-1b2, obtaining the angle between the front face in the primary real-time face image and the terminal; and
S1-1b3, calculating the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0024] S1-1c, generating the display position information based on the position range of the eye focus.
[0025] In detail, if the prompting message is displayed as lines of text, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the text is centered, and when the pupil movement is greater than a certain distance to the right, the text is aligned right.
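A minimal sketch of this alignment rule follows; the 15-pixel threshold and the symmetric left-alignment branch are assumptions, since the embodiment only specifies the centered and right-aligned cases:

```python
def prompt_alignment(pupil_shift_px: float, threshold_px: float = 15.0) -> str:
    """Choose the horizontal alignment of the prompt text from the pupil movement.

    Positive shifts mean rightward pupil movement. The threshold value and the
    left-alignment branch are illustrative assumptions.
    """
    if pupil_shift_px > threshold_px:
        return "right"
    if pupil_shift_px < -threshold_px:
        return "left"   # symmetric case, added by analogy
    return "center"

print(prompt_alignment(4.0), prompt_alignment(30.0))  # center right
```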
[0026] As shown in Fig. 2, assuming that the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen. The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) at which the eyes remain on the screen is simulated. When A is smaller than maxA, the angle is identified as an effective angle. For a face turned to the left, the notation is reversed and the calculation is the same.
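The calculation above can be sketched as follows; the function names, the 40° value used for maxA, and the use of arccos to recover A from cos A = z1/X1 are illustrative assumptions not specified in the embodiment:

```python
import math

def face_yaw_angle(left_eye_len: float, right_eye_len: float) -> float:
    """Angle A between the face and the screen plane, from cos A = z1 / X1.

    left_eye_len (X1) is the eye treated as parallel to the screen; the
    foreshortened right eye (z1) appears shorter, so z1/X1 <= 1.
    """
    ratio = max(-1.0, min(1.0, right_eye_len / left_eye_len))
    return math.degrees(math.acos(ratio))

def is_effective_angle(a_deg: float, max_a_deg: float = 40.0) -> bool:
    # maxA would come from big-data training per the embodiment; 40 degrees is illustrative.
    return a_deg < max_a_deg

a = face_yaw_angle(30.0, 26.0)  # z1/X1 ~ 0.867, so A is roughly 30 degrees
print(round(a, 1), is_effective_angle(a))
```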
[0027] With the model identifications, the human face can be determined to be facing upwards or downwards. When the face in the primary real-time face image is turned relatively upward with respect to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the primary real-time face image is located at the top of the screen. When the eyes are turned relatively downward, the conclusion is reversed.
[0028] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the positions of the pupils on the eyeballs are calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0029] When the human eyes face the screen directly, the imaging of the eyes formed on the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed on the screen is shown in Fig. 5.
[0030] As shown in Fig. 6, based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y positions of the eyes on the screen are calculated. On the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 is the y position for the text display (as shown in Fig. 3);
for the upward angle ∠B, cos B = z1/X1;
y1 = maxY / ∠maxB × ∠B; and
based on this shift algorithm, the display position of the prompting message on the screen is calculated (as shown in Fig. 7).
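The shift algorithm y1 = maxY / ∠maxB × ∠B can be sketched as follows; the function name, the pixel units, and the clamping of the angle to the effective range [0, ∠maxB] are illustrative assumptions:

```python
def prompt_y_position(angle_b_deg: float, max_b_deg: float, max_y_px: float) -> float:
    """Vertical shift y1 of the prompt text, linear in the upward gaze angle B.

    max_b_deg (angle maxB) is the trained angle at which the gaze leaves the
    screen; max_y_px (maxY) is the corresponding maximum vertical shift in pixels.
    """
    b = max(0.0, min(angle_b_deg, max_b_deg))  # clamp to the effective range
    return max_y_px / max_b_deg * b

# Example: gaze tilted 10 degrees upwards out of a 25-degree trained maximum,
# on a display region 1200 px high.
print(prompt_y_position(10.0, 25.0, 1200.0))  # 480.0
```

Re-running this function whenever the detected eye focus changes gives the re-calculated display location described in the next paragraph.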
[0031] When the eye focus position on the screen in the primary real-time face image is detected to have changed, the display location of the user online activity question text is re-calculated based on the shift algorithm.
[0032] S1-2, receiving a prompting message from a server and displaying the prompting message based on the display position information.
[0033] S1-3, acquiring a secondary face image based on the prompting message for live face detection.
[0034] In detail, the terminal user makes the corresponding face gestures, based on the displayed prompting information, for capturing the secondary real-time face images. Where the live face detection passes, the login verification is passed.
[0035] An App login verification method is provided in the embodiments of the present invention, wherein the real-time face images are captured by a terminal to identify the eye focus location information of the terminal user, for determining the range of the user's eye focus location on the terminal screen. Therefore, the prompting messages of the live face detection are displayed at the user's eye focus, saving the time needed to find the prompting messages and helping users respond in time. The risk identification speed is thus improved, preventing users from spending too much time finding reminders and following multiple commands, with the consequent extended risk identification time.
[0036] Embodiment 2: an App login verification method is provided in the present invention, as shown in Fig. 7, comprising:
S2-1, sending a login request to the server, wherein the login request at least includes a terminal identification.
[0037] S2-2, acquiring a primary real-time face image and generating display position information based on the primary real-time face image. In detail, the present step includes the following sub-steps:
S2-2a, capturing a primary real-time face image.
[0038] S2-2b, based on image identification techniques, identifying the position range of the eye focus in the primary real-time face image.
[0039] In detail, the step S2-2b includes the following sub-steps:
S2-2b1, obtaining the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique.
[0040] S2-2b2, obtaining the angle between the front face in the primary real-time face image and the terminal.
[0041] S2-2b3, calculating the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0042] S2-2c, generating the display position information based on the position range of the eye focus.
[0043] In detail, if the prompting message is displayed as lines of text, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the text is centered, and when the pupil movement is greater than a certain distance to the right, the text is aligned right.
[0044] As shown in Fig. 2, assuming that the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen. The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) at which the eyes remain on the screen is simulated. When A is smaller than maxA, the angle is identified as an effective angle. For a face turned to the left, the notation is reversed and the calculation is the same.
[0045] With the model identifications, the human face can be determined to be facing upwards or downwards. When the face in the primary real-time face image is turned relatively upward with respect to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the primary real-time face image is located at the top of the screen. When the eyes are turned relatively downward, the conclusion is reversed.
[0046] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the positions of the pupils on the eyeballs are calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0047] When the human eyes face the screen directly, the imaging of the eyes formed on the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed on the screen is shown in Fig. 5.
[0048] As shown in Fig. 6, based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y positions of the eyes on the screen are calculated. On the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 is the y position for the text display (as shown in Fig. 3);
for the upward angle ∠B, cos B = z1/X1;
y1 = maxY / ∠maxB × ∠B; and
based on this shift algorithm, the display position of the prompting message on the screen is calculated (as shown in Fig. 7).
[0049] S2-3, receiving a prompting message from a server and displaying the prompting message based on the display position information.
[0050] In detail, the prompting information is sent by the server via the following procedure:
[0051] S2-3a, searching user history activity data from the database by the server.
[0052] The terminal history activity data is stored in the database, wherein each terminal has a terminal identification. The server searches the user history activity data from the database according to the terminal identification, such as the name of an item purchased in a recent online order, the name of a service requested, and title key words of articles or news messages.
[0053] The terminal displays the prompting message, including at least one of the user history activity operation questions, at the location of the user's eye focus, so that the user does not need to take extra time looking for the prompting message, and consequently less time is required for login verification.
[0054] Further in detail, as a preferable application, the prompting message further includes a microphone icon, to remind the current terminal user to answer the user history activity operation questions by voice.
[0055] S2-4, acquiring a secondary face image based on the prompting message for live face detection.
[0056] In detail, the secondary face image includes the user's voice answers to the user history activity operation questions. The present step further includes: acquiring voice answer information. In detail, the voice answer information is the information of the user's voice answers to the user history activity operation questions. The camera is turned on by the terminal, and the image captured while the user answers the user history activity operation questions by voice is identified as the secondary face image. During the live face detection after acquiring the secondary face images, the user's voice answers to the user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
[0057] Where the live face detection passes, proceeding to the next step S2-5.
[0058] S2-5, sending the secondary real-time face image to the server for face recognition comparison in the server, and sending the voice answer information to the server for determining whether the voice answer information matches the user history activity data.
[0059] As a preferred application, the secondary real-time face image is sent to the server for face recognition comparison in the server by means of:
based on pre-set filtering conditions, sending the best frame in the secondary real-time face image to the server for the face recognition comparison. For example, the frame with the best quality, in which the eyes face the screen directly, is selected from the secondary real-time face image and sent to the server for the face recognition comparison.
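The best-frame filtering above can be sketched as follows; the frame attributes and the scoring weights are assumptions, since the embodiment only requires pre-set filtering conditions favouring image quality and a straight gaze:

```python
def select_best_frame(frames):
    """Pick the single frame to send to the server for face recognition.

    Each frame is represented here as a dict with illustrative keys:
    'sharpness' (higher is better quality) and 'gaze_offset_deg'
    (0 means the eyes face the screen directly). The 2.0 penalty
    weight on gaze offset is an illustrative choice.
    """
    def score(f):
        return f["sharpness"] - 2.0 * abs(f["gaze_offset_deg"])
    return max(frames, key=score)

frames = [
    {"id": 1, "sharpness": 70.0, "gaze_offset_deg": 12.0},
    {"id": 2, "sharpness": 65.0, "gaze_offset_deg": 1.0},
]
print(select_best_frame(frames)["id"])  # 2: slightly softer frame, but near-straight gaze
```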
[0060] In detail, the server determines whether the voice answer information matches the user history activity data by means of:
S2-5a, converting the voice answer information into text answer information.
[0061] S2-5b, performing fuzzy matching of the text answer information with the user history activity data by the server.
[0062] Where the face recognition passes and the voice answer information matches the user history activity data, the login verification is passed.
[0063] The present embodiment does not restrict the order of performing the face recognition and the determination of whether the voice answer information matches the user history activity data.
[0064] An App login method is provided in the present invention. During the account login, the terminal's built-in camera stays on. By image recognition, the front face, the pupil locations in the user's eyes, the pupil shape, and the front face-screen angle of the current terminal user are determined, to identify the location range of the eye focus on the screen. The user history activity operation questions are displayed directly at the current user's eye focus. As a result, the user does not need extra time to look for the text. The voice answer information of the user's answers about the history activity data is collected. The user's voice answer and face gestures are compared with the stored user history. Compared with current login verification detection methods, the problems of finding the message location and of a long live face detection time with complicated orders are solved. In the meanwhile, user history activity verification is added to the live face detection to improve account security and prevent accounts or funds from being stolen.
[0065] Embodiment 3: an App login verification method is provided as shown in Fig. 8, comprising:
S3-1, acquiring a primary real-time face image and generating display position information based on the primary real-time face image.
[0066] The terminal opens a built-in camera to capture images of the current user as the primary real-time face image, and analyzes the primary real-time face image to generate display position information.
[0067] In detail, the present step comprises the following sub-steps:
S3-1a, capturing a primary real-time face image; and
S3-1b, based on image identification techniques, identifying the position range of the eye focus in the primary real-time face image.
[0068] In detail, the step S3-1b comprises the following steps:
S3-1b1, obtaining the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique.
[0069] S3-1b2, obtaining the angle between the front face in the primary real-time face image and the terminal; and
[0070] S3-1b3, calculating the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the primary real-time face image and the terminal.
[0071] S3-1c, generating the display position information based on the position range of the eye focus.
[0072] In detail, if the prompting message is displayed as lines of text, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the text is centered, and when the pupil movement is greater than a certain distance to the right, the text is aligned right.
[0073] As shown in Fig. 2, assuming that the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen. The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) at which the eyes remain on the screen is simulated. When A is smaller than maxA, the angle is identified as an effective angle. For a face turned to the left, the notation is reversed and the calculation is the same.
[0074] With the model identifications, the human face can be determined to be facing upwards or downwards. When the face in the primary real-time face image is turned relatively upward with respect to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the primary real-time face image is located at the top of the screen. When the eyes are turned relatively downward, the conclusion is reversed.
[0075] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the positions of the pupils on the eyeballs are calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0076] When the human eyes face the screen directly, the imaging of the eyes formed on the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed on the screen is shown in Fig. 5.
[0077] As shown in Fig. 6, based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y positions of the eyes on the screen are calculated. On the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 is the y position for the text display (as shown in Fig. 3);
for the upward angle ∠B, cos B = z1/X1;
y1 = maxY / ∠maxB × ∠B; and
based on this shift algorithm, the display position of the prompting message on the screen is calculated (as shown in Fig. 7).
[0078] When the eye focus position on the screen in the primary real-time face image is detected to have changed, the display location of the user online activity question text is re-calculated based on the shift algorithm.
[0079] S3-2, receiving a prompting message from a server and displaying the prompting message based on the display position information.
[0080] S3-3, acquiring a secondary face image based on the prompting message for live face detection.
[0081] In detail, the user makes the corresponding face gestures, based on the displayed prompting information, for capturing the secondary real-time face images.
[0082] As a preferred application, where the live face detection passes, proceeding to the next step S3-4.
380 [0083] S3-4, sending the App account number and password information to
the server for account and
password verification by the server.
[0084] In particular, the step comprises:
S3-4a, receiving the account password verification command from the server based on the login request.
[0085] S3-4b, sending the App account number and password information to the server, so that the server compares the described account password with the password information obtained from the database for the App account.
[0086] If the password information obtained from the database for the App account by the server matches the account password sent to the server by the terminal, the App account password verification passes.
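The server-side comparison of [0085]-[0086] might look like the following sketch. The salted-PBKDF2 storage scheme and the in-memory dictionary standing in for the database are assumptions; the description only states that the submitted password is compared with the record obtained from the database:

```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes) -> bytes:
    # Illustrative storage scheme; the description does not prescribe one.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_account_password(account_db: dict, account: str, password: str) -> bool:
    """Return True when the submitted password matches the stored record."""
    record = account_db.get(account)
    if record is None:
        return False  # unknown App account number
    salt, stored_hash = record
    candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, stored_hash)
```

A design note: hashing with a per-account salt means the database never holds the plaintext password, so the comparison in [0086] is between digests rather than raw strings.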
[0087] If both the live face detection and the App account password verification pass, the login verification is passed.
[0088] An App login verification method is provided in the present invention. During the account login, the terminal's built-in camera stays on. By image recognition, the front face, the pupil locations in the user's eyes, the pupil shape, and the front face-screen angle of the current terminal user are determined, so as to identify the position range of the eye focus on the screen. The user history activity questions are displayed directly at the current eye focus of the user, so the user needs no extra time to look for the texts. The voice answer information given by the user to the history activity questions is collected, and the user's voice answer and face gestures are compared with the stored user history. Compared with current login verification detection methods, the problems of finding the message location and of long live face detection with complicated command orders are solved. In the meanwhile, the user history activity verification is added to the live face detection to improve account security and prevent accounts or funds from being stolen.
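The end-to-end flow summarized in [0088] (steps S3-1 through S3-4) can be sketched as an orchestration. The `client` object and all of its method names are hypothetical stand-ins for the camera, display, and server interactions; none of them are named in the description:

```python
def app_login(client) -> bool:
    """Run the login verification flow; both checks must pass ([0087])."""
    primary = client.capture_face_image()                # S3-1: primary image
    position = client.compute_display_position(primary)  # eye-focus based offset
    prompt = client.receive_prompt_from_server()         # S3-2: server prompt
    client.display_prompt(prompt, position)              # show text at eye focus
    secondary = client.capture_face_image()              # S3-3: secondary image
    if not client.live_face_detection(primary, secondary, prompt):
        return False
    return client.verify_account_password()              # S3-4: password check
```

Embodiment 4 below differs only in ordering: the password check would be moved before the live face detection step.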
[0089] Embodiment 4, an App login verification method is provided in the present embodiment as shown in Fig. 9, wherein the difference from embodiment 3 is that the App account password information is sent to the server for the account password verification before performing the live face detection. The present embodiment provides the same technical benefits as embodiment 3, and is not further explained in detail.
[0090] Embodiment 5, an App login verification device is provided in the present embodiment, as shown in Fig. 10, at least comprising:
a receiving module 51, configured to receive prompting messages from a server.
[0091] a processing module 52, configured to acquire a primary real-time face image and generate display position information based on the described primary real-time face image, and then acquire a secondary face image based on the described prompting message information for live face detection.
[0092] In some preferred embodiments, the processing module 52 particularly
comprises:
a capturing unit, configured to acquire the primary real-time face image and
the secondary face
image;
a processing unit, configured to perform live face detection based on the described prompting message information, the primary real-time face image, and the described secondary face image; and
a displaying unit, configured to display the described prompting information based on the described display position information.
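As a rough illustration of the module decomposition in [0090]-[0092], the receiving module and the units inside processing module 52 could be wired together as below. All class names and callables are assumptions for illustration, mirroring the "described functional module configurations are used for illustration only" caveat in [0099]:

```python
class ReceivingModule:
    """Module 51: receives prompting messages from the server."""
    def receive_prompt(self, server):
        return server.get_prompt()

class ProcessingModule:
    """Module 52: capturing unit, processing (live-detection) unit, displaying unit."""
    def __init__(self, capture_unit, detect_unit, display_unit):
        self.capture = capture_unit    # acquires face images
        self.detect = detect_unit      # performs live face detection
        self.display = display_unit    # shows the prompt at a position

    def run(self, prompt, position_info) -> bool:
        primary = self.capture()                      # primary real-time image
        self.display(prompt, position_info)           # displaying unit
        secondary = self.capture()                    # secondary image
        return self.detect(prompt, primary, secondary)
```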
[0093] In some preferred embodiments, the processing module 52 further
comprises:
an image identification unit, configured to identify the position range of the eye focus in the described primary real-time face image based on image identification techniques; in detail, to obtain the position of the front face in the primary real-time face image, the positions of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique, and to obtain the angle between the front face of the described primary real-time face image and the terminal; and
[0094] a computation unit, configured to calculate the eye focus position based on the positions of the pupils in the eyes, the visual shape of the pupils, and the angle between the front face of the primary real-time face image and the terminal.
[0095] The processing unit is further configured to generate the display
position information based on the
described position range of the eye focus.
[0096] In some preferred embodiments, the described device further comprises:
a sending module, configured to send a login request to the server, send the described secondary real-time face image to the server for the face recognition comparison in the server, and send the described voice answer information to the server so as to determine whether the voice answer information matches the described user history activity data.
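The voice-answer match that the sending module requests from the server could, at its simplest, be a normalized text comparison such as the sketch below. This is only an assumption-laden placeholder; a real system would involve speech recognition and tolerant matching, none of which the description specifies:

```python
def voice_answer_matches(transcript: str, expected_answer: str) -> bool:
    """Crude check that a transcribed voice answer matches the stored
    user history activity data (case- and whitespace-insensitive)."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    return normalize(transcript) == normalize(expected_answer)
```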
[0097] In the meanwhile, the processing module 52 further includes
a voice recording unit, configured to acquire voice answer information.
[0098] In some preferred embodiments, the sending module is further configured to send the App account number and password information to the server for the account and password verification by the server.
[0099] To clarify, when the App login verification method is invoked by the App login verification device in the aforementioned embodiments, the described functional module configurations are used for illustration only. In practical applications, the described functions can be assigned to different functional modules according to practical demands, wherein the internal structure of the device is divided into different functional modules to perform all or a portion of the described functions. Besides, the aforementioned App login verification device in the embodiment adopts the same concepts as the described App login verification method embodiments. The described device is based on the implementation of the App login verification method, and the detailed procedures can be referred to the method embodiments and are not explained in further detail.
[0100] Embodiment 6, a readable computer storage medium with computer programs stored thereon is provided in the present embodiment, wherein the App login verification methods in any of embodiments 1-4 are performed when the described computer programs are executed by a processor.
[0101] The readable computer storage medium provided in the present embodiment is used to perform the App login verification method in embodiments 1-4, with the same benefits provided by the App login verification method of embodiments 1-4, and is not further explained in detail.
[0102] Those skilled in the art can understand that all or a portion of the aforementioned embodiments can be achieved by hardware, or by hardware driven by programs stored on a readable computer storage medium. The aforementioned storage medium can be, but is not limited to, memory, diskettes, or discs.
[0103] The aforementioned technical proposals can be achieved by any combinations of the embodiments in the present invention. In other words, the embodiments can be combined to meet the requirements of different application scenarios, wherein all possible combinations fall within the scope of the present invention and are not explained in further detail.
[0104] Obviously, the aforementioned embodiments merely represent the technical concept and features of the present invention, providing explanations to those skilled in the art for further applications, and shall not limit the protection scope of the present invention. Therefore, all alterations, modifications, equivalents, and improvements of the present invention fall within the scope of the present invention.