Patent Summary 3147026


Availability of the Abstract and Claims

Differences between the text and image of the Claims and Abstract depend on when the document is published. The texts of the Claims and Abstract are displayed:

  • when the application is open to public inspection;
  • when the patent is issued (grant).
(12) Patent Application: (11) CA 3147026
(54) French Title: SYSTEME D'ANNEAU DE DETECTION DES GESTES NATURELS POUR UNE COMMANDE D'INTERFACE UTILISATEUR A DISTANCE ET L'ENTREE DE TEXTE
(54) English Title: NATURAL GESTURE DETECTING RING SYSTEM FOR REMOTE USER INTERFACE CONTROL AND TEXT ENTRY
Status: Compliant application
Bibliographic Data
Abstracts

English Abstract


A method for using finger- or hand-mounted motion sensing platforms connected wirelessly to a main computing device to reliably recognize complex hand and finger movements is shown. This allows the user to control the computing device or other devices with intuitive gestures, and to enter characters using finger handwriting. A fitness tracking mode is also supported. The sensor platform and main computing device use a combination of conventional signal processing and deep neural networks to process data, determining the presence and classification of gesture motions and character entry. Preset tap patterns switch the device among gesture recognition, character recognition, fitness tracking, and sleep mode, while an inactivity timer also engages sleep mode. For character recognition, the invention displays the top detected character candidates and allows the user to select the correct one via swipe gestures, both to accommodate errors in character detection and to help build a library of recognized characters for improving future operation.

Claims

Note: The claims are shown in the official language in which they were submitted.


Claims
1. A portable wireless electronic system allowing gesture-controlled functionality, comprising:
   a. a battery-powered wireless motion-detecting sensor platform
      i. consisting of a motion-detecting sensor, a digital microprocessor, an electronic battery, a power management control unit, a digital memory storage unit, a wireless radio transmitter, and a digital communications bus for the above to communicate
      ii. with a physical apparatus to secure the device to a user's finger or hand
      iii. with a wireless bidirectional link to the main computing device, allowing
         - the sensor platform to send both digitally processed and original data from the motion-detecting sensor
         - the main computing device to change modes of operation, and to update or modify the firmware or other data storage on the sensor platform
   b. a main computing device, a compact portable electronic device with digital storage, a digital microprocessor, a wireless connection to other smart devices, and a means to display information visually to the user via either a physical display on the main computing device, a wireless link to a display device, or a wireless link to a device connected to a display device
2. The combination in claim 1, where the motion data processing path comprises:
   a. the motion-detecting sensor on the sensor platform sending data to the microprocessor on the sensor platform, where it is processed using a digital algorithm
   b. a digital algorithm consisting of a data preprocessing block which feeds data to parallel sets of sequential data processing blocks, which can consist of digital signal processing filters, convolutional neural networks, recurrent neural networks, or fully connected neural network layers
   c. a decision-making algorithm block at the end of the parallel sets of processing blocks, where the processing blocks may branch into parallel paths or combine into a common block
   d. the number, types, and coefficients of the data processing blocks, as well as the algorithms and coefficients of the decision-making block, being adjustable, programmable, and reconfigurable based on a library of labelled, prerecorded motion sensor data corresponding to known and classified movements
   e. the motion-detecting apparatus sending data over the communications bus to the digital microprocessor, which processes it according to a preprogrammed digital algorithm stored on the digital memory storage unit, before sending the data to the main computing device, which
Date Recue/Date Received 2022-01-28

3. The combination in claim 2, where the main computing device receives processed or unprocessed sensor data from the sensor platform over the wireless link and processes it with a digital algorithm:
   a. a digital algorithm consisting of a data preprocessing block which feeds data to parallel sets of sequential data processing blocks, which can consist of digital signal processing filters, convolutional neural networks, recurrent neural networks, or fully connected neural network layers
   b. a decision-making algorithm block at the end of the parallel sets of processing blocks, where the processing blocks may branch into parallel paths or combine into a common block
   c. the number, types, and coefficients of the data processing blocks, as well as the algorithms and coefficients of the decision-making block, being adjustable, programmable, and reconfigurable based on a library of labelled, prerecorded motion sensor data corresponding to known and classified movements
   d. the determinations made by the decision-making blocks being used by the digital algorithm, stored in the digital memory on the main computing device and executed by its digital microprocessor, to determine the presence and type of relevant hand and finger gestures represented by the data received from the sensor platform
   e. the determinations made by the decision-making block enabling the microprocessor to take actions on the main computing device itself, or on the plurality of smart devices that the main computing device is connected to, such as remotely sending or receiving information or executing commands.
4. The combination in claim 1, wherein the operational mode of the system comprises four different modes of operation:
   a. where the sensor platform and main computing device are able to change operational modes through the detection of predetermined tap patterns by means of the sensor platform's motion detector, in addition to context changes through inactivity timers, context-sensitive user interface cues, or an external control
   b. a low-power sleep mode, comprising having the sensor platform operate in a low-energy state, with the microprocessor operating at a reduced operational frequency, the radio transmitter operational for short and infrequent periods of time, and the motion sensor operating at a reduced sample rate, with a communicating indicator if a predetermined tapping pattern is received to indicate a transition into a different operational mode of the system
   c. a gesture detection mode, comprising the sensor and processing combination of the sensor platform and main computing device recording motion data from the sensor platform, making use of the data processing stacks on the sensor platform and the main computing device to detect the presence of and determine the type of gestures, where the main computing device is able to take actions in response to detected gestures, such as changing the mode of operation, manipulating the user interface as displayed by the main computing device, or manipulating the user interface of a remote connected smart device (by means of the bidirectional communications link)
   d. a text entry mode, comprising the sensor and processing combination of the sensor platform and main computing device recording motion data from the sensor platform, making use of the data processing stacks on the sensor platform and the main computing device to detect the presence of and determine the most likely types of character entry motions
   e. a fitness tracking mode, comprising the sensor and processing combination of the sensor platform and main computing device processing motion data to detect steps, motion, or other fitness activity.
5. The combination in claim 4, wherein upon the detection of a character motion and upon the determination of the most likely types of character entry motions:
   a. the user interface of the main computing device displays the most likely character candidates and temporarily changes the mode of operation to the gesture detection operational mode to detect selection gestures
   b. the main computing device is able to take actions in response to detected character entry motions in combination with the selection gesture, such as the insertion, deletion, or manipulation of text characters or of the character entry point on the user interface as displayed by the main computing device, or the insertion, deletion, or manipulation of text characters or of the character entry point on the user interface of a remote connected smart device (by means of the bidirectional communications link)
   c. the main computing device is able to take actions in response to detected character entry motions in combination with the selection gesture by adding the detected character motion and the selected character to a library of labelled gesture and character data, to be used in the improvement of future character motion detection.

Description

Note: The descriptions are shown in the official language in which they were submitted.


Description
This invention is in the field of wearable electronic devices. It deals with an interface for controlling user interfaces for different types of electronic devices. These can include everyday portable electronic devices like a smartphone, tablet, laptop, or desktop computer. Another type of device is displays, such as television screens, smart TVs, and TV set-top boxes like Apple TV, Google Chromecast, Amazon Fire stick, or Roku TV stick. Another type is smart home electronics such as robot vacuum cleaners, smart blinds, smart home lighting, or smart thermostats. Still another type is virtual reality or augmented reality headsets. The use of this invention is not limited to these device categories, which are listed to give examples of applications for this invention.
Consumer electronic devices have advanced greatly in the past decade, taking on smaller, low-energy form factors, with increased capabilities made possible by wireless technologies such as Bluetooth, IEEE 802.11 ("Wifi"), and cellular networks. However, methods for controlling such devices are still limited to predominantly tactile methods such as keyboards, mice, physical buttons, touchscreens, or pointers. Control when direct proximity is not available is limited to audio cues such as voice commands, or rudimentary hand gestures such as waving or clapping. This invention expands upon these methods by using a finger-mounted wearable wireless accelerometer sensor to relay precise and complex finger gestures and interpret them using a combination of conventional signal processing and deep neural networks. This allows a wide array of fine, natural, and intuitive gestures to be detected and classified with high proficiency, allowing a high degree of device control, including operation of devices and text entry. The invention allows for intuitive text entry and adds a method for easily correcting errors in text entry, while at the same time improving the gesture recognition capabilities of the invention.
The concept of the invention is shown in Figure 1. It makes use of a sensor platform, a compact wireless device with a built-in motion detector (such as an accelerometer or gyroscope) worn on the user's finger or hand, such as a ring, a bracelet, or an implanted sensor, that can communicate wirelessly with the main computing device, such as a laptop computer or smartphone, using a wireless digital communications protocol such as Wifi or Bluetooth. The invention consists of the software operating on the wearable device and on the computing device, and the integration of the two, which allows the sensor platform and the main computing device to work together to recognize complex hand and finger motions, allowing it to take the user's finger inputs and take actions accordingly (open applications, insert or modify text, control an additional remote device, etc.).
The general operation of the invention has the wearable finger sensor platform's accelerometer data sent to a low-power microprocessor on the wearable sensor platform. The low-power microprocessor takes this data and can perform some basic signal processing and decision making with it, such as waking from sleep mode or changing to another mode of operation. When appropriate, a stream of data, which can include the accelerometer data and data derived from it (such as filtered data, or data transformed into the frequency domain or via wavelet transformation), is sent to the main computing device, where more advanced signal processing takes place. This signal processing involves multiple layers of conventional signal processing (such as basic time- or frequency-domain filters, wavelet transforms, etc.) and deep neural networks (using algorithms such as convolutional neural networks, recurrent neural networks such as LSTM or GRU, as well as fully connected neural networks). A high-level decision is made based on the processing of this data, which can be to determine whether the accelerometer data represents a valid gesture or character that should be acted upon, performing the necessary actions based on the gesture, and learning from previous gestures to improve the future functionality of the device.
The hardware of the sensor platform is shown in Figure 2. It is a small finger- or hand-worn device, which contains several electronic components, such as a motion detector (such as an accelerometer or gyroscope), a microprocessor, a power management circuit, on-board data storage (for storing data, access codes, and executable instructions), as well as a radio transmitter for sending the measured motion data and for operations such as firmware updates and device pairing.
The motion data is sent to the main computing device, where it is analyzed and decisions are made based on the nature of the data. This includes the detection and classification of gestures and characters, with actions taken accordingly (such as the insertion of a character in a line of text, or the operation of a smart device). In order to make such a determination given the complex nature of the motion data, a deep processing stack is required. Parts of this stack may exist on the sensor platform and the main computing device.
A sample of the data processing stack that may be on either or both of the sensor platform and the main computing device is shown in Figure 3. Motion data taken from the motion sensor on the sensor platform is given to a preprocessing layer, which can perform basic operations such as scaling, interpolation, decimation, and normalization. Additional data streams may be derived from this stream as well, which can include differentiation, integration, weighted averaging, etc. This data is fed into a processing stack which performs filtering operations on the data. These could be conventional signal processing filters such as IIR or FIR filters, FFT transforms, or non-linear operations (such as exponentiation, power, logarithmic, etc.). Alternatively, they can be neural network layers making up a deep neural network, which could include convolutional neural network (CNN) layers, recurrent neural network (RNN) layers such as Long Short-Term Memory (LSTM) or GRU units, as well as fully connected layers. These layers are given numerical weights, bias values, and a non-linear activation function, whose values are determined through mathematical training operations based on a large library of measured and labelled motion data (including gesture and character samples). Many of these processing layers are present to process the data, and the layers are connected one to the other until they reach the decision-making layer, which takes the result of the previous layers to make a determination based on them (such as the detection or classification of a gesture or character). These layers may be branched, interconnected, and combined as needed, which is determined by a mathematical training operation using the previously mentioned library of labelled motion data. To improve the efficiency of the library, lower-level layers can be shared to make different types of determinations, while the upper levels can remain distinct.
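The pipeline above (preprocessing, derived streams, parallel processing branches, and a final decision layer) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the FIR branch, and the energy-plus-softmax decision block are assumptions standing in for the trained networks described in the text.

```python
import numpy as np

def preprocess(samples):
    """Normalize raw accelerometer samples and derive extra streams
    (first difference and cumulative sum), as described above.
    All names here are illustrative, not from the patent."""
    x = np.asarray(samples, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-9)          # normalization
    return {
        "raw": x,
        "diff": np.diff(x, prepend=x[0]),          # differentiation
        "integ": np.cumsum(x),                     # integration
    }

def fir_branch(stream, taps):
    """A conventional-DSP branch: a simple FIR filter."""
    return np.convolve(stream, taps, mode="same")

def decision_layer(features, weights):
    """Toy decision block: softmax over weighted per-branch energies.
    In the invention the weights would come from training on the
    labelled motion library."""
    energies = np.array([np.sum(f ** 2) for f in features])
    scores = energies @ weights                    # (branches,) @ (branches, classes)
    e = np.exp(scores - scores.max())
    return e / e.sum()                             # class probabilities
```

A gesture classifier would replace the FIR branch with trained CNN/RNN layers, but the branching-then-combining shape of the computation is the same.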
The invention allows for four different modes of operation. These include a gesture detection mode, a character detection mode, a fitness tracking mode, and a low-power sleep mode. Figure 4 shows a state diagram of these four modes and how they may change from one state to another. One way the state changes may occur is if predetermined tap patterns are entered. Once detected, tap pattern "G" will cause the device to enter gesture detection mode, tap pattern "C" will cause it to enter character detection mode, tap pattern "F" will cause it to enter fitness tracking mode, and tap pattern "S" will cause it to enter low-power sleep mode. Other means of triggering transitions to other modes of operation are possible. The device sleep mode can be triggered by a long enough period of inactivity, while context-sensitive cues can cause the device to move from character to gesture recognition mode (for instance, once a character is detected, several candidate characters will be displayed for the user to select one, with the device changed to the gesture recognition mode to allow the user to use directional gestures to select the desired character).
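The mode transitions described above amount to a small state machine. The sketch below models it under stated assumptions: the tap patterns "G"/"C"/"F"/"S" come from the description, while the inactivity threshold value and the class names are illustrative.

```python
from enum import Enum, auto

class Mode(Enum):
    SLEEP = auto()
    GESTURE = auto()
    CHARACTER = auto()
    FITNESS = auto()

# Tap patterns from the description map directly to modes.
TAP_TO_MODE = {"G": Mode.GESTURE, "C": Mode.CHARACTER,
               "F": Mode.FITNESS, "S": Mode.SLEEP}

INACTIVITY_LIMIT_S = 300  # illustrative threshold, not from the patent

class ModeController:
    def __init__(self):
        self.mode = Mode.SLEEP
        self.idle_seconds = 0

    def on_tap_pattern(self, pattern):
        """A detected tap pattern switches the operational mode."""
        if pattern in TAP_TO_MODE:
            self.mode = TAP_TO_MODE[pattern]
            self.idle_seconds = 0

    def on_tick(self, seconds):
        """Inactivity timer: fall back to sleep after prolonged idleness."""
        self.idle_seconds += seconds
        if self.idle_seconds >= INACTIVITY_LIMIT_S:
            self.mode = Mode.SLEEP
```

Context-sensitive transitions (such as the temporary switch from character to gesture mode during candidate selection) would call `on_tap_pattern`-style setters from the user interface rather than from the tap detector.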
In the low-power sleep mode, most of the device functions, such as the radio transmitter, are disabled to save power and prolong battery life. Limited functionality exists to detect tap patterns to wake the sensor platform and change it to another mode of operation.
In the fitness tracking mode, the device acts as a basic step counter to track steps or distance travelled. The motion detector on the sensor platform is used to monitor the motions of the user's arm and body to track motions associated with fitness activity, such as walking or jogging. The signal processing block on the sensor platform can provide preliminary signal processing so as to keep track of detected steps while minimizing the number of radio transmissions to save power.
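The on-sensor preliminary processing for step counting could be as simple as the peak detector sketched below. This is an assumption about one plausible approach, not the patent's algorithm; the threshold and refractory values are illustrative.

```python
def count_steps(accel_mags, step_thresh=1.2, refractory=5):
    """Naive peak-based step counter: a step is a magnitude crossing
    above `step_thresh`, with a refractory window of samples so one
    stride is not double-counted. Both parameters are illustrative."""
    steps, cooldown = 0, 0
    for m in accel_mags:
        if cooldown > 0:
            cooldown -= 1
        elif m > step_thresh:
            steps += 1
            cooldown = refractory
    return steps
```

Running this on the sensor platform lets the radio report only an occasional step total instead of streaming raw samples, which is what saves power.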
In the gesture detection mode, gestures are detected as shown in the chart in Figure 5, which allows the user to perform many different types of actions, such as the manipulation of remote user interfaces without the need for physical contact. This is useful when contact is impractical, impossible, or undesirable (such as a screen that is far away or only exists virtually in a headset, or if the user's hands are full, wet, or dirty). The invention can detect a gesture when a predetermined tap pattern is detected to start listening for a gesture. The gesture data will be the accelerometer data (direct and preprocessed) up until the end of the gesture is detected (such as when the accelerometer detects no further motion of the wearable sensor platform). This data is sent to the main computing device, which listens for this gesture and acts accordingly, such as opening or closing an application or dialog box, changing an active selection, or interacting with a UI element such as a slider. The gestures include a set to recognize commonly performed activities: a gesture to accept, enter, or start; a gesture for cancellation or exit; a set of directional swipe gestures for selection; and rotational gestures for incrementing or decrementing values.
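The end-of-gesture condition ("the accelerometer detects no further motion") can be sketched as a quiet-run detector. The thresholds are illustrative assumptions, not values from the patent.

```python
def capture_gesture(samples, quiet_thresh=0.05, quiet_len=10):
    """Collect samples until the sensor goes quiet: the gesture ends
    once `quiet_len` consecutive samples stay below `quiet_thresh`
    in magnitude. Both parameters are illustrative assumptions."""
    captured, quiet = [], 0
    for s in samples:
        captured.append(s)
        quiet = quiet + 1 if abs(s) < quiet_thresh else 0
        if quiet >= quiet_len:
            return captured[:-quiet_len]   # drop the trailing silence
    return captured
```

The captured window (direct and preprocessed) is what would then be sent over the wireless link to the main computing device for classification.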
The text entry mode of operation operates similarly to the gesture detection mode of operation. It is shown in the chart in Figure 6. To make the invention as intuitive to use as possible, the use of a virtual keyboard is not desirable. Instead, when the user wishes to enter a character (such as a letter, number, or punctuation symbol), the user should draw the character directly with their fingers, as if they were writing it in the air. The invention can detect a character when a predetermined tap pattern is detected to start listening for a character. The character data will be the accelerometer data (direct and preprocessed) up until the end of the gesture is detected (such as when the motion detector detects no further motion of the wearable sensor platform). This data is sent to the main computing device, which listens for this gesture and acts accordingly. Due to the larger set of characters that can be recognized, there will often be a large margin of error in selecting the correct character. To accommodate this, after the character entry, the invention will display several top choices based on the character detection algorithm and allow the user to choose the desired one with a quick swipe gesture (similar to what was described in the gesture detection mode of operation). In addition to giving the user an opportunity to correct errors, this also gives the system a chance to save the recorded character entered and the user's actual intent to a library of prerecorded characters, for use in improving the character detection algorithm in the future for all users in general, and for the current user in particular (to recognize their handwriting style, for instance).
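The candidate-display-and-confirm loop described above can be sketched as follows. All names here are illustrative; the classifier scores would come from the data processing stack, and the swipe index from the gesture detection mode.

```python
def top_candidates(scores, k=3):
    """Rank detected characters by classifier score; show the top k
    to the user for selection."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

class CharacterLibrary:
    """Stores (motion data, confirmed character) pairs so future
    training can adapt to the user's handwriting style."""
    def __init__(self):
        self.samples = []

    def confirm(self, motion_data, candidates, swipe_index):
        """The user's swipe gesture selects one candidate; the pair is
        saved to the labelled library and the character is returned
        for insertion into the text."""
        chosen = candidates[swipe_index]
        self.samples.append((motion_data, chosen))
        return chosen
```

The same confirmation step thus serves both purposes named in the text: error correction now, and a growing labelled library for retraining later.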

Representative Drawing

A single figure which represents the drawing illustrating the invention.

Administrative Status

2024-08-01: As part of the transition to Next-Generation Patents (NGP), the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a better understanding of the status of the application/patent presented on this page, the Disclaimer section, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Compliance requirements deemed met 2024-03-11
Letter sent 2024-01-29
Application published (open to public inspection) 2023-07-28
Inactive: IPC assigned 2022-05-19
Inactive: IPC assigned 2022-05-19
Inactive: First IPC assigned 2022-05-19
Filing requirements deemed compliant 2022-02-14
Letter sent 2022-02-14
Inactive: QC images - scanning 2022-01-28
Small entity declaration deemed compliant 2022-01-28
Application received - regular national 2022-01-28
Inactive: Pre-classification 2022-01-28

Abandonment History

There is no abandonment history

Fee History

Fee type Anniversary Due date Date paid
Filing fee - small 2022-01-28 2022-01-28
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
JERRY LAM
ZHE JIANG
Past Owners on Record
None
Past owners that do not appear in the "Owners on Record" list will appear in other documents on file.
Documents



Document Description  Date (yyyy-mm-dd)  Number of pages  Image size (KB)
Representative drawing  2023-12-20  1  6
Description  2022-01-27  4  339
Claims  2022-01-27  3  218
Drawings  2022-01-27  4  61
Abstract  2022-01-27  1  32
Courtesy - Filing certificate  2022-02-13  1  568
Commissioner's notice - maintenance fee for a patent application not paid  2024-03-10  1  552
New application  2022-01-27  5  141