Patent Summary 3084890

(12) Patent Application: (11) CA 3084890
(54) French Title: SYSTEME AUDIO SENSIBLE A LA VOIX, ET PROCEDE ASSOCIE
(54) English Title: VOICE AWARE AUDIO SYSTEM AND METHOD
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G10L 21/00 (2013.01)
  • G10L 21/0364 (2013.01)
  • G10L 25/78 (2013.01)
  • G10L 25/90 (2013.01)
  • G10L 25/93 (2013.01)
(72) Inventors:
  • DEGRAYE, TIMOTHY (Switzerland)
  • HUGUET, LILIANE (Switzerland)
(73) Owners:
  • HED TECHNOLOGIES SARL
(71) Applicants:
  • HED TECHNOLOGIES SARL (Switzerland)
(74) Agent: STIKEMAN ELLIOTT S.E.N.C.R.L.,SRL/LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2018-12-07
(87) Open to Public Inspection: 2019-06-13
Examination requested: 2023-12-06
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Application Number: PCT/IB2018/001503
(87) PCT International Publication Number: IB2018001503
(85) National Entry: 2020-06-05

(30) Application Priority Data:
Application No.  Country/Territory           Date
16/213,489       (United States of America)  2018-12-07
62/595,627       (United States of America)  2017-12-07

Abstracts

French Abstract

L'invention concerne un système audio sensible à la voix, et un procédé permettant à un utilisateur portant un casque d'écoute d'être sensible à un environnement sonore extérieur pendant qu'il écoute de la musique ou une autre source audio. Une zone de sensibilité sonore réglable permet à l'utilisateur de ne pas entendre des voix distantes. Le son extérieur peut être analysé dans un domaine fréquentiel afin de sélectionner une fréquence candidate oscillante, et dans un domaine temporel afin de déterminer si la fréquence candidate oscillante constitue le signal d'intérêt. Si le signal dirigé vers le son extérieur est déterminé comme étant un signal d'intérêt, le son extérieur est mélangé à l'audio provenant de la source audio.


English Abstract

A voice aware audio system and a method for a user wearing a headset to be aware of an outer sound environment while listening to music or any other audio source. An adjustable sound awareness zone gives the user the flexibility to avoid hearing far distant voices. The outer sound can be analyzed in a frequency domain to select an oscillating frequency candidate and in a time domain to determine if the oscillating frequency candidate is the signal of interest. If the signal directed to the outer sound is determined to be a signal of interest, the outer sound is mixed with audio from the audio source.

Claims

Note: The claims are shown in the official language in which they were submitted.


What is claimed is:
1. A voice aware audio system comprising:
a headphone configured to receive audio from an audio source;
at least one microphone associated with the headphone, the at least one microphone configured to detect an outer sound in an outer sound environment and to generate a signal directed to the outer sound; and
an analyzer module for determining if the signal directed to the outer sound is a signal of interest,
wherein if the signal directed to the outer sound is determined to be a signal of interest, the outer sound is mixed with the audio from the audio source.
2. The voice aware audio system of claim 1 wherein the analyzer module is configured to analyze the signal directed to the outer sound in a frequency domain to select an oscillating frequency candidate and in a time domain to determine if the oscillating frequency candidate is the signal of interest.
3. The voice aware audio system of claim 2 wherein the analyzer module receives the signal directed to the outer sound in an input buffer, the analysis in the frequency domain uses an FFT of the signal in the input buffer to generate an input frame, and the analysis in the time domain recursively uses sub-frames within the input frame.
4. The voice aware audio system of claim 3 wherein the analysis in the frequency domain is performed with Wiener entropy or Wiener entropy simplified.
5. The voice aware audio system of claim 3 wherein the analysis in the time domain is performed with a pitch estimation or YIN algorithm.
6. The voice aware audio system of claim 1 wherein the analyzer module further comprises a hangover module for determining speech presence or speech absence in the signal of interest determined in the time domain.
7. The voice aware system of claim 2 wherein the analysis in the frequency domain is used in a noise reduction algorithm to estimate a noise level in the outer sound environment and to tune the voice aware audio system based on the noise level.
8. The voice aware audio system of claim 1 wherein an adjustable sound awareness zone is defined around the headphone, the adjustable sound awareness zone having one or more tuning zones, and the outer sound is determined to be a signal of interest when the outer sound is within a predetermined one of the one or more tuning zones.
9. The voice aware audio system of claim 1 wherein the audio is music.
10. The voice aware audio system of claim 1 wherein the headphone comprises an array of microphones, the array of microphones being arranged to attenuate or amplify audio coming from a selected direction, the microphones of the array of microphones being pointed in various directions to achieve a 360° audio image of an environment around a user.
11. The voice aware audio system of claim 10 wherein an adjustable sound awareness zone is defined around the headphone, the adjustable sound awareness zone having one or more tuning zones, and the outer sound is determined to be a signal of interest when the outer sound is within a predetermined one of the one or more tuning zones, wherein the microphone array removes signals coming from non-desired directions and is directed to a direction of interest.
12. A method for a user wearing a headphone to be aware of an outer sound environment, the headphone configured to receive audio from an audio source, comprising the steps of:
a. detecting an outer sound in the outer sound environment with at least one microphone associated with the headphone;
b. generating a signal directed to the outer sound;
c. determining if the signal directed to the outer sound is a signal of interest; and
d. if the signal directed to the outer sound is determined to be a signal of interest, mixing the outer sound with the audio from the audio source.
13. The method of claim 12 wherein in step b. the outer sound is analyzed in a frequency domain to select an oscillating frequency candidate and in a time domain to determine if the oscillating frequency candidate is the signal of interest.
14. The method of claim 13 wherein the analysis in the frequency domain is performed with Wiener entropy or Wiener entropy simplified.
15. The method of claim 13 wherein the analysis in the time domain is performed with a pitch estimation or YIN algorithm.
16. The method of claim 13 further comprising the step of:
determining speech presence or speech absence in the signal of interest determined in the time domain.
17. The method of claim 12 further comprising the steps of:
estimating a noise level in the outer sound environment, and
step c. includes tuning based on the noise level to determine if the signal directed to the outer sound is a signal of interest.
18. The method of claim 12 further comprising the steps of:
defining an adjustable sound awareness zone around the headphone, the adjustable sound awareness zone having one or more tuning zones, and in step c. the outer sound is determined to be a signal of interest when the outer sound is within a predetermined one of the one or more tuning zones.
19. The method of claim 12 wherein the at least one microphone is an array of microphones, and wherein after a sound is detected in step a., the method further comprises the step of localizing a direction of the sound and steering the array of microphones towards the determined localized direction.
20. The method of claim 19 further comprising the steps of:
e. determining if the signal in step b. is a noisy signal;
f. when the noisy signal is determined, generating a clean signal;
g. determining the signal in step c. from a first and a second direction; and
h. measuring a similarity of the signal from the first and second directions,
wherein if it is determined in step h. that the signal from the first direction and the signal from the second direction are similar, mixing the signal in step d.
21. The method of claim 18 further comprising the step of removing all signals coming from non-desired directions in the adjustable sound awareness zone.
22. The method of claim 12 wherein the audio is music, further comprising the steps of:
estimating a spectral density power of the music;
estimating a spectral density power of speech in the outer sound;
estimating a fundamental frequency of the speech to determine speech formants;
computing an energy ratio between the speech formants and the spectral density power of the music to determine voice-to-music ratios (VMR) for each spectral band; and
applying an FFT-based equalizer (EQ) onto the spectral bands with a predetermined VMR.
23. A computer program product implemented in a non-transitory computer readable storage medium for determining a sound in an outer sound environment at a headphone, the headphone configured to receive audio from an audio source, the program comprising program code for detecting an outer sound in the outer sound environment with at least one microphone associated with the headphone, program code for generating a signal directed to the outer sound, program code for determining if the signal directed to the outer sound is a signal of interest, and program code for mixing the outer sound with the audio from the audio source when the signal of the outer sound is determined to be of interest.
24. The computer program of claim 23 wherein the outer sound is analyzed in a frequency domain to select an oscillating frequency candidate and in a time domain to determine if the oscillating frequency candidate is the signal of interest.

Description

Note: The descriptions are shown in the official language in which they were submitted.


VOICE AWARE AUDIO SYSTEM AND METHOD
Background of the Invention
Field of the Invention
The present invention relates to a system and method for a user wearing a headset to be aware of an outer sound environment while listening to music or any other audio source.
Description of Related Art
Voice activity detection (VAD), also known as speech activity detection or speech detection, is a technique used in speech processing in which the presence or absence of human speech is detected. Various VAD algorithms are known. Conventional algorithmic solutions used for VAD are known to suffer from the problem of a poor detection score when the input signal is noisy.
VAD plays a role in many speech processing applications including speech recognition, speech compression and noise reduction systems. In Fig. 1, the basic principle of conventional VAD is depicted, which consists of extracting features from a framed input signal and then, on the basis of information gathered from the last few frames, adapting a multi-dimension threshold and proceeding to a comparison of the features with this threshold in order to decide whether the frame is speech or noise. In general, there is a final stage of decision hangover whose objective is to ensure a continuous speech stream which includes the normal short silent periods that happen in a sentence. Frame lengths are in general chosen to be between 10 and 40 ms in duration, as this corresponds to a time window where speech can be considered statistically stationary.
A criterion to detect speech is to look for voiced parts, as those are periodic and have a mathematically well-defined structure that can be used in an algorithm. Another approach is to use a statistical model for speech, estimate its parameters from acquired data samples and use the classic results of decision theory to arrive at the frame speech/noise classification.
Fig. 2 illustrates techniques which have been used in time-domain methods to detect speech. The techniques include short-time energy, zero-crossing rate, cross-correlation, periodicity measure, linear prediction analysis and pitch estimation. Fig. 3 illustrates techniques which have been used in frequency-domain methods to detect speech. The techniques include sub-band energies, Wiener entropy, cepstrum, energy entropy, harmonicity ratio and spectrum peak analysis. Conventional VAD algorithms use either time or frequency domain features or use statistical or other particular algorithmic mechanisms. Some conventional VADs use a collection of features including long-term spectral divergence, cepstral peak, MEL-filtered spectrum and spectro-temporal modulation in either a time domain or a frequency domain.
It is known that VAD performance decreases when the amount of noise increases. A conventional solution is to have the VAD system preceded by a noise reduction (NR) module. One known limitation when pre-processing a speech signal with noise reduction (NR) is the potential appearance of musical noise which, added to the input signal, may mislead the VAD module and create false detections.
Another drawback with the use of conventional NR modules is the difficulty, and even the impossibility, of setting internal parameters that allow the system to work correctly for different noise levels and categories. As an example, if one chooses a set of internal parameters to tackle a very noisy environment, then relatively important distortions will appear in silent and quiet environments.
To overcome the above drawbacks, which not only impact the audio quality but may even harm the VAD module performance, it is desirable to provide an improved mechanism for detecting the noise level of an environment and allowing the dynamic setting of the NR internal parameters.
It is desirable to provide an improved noise-robust VAD method and a system for allowing a user to be aware of an outer sound environment while listening to music or any other audio source.
Summary of the Invention
The present invention relates to a voice aware audio system and a method for a user wearing a headset to be aware of an outer sound environment while listening to music or any other audio source. The present invention relates to a concept of an adjustable sound awareness zone which gives the user the flexibility to avoid hearing far distant voices. The system of the present invention can use features of a headphone as described in US Patent Publication Number 2016/0241947, hereby incorporated by reference into this application. In one embodiment, the headphone includes a microphone array having four input microphones. This provides spatial sound acquisition selectivity and allows the steering of the microphone array towards directions of interest. Using beamforming methods, and combining with different technologies like noise reduction systems, fractional delay processing and a voice activity detection (VAD) algorithm of the present invention, a new audio architecture is provided with improved performance in noisy environments.
The present invention includes different signal processing modules including noise reduction and array processing. In particular, a procedure is provided which estimates the noise level, referred to as Noise Sensing (NS). This procedure adapts parameters of a noise reduction so that output sound quality is optimized. Once voice has been detected, the user can be alerted via a headphone signal without disrupting the music or other audio source that the user was listening to. This is done by mixing the external voice with the headphone lead signal. A mixing mechanism is used which can take into account psychoacoustic properties and allow final mixing without reducing the volume of the music signal while at the same time maximizing intelligibility.

Typical applications of the voice awareness audio system of the present invention can appear within the following scenarios: voice, for example a person shouting, talking or calling, a baby crying, public transport announcements; bells and alarms, for example someone ringing a door bell, a door bell activated for a package delivery, house, car and other alarms; and others, for example a car horn, police and ambulance air-raid sirens, and whistles. The invention will be more fully described by reference to the following drawings.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of prior art principles in voice activity detection (VAD).
Fig. 2 is a schematic diagram of example prior art time-domain speech detection techniques.
Fig. 3 is a schematic diagram of example prior art frequency-domain speech detection techniques.
Fig. 4 is a schematic diagram of a voice aware audio system in which an external voice of interest is mixed with user music in accordance with the teachings of the present invention.
Fig. 5 is a schematic diagram of an adjustable sound awareness zone used in the voice aware audio system of the present invention.
Fig. 6 is a schematic diagram of a microphone array used in a headphone of the present invention.

Fig. 7 is a flow diagram of a method for voice activity detection in accordance with the teachings of the present invention.
Fig. 8A is a schematic diagram of a speech signal.
Fig. 8B is a schematic diagram of log Wiener entropy.
Fig. 8C is a schematic diagram of log Wiener entropy simplified.
Fig. 9 is a schematic diagram of a voice activity detection architecture system including data buffer organization around noise reduction (NR) and voice activity detection (VAD) modules.
Fig. 10 is a schematic diagram of a state machine diagram of a hangover procedure.
Fig. 11A is a schematic diagram of a speech signal at a 128 buffer length.
Fig. 11B is a schematic diagram of log Wiener entropy of the signal shown in Fig. 11A.
Fig. 11C is a schematic diagram of log Wiener entropy simplified of the signal shown in Fig. 11A.
Fig. 12A is a schematic diagram of a speech signal at a 256 buffer length.
Fig. 12B is a schematic diagram of log Wiener entropy of the signal shown in Fig. 12A.
Fig. 12C is a schematic diagram of log Wiener entropy simplified of the signal shown in Fig. 12A.
Fig. 13A is a schematic diagram of a speech signal at a 512 buffer length.
Fig. 13B is a schematic diagram of log Wiener entropy of the signal shown in Fig. 13A.
Fig. 13C is a schematic diagram of log Wiener entropy simplified of the signal shown in Fig. 13A.
Fig. 14 is a schematic diagram of an adaptive noise reduction module in accordance with the teachings of the present invention.
Fig. 15A is a schematic diagram of an input signal including noise.
Fig. 15B is a schematic diagram of a phase difference of a microphone left front and a microphone left back.
Fig. 15C is a schematic diagram of a phase difference of a microphone right front and a microphone right back.
Fig. 16 is a flow diagram of a method to improve voice activity detection (VAD) output quality including localization and beamforming using a microphone array.
Fig. 17 is a schematic diagram of a method to improve the robustness of voice activity detection (VAD) against diffuse noise.

Fig. 18 is a flow diagram of a method to increase the robustness of voice activity detection (VAD) against unwanted voices in a zone of awareness.
Fig. 19 is a flow diagram of a method for implementing the voice aware audio system including adaptive spectral equalization.
Fig. 20A is a graph of music with bad intelligibility of speech.
Fig. 20B is a graph of music with good intelligibility of speech using an adaptive EQ concept.
Fig. 21A is a schematic diagram of bad intelligibility of speech.
Fig. 21B is a schematic diagram of good intelligibility of speech achieved using an HRTF-based intelligibility improvement concept.
Fig. 22 is a flow diagram of a method of ad-hoc processing using compression-based processing.
Fig. 23A is a schematic diagram of processing resulting in bad intelligibility.
Fig. 23B is a schematic diagram of an implementation of ad-hoc processing using compression-based processing to provide good intelligibility.
Detailed Description
Reference will now be made in greater detail to a preferred embodiment of the invention, an example of which is illustrated in the accompanying drawings. Wherever possible, the same reference numerals will be used throughout the drawings and the description to refer to the same or like parts.
The voice aware audio system of the present invention allows any user wearing a headphone to be aware of the outer sound environment while listening to music or any other audio source. In one embodiment, the voice aware audio system can be implemented as a headphone which has four input microphones as described, for example, in US Patent Publication No. 2016-0241947. The user will be prompted by hearing a voice or a set of defined sounds of interest when the signal coming from the headphone microphone is recognized to be a desired signal. When the signal coming from the microphone is not determined to be a voice or any signal of interest, the listener will not be disrupted by the microphone signal and will just hear the lead signal.

Fig. 4 illustrates a possible scenario for voice aware audio system 10 as person B comes towards person A, who is wearing headphone 12 and listens to music or watches a television screen or the like with audio output. As soon as person B talks to person A, voice will be detected through one or more microphones 15 arranged in ear pads 14 and mixed with a lead signal so that person A will be aware of the speech message spoken by person B. In order not to be disruptive, the outer sound needs to be mixed with music only when the outer sound is desirable, such as human voice. Voice aware system 10 can also detect other typical sounds, for example alarms, rings, horns, sirens, bells and whistles.
A sub-system called Adjustable Sound Awareness Zone (ASAZ) can be used with voice aware audio system 10 as depicted in Fig. 5. The user has the ability to define a variable sphere radius around their head through an Application Program Interface (API) associated with headphone 12 so that voice aware system 10 reacts only to normal voices, no whispering, which are inside a defined sphere radius. Any other normal voice, no shouting, situated outside the defined sphere will not be detected. Three levels of tuning of voice aware system 10 can be defined as: large, medium and small. A large tuning corresponds to radius RL having a large length, a medium tuning corresponds to radius RM having a medium length which is smaller than radius RL, and a small tuning corresponds to radius RS having a small length which is smaller than radius RM. For example, radius RL can have a length in the range of about 75 feet to about 30 feet, radius RM can have a length in the range of about 50 feet to about 20 feet and radius RS can have a length in the range of about 25 feet to about one foot.
Referring to Fig. 4, voice aware audio system 10 includes a noise reduction (NR) method or noise reduction (NR) algorithm to estimate the noise level so that voice aware audio system 10 can quickly tune any of the internal parameters of the noise reduction (NR) algorithm. This provides the best audio quality for a wide range of noise levels. This procedure, referred to as Noise Sensing (NS), is also used to dynamically tune sensitive thresholds or other internal parameters and achieve better performance.
In one embodiment, headphone 12 has one or more omni-directional microphones 15 located in ear pads 14. Headphone 12 can include four omni-directional microphones 15 as shown in Fig. 6. Headphone 12 is fitted with a rectangular or trapezoidal array of four omnidirectional microphones 15. The configuration allows the use of different virtual directive/cardioid microphones, by pairs in a line or even combining elements on the diagonal. Omni-directional microphones 15 are located in lower portion 16 of ear pads 14, mounted in a specific position in order to achieve a 360° audio image of the environment around the user. Using an array processing algorithm, a localization of interest, such as a speaker's location, is determined. Once localization has been performed, the user can easily point the equivalent antenna radiation pattern towards that direction. Doing so, the noise energy at omni-directional microphone(s) 15 can be reduced and the external voice will be enhanced. Beamforming can have a positive impact on the performance of noise reduction as described below. One or more speakers 17 can be associated with microphones 15. In alternate embodiments, headphone 12 can include any type of speaker array associated with any type of structure.
Fig. 7 is a schematic diagram of a method for voice activity detection 20 which can be implemented in voice aware audio system 10. The implementation of the present invention is to use both frequency and time domains. In block 22, a frequency domain can be used for detecting periodic patterns. Block 22 can be referred to as a first guess step. Block 22 is a coarse decision process where the objective is to select potential oscillating frequency candidates. After block 22, block 24 can be performed. Block 24 can be a time-domain procedure in order to check whether the selected oscillating frequency candidate is confirmed or not. For the frequency domain guess step in block 22, and in order to be noise-resistant, large buffers can be used with a relatively low threshold in order to minimize the rate of false negative decisions. Because the detected oscillating frequency candidate may be false, the second and final decision process in block 24 is performed in the time domain, recursively using results of a time domain algorithm analysis running on sub-frames within the frame used for the frequency domain first step analysis.
In an implementation of block 22, Wiener entropy or spectral flatness is used in order to reduce the computational burden of the two successive procedures. The FFT of the input buffer can also be used for noise reduction as described below.
In an implementation of block 24, a pitch estimation algorithm is used. In one embodiment, the pitch estimation algorithm is based on a robust YIN algorithm. The estimation process can be simplified into a detection-only process, or the complete algorithm can be used, ensuring continuity of the estimated pitch values between successive frames to render the algorithm even more robust against errors.

Successive decisions over subframes in a frame, plus overlapping between the large frames, provide an increase in the accuracy of the algorithm, referred to as the WEYIN (Wiener Entropy YIN) algorithm.

In one embodiment for VAD, the method can be done with different combinations of features in the frequency domain in block 22 to detect potential pitch voiced frame candidates that will be re-analyzed in the time domain in block 24.
The Wiener entropy is given as:

\[ WE_B(k) = \left( \prod_{l \in B} \lvert X_f(l,k) \rvert \right)^{1/N_B} \Big/ \left( \frac{1}{N_B} \sum_{l \in B} \lvert X_f(l,k) \rvert \right), \]

whose numerator can be computed using:

\[ \left( \prod_{l \in B} \lvert X_f(l,k) \rvert \right)^{1/N_B} = \exp \left( \frac{1}{N_B} \sum_{l \in B} \log \lvert X_f(l,k) \rvert \right). \]

This leads to the following equation:

\[ WE_B(k) = \exp \left( \frac{1}{N_B} \sum_{l \in B} \log \lvert X_f(l,k) \rvert \right) \Big/ \left( \frac{1}{N_B} \sum_{l \in B} \lvert X_f(l,k) \rvert \right). \]

The Wiener entropy can be computed in different bands B_i, i = 1, ..., L, so that the candidate selection process is done through the computation of the L scalar quantities:

\[ WE_{B_i}(k) = \exp \left( \frac{1}{N_{B_i}} \sum_{l \in B_i} \log \lvert X_f(l,k) \rvert \right) \Big/ \left( \frac{1}{N_{B_i}} \sum_{l \in B_i} \lvert X_f(l,k) \rvert \right), \quad i = 1, \dots, L, \]

which are sent to the selection process after a threshold decision step:

\[ WE_{B_i}(k) < h_i, \quad i = 1, \dots, L. \]
Once the frame has been designated as a candidate for speech presence, the time-domain inspection begins in block 24. The YIN algorithm can be used over K subframes of length M such that:

\[ N = K M, \]

where

\[ N = 2^L \]

is the frame length used in the spectrum domain, chosen to be a power of 2 in order to be able to use the FFT.

The YIN algorithm is turned from a pitch estimation algorithm into a pitch detection one. For that purpose, a frequency band [F^P_min, F^P_max] is defined corresponding to the minimum and maximum expected pitch frequency values, which leads to the time values interval [T_min, T_max]:

\[ T_{min} = \lfloor F_s / F^P_{max} \rfloor \quad \text{and} \quad T_{max} = \lceil F_s / F^P_{min} \rceil, \]

where F_s is the sampling frequency, which can be a fraction of the original sampling frequency used for the processing in the frequency domain, and \( \lfloor \cdot \rfloor \) and \( \lceil \cdot \rceil \) are respectively the floor and ceiling rounding operators. As an example, if [F^P_min, F^P_max] = [70, 400] Hz and F_s = 8 kHz, then [T_min, T_max] = [20, 115].

The following matrix of time-delay lags is defined:

\[ \Lambda = \begin{pmatrix} \left( \big( T_{max} + 1 + (0\!:\!m) \big)/2 \right) \\ \left( \big( T_{max} - 1 - (0\!:\!m) \big)/2 \right) \end{pmatrix}, \]

where \( ( \cdot ) \) is the rounding-to-the-nearest-integer operator and \( (0:m) = (0\ 1\ 2\ \cdots\ m-1\ m) \). If the example above is reconsidered:

\[ \Lambda = \begin{pmatrix} 58 & 59 & 59 & 60 & 60 & \cdots & 114 & 115 & 115 \\ 57 & 57 & 56 & 56 & 55 & \cdots & 1 & 1 & 0 \end{pmatrix}. \]

With this choice, computations of the YIN difference function will be done according to the lag values of the first and second rows of the matrix \( \Lambda \). The first column of this matrix gives the relative indices from which the difference function computation departs.
Over the present frame, a set of difference function values is defined, taken over successive intervals of length H. They are organized in a matrix whose numbers of rows and columns are defined as:

\[ nRows = \frac{N}{H}, \qquad nCols = T_{max}. \]

The YIN difference matrix dd is defined by its generic element as:

\[ dd(k, \tau) = \sum_{n=0}^{H-1} \big( x(n + kH + \Lambda(1,\tau)) - x(n + kH - \Lambda(2,\tau)) \big)^2. \]
Consider then the row sums:

\[ Dd(\tau) = \sum_{k=1}^{nRows} dd(k, \tau), \]

and the quantity:

\[ \overline{Dd}(\tau) = \frac{1}{\tau} \sum_{j=1}^{\tau} Dd(j). \]

The algorithm resumes by computing the normalized difference:

\[ Dd'(\tau) = \frac{Dd(\tau)}{\overline{Dd}(\tau)}, \]

and looks for the minimum:

\[ \tau^{*} = \arg\min_{T_{min} \le \tau \le T_{max}} Dd'(\tau), \]

which is compared to a threshold:

\[ Dd'(\tau^{*}) < \varphi. \]

If this minimum is smaller than the threshold, a decision of speech presence \( \beta_i = 1 \) for subframe i is taken.

Once decisions are done on the successive K subframes in the present frame, speech presence over the complete frame is decided by proceeding to a majority vote:

\[ \beta = 1 \quad \text{if} \quad \sum_{i=1}^{K} \beta_i \ge Q, \]

where Q may be chosen (but is not restricted) to be K/2.
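The detection-only use of YIN described above can be summarized in a short Python sketch: a cumulative-mean-normalized difference function per subframe, a threshold test on its minimum over [T_min, T_max], and a majority vote over the K subframes. This is a minimal sketch assuming a mono float signal and subframes longer than T_max; the threshold value is illustrative:

    import numpy as np

    def yin_voiced(subframe, tau_min=20, tau_max=115, phi=0.3):
        """True if the subframe contains a periodic (voiced) component."""
        x = np.asarray(subframe, dtype=float)
        # Difference function d(tau) for tau = 1 .. tau_max.
        d = np.array([np.sum((x[tau:] - x[:-tau]) ** 2)
                      for tau in range(1, tau_max + 1)])
        # Cumulative-mean-normalized difference d'(tau) = d(tau) / mean(d(1..tau)).
        cmnd = d * np.arange(1, tau_max + 1) / (np.cumsum(d) + 1e-12)
        # Detection only: no pitch value is extracted, just a threshold test.
        return cmnd[tau_min - 1:].min() < phi

    def frame_is_speech(frame, K=8, Q=None):
        """Majority vote of the K subframe decisions over the whole frame."""
        M = len(frame) // K                      # N = K * M
        votes = sum(yin_voiced(frame[i * M:(i + 1) * M]) for i in range(K))
        return votes >= (K // 2 if Q is None else Q)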
In one embodiment a Wiener entropy simplification can be used in block 22. In order to avoid the square-root vectorial operation:

\[ \lvert X_f(l,k) \rvert = \sqrt{ \Re^2 X_f(l,k) + \Im^2 X_f(l,k) }, \]

which can be costly, the following is chosen instead:

\[ WES_B(k) = \exp \left( \frac{1}{N_B} \sum_{l \in B} \log S_f(l,k) \right) \Big/ \left( \frac{1}{N_B} \sum_{l \in B} S_f(l,k) \right), \]

where:

\[ S_f(l,k) = \Re^2 X_f(l,k) + \Im^2 X_f(l,k) = \lvert X_f(l,k) \rvert^2. \]

Fig. 8A shows a speech signal. Fig. 8B shows a log of Wiener entropy. Fig. 8C shows a log of Wiener entropy simplified. The results indicate that the Wiener entropy simplified is a valid indicator of voiced speech.
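Both the classic and the simplified band-wise Wiener entropy reduce to a flatness measure of the spectrum, which a few lines of numpy can express. The sketch below works on the power spectrum S_f = |X_f|^2 directly, i.e. the simplified form that avoids the square root; the band edges and the threshold h are illustrative assumptions:

    import numpy as np

    def wiener_entropy(power_band, eps=1e-12):
        """Spectral flatness of one band: geometric mean / arithmetic mean.
        Values near 1 indicate noise; values near 0 indicate a tonal signal."""
        geo = np.exp(np.mean(np.log(power_band + eps)))
        return geo / (np.mean(power_band) + eps)

    def voiced_candidate(frame, fs=8000,
                         bands=((70, 400), (400, 1000), (1000, 2000)), h=0.5):
        spectrum = np.abs(np.fft.rfft(frame)) ** 2      # S_f(l, k) = |X_f(l, k)|^2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        flatness = []
        for lo, hi in bands:
            band = spectrum[(freqs >= lo) & (freqs < hi)]
            if band.size:
                flatness.append(wiener_entropy(band))
        # Candidate for speech if any band is tonal enough (flatness below h).
        return min(flatness) < h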
In one embodiment, a YIN simplification can be used in block 24. For the time-domain part, the following YIN version can be used:

\[ dd(k, \tau) = \sum_{n=0}^{H-1} \big\lvert x(n + kH + \Lambda(1,\tau)) - x(n + kH - \Lambda(2,\tau)) \big\rvert. \]

In this last equation, the squared difference function is replaced by the absolute value in order to reduce the number of operations. There exists an overlap of J samples between two successive frames (the decision of speech presence is valid for the J first samples only).
If r_k(i+1) is the k-th row of the matrix dd_{i+1} at time i+1, then we have:

\[ dd_{i+1} = \begin{pmatrix} dd_i(2\!:\!nRows,\,:) \\ r_{nRows}(i+1) \end{pmatrix}, \]

where r_m(i+1) is the m-th row of the matrix dd_{i+1} and dd_i(2:nRows, :) is the matrix extracted from dd_i associated with the present frame i, from row 2 to nRows. From the previous equation, we deduce easily:

\[ Dd_{i+1} = \sum_{k=1}^{nRows} r_k(i+1) = \sum_{k=2}^{nRows} r_k(i) + r_{nRows}(i+1), \]

or:

\[ Dd_{i+1} = Dd_i - r_1(i) + r_{nRows}(i+1). \]

Therefore, there is no need to compute all the elements of the matrix dd before computing the sum of its rows. Instead, the vector Dd(i) is updated by computing only the departing row r_1(i) and the new row r_{nRows}(i+1).
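A sketch of this sliding-row update in Python, under the assumptions used above (rows are absolute-difference values over intervals of length H, and the input is padded so the lag indexing stays in range); per frame, only the new last row is computed and the departing first row is subtracted:

    import numpy as np

    def difference_row(x, start, H, lags):
        """One row r_k of dd: absolute-difference function over an interval of
        length H; `lags` is the 2-row lag matrix Lambda (forward/backward)."""
        n = np.arange(start, start + H)
        return np.array([np.abs(x[n + a] - x[n - b]).sum() for a, b in lags.T])

    def update_Dd(Dd, first_row_prev, new_last_row):
        # Dd_{i+1} = Dd_i - r_1(i) + r_nRows(i+1): O(nCols) per frame instead
        # of recomputing the full nRows x nCols matrix.
        return Dd - first_row_prev + new_last_row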
Fig. 9 is a schematic diagram of an implementation of method 20 in voice activity detection architecture system 30 in combination with noise sensing architecture system 50. Voice activity detection (VAD) architecture system 30 and noise sensing (NS) architecture system 50 can be implemented in voice aware audio system 10, as shown in Fig. 4, to provide noise robust voice activity detection (VAD). Referring to Fig. 9, input buffer 31 receives input signal 29. A Fast Fourier Transform (FFT) and concatenation of input signal 29 in input buffer 31 determines frame 32. Frame 32 can be used in Wiener entropy module 33 to detect candidates. Wiener entropy module 33 performs block 22, as shown in Fig. 7.

Referring to Fig. 9, frame 32 can also be divided into successive K sub-frames 34. Down sampling process 35 can be used on sub-frames 34 before YIN pitch detection module 36. YIN pitch detection module 36 performs block 24 as shown in Fig. 7. Referring to Fig. 9, Wiener entropy module 33 and YIN detection module 36 determine decision sub-frame 37. Decision sub-frame 37 and decisions from other sub-frames 38 can be introduced into hangover module 39 before determining speech presence module 40. Inside a sentence, one can find areas with low energies, and method 20 of the present invention may consider them as non-speech frames. If there are too many interruptions, the listening at the output can be annoying. The disruptions can be eliminated by using hangover module 39. Frame 32 can also be forwarded to noise sensing (NS) architecture 50.
Fig. 10 is a schematic diagram of state machine 60 which can be used in hangover module 39. Permanent state 1, standing for speech presence at the hangover module output, is depicted by circle 61, and permanent state 0, standing for speech absence at the hangover module output, is depicted by circle 63. Each arrow decision (0 or 1) coming out from circle 61 and boxes 64, and from circle 63 and boxes 65, comes after processing a frame. If the decision is the same as the previous one, then XY or XN is accumulated, for speech presence or absence respectively. If not, they are reset to their initial value 0. Once one of these variables equals NY or NN, the switch from one state to the other is activated.

In this method or algorithm, decVad denotes the input decision coming from speech decision module 40 shown in Fig. 9. A position index idx is defined in the state machine of Fig. 10, with an output decision value decHov associated with the state at that index, such that state[0] = 0 and state[1] = 1.
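The hangover logic lends itself to a compact Python sketch: the state switches only after NY consecutive contrary decisions towards speech, or NN towards absence, so isolated flips are absorbed. The counter and limit names follow the text; the initial NY/NN values are illustrative:

    class Hangover:
        """Smooths raw VAD decisions (decVad) into a steadier output (decHov)."""
        def __init__(self, NY=3, NN=10):
            self.NY, self.NN = NY, NN
            self.state = 0      # 0 = speech absence, 1 = speech presence
            self.xy = 0         # XY: consecutive 1s seen while in state 0
            self.xn = 0         # XN: consecutive 0s seen while in state 1

        def step(self, dec_vad: int) -> int:
            if self.state == 0:
                self.xy = self.xy + 1 if dec_vad == 1 else 0   # reset on mismatch
                if self.xy >= self.NY:
                    self.state, self.xy = 1, 0                 # switch to presence
            else:
                self.xn = self.xn + 1 if dec_vad == 0 else 0
                if self.xn >= self.NN:
                    self.state, self.xn = 0, 0                 # switch to absence
            return self.state   # decHov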
Figs. 11-13 show the influence of the input buffer data on the Wiener entropy value. Figs. 11A, 12A and 13A show the speech signal at a buffer length respectively of 128, 256 and 512. Figs. 11B, 12B and 13B show the log Wiener entropy at a buffer length respectively of 128, 256 and 512. Figs. 11C, 12C and 13C show the log Wiener entropy simplified at a buffer length respectively of 128, 256 and 512. It is shown that increasing the input data buffer length has the effect of smoothing the Wiener entropy curve.
In one embodiment, noise sensing (NS) architecture 50 optimizes for all possible noise levels to provide noise reduction (NR) audio quality output while preventing, as much as possible, the apparition of musical noise. Output 51 of noise sensing (NS) can be used in adaptive noise reduction (NR) module 70 as depicted in Fig. 14. Noise energy sensing architecture system 72 is used to estimate noise with module 73 and noise reduction module 74, whose output is combined with combiner 75. The amount of noise is estimated by noise reduction module 74, which drives the choice of noise reduction (NR) algorithm parameters. Distance computation module 76 can determine a distance between the sensed noise and headphone 12.

Output from distance computation module 76 is used in hangover decision module 77. In order to control the frequency of switching between noise level states, three noise level states have been defined, namely noise, intermediary and no noise, which are determined in hangover decision module 77 such that voice aware audio system 10 is not switched over for sudden or impulsive noises. Adaptive noise reduction module 78 processes the signal from hangover decision module 77 to reduce noise. Both raw signal G1 80 and processed signal G2 82 are mixed in mixer 84 to provide clean signal 85, transmitted to voice activity detection (VAD) architecture system 30, with the adaptive convex linear combination:

\[ y = G_1 x_1 + (1 - G_1) x_2, \]

where x_1 is the raw microphone input, x_2 is the NR module output and y is the input of the VAD module. G_1 depends on the root mean square (RMS) value, which can be computed either in a time or frequency domain.
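A sketch of this convex combination in Python; the equation is the one above, while the linear mapping from the measured RMS to G1 is an illustrative assumption for how noise sensing could drive the blend:

    import numpy as np

    def noise_sensing_mix(x_raw, x_nr, rms_lo=0.01, rms_hi=0.1):
        """y = G1*x1 + (1 - G1)*x2: favour the raw input in quiet conditions
        (where NR would mostly add distortion) and the NR output in noisy ones."""
        rms = np.sqrt(np.mean(np.square(x_raw)))
        g1 = np.clip((rms_hi - rms) / (rms_hi - rms_lo), 0.0, 1.0)  # assumed map
        return g1 * x_raw + (1.0 - g1) * x_nr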
NR algorithms and their corresponding internal setting parameters can be adjusted with the objective of limiting musical noise and audio artefacts to the minimum while reducing ambient noise to the maximum.
In one embodiment, voice aware audio system 10 can include headphone 12 having a microphone array, using for example a four-channel procedure. An advantage of the multiple channel procedure is that it brings innovative features that increase efficiency. Because a speaker is localized in space, the propagation of the voice sound to the microphone array follows a coherent path, in opposition to diffuse noise. Typically, the voice picked up on one microphone is a delayed replica of what is recorded on a second microphone. Figs. 15A-15C illustrate phase difference patterns. The signal is a four-channel microphone array recording, first track depicted, whose timing is the following: one speaker in front (from about 2 seconds to about 6 seconds) and two speakers, one in front and one in back (from about 6 seconds to about 10 seconds). Noise has been artificially added to the input signal as shown in Fig. 15A. The phase difference between MLF and MLB (broadside) is shown in Fig. 15B and the phase difference between MRF and MRB (end-fire) is shown in Fig. 15C. It is shown for both arrays that the phase difference patterns do not look similar when speech is present versus absent.

The microphone array can act as a spatial filter to attenuate sounds coming from non-desired directions while enhancing sounds coming from the selected one(s). The use of a microphone array can help to improve sound quality and/or increase VAD noise robustness and detection accuracy.
Fig. 16 illustrates an implementation of voice aware audio system 10 including noise sensing architecture system 50 receiving a noisy signal and determining a clean signal. The clean signal is used in voice activity detection architecture system 30. Microphone array 100 can be used with localization module 102 and beamforming module 104.

Once voice is detected in one direction at one of microphones 15 in microphone array 100, localization module 102 localizes the speaker direction of arrival. Beamforming module 104 steers the microphone detecting the voice towards the determined direction and consequently attenuates noise coming from other directions. Beamforming module 104 provides an enhanced voice signal delivered to speakers 17 of headphone 12, as shown in Fig. 6, with statistically and spatially attenuated external noise.
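A minimal delay-and-sum sketch of the steering idea, assuming known microphone coordinates and a far-field source; integer-sample delays are used for brevity, whereas the text mentions fractional delay processing:

    import numpy as np

    def steer_delays(mic_xy, azimuth_rad, fs, c=343.0):
        """Per-microphone delays (samples) aligning a far-field direction."""
        direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
        tau = mic_xy @ direction / c                 # arrival time offsets (s)
        return np.round((tau - tau.min()) * fs).astype(int)

    def delay_and_sum(channels, delays):
        """Align each channel on the steering direction and average: sounds
        from that direction add coherently, others are attenuated."""
        n = min(len(ch) - d for ch, d in zip(channels, delays))
        return np.mean([ch[d:d + n] for ch, d in zip(channels, delays)], axis=0)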
In an alternate embodiment, noise comes from all directions. For example, noise can occur in all directions in a train, plane, boat and the like, where noise is mainly due to the motor engine, with no precise direction of arrival because of the cabin sound reverberation. Conversely, a speaker of interest is always located at a single point of space. Reverberation is rarely a problem because of the proximity of the speaker, for example a few meters at most.
Fig. 17 illustrates an implementation of voice aware audio system 10 including noise sensing architecture system 50 receiving a noisy signal and determining a clean signal, and the use of the microphone array to take advantage of the difference between noise and a signal. In parallel to noise reduction (NR) module 70 and voice activity detection architecture system 30, incoming signals coming from different directions, such as for example front and rear, are received in beamforming module 104 and compared in similarity module 106. If speech is present, a difference between the two spectrums should be observed, considering that the speaker cannot be placed in multiple positions at the same time. If speech is absent, a low difference between spectrums can be observed, considering noise is more or less the same whatever the direction the headphone is looking to. A signal determined in similarity module 106 can be combined in mixer 107 with a voiced signal and possible artefacts from voice activity detection architecture system 30. Using such a similarity-based feature can help in eliminating false alarms of the voice activity detection architecture system, increasing its robustness to noise.
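A sketch of the similarity test between two beams, using normalized spectral correlation; the 0.9 threshold is an illustrative assumption:

    import numpy as np

    def beams_similar(beam_a, beam_b, threshold=0.9):
        """High spectral similarity between opposite beams suggests diffuse
        noise rather than a localized speaker, and can veto a VAD detection."""
        sa = np.abs(np.fft.rfft(beam_a))
        sb = np.abs(np.fft.rfft(beam_b))
        cos = np.dot(sa, sb) / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-12)
        return cos > threshold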
Fig. 18 illustrates an implementation of voice aware audio system 10 including cancelling of unwanted voices in a case where multiple speakers are placed around the user. The user wants to speak with one speaker from a specific direction, for example the front. Microphone array 100 can be used with a zone of awareness 108 to remove all signals coming from non-desired directions in beamforming module 104, pre-processing signals into a noisy signal coming from the zone of awareness only before entering noise reduction (NR) module 70 and voice activity detection architecture system 30.

It is preferable that voice awareness audio system 10 ensures high intelligibility. As the user is interrupted by an external voice, it is desirable to keep the music level constant and add the external voice while ensuring the user hears the voice message clearly. This advantage can be achieved by controlling both voice false alarm detections and listening conditions. Voice false alarms can be determined by voice activity detection architecture system 30. In one embodiment, the present invention provides mixing external speech detected by voice activity detection architecture system 30 with music coming from headphone 12 as shown in Fig. 6.
It is desirable to ensure the speaker voice delivered by headphones 12 is well understood by the user. One embodiment mutes, or at least reduces, the music sound level while speech is detected and transmitted. Mixing strategies for improving the voice intelligibility can include adaptive spectral equalization, spatial dissociation, and studio-inspired ad-hoc processing, which can be applied separately or together.
Listening to a speech signal mixed with music drastically decreases its intelligibility, especially when the music already contains a vocal signal. There is evidence from many sources that increasing the signal-to-noise ratio (SNR) on the speech fundamental frequency increases speech understanding. By extension, the higher the SNR for all the harmonics, the better.

In the present invention, spectral and temporal information for both the voice coming from voice activity detection (VAD) architecture system 30 and the music played by the user in headphone 12 is available. In one embodiment, the energy of both signals can be compared, especially in the fundamental frequency and associated harmonic bands, and the signals from voice activity detection (VAD) architecture system 30 are increased if they are relatively low when compared to the music.
Fig. 19 illustrates an implementation of voice aware audio system 10 including adaptive spectral equalization method 200. Each time voice is detected, adaptive spectral equalization method 200 can be performed. In block 201, an estimate of the spectral density power of the music is determined. In block 202, an estimate of the spectral density power of the speech is determined. In block 203, an estimate of the fundamental frequency and formants of the speech from block 202 is determined. In block 204, an energy ratio is computed between the speech formants from block 203 and the music from block 201 to determine voice-to-music ratios (VMR) for each spectral band. In block 205, an FFT-based equalizer (EQ) is applied onto the bands with low VMRs determined from block 204.
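A sketch of blocks 201-205 in Python: band-wise voice-to-music ratios followed by an FFT-domain gain on the bands where speech is masked. The band edges, the VMR floor and the boost amount are illustrative assumptions:

    import numpy as np

    def adaptive_eq(voice, music, fs,
                    bands=((100, 300), (300, 1000), (1000, 3000)),
                    vmr_floor_db=0.0, boost_db=6.0):
        """Boost speech bands whose voice-to-music ratio (VMR) is low."""
        V = np.fft.rfft(voice)
        M = np.fft.rfft(music)
        freqs = np.fft.rfftfreq(len(voice), d=1.0 / fs)
        for lo, hi in bands:
            sel = (freqs >= lo) & (freqs < hi)
            pv = np.sum(np.abs(V[sel]) ** 2) + 1e-12   # speech power (blocks 202/203)
            pm = np.sum(np.abs(M[sel]) ** 2) + 1e-12   # music power (block 201)
            vmr_db = 10.0 * np.log10(pv / pm)          # block 204
            if vmr_db < vmr_floor_db:                  # block 205: boost masked bands
                V[sel] *= 10.0 ** (boost_db / 20.0)
        return np.fft.irfft(V, n=len(voice))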

Fig. 20A illustrates graph 300 of power and frequency for speech spectrum 301 compared with music spectrum 302, having bad intelligibility. For bands 304 where the energy of the voice formants is low relative to the music, as determined by block 204, an FFT-based equalizer is applied in block 205 to enhance them. Fig. 20B illustrates graph 300 of power and frequency for speech spectrum 301 compared with music spectrum 302, having good intelligibility after enhancement.
Figs. 21A-21B illustrate an implementation of voice aware audio system 10 including spatial dissociation 400. This strategy assumes that, once a signal of interest is detected, it can be localized using the embedded microphone array, for example via cross-correlation-based methods. Fig. 21A illustrates bad intelligibility with mono speech at position 402 and stereo music at positions 403. According to the speaker direction of arrival, an HRTF-based filter is applied to the signal delivered by voice activity detection (VAD) 30 to externalize it according to the real speaker position (3D effect).

This allows user 401 to separate sound signals in space. As shown in Fig. 21B illustrating good intelligibility, music will be perceived in the center of the head at position 406 while speech will be perceived outside of the head at position 404. At the same time, the music could temporarily be switched from stereo to mono. Restoring spatial hearing is known to significantly increase speech intelligibility.
Fig. 22 illustrates an implementation of voice aware audio system 10 including compression-based processing 500; to raise the presence of voice when mixed with music, an ad-hoc processing algorithm can be used. In block 501, the voice signal is copied and compressed, and the compressed signal is then added to the original voice signal. In block 502, light saturation is applied to the resulting signal. In block 503, an ad-hoc equalizer is applied.

In block 501, compression reduces inter-phoneme intensity differences, so that temporal masking is reduced and speech loudness is increased. The summation of both compressed and original voice signals ensures the voice still sounds natural. Block 502 brings more harmonics. It is known, for example, that the fundamental frequency (F0), as well as F1 and F2 harmonic information, are critically important for vowel identification and consonant perception. Block 503 aims at cleaning the voice signal by removing low frequency noise and increasing frequency bands of interest, for example: low cut -18 dB/octave up to 70 Hz, -3 dB around 250 Hz, -2 dB around 500 Hz, +2.5 dB around 3.3 kHz and +7 dB around 10 kHz.
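A sketch of the block 501-503 chain; the static compressor, tanh saturation and one-pole high-pass below are simple stand-ins for the studio processors the text names, with illustrative parameters:

    import numpy as np

    def compress(x, threshold=0.1, ratio=4.0):
        """Very simple static compressor: reduce level above the threshold."""
        mag = np.abs(x)
        over = mag > threshold
        y = x.copy()
        y[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
        return y

    def voice_presence_chain(voice, gain=1.0):
        # Block 501: parallel compression - sum the compressed copy with the original.
        v = gain * voice + compress(voice)
        # Block 502: light saturation adds harmonics (soft clip).
        v = np.tanh(1.5 * v)
        # Block 503: ad-hoc EQ stand-in - one-pole high-pass to cut low rumble
        # (roughly 40 Hz at 48 kHz with alpha = 0.995).
        out = np.empty_like(v)
        prev_x = prev_y = 0.0
        alpha = 0.995
        for i, s in enumerate(v):
            out[i] = alpha * (prev_y + s - prev_x)
            prev_x, prev_y = s, out[i]
        return out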

Fig. 23A illustrates bad intelligibility with the gain 602 of voice signal 601 being combined with music signal 604 in mixer 605 to provide input 606 to the drivers. Fig. 23B illustrates system 600 implementing compression-based processing 500. Voice signal 601 is applied to compression module 607 to provide a compressed signal. The compressed signal is combined with gain 602 of voice signal 601 in mixer 608. The output of mixer 608 is applied to saturation module 609 to perform the light saturation of block 502 and to equalization module 610 to apply an ad-hoc equalizer. The output of equalization module 610 is combined with music signal 604 in mixer 612 to provide input 614 to the drivers.
The noise-robust VAD method or algorithm of the present invention uses a select-then-check strategy. The first step is done in the frequency domain with a relatively large input buffer, which allows the impact of noise to be reduced. Voiced speech signal presence is detected via a multiband Wiener entropy feature, and it is shown how computational complexity can be reduced without harming the properties of the classic Wiener entropy.

The second part of the algorithm is done in the time domain with a simplified version of the YIN algorithm where pitch estimation has been replaced by simple detection. In order to further reduce the computational complexity, an absolute value difference is used instead of the classical squared difference. This algorithm runs over successive subframes along the total input frame.
The present invention provides a derivation of an adjustable sound awareness zone system: using the amplitude of the input signal and some features that help to distinguish between the user and distant external voices, the system allows the user to define a spherical area around their head where normal voices can be taken into account by the VAD algorithm. If a user is talking with a normal voice volume outside of this sphere, then the system will reject it.
The present invention provides derivation of a noise sensing system. The noise reduction method or algorithm, as well as the other main modules like VAD and the array processing algorithms, may suffer from the fact that their internal settings cannot easily handle all the possible noise levels, from quiet situations to very noisy ones. To improve the performance of the system, a noise sensing mechanism of the present invention is derived, and it is shown how its integration in the system of the present invention significantly improves the performance of the noise reduction and the VAD algorithms. Indeed, the noise sensing allows a reconfigurable algorithmic architecture with self-adjustable internal parameters including the following interactively related modules: VAD; noise reduction; voice localization and beamforming using a microphone array system; and computational complexity reduction of different algorithms.
The present invention shows how the computational complexity burden can be significantly reduced. This either reduces the power consumption or gives more room for further processing. The present invention provides derivation of audio mixing schemes, which is done under the constraints of keeping the music volume constant while increasing the voice intelligibility.
Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components, including hardware processors. Embodiments of the present invention may be implemented in connection with a special purpose or general purpose processor device that includes both hardware and/or software components, or special purpose or general purpose computers that are adapted to have processing capabilities.
Embodiments may also include physical computer-readable media and/or intangible computer-readable media for carrying or having computer-executable instructions, data structures, and/or data signals stored thereon. Such physical computer-readable media and/or intangible computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such physical computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, other semiconductor storage media, or any other physical medium which can be used to store desired data in the form of computer-executable instructions, data structures and/or data signals, and which can be accessed by a general purpose or special purpose computer. Within a general purpose or special purpose computer, intangible computer-readable media can include electromagnetic means for conveying a data signal from one part of the computer to another, such as through circuitry residing in the computer.
When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, hardwired devices for sending and receiving computer-executable instructions, data structures, and/or data signals (e.g., wires, cables, optical fibers, electronic circuitry, chemical, and the like) should properly be viewed as physical computer-readable mediums, while wireless carriers or wireless mediums for sending and/or receiving computer-executable instructions, data structures, and/or data signals (e.g., radio communications, satellite communications, infrared communications, and the like) should properly be viewed as intangible computer-readable mediums. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions include, for example, instructions, data, and/or data signals which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although not required, aspects of the invention have been described herein in the general context of computer-executable instructions, such as program modules, being executed by computers, in network environments and/or non-network environments. Generally, program modules include routines, programs, objects, components, and content structures that perform particular tasks or implement particular abstract content types. Computer-executable instructions, associated content structures, and program modules represent examples of program code for executing aspects of the methods disclosed herein.
Embodiments may also include computer program products for use in the systems of the present invention, the computer program product having a physical computer-readable medium having computer readable program code stored thereon, the computer readable program code comprising computer executable instructions that, when executed by a processor, cause the system to perform the methods of the present invention.
It is to be understood that the above-described embodiments are illustrative of only a few of the many possible specific embodiments which can represent applications of the principles of the invention. Numerous and varied other arrangements can be readily devised in accordance with these principles by those skilled in the art without departing from the spirit and scope of the invention.

Representative Drawing

Sorry, the representative drawing for patent document number 3084890 was not found.

Administrative Statuses

2024-08-01: As part of the transition to Next Generation Patents (NGP), the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a better understanding of the status of the application or patent shown on this page, the Disclaimer section and the descriptions of Patent, Event History, Maintenance Fees and Payment History should be consulted.

Event History

Description  Date
Letter Sent 2023-12-13
Request for Examination Requirements Determined Compliant 2023-12-06
All Requirements for Examination Determined Compliant 2023-12-06
Maintenance Request Received 2023-12-06
Request for Examination Received 2023-12-06
Common Representative Appointed 2020-11-07
Inactive: Cover page published 2020-08-11
Letter Sent 2020-07-06
Priority Claim Received 2020-06-30
Priority Claim Requirements Determined Compliant 2020-06-30
Priority Claim Requirements Determined Compliant 2020-06-30
Letter Sent 2020-06-30
Priority Claim Received 2020-06-30
Application Received - PCT 2020-06-30
Inactive: First IPC assigned 2020-06-30
Inactive: IPC assigned 2020-06-30
Inactive: IPC assigned 2020-06-30
Inactive: IPC assigned 2020-06-30
Inactive: IPC assigned 2020-06-30
Inactive: IPC assigned 2020-06-30
Inactive: Correspondence - PCT 2020-06-05
National Entry Requirements Determined Compliant 2020-06-05
Application Published (Open to Public Inspection) 2019-06-13

Abandonment History

There is no abandonment history.

Maintenance Fees

The last payment was received on 2023-12-06.

Note: If the full payment has not been received on or before the date indicated, a further fee may be required, being one of the following:

  • reinstatement fee;
  • late payment fee; or
  • additional fee to reverse a deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type  Anniversary  Due Date  Date Paid
Basic national fee - standard  2020-06-05  2020-06-05
Registration of a document  2020-06-05  2020-06-05
MF (application, 2nd anniv.) - standard 02  2020-12-07  2020-11-30
MF (application, 3rd anniv.) - standard 03  2021-12-07  2021-12-03
MF (application, 4th anniv.) - standard 04  2022-12-07  2022-12-07
Request for examination - standard  2023-12-07  2023-12-06
Excess claims (at RE) - standard  2022-12-07  2023-12-06
MF (application, 5th anniv.) - standard 05  2023-12-07  2023-12-06
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
HED TECHNOLOGIES SARL
Past Owners on Record
LILIANE HUGUET
TIMOTHY DEGRAYE
Past owners that do not appear in the "Owners on Record" list will appear in other documents on file.
Documents


List of published and unpublished patent documents on the Canadian Patents Database (CPD).



Description du
Document 
Date
(yyyy-mm-dd) 
Nombre de pages   Taille de l'image (Ko) 
Description 2020-06-04 22 1 020
Dessins 2020-06-04 21 630
Revendications 2020-06-04 4 173
Abrégé 2020-06-04 1 56
Page couverture 2020-08-10 1 32
Courtoisie - Lettre confirmant l'entrée en phase nationale en vertu du PCT 2020-07-05 1 588
Courtoisie - Certificat d'enregistrement (document(s) connexe(s)) 2020-06-29 1 351
Courtoisie - Réception de la requête d'examen 2023-12-12 1 423
Requête d'examen 2023-12-05 4 134
Paiement de taxe périodique 2023-12-05 3 90
Traité de coopération en matière de brevets (PCT) 2020-06-04 17 647
Traité de coopération en matière de brevets (PCT) 2020-06-04 1 42
Rapport de recherche internationale 2020-06-04 5 132
Demande d'entrée en phase nationale 2020-06-04 12 494
Correspondance 2020-06-04 4 89
Paiement de taxe périodique 2022-12-06 1 27