Patent 2400104 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2400104
(54) English Title: METHOD FOR DETERMINING A CURRENT ACOUSTIC ENVIRONMENT, USE OF SAID METHOD AND A HEARING-AID
(54) French Title: PROCEDE DE DETERMINATION D'UNE SITUATION D'ENVIRONNEMENT ACOUSTIQUE MOMENTANEE, UTILISATION DE CE PROCEDE, ET PROTHESE AUDITIVE
Status: Deemed Abandoned and Beyond the Period of Reinstatement - Pending Response to Notice of Disregarded Communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04R 25/00 (2006.01)
  • G10L 15/14 (2006.01)
(72) Inventors :
  • ALLEGRO, SILVIA (Switzerland)
  • BUCHLER, MICHAEL (Switzerland)
(73) Owners :
  • PHONAK AG
(71) Applicants :
  • PHONAK AG (Switzerland)
(74) Agent: ROBIC AGENCE PI S.E.C./ROBIC IP AGENCY LP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2001-01-05
(87) Open to Public Inspection: 2001-03-29
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/CH2001/000008
(87) International Publication Number: WO 2001020965
(85) National Entry: 2002-08-13

(30) Application Priority Data: None

Abstracts

English Abstract


The invention relates to a method for determining a current acoustic
environment. The method is characterised in that specific characteristics are
extracted from an acoustic signal which has been recorded using at least one
microphone (2a, 2b) and that the current acoustic environment is determined in
an identification phase, on the basis of the extracted characteristics.
According to the invention, at least auditory-based characteristics are
determined in the extraction phase. The invention also relates to the use of
said method and to a hearing-aid.


French Abstract

L'invention concerne un procédé de détermination d'une situation d'environnement acoustique momentanée, consistant à extraire dans une phase d'extraction, des caractéristiques spécifiques depuis un signal acoustique enregistré au moyen d'au moins un micro (2a, 2b), et à déterminer dans une phase d'identification, la situation d'environnement acoustique momentanée au moyen des caractéristiques extraites. Selon l'invention, au moins des caractéristiques à base auditive sont déterminées dans la phase d'extraction. L'invention concerne également l'utilisation du procédé selon l'invention, ainsi qu'une prothèse auditive.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS:
1. Method for determining a momentary acoustic situation,
said method consists in that
- characteristic features are extracted from an acoustic
signal captured by at least one microphone (2a, 2b)
during an extraction phase, and
- said momentary acoustic situation is determined on the
basis of the extracted features during an identification
phase,
by determining at least auditory-based features during the
extraction phase.
2. Method according to claim 1, characterized in that, for
the identification of the characteristic features during
the extraction phase, Auditory Scene Analysis (ASA)
techniques are employed.
3. Method according to claim 1 or 2, characterized in that
ASA-(Auditory Scene Analysis) methods are being used to
determine the characteristic features during the extraction
phase.

4. Method according to one of the claims 1 to 3,
characterized in that one or several of the following
auditory-based features are identified during the
extraction of said characteristic features: Loudness,
spectral pattern, harmonic structure, common on- and
offsets, coherent amplitude modulations, coherent frequency
modulations, coherent frequency transitions and binaural
effects.
5. Method according to one of the preceding claims,
characterized in that any other suitable features are
identified in addition to the auditory-based features.
6. Method according to one of the preceding claims,
characterized in that, to create auditory objects, the
auditory-based and any other features are grouped along the
principles of the Gestalt theory.
7. Method according to claim 6, characterized in that the
extraction of features and/or the grouping of the features
are/is performed either in context-independent or in
context-dependent fashion in the sense of human auditory
perception, based upon additional information or hypotheses
relative to the signal content and providing an adaptation
to the respective acoustic situation.

8. Method according to one of the preceding claims,
characterized in that, during the identification phase,
data are accessed which were acquired in an off-line
training phase.
9. Method according to one of the preceding claims,
characterized in that the extraction phase and the
identification phase take place in continuous fashion or at
regular or irregular time intervals.
10. Use of the method according to one of the claims 1 to 9
to adjust a hearing device (9) to a momentary acoustic
situation.
11. Use according to claim 10, characterized in that, on
the basis of a detected momentary acoustic situation, a
program or a transmission function between at least one
microphone (2a, 2b) and a receiver (6) in the hearing
device (1) is selected.
12. Use according to claim 9 or 10, characterized in that,
in response to a detected momentary acoustic situation, any
other function is triggered in the hearing device (1).
13. Use of the method according to one of the claims 1 to 9
to recognize speech.

14. Hearing device (1) with a transmission unit (4) which
is, on its input side, operationally connected to at least
one microphone (2a, 2b) and, on its output side, to a
receiver (6), characterized in that the input signal of the
transmission unit (4) is simultaneously fed to a signal
analyzing unit (7) for the extraction of at least auditory-
based features, and that the signal analyzing unit (7) is
operationally connected to a signal identifying unit (8) in
which the momentary acoustic situation is determined, and
that the signal identifying unit (8) is operationally
connected to the transmission unit (4) to adjust a program
or a transfer function.
15. Hearing device (1) according to claim 14, characterized
in that a user input unit (11) is provided which is
operationally connected to the transmission unit (4).
16. Hearing device (1) according to claim 14 or 15,
characterized in that a control unit (9) is provided and
that the signal identifying unit (8) is operationally
connected to said control unit (9).
17. Hearing device (1) according to claim 15 or 16,
characterized in that the user input unit (11) is
operationally connected to the control unit (9).
18. Hearing device (1) according to one of the claims 14 to
17, characterized in that any means serving to transfer
parameters from a training unit (10) to the signal
identifying unit (8) are provided.

Description

Note: Descriptions are shown in the official language in which they were submitted.


CA 02400104 2002-08-13
METHOD FOR DETERMINING A CURRENT ACOUSTIC ENVIRONMENT, USE
OF SAID METHOD AND A HEARING-AID
The present invention is related to a method for
determining a momentary acoustic situation, the use of the
method for hearing devices and a hearing device.
Modern-day hearing devices, when employing different
hearing programs - typically two to a maximum of three such
hearing programs - permit their adaptation to varying
acoustic situations. The idea is to optimize the
effectiveness of the hearing device for its user in all
situations.
The hearing program can be selected either via a remote
control or by means of a switch on the hearing device
itself. For many users, however, having to switch program
settings is a nuisance, or difficult, or even impossible.
Nor is it always easy even for experienced wearers of
hearing devices to determine at what point in time which
program is most comfortable and offers optimal speech
discrimination. An automatic recognition of the acoustic
situation and corresponding automatic switching of the
hearing program in the hearing device is therefore
desirable.

At present, there exist several different approaches to the
automatic classification of acoustic situations. All of the
methods concerned involve the extraction of different
features from the input signal which may come from one or
several microphones in the hearing device. Based on these
features, a pattern-recognition device employing a
particular algorithm makes a determination as to the
attribution of the analyzed signal to a specific acoustic
situation. These various existing methods differ from one
another both in terms of the features on the basis of which
they define the acoustic situation (signal analysis) and
with regard to the pattern-recognition device which serves
to classify these features (signal identification).
For the extraction of features in audio signals, J.M. Kates
in his article entitled "Classification of Background
Noises for Hearing-Aid Applications" (1995, Journal of the
Acoustic Society of America 97(1), pp 461 - 469), suggested
an analysis of time-related sound-level fluctuations and of
the spectrum. Furthermore, an analysis of the amplitude
histogram is proposed in the European patent having the
number EP-A1-0 732 036 in order to reach the same goal.
Finally, the extraction of features has been investigated
and implemented based on an analysis of different
modulation frequencies. In this connection, reference is
made to the two papers by Ostendorf et al. entitled
"Empirical Classification of Different Acoustic Signals and
of Speech by Means of a Modulation-Frequency Analysis"
(1997, DAGA 97, pp 608 - 609), and "Classification of
Acoustic Signals Based on the Analysis of Modulation
Spectra for Application in Digital Hearing Devices" (1998,
DAGA 98, pp 402 - 403). A similar approach is disclosed in
an article by Edwards et al. entitled "Signal-processing
algorithms for a new software-based, digital hearing
device" (1998, The Hearing Journal 51, pp 44 - 52). Other
possible features include the sound level itself or the
zero-crossing rate as described e.g. in the article by H.L.
Hirsch entitled "Statistical Signal Characterization"
(Artech House 1992). It is evident that the features used
to date for the analysis of audio signals are strictly
technically based.
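Purely as an illustration (not from the patent or the cited publications), two of the technically-based features just named, the sound level and the zero-crossing rate, could be computed per signal frame roughly as follows; the frame size and sample values are invented:

```python
# Illustrative only - not from the patent. Two technically-based
# features of a signal frame: RMS sound level and zero-crossing rate.
# Frame contents are invented example values.

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def rms_level(frame):
    """Root-mean-square level of the frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

# A rapidly alternating frame crosses zero between every pair of samples.
frame = [1.0, -1.0] * 8
print(zero_crossing_rate(frame))  # 1.0
print(rms_level(frame))           # 1.0
```

A slowly varying frame would score a correspondingly low zero-crossing rate, which is what makes the feature useful for separating noise-like from tonal signals.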
It is fundamentally possible to use prior art pattern
identification methods for sound classification purposes.
Particularly suitable pattern recognition systems are the
so-called distance classifiers, Bayes classifiers, fuzzy
logic systems or neural networks. Details of the first two
of the methods mentioned are contained in the publication
entitled "Pattern Classification and Scene Analysis" by
Richard O. Duda and Peter E. Hart (John Wiley & Sons,
1973). For information on neural networks, reference is
made to the treatise by Christopher M. Bishop, entitled
"Neural Networks for Pattern Recognition" (1995, Oxford
University Press). Reference is also made to the following
publications: Ostendorf et al., "Classification of
Acoustic Signals Based on the Analysis of Modulation
Spectra for Application in Digital Hearing Devices"
(Zeitschrift für Audiologie (Journal of Audiology), pp 148
- 150); F. Feldbusch, "Sound Recognition Using Neural
Networks" (1998, Journal of Audiology, pp 30 - 36);
European patent application having publication number
EP-A1-0 814 636; and US patent having publication number
US-5 604 812. Yet all of the pattern-recognition methods
mentioned are deficient in one respect in that they merely
model static properties of the sound categories of
interest.
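As a hedged sketch of the simplest classifier family named above, a distance classifier can be reduced to a nearest-mean rule: each sound category is summarized by the mean of its training feature vectors, and an unknown vector is attributed to the category with the closest mean. The category names and feature values below are invented for illustration:

```python
# Hedged sketch of a nearest-mean ("distance") classifier. Categories,
# feature vectors and values are invented for illustration.

def train_means(labelled_vectors):
    """labelled_vectors: dict mapping category -> list of feature vectors."""
    means = {}
    for category, vectors in labelled_vectors.items():
        dim = len(vectors[0])
        means[category] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return means

def classify_nearest_mean(vector, means):
    """Return the category whose mean vector is closest (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda c: dist2(vector, means[c]))

# Toy training data: two 2-dimensional feature vectors per category.
training = {
    "speech": [[0.9, 0.2], [1.1, 0.3]],
    "noise":  [[0.1, 0.8], [0.2, 1.0]],
}
means = train_means(training)
print(classify_nearest_mean([1.0, 0.25], means))  # speech
```

Such a rule models only static category properties, which is exactly the deficiency the text attributes to these methods.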
The known methods for sound classification, involving
feature extraction and pattern recognition, have the
drawback that, although unambiguous and solid
identification of speech signals is basically possible, a
number of different acoustic situations cannot be
satisfactorily classified, or not at all. While these known
methods permit a distinction between pure speech signals
and "non-speech" signals - meaning all other acoustic
situations - this is not enough for selecting an optimal
hearing program for a momentary acoustic situation. As a
result thereof, the number of possible hearing programs is
either limited to those two automatically recognizable
acoustic situations or the hearing device wearer
himself has to recognize the acoustic situations that are
not covered while manually selecting the appropriate
hearing program.
It is therefore the object of the present invention to
introduce first of all a method for determining a momentary
acoustic situation, which method, compared to prior-art
methods, is substantially more reliable and more precise.
This is accomplished by measures specified in claim 1.
Advantageous embodiments of the present invention, a use of
the method as well as a hearing device are specified in
additional claims.
The invention is based on an extraction of signal features
with the subsequent separation of different sound sources
as well as an identification of different sounds. Instead
of or besides technically-based features, auditory-based
features are taken into account in the signal analyzing
process for the feature extraction. These auditory-based
features are determined by the method of Auditory Scene
Analysis (ASA). In a further embodiment of the method
according to the present invention, a grouping of the
features using the Gestalt principles is employed in a
context-dependent or context-independent manner. The
identification or classification, respectively, of the
audio signals is performed by using, in a preferred
embodiment, Hidden Markov Models (HMM) applied on the
extracted features. The present invention has the advantage
that the number of recognizable sound categories and
therewith the number of hearing programs is increased. As a
result thereof, the performance of the sound classification
and therewith the comfort is increased for the user of the
hearing device.
The following will explain the present invention in more
detail by way of an example with reference to a drawing.
The only figure is a functional block diagram of a hearing
device in which the method according to the present
invention has been implemented.
In the only figure, a hearing device is identified by 1.
For the purpose of the following description, the term
"hearing device" is intended to include hearing aids as
used to compensate for the hearing impairment of a person,
but also all other acoustic communication systems such as
e.g. radio transceivers.
The hearing device 1 incorporates in conventional fashion
two electro-acoustic converters 2a, 2b and 6, these being
one or several microphones 2a, 2b and a speaker 6, also
referred to as receiver. A main component of the hearing
device 1 is a transmission unit identified by 4 in which
transmission unit 4, in case of a hearing aid, signal
modification takes place in adaptation to the requirements
of the user of the hearing device 1. However, the
operations performed in the transmission unit 4 are not
only a function of the nature of a specific purpose of the
hearing device 1 but are also, and particularly, a function
of the momentary acoustic situation. For this reason, there
have already been hearing aids on the market where the
wearer can manually switch between different hearing
programs tailored to specific acoustic situations. There
also exist hearing aids capable of automatically
recognizing the acoustic situation. In this connection,
reference is again made to the European patents EP-A1-0
732 036 and EP-A1-0 814 636 and to the US patent 5 604 812,
as well as to the "Claro Autoselect" brochure by Phonak
Hearing Systems (28148 (GH)/0300, 1999).
In addition to the aforementioned components - such as
microphones 2a, 2b, the transmission unit 4 and the
receiver 6 - the hearing device 1 contains a signal
analyzing unit 7 and a signal identifying unit 8. If the
hearing device 1 is based on digital technology, one or
several analog-to-digital converters 3a, 3b are arranged in
between the microphones 2a, 2b and the transmission unit 4
and one digital-to-analog converter 5 is provided in
between the transmission unit 4 and the receiver 6. While a
digital implementation of the invention is preferred, it is
equally possible to use analog components throughout. In
this case, of course, the converters 3a, 3b and 5 are not
needed.
The signal analyzing unit 7 receives the same input signal
as the transmission unit 4. Finally, the signal identifying
unit 8, which is connected to the output of the signal
analyzing unit 7, is connected to the transmission unit 4
and to a control unit 9.
A training unit is identified by 10, which training unit 10
serves to establish in off-line operation the parameters
required in the signal identifying unit 8 for the
classification process.
By means of a user input unit 11, the user can override the
settings of the transmission unit 4 and the control unit 9
as established by the signal analyzing unit 7 and the
signal identifying unit 8.
The method according to the present invention is explained
as follows:
It is essentially based on the fact that characteristic
features are extracted from an acoustic signal in an
extraction phase, whereby, instead of or in addition to the
technically-based features - such as the above-mentioned
zero-crossing rates, time-related sound-level fluctuations,
different modulation frequencies, the sound level itself,
the spectral peak, the amplitude distribution, etc. -
auditory-based features are employed as well. These
auditory-based features are determined by means of an
Auditory Scene Analysis (ASA) and include in particular the
loudness, the spectral pattern (timbre), the harmonic
structure (pitch), common build-up and decay times
(on-/offsets), coherent amplitude modulations, coherent
frequency modulations, coherent frequency transitions,
binaural effects, etc. Detailed descriptions of Auditory
Scene Analysis can be found e.g. in the articles by A.
Bregman, "Auditory Scene Analysis" (MIT Press, 1990) and
W.A. Yost, "Fundamentals of Hearing - An Introduction"
(Academic Press, 1977). The individual auditory-based
features are described, inter alia, by A. Yost and S. Sheft
in "Auditory Perception" (published in "Human
Psychophysics" by W.A. Yost, A.N. Popper and R.R. Fay,
Springer 1993), by W.M. Hartmann in "Pitch, Periodicity,
and Auditory Organization" (Journal of the Acoustical
Society of America, 100(6), pp 3491 - 3502, 1996), and by
D.K. Mellinger and B.M. Mont-Reynaud in "Scene Analysis"
(published in "Auditory Computation" by H.L. Hawkins, T.A.
McMullen, A.N. Popper and R.R. Fay, Springer 1996).
As an example of the use of auditory-based features in
signal analysis, mention may be made of characterizing the
tonality of the acoustic signal by analyzing its harmonic
structure, which is particularly useful for the
identification of tonal signals such as speech and music.
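As an illustrative sketch only (the text does not prescribe an algorithm for analyzing the harmonic structure), a crude tonality measure can be derived from the normalized autocorrelation of a signal frame: a pronounced peak at a non-zero lag indicates a periodic, and hence tonal, signal, while noise-like or impulsive frames score low. All numbers below are invented:

```python
# Purely illustrative sketch: the patent does not say how the harmonic
# structure is analyzed. One common approach estimates the "pitch
# strength" of a frame from its normalized autocorrelation; a strong
# peak at some non-zero lag suggests a tonal (harmonic) signal.

def pitch_strength(frame, min_lag=2):
    """Maximum normalized autocorrelation over lags >= min_lag."""
    energy = sum(x * x for x in frame)
    if energy == 0.0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, len(frame) // 2):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        best = max(best, r / energy)
    return best

# A periodic (square-like) frame scores high; an impulse scores 0.
periodic = [1.0, 1.0, -1.0, -1.0] * 16
impulse = [1.0] + [0.0] * 63
print(pitch_strength(periodic))  # 0.9375
print(pitch_strength(impulse))   # 0.0
```

Thresholding such a measure would separate tonal frames (speech, music) from noise-like ones, in the spirit of the tonality example above.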
In a further embodiment of the method according to the
present invention, a grouping of the features is
employed in the signal analyzing unit 7 by using Gestalt
principles. This process applies the principles of the
Gestalt theory, by which such qualitative properties as
continuity, proximity, similarity, common fate, unity, good
continuation and others are examined, to the auditory-based
or perhaps technically-based features for the creation of
auditory objects. This grouping - and, for that matter, the
extraction of features in the extraction phase - can take
place in context-independent fashion, i.e. without any
enhancement by additional knowledge (so-called "primitive"
grouping), or in context-dependent fashion in the sense of
human auditory perception employing additional information
or hypotheses regarding the signal content (so-called
"schema-based" grouping). This means that the context-
dependent grouping is adapted to any given acoustic
situation. For a detailed explanation of the principles of
the Gestalt theory and of the grouping process employing
Gestalt analysis, substitutional reference is made to the
publications entitled "Perception Psychology" by E.B.
Goldstein (Spektrum Akademischer Verlag, 1997), "Neural
Fundamentals of Gestalt Perception" by A.K. Engel and W.
Singer (Spektrum der Wissenschaft, 1998, pp 66 - 73), and
"Auditory Scene Analysis" by A. Bregman (MIT Press, 1990).
The advantage of applying this grouping process lies in the
fact that it allows further differentiation of the features
of the input signals. In particular, signal segments are
identifiable, which originate in different sound sources.
The extracted features can thus be mapped to specific
individual sound sources, providing additional information
on these sources and, hence, on the current auditory
situation.
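The "primitive" grouping described above can be hinted at with a deliberately simplified sketch (component names, onset times and the tolerance are invented): spectral components whose onsets coincide within a small tolerance are attributed, by the common-onset principle, to the same auditory object:

```python
# Loose sketch of "primitive" grouping by common onsets: spectral
# components whose estimated onset times lie within a small tolerance
# of a group's first onset are put into the same auditory object.
# Components, times and the tolerance are invented for illustration.

def group_by_onset(onsets, tolerance=0.02):
    """onsets: dict component -> onset time (s). Returns list of groups."""
    groups = []
    for comp, t in sorted(onsets.items(), key=lambda kv: kv[1]):
        if groups and t - groups[-1]["t0"] <= tolerance:
            groups[-1]["members"].append(comp)
        else:
            groups.append({"t0": t, "members": [comp]})
    return [g["members"] for g in groups]

# Three harmonics starting together form one object; a later tone does not.
onsets = {"200Hz": 0.100, "400Hz": 0.105, "600Hz": 0.110, "1kHz": 0.500}
print(group_by_onset(onsets))  # [['200Hz', '400Hz', '600Hz'], ['1kHz']]
```

Each resulting group stands for one candidate sound source, which is the mapping from features to sources that the paragraph above describes.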
The second aspect of the method according to the present
invention as described here relates to pattern recognition,
i.e. the signal identification that takes place during the
identification phase. The preferred form of implementation
of the method according to the present invention employs
the Hidden Markov Model (HMM) method in the signal
identifying unit 8 for the automatic classification of the
acoustic situation. This also permits the use of time
changes of the computed characteristics for the
classification process. Accordingly, it is possible to also
take into account dynamic and not only static properties of
the acoustic situation and the sound categories. Equally
possible is a combination of HMMs with other classifiers
such as multi-stage recognition processes for identifying
the acoustic situation.
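Since the text names Hidden Markov Models without giving any parameters, the following is only a toy sketch of the identification step: the likelihood of an observed feature sequence is evaluated under a small discrete HMM per sound category using the forward algorithm, and the best-scoring category is returned. All models and numbers are invented:

```python
# Toy sketch only - the patent names HMMs but specifies no parameters.
# forward_likelihood computes P(obs | model) for a discrete-output HMM;
# classify_hmm picks the category whose model scores highest.

def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-output HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
            for t in range(n)
        ]
    return sum(alpha)

def classify_hmm(obs, models):
    """models: dict category -> (start, trans, emit) parameter triple."""
    return max(models, key=lambda c: forward_likelihood(obs, *models[c]))

# Two toy 1-state models differing only in their emission probabilities.
models = {
    "speech": ([1.0], [[1.0]], [[0.8, 0.2]]),
    "noise":  ([1.0], [[1.0]], [[0.2, 0.8]]),
}
print(classify_hmm([0, 0, 1, 0], models))  # speech
```

With more than one state per model, the state transitions capture the time structure of a sound category, which is the dynamic property the paragraph above credits HMMs with.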
The output signal of the signal identifying unit 8 thus
contains information on the nature of the acoustic
surroundings (the acoustic situation). This information is
fed to the transmission unit 4 in which the program or set
of parameters most suitable for the identified acoustic
situation is selected for the transmission. At the same time,
the information gathered in the signal identifying unit 8
is fed to the control unit 9 for further actions whereby,
depending on the situation, any given action, such as an
acoustic signal, can be triggered.
If Hidden Markov Models are being used in the
identification phase, a complex process is required for
establishing the parameters needed for the classification.
This parameter determination is therefore best done in the
off-line mode, individually for each category at a time.
The actual identification of various acoustic situations
requires very little memory space and computational
capacity. It is therefore recommended that a training unit
be provided which has enough computing power for
parameter determination and which can be connected via
appropriate means to the hearing device 1 for data transfer
purposes. The connecting means mentioned may be simple
wires with suitable plugs.
The method according to the present invention thus makes it
possible to select from among numerous available settings
and automatically pollable actions the one best suited
without the need for the user of the device to make the
selection. This makes the device significantly more
comfortable for the user since upon the recognition of a
new acoustic situation it promptly and automatically
selects the right program or function in the hearing device
1.
Users of hearing devices often want to switch off the
automatic recognition of the acoustic situation and
corresponding automatic program selection, described above.
For this purpose a user input unit 11 is provided by means
of which it is possible to override the automatic response
or program selection. The user input unit 11 may be in the form
of a switch on the hearing device 1 or a remote control
which the user can operate. There are also other options
which offer themselves, for instance a voice-activated user
input device.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01:As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Inactive: IPC expired 2013-01-01
Inactive: IPC from MCD 2006-03-12
Inactive: IPC from MCD 2006-03-12
Application Not Reinstated by Deadline 2006-01-05
Time Limit for Reversal Expired 2006-01-05
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2005-01-05
Letter Sent 2003-05-09
Inactive: Single transfer 2003-03-25
Inactive: Cover page published 2002-12-23
Inactive: Courtesy letter - Evidence 2002-12-23
Inactive: Notice - National entry - No RFE 2002-12-18
Application Received - PCT 2002-10-04
National Entry Requirements Determined Compliant 2002-08-13
Application Published (Open to Public Inspection) 2001-03-29

Abandonment History

Abandonment Date Reason Reinstatement Date
2005-01-05

Maintenance Fee

The last payment was received on 2003-12-08

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Year Due Date Paid Date
Basic national fee - standard 2002-08-13
MF (application, 2nd anniv.) - standard 02 2003-01-06 2002-11-06
Registration of a document 2003-03-25
MF (application, 3rd anniv.) - standard 03 2004-01-05 2003-12-08
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
PHONAK AG
Past Owners on Record
MICHAEL BUCHLER
SILVIA ALLEGRO
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD .

If you have any difficulty accessing content, you can call the Client Service Centre at 1-866-997-1936 or send them an e-mail at CIPO Client Service Centre.


Document
Description 
Date
(yyyy-mm-dd) 
Number of pages   Size of Image (KB) 
Representative drawing 2002-12-19 1 6
Description 2002-08-12 12 433
Abstract 2002-08-12 2 86
Drawings 2002-08-12 1 10
Claims 2002-08-12 5 117
Notice of National Entry 2002-12-17 1 189
Courtesy - Certificate of registration (related document(s)) 2003-05-08 1 107
Courtesy - Abandonment Letter (Maintenance Fee) 2005-03-01 1 174
Reminder - Request for Examination 2005-09-06 1 116
PCT 2002-08-12 6 196
Correspondence 2002-12-18 1 25
Fees 2002-11-05 1 32
Fees 2003-12-07 1 30