Patent Summary 3195489

(12) Patent Application: (11) CA 3195489
(54) French Title: SYSTEME ET PROCEDE D'ASSISTANCE AUDITIVE
(54) English Title: SYSTEM AND METHOD FOR AIDING HEARING
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04R 25/00 (2006.01)
  • A61B 5/12 (2006.01)
(72) Inventors:
  • OLAH, LASLO (United States of America)
  • SOKOLOVSKII, GRIGORII (United States of America)
  • LOSEV, SERGEY (United States of America)
  • SOKOLOVSKAYA, EKATERINA (United States of America)
(73) Owners:
  • TEXAS INSTITUTE OF SCIENCE, INC.
(71) Applicants:
  • TEXAS INSTITUTE OF SCIENCE, INC. (United States of America)
(74) Agent: HAUGEN, J. JAY
(74) Co-agent:
(45) Issued:
(86) PCT Filing Date: 2021-04-27
(87) Open to Public Inspection: 2022-03-31
Examination Requested: 2023-03-15
Availability of Licence: N/A
Dedicated to the Public: N/A
(25) Language of Filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Application Number: PCT/US2021/029414
(87) International Publication Number: WO 2022/066223
(85) National Entry: 2023-03-15

(30) Application Priority Data:
Application No.    Country/Territory    Date
17/029,764    (United States of America)    2020-09-23

Abstracts

French Abstract

La présente invention concerne un mode de réalisation du système (300), dans lequel une interface de programmation (16) est conçue pour communiquer avec un dispositif. Le système (300) crible, par l'intermédiaire d'un haut-parleur et une interface utilisateur (16) associée au dispositif, une oreille gauche et séparément, une oreille droite d'un patient. Le système (300) détermine ensuite une plage d'audition d'oreille gauche et une plage d'audition d'oreille droite. Les plages d'audition sont modifiées avec une évaluation subjective de la qualité sonore en fonction du patient. L'évaluation subjective de la qualité du son selon le patient peut être une évaluation complète d'un degré de gêne causé au patient par une déficience du son souhaité.


English Abstract

In one embodiment of the system (300), a programming interface (16) is configured to communicate with a device. The system (300) screens, via a speaker and a user interface (16) associated with the device, a left ear - and separately, a right ear - of a patient. The system (300) then determines a left ear hearing range and a right ear hearing range. The hearing ranges are modified with a subjective assessment of sound quality according to the patient. The subjective assessment of sound quality according to the patient may be a completed assessment of a degree of annoyance caused to the patient by an impairment of wanted sound.

Claims

Note: The claims are presented in the official language in which they were submitted.


What is claimed is:
1. A system (300) for aiding hearing, the system (300) comprising:
a programming interface (16) configured to communicate with a device, the device including a housing (322) securing a speaker, a user interface (16), a processor (254), non-transitory memory (252), and storage (374) therein, the device including a busing architecture (380) communicatively interconnecting the speaker, the user interface (16), the processor (254), the memory (252), and the storage (374);
the non-transitory memory (252) accessible to the processor (254), the non-transitory memory (252) including processor-executable instructions that, when executed by the processor (254), cause the system (300) to:
screen, via the speaker and the user interface (16), a left ear of a patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz at a first increment at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a second increment, the second increment being more discrete than the first increment;
screen, via the speaker and the user interface (16), the left ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz at a third increment at a decibel range of 10dB to 120dB, with detected frequencies to be re-range tested at a fourth increment, the fourth increment being more discrete than the third increment;
determine a left ear preferred hearing range, the left ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the left ear of the patient between 50Hz and 10,000Hz;
screen, via the speaker and the user interface (16), a right ear of the patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz at the first increment at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at the second increment;
screen, via the speaker and the user interface (16), the right ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz at the third increment at a decibel range of 10dB to 120dB, with detected frequencies to be re-range tested at the fourth increment;
determine a right ear preferred hearing range, the right ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the right ear of the patient between 50Hz and 10,000Hz;
for the left ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds;
modify the left ear preferred hearing range with a subjective assessment of sound quality according to the patient;
for the right ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds; and
modify the right ear preferred hearing range with a subjective assessment of sound quality according to the patient.
2. The system (300) as recited in claim 1, wherein the test of different sounds further comprises a sound selected from the group consisting of human voices, musical sounds, animal sounds, and vehicular sounds.
3. The system (300) as recited in claim 1, wherein the device further comprises a smart device.
4. The system (300) as recited in claim 3, wherein the smart device further comprises a device selected from the group consisting of smart watches, smart phones, and tablet computers.
5. The system (300) as recited in claim 1, wherein the device further comprises a device selected from the group consisting of computers and headset (436) hearing tester.
6. The system (300) as recited in claim 1, wherein the processor (254) executable instructions further comprise processor (254) executable instructions that, when executed, cause the processor (254) to utilize distributed processing between the device and a server (320) to screen, via the speaker and the user interface (16), the left ear of the patient.
7. The system (300) as recited in claim 1, wherein the processor (254) executable instructions further comprise processor (254) executable instructions that, when executed, cause the processor (254) to utilize distributed processing between the device and a server (320) to screen, via the speaker and the user interface (16), each of the left ear of the patient and the right ear of the patient.
8. A system (300) for aiding hearing, the system (300) comprising:
a programming interface (16) configured to communicate with a smart device, the smart device including a housing (322) securing a speaker, a user interface (16), a processor (254), non-transitory memory (252), and storage (374) therein, the device including a busing architecture (380) communicatively interconnecting the speaker, the user interface (16), the processor (254), the memory (252), and the storage (374), the smart device being a device selected from the group consisting of smart watches, smart phones, and tablet computers;
the non-transitory memory (252) accessible to the processor (254), the non-transitory memory (252) including processor-executable instructions that, when executed by the processor (254), cause the system (300) to:
screen, via the speaker and the user interface (16), a left ear of a patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz at 50Hz increments at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a 5Hz to 20Hz increment;
screen, via the speaker and the user interface (16), the left ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz in 200Hz increments at a decibel range of 10dB to 120dB, with detected frequencies to be re-range tested at a 5Hz to 20Hz increment to better identify the frequencies and decibel levels heard;
determine a left ear preferred hearing range, the left ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the left ear of the patient between 50Hz and 10,000Hz;
screen, via the speaker and the user interface (16), a right ear of the patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz at 50Hz increments at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a 5Hz to 20Hz increment;
screen, via the speaker and the user interface (16), the right ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz in 200Hz increments at a decibel range of 10dB to 120dB, with detected frequencies to be re-range tested at a 5Hz to 20Hz increment to better identify the frequencies and decibel levels heard;
determine a right ear preferred hearing range, the right ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the right ear of the patient between 50Hz and 10,000Hz;
for the left ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds;
modify the left ear preferred hearing range with a subjective assessment of sound quality according to the patient;
for the right ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds; and
modify the right ear preferred hearing range with a subjective assessment of sound quality according to the patient.
9. The system (300) as recited in claim 8, wherein the processor (254) executable instructions further comprise processor (254) executable instructions that, when executed, cause the processor (254) to utilize distributed processing between the device and a server (320) to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, determine the right ear preferred hearing range, complete the assessment of the left ear, complete the assessment of the right ear, modify the left ear preferred hearing range, and modify the right ear preferred hearing range.
10. A system (300) for aiding hearing, the system (300) comprising:
a programming interface (16) configured to communicate with a device, the device including a housing (322) securing a speaker, a user interface (16), a processor (254), non-transitory memory (252), and storage (374) therein, the device including a busing architecture (380) communicatively interconnecting the speaker, the user interface (16), the processor (254), the memory (252), and the storage (374);
the non-transitory memory (252) accessible to the processor (254), the non-transitory memory (252) including processor-executable instructions that, when executed by the processor (254), cause the system (300) to:
screen, via the speaker and the user interface (16), a left ear of a patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz, with detected frequencies being re-range tested to better identify the frequencies and decibel levels heard;
screen, via the speaker and the user interface (16), the left ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz, with detected frequencies to be re-range tested to better identify the frequencies and decibel levels heard;
determine a left ear preferred hearing range, the left ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the left ear of the patient between 50Hz and 10,000Hz;
screen, via the speaker and the user interface (16), a right ear of the patient at an incrementally selected frequency between a frequency range of 50Hz to 5,000Hz, with detected frequencies being re-range tested to better identify the frequencies and decibel levels heard;
screen, via the speaker and the user interface (16), the right ear of the patient at an incrementally selected frequency between a frequency range of 5,000Hz to 10,000Hz, with detected frequencies to be re-range tested to better identify the frequencies and decibel levels heard;
determine a right ear preferred hearing range, the right ear preferred hearing range being a range of sound corresponding to highest hearing capacity of the right ear of the patient between 50Hz and 10,000Hz;
for the left ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds;
modify the left ear preferred hearing range with a subjective assessment of sound quality according to the patient;
for the right ear preferred hearing range, complete an assessment of a degree of annoyance caused to the patient by an impairment of wanted sound through a test of different sounds; and
modify the right ear preferred hearing range with a subjective assessment of sound quality according to the patient.
Description

Note: The descriptions are presented in the official language in which they were submitted.


CA 03195489 2023-03-15
WO 2022/066223
PCT/US2021/029414
SYSTEM AND METHOD FOR AIDING HEARING
TECHNICAL FIELD OF THE INVENTION
This invention relates, in general, to hearing aids and, in particular, to
systems and
methods that aid hearing to provide signal processing and feature sets to
enhance speech and
sound intelligibility.
BACKGROUND OF THE INVENTION
Hearing loss can affect anyone at any age, although elderly adults more
frequently
experience hearing loss. Untreated hearing loss is associated with lower
quality of life and
can have far-reaching implications for the individual experiencing
hearing loss as well as
those close to the individual. As a result, there is a continuing need for
improved hearing
aids and methods for use of the same that enable patients to better hear
conversations and the
like.
SUMMARY OF THE INVENTION
It would be advantageous to achieve a hearing aid and method for use of the
same
that would significantly change the course of existing hearing aids by adding
features to
correct existing limitations in functionality. It would also be desirable to
enable a
mechanical and electronics-based solution that would provide enhanced
performance and
improved usability with an enhanced feature set. To better address one or more
of these
concerns, a system and method for aiding hearing are disclosed. In one
embodiment of the
system, a programming interface is configured to communicate with a device.
The system
screens, via a speaker and a user interface associated with the device, a left
ear - and
separately, a right ear - of a patient. The system then determines a left ear
hearing range and
a right ear hearing range. The hearing ranges are modified with a subjective
assessment of
sound quality according to the patient. The subjective assessment of sound
quality
according to the patient may be a completed assessment of a degree of
annoyance caused to
the patient by an impairment of wanted sound, for example. By way of further
example, the
subjective assessment of sound quality according to the patient may be a
completed
assessment of a degree of pleasantness caused to the patient by an enablement
of wanted
sound. By way of still further example, the subjective assessment may be the
best sound
quality to the patient.
In another embodiment of the system, a programming interface is configured to
communicate with a device. The system screens, via a speaker and a user
interface
associated with the device, a left ear - and separately, a right ear - of a
patient at an
incrementally selected frequency between a frequency range of 50Hz to 5,000Hz,
with
detected frequencies being re-ranged tested to better identify the frequencies
and decibel
levels heard. A frequency range of 5,000Hz to 10,000Hz is then tested. The
system then
determines a left ear preferred hearing range and a right ear preferred
hearing range. The
preferred hearing ranges are modified with a subjective assessment of sound
quality
according to the patient. These and other aspects of the invention will be
apparent from and
elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the features and advantages of the
present
invention, reference is now made to the detailed description of the
invention along with the
accompanying figures in which corresponding numerals in the different figures
refer to
corresponding parts and in which:
Figure 1A is a front perspective schematic diagram depicting one embodiment of
a
hearing aid being programmed with one embodiment of a system for aiding
hearing,
according to the teachings presented herein;
Figure 1B is a top plan view depicting the hearing aid of figure 1A being
utilized
according to the teachings presented herein;
Figure 2 is a front perspective view of one embodiment of the hearing aid
depicted in
figure 1A;
Figure 3A is a front-left perspective view of another embodiment of the
hearing aid
depicted in figure 1A;
Figure 3B is a front-right perspective view of the embodiment of the hearing
aid
depicted in figure 3A;
Figure 4 is a front perspective view of another embodiment of a hearing aid
being
programmed with one embodiment of a system for aiding hearing, according to
the teachings
presented herein;
Figure 5 is a flow chart depicting one method for aiding hearing, according to
the
teachings presented herein;
Figure 6 is a flow chart depicting one embodiment of a method for calibrating
and
setting the hearing aid for a preferred hearing range or preferred hearing
ranges, according to
the teachings presented herein;
Figure 7 is a flow chart depicting another embodiment of a method for
calibrating
and setting the hearing aid for a preferred hearing range or preferred hearing
ranges,
according to the teachings presented herein;
Figure 8 is a flow chart depicting one embodiment of modifying preferred
hearing
ranges with a subjective assessment of sound quality according to the patient;
Figure 9 is a front perspective schematic diagram depicting one embodiment of
a
system for aiding hearing, according to the teachings presented herein;
Figure 10 is a functional block diagram depicting one embodiment of the
hearing aid
depicted in figure 9;
Figure 11 is a functional block diagram of a smart device, which forms a
portion of
the system for aiding hearing depicted in figure 9;
Figure 12 is a functional block diagram depicting one embodiment of a server,
which
forms a portion of the system for aiding hearing depicted in figure 9;
Figure 13 is a front perspective schematic diagram depicting another
embodiment of
a system for aiding hearing, according to the teachings presented herein;
Figure 14 is a functional block diagram depicting one embodiment of hearing
aid test
equipment depicted in figure 13;
Figure 15 is a conceptual module diagram depicting a software architecture of
a
testing equipment application of some embodiments; and
Figure 16 is a graph of one embodiment of a completion of an assessment of a
patient.
DETAILED DESCRIPTION OF THE INVENTION
While the making and using of various embodiments of the present invention are
discussed in detail below, it should be appreciated that the present invention
provides many
applicable inventive concepts, which can be embodied in a wide variety of
specific contexts.
The specific embodiments discussed herein are merely illustrative of specific
ways to make
and use the invention, and do not delimit the scope of the present invention.
Referring initially to figure 1A and figure 1B, therein is depicted one
embodiment of
a hearing aid, which is schematically illustrated and designated 10. The
hearing aid 10 is
programmed according to a system for aiding hearing. As shown, a user U, who
may be
considered a patient requiring a hearing aid, is wearing the hearing aid 10
and sitting at a
table T at a restaurant or café, for example, and engaged in a conversation
with an individual
I1 and an individual I2. As part of a conversation at the table T, the user U is speaking sound S1, the individual I1 is speaking sound S2, and the individual I2 is speaking sound S3. Nearby, in the background, a bystander B1 is engaged in a conversation with a bystander B2. The bystander B1 is speaking sound S4 and the bystander B2 is speaking sound S5. An ambulance A is driving by the table T and emitting sound S6. The sounds S1, S2, and S3 may be described as the immediate background sounds. The sounds S4, S5, and S6 may be described as the background sounds. The sound S6 may be described as the dominant sound as it is the loudest sound at table T.
As will be described in further detail hereinbelow, the hearing aid 10 is
programmed
with a qualified sound range for each ear in a two-ear embodiment and for one
ear in a one-
ear embodiment. As shown, in the two-ear embodiment, the qualified sound range
may be a
range of sound corresponding to a preferred hearing range for each ear of the
user modified
with a subjective assessment of sound quality according to the user. The
preferred hearing
range may be a range of sound corresponding to the highest hearing capacity of
an ear of the
user U between 50Hz and 10,000Hz. Further, as shown, in the two-ear
embodiment, the
preferred hearing range for each ear may be multiple ranges of sound
corresponding to the
highest hearing capacity ranges of an ear of the user U between 50Hz and
10,000Hz. In
some embodiments of this multiple range of sound implementation, the various sounds S1 through S6 received may be transformed and divided into the multiple ranges of
sound. In
particular, the preferred hearing range for each ear may be an about 300Hz
frequency to an
about 500Hz frequency range of sound corresponding to highest hearing capacity
of a
patient.
The
subjective assessment according to the user may include a completed assessment
of a degree of annoyance caused to the user by an impairment of wanted sound.
The
subjective assessment according to the user may also include a completed
assessment of a
degree of pleasantness caused to the patient by an enablement of wanted sound.
That is, the
subjective assessment according to the user may include a completed assessment
to
determine best sound quality to the user. Sound received at the hearing
aid 10 is converted
to the qualified sound range prior to output, which the user U hears.
In one embodiment, the hearing aid 10 may create a pairing with a proximate
smart
device 12, such as a smart phone (depicted), smart watch, or tablet computer.
The proximate
smart device 12 includes a display 14 having an interface 16 having controls,
such as an
ON/OFF switch or volume controls 18 and mode of operation controls 20. A user
may send a
control signal wirelessly from the proximate smart device 12 to the hearing
aid 10 to control
a function, like volume controls 18. Further, in one embodiment, as shown by
processor
symbol P, after the hearing aid 10 creates the pairing with a proximate smart
device 12, the
hearing aid 10 and the proximate smart device 12 may leverage the wireless
communication
link therebetween and use processing distributed between the hearing aid 10
and the
proximate smart device 12 to process the signals and perform other analysis.
Referring to figure 2, as shown, in the illustrated embodiment, the hearing
aid 10 is
programmed according to the system for aiding hearing and the hearing aid 10
includes a left
body 32 and a right body 34 connected to a band member 36 that is configured
to partially
circumscribe the user U. Each of the left body 32 and the right body 34 cover
an external
ear of the user U and are sized to engage therewith. In some embodiments,
microphones 38,
40, 42, which gather sound directionally and convert the gathered sound into
an electrical
signal, are located on the left body 32. With respect to gathering
sound, the microphone 38
may be positioned to gather forward sound, the microphone 40 may be positioned
to gather
lateral sound, and the microphone 42 may be positioned to gather rear sound.
Microphones
may be similarly positioned on the right body 34. Various internal
compartments 44 provide
space for housing electronics, which will be discussed in further detail
hereinbelow. Various
controls 46 provide a patient interface with the hearing aid 10.
Having each of the left body 32 and the right body 34 cover an external ear of
the
user U and being sized to engage therewith confers certain benefits. Sound
waves enter
through the outer ear and reach the middle ear to vibrate the eardrum. The
eardrum then
vibrates the ossicles, which are small bones in the middle ear. The sound vibrations travel through the ossicles to the inner ear. When the sound vibrations reach the
cochlea, they push
against specialized cells known as hair cells. The hair cells turn the
vibrations into electrical
nerve impulses. The auditory nerve connects the cochlea to the auditory
centers of the brain.
When these electrical nerve impulses reach the brain, they are experienced as
sound. The
outer ear serves a variety of functions. The various air-filled cavities
composing the outer
ear, the two most prominent being the concha and the ear canal, have a
natural or resonant
frequency to which they respond best. This is true of all air-filled cavities.
The resonance of
each of these cavities is such that each structure increases the sound
pressure at its resonant
frequency by approximately 10 to 12 dB. In summary, among the functions of the
outer ear:
a) boost or amplify high-frequency sounds; b) provide the primary cue for the
determination
of the elevation of a sound's source; c) assist in distinguishing sounds that
arise from in front
of the listener from those that arise from behind the listener. Headsets are
used in hearing
testing in medical and associated facilities for a reason: tests have shown
that completely
closing the ear canal in order to prevent any form of outside noise plays
a direct role in acoustic matching. The more severe the hearing problem, the closer the hearing aid speaker must be to the ear drum. However, the closer the speaker is to the ear
drum, the more the
device plugs the canal and negatively impacts the ear's pressure system. That
is, the various
chambers of the ear have a defined operational pressure determined, in part,
by the ear's
structure. By plugging the ear canal, the pressure system in the ear is
distorted and the
operational pressure of the ear is negatively impacted.
As alluded, "plug size" hearing aids having limitations with respect to
distorting the
defined operational pressure within the ear. Considering the function of the
outer ear's air
filled cavities in increasing the sound pressure at resonant frequencies, the
hearing aid 10 of
figure 2 - and other figures - creates a closed chamber around the ear
increasing the pressure
within the chamber. This higher pressure plus the utilization of a more
powerful speaker
within the headset at qualified sound range, e.g., the frequency range the
user hears best with
the best quality sound, provides the ideal set of parameters for a powerful
hearing aid.
Referring to figure 3A and figure 3B, as shown, in the illustrated embodiment,
the
hearing aid 10 is programmed according to a system for aiding hearing and the
hearing aid
10 includes a left body 52 having an ear hook 54 extending from the left body
52 to an ear
mold 56. The left body 52 and the ear mold 56 may each at least partially
conform to the
contours of the external ear and sized to engage therewith. By way of example,
the left body
52 may be sized to engage with the contours of the ear in a behind-the-ear-
fit. The ear mold
56 may be sized to be fitted for the physical shape of a patient's ear. The
ear hook 54 may
include a flexible tubular material that propagates sound from the left body
52 to the ear
mold 56. Microphones 58, which gather sound and convert the gathered sound
into an
electrical signal, are located on the left body 52. An opening 60 within the
ear mold 56
permits sound traveling through the ear hook 54 to exit into the patient's
ear. An internal
compartment 62 provides space for housing electronics, which will be discussed
in further
detail hereinbelow. Various controls 64 provide a patient interface with the
hearing aid 10
on the left body 52 of the hearing aid 10.
As also shown, the hearing aid 10 includes a right body 72 having an ear hook
74
extending from the right body 72 to an ear mold 76. The right body 72 and the
ear mold 76
may each at least partially conform to the contours of the external ear and
sized to engage
therewith. By way of example, the right body 72 may be sized to engage with
the contours
of the ear in a behind-the-ear-fit. The ear mold 76 may be sized to be fitted
for the physical
shape of a patient's ear. The ear hook 74 may include a flexible tubular
material that
propagates sound from the right body 72 to the ear mold 76. Microphones 78,
which gather
sound and convert the gathered sound into an electrical signal, are located on
the right body
72. An opening 80 within the ear mold 76 permits sound traveling through the
ear hook 74
to exit into the patient's ear. An internal compartment 82 provides space for
housing
electronics, which will be discussed in further detail hereinbelow. Various
controls 84
provide a patient interface with the hearing aid 10 on the right body 72 of
the hearing aid 10.
It should be appreciated that the various controls 64, 84 and other components
of the left and
right bodies 52, 72 may be at least partially integrated and consolidated.
Further, it should
is be appreciated that the hearing aid 10 may have one or more microphones
on each of the left
and right bodies 52, 72 to improve directional hearing in certain
implementations and
provide, in some implementations, 360-degree directional sound input.
In one embodiment, the left and right bodies 52, 72 are connected at the
respective
ear hooks 54, 74 by a band member 90 which is configured to partially
circumscribe a head
or a neck of the patient. A compartment 92 within the band member 90 may
provide space
for electronics and the like. Additionally, the hearing aid 10 may include
left and right
earpiece covers 94, 96 respectively positioned exteriorly to the left and
right bodies 52, 72.
Each of the left and right earpiece covers 94, 96 isolate noise to block out
interfering outside
noises. To add further benefit, in one embodiment, the microphones 58 in the
left body 52
and the microphones 78 in the right body 72 may cooperate to provide
directional hearing.
Referring to figure 4, therein is depicted another embodiment of the hearing
aid 10
that is programmed with the system for aiding hearing. It should be
appreciated by a review
of figure 2 through figure 4 that the system for aiding hearing presented
herein may program
any type of hearing aid. As shown, in the illustrated embodiment in figure 4,
the hearing aid
10 includes a body 112 having an ear hook 114 extending from the body
112 to an ear mold
116. The body 112 and the ear mold 116 may each at least partially conform to
the contours
of the external ear and sized to engage therewith. By way of example, the body
112 may be
sized to engage with the contours of the ear in a behind-the-ear-fit. The ear
mold 116 may
be sized to be fitted for the physical shape of a patient's ear. The ear hook
114 may include
a flexible tubular material that propagates sound from the body 112 to the ear
mold 116. A
microphone 118, which gathers sound and converts the gathered sound into an
electrical
signal, is located on the body 112. An opening 120 within the ear mold 116
permits sound
traveling through the ear hook 114 to exit into the patient's ear. An internal
compartment
122 provides space for housing electronics, which will be discussed in further
detail
hereinbelow. Various controls 124 provide a patient interface with the hearing
aid 10 on the
body 112 of the hearing aid 10.
Referring now to figure 5, one embodiment of a method for aiding hearing is
depicted. The methodology starts at block 140, when a patient is going to
undergo screening
to determine the preferred hearing range or preferred hearing ranges for
programming a
hearing aid, such as the hearing aid 10. At block 142, a big profile is
created at a first
hearing range. By way of example, an ear of a patient may be screened at an
incrementally
selected frequency between a frequency range of 50Hz to 5,000Hz, with detected
frequencies noted. At block 144, another big profile is created, but at
a second hearing
range. By way of example, an ear of the patient may be screened at an
incrementally
selected frequency between a frequency range of 5,000Hz to 10,000Hz. At block
146, a
detailed profile is created by re-ranged testing where hearing was noted to
better identify the
frequencies and decibel levels heard. At block 148, a best sound quality
profile is created
whereby an assessment of a degree of annoyance caused to the patient by an
impairment of
wanted sound through a test of different sounds is completed. This provides
the preferred
hearing range modified with a subjective assessment of sound quality according
to the
patient and the methodology ends at block 150.
Referring now to figure 6, one embodiment of a method for calibrating and
setting
the hearing aid 10 for a preferred hearing range or preferred hearing ranges
utilizing the
methodology presented herein is shown. The method starts at block 160, when a
patient is
going to undergo testing to determine the preferred hearing range or preferred
hearing ranges
for use of the hearing aid 10. This method will provide the necessary
calibration and settings
for the hearing aid 10. The methodology screens a large number of hearing aid
screening
points and involve a methodology that is completely automated and overseen by
an assistant.
At block 162, an ear of the patient is selected. In one embodiment, a left ear
preferred
hearing range and a right ear preferred hearing range are determined
corresponding to the
left and right bodies 52, 72 of the hearing aid 10. Each of the left ear
preferred hearing range
and the right ear preferred hearing range may be a range of sounds
corresponding to the
highest hearing capacity of the respective ear of the patient. In one
implementation, this
range of sound is between 50Hz and 10,000Hz. Further, the left ear preferred
hearing range
and the right ear preferred hearing range may be mutually exclusive, at least
partially
overlap, or be identical. Further still, the left ear preferred hearing range
or the right ear
preferred hearing range may include multiple narrow hearing range bands. It
should be
appreciated that the profile of the left ear preferred hearing range and the
right ear preferred
hearing range will vary from patient to patient, i.e., user to user.
Once an ear is selected at block 162, the methodology proceeds to decision
block
164, where a round of testing is selected. In one embodiment, the hearing aid
testing
requires two rounds of testing. In a low frequency round, the ear under test
is tested between
50Hz and 5,000Hz at a variable increment, such as, for example, a 50Hz
increment, initially
with subsequent testing at a more discrete variable increment, such as, a 5Hz
to 20Hz
increment, for example, if hearing is detected. In a high frequency round, the
ear under test
is
tested between 5,000Hz and 10,000Hz at a variable increment, such as a 200Hz
increment,
for example, initially with subsequent testing at a more discrete increment,
such as a 5Hz to
20Hz increment, for example, if hearing is detected to better identify the
preferred hearing
range, whether the left ear preferred hearing range or the right ear preferred
hearing range.
With low frequency testing selected, the methodology advances to block 166
where the
frequency to be tested is selected. For the low frequency testing, the testing
begins at 50Hz
and is progressively increased at a variable increment to 5,000Hz with
subsequent iterations.
At block 168, the decibel level is selected. The decibel level begins at 10dB
and increases to
120dB with subsequent iterations. At block 170, the test sound at the selected
frequency and
decibel level is provided to the selected ear of the patient.
At decision block 172, if the patient hears the test sound at the selected
frequency
and decibel level, then the patient pushes a button when the test sound is
first heard. If the
patient does not hear the test sound, then the methodology advances to
decision block 174,
where if there are additional decibels to test then the methodology returns to
block 168,
where, iteratively, the decibel level is increased and the testing continues
at block 170, as
previously discussed. On the other hand, if the decibel levels are exhausted
and there are no
more decibel levels to test, then the methodology advances to decision block
176. By way
of example, at decision block 174, if the decibel level is at maximum
strength, at 120dB, for
example, then the decibel levels to test would be exhausted. It should be
appreciated that in
one implementation, the escalation of the decibel levels may be continuous
during the testing
or seem continuous to the patient during the testing.
Returning to decision block 172, if the test sound at the designated frequency
and
decibel level is heard, then the methodology advances to decision block 176.
At decision
block 176, if there are additional frequencies to test, then the methodology
returns to block
166, where another frequency is selected. If all frequencies have been tested,
then the
methodology returns to block 164, where the high frequency round of testing is
selected as
the testing for the ear under test of the patient for the low frequency is
completed. By way
of example, if a test sound at a particular frequency and decibel was heard
and the
io methodology advanced from decision block 172, then the frequencies
around the test sound
are tested at a 5Hz to a 20Hz increment in the methodology to identify the
exact frequency
range that is heard. By way of further example, if a test sound at a
particular frequency and
decibel were not heard and the methodology advanced from decision block 176,
then the
next frequency in the variable increment, such as a 50Hz increment, from 50Hz
to 5,000Hz
will be selected at block 166. The methodology continues in this manner
through block 166,
block 168, block 170, decision block 172, decision block 174, and decision
block 176 until,
in the illustrated embodiment, all frequencies between 50Hz and 5,000Hz are
tested at a
variable increment, such as a 50Hz increment, at a decibel range of, for
example, from 10dB
to 120dB, with frequencies detected being ranged tested and retested at a more
discrete
increment, such as a 5Hz to 20Hz increment, for example.
Once the low frequency testing is completed for an ear under test, then the
methodology returns through block 162 where the ear under test is continued to
be selected
and at decision block 164 the high frequency testing is selected. In the high
frequency
testing, the methodology tests, in one embodiment, a frequency range of
5,000Hz to
10,000Hz at a variable increment, such as a 200Hz increment, at a
decibel range of 10dB to
120dB, with detected frequencies to be re-range tested at a more discrete
increment, such as
a 5Hz to 20Hz increment, to better identify the frequencies and decibel levels
heard. This
methodology is executed by block 186, block 188, block 190, decision block
192, decision
block 194, and decision block 196, which execute a methodology similar to the
block 166,
the block 168, the block 170, the decision block 172, the decision
block 174, and the
decision block 176 discussed hereinabove. Once the high frequency testing is
complete for
the ear under test at decision block 196 and there are no more frequencies to
be tested, then
the methodology advances to decision block 198, where if there is another ear
to be tested
the methodology returns to block 162 for testing the new ear under test for
both the low
frequency and high frequency testing. On the other hand, if all ears to be
tested have been
tested then the methodology advances to block 200 where the testing
methodology
concludes. At block 200, with the testing complete, if both ears were tested,
for example,
then a left ear preferred hearing range and a right ear preferred hearing
range will be
documented that indicate the range or ranges of frequencies for each ear at
particular decibel
levels where the patient can hear. The left ear preferred hearing range and
the right ear
preferred hearing range are utilized to calibrate the hearing aid 10.
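By way of illustration only, the two-round screening flow of figure 6 may be sketched in Python as follows. The sketch is not part of the disclosure: play_tone and patient_heard are hypothetical stand-ins for the speaker output and the patient's button press, and the 10dB escalation step, the 10Hz re-range step, and the one-coarse-increment re-range window around each detected frequency are assumptions, since the text above specifies only the 50Hz and 200Hz coarse increments, the 10dB to 120dB range, and a 5Hz to 20Hz fine increment.

# Simplified sketch of the two-round screening of figure 6 (illustrative only).
def screen_ear(play_tone, patient_heard):
    detected = {}  # detected frequency (Hz) -> lowest decibel level heard

    def threshold(freq):
        # Escalate from 10dB toward 120dB until the patient reports hearing the tone.
        for db in range(10, 121, 10):   # 10dB step is an assumption
            play_tone(freq, db)
            if patient_heard():
                return db
        return None

    def sweep(freqs, step, fine_step):
        for freq in freqs:
            if threshold(freq) is not None:
                # Re-range test around a detected frequency at a more discrete increment.
                for f in range(max(50, freq - step), freq + step + 1, fine_step):
                    level = threshold(f)
                    if level is not None:
                        detected[f] = level

    sweep(range(50, 5001, 50), step=50, fine_step=10)       # low round: 50Hz increments
    sweep(range(5000, 10001, 200), step=200, fine_step=10)  # high round: 200Hz increments
    return detected

The detected frequencies and levels returned by such a routine correspond to the documented ranges from which a left ear preferred hearing range and a right ear preferred hearing range would be derived.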
Referring now to figure 7, another embodiment of a method for calibrating and
setting the hearing aid 10 for a preferred hearing range or preferred hearing
ranges utilizing
the methodology presented herein is shown. A frequency generator 220 and
recorder 222
interact with the methodology to provide a target frequency range 224 and one
or more of
the target frequency ranges 224 may be combined to arrive at the preferred
hearing range.
As will be discussed in further detail hereinbelow, the frequency generator
220 and the
recorder 222 may be embodied on any combination of smart devices,
servers, and hearing
aid test equipment.
At block 230, an initial frequency of 50Hz at 10 dB is screened. As shown by
decision block 232, the patient's ability to hear the initial frequency is
recorded before the
process advances to the next frequency of a variable increment, which is 50Hz
at 20 dB, at
block 234 and the patient's ability to hear is recorded at decision block 236.
At block 238
and decision block 240, the process continues for the next incremental
frequency, e.g., 965
Hz at 20 dB. Similarly, at block 242 and decision block 244, the methodology
advances for
1,180 Hz at 20 dB before the process advances to block 246 and decision block
248 for
10,000 Hz at 20dB. As indicated in block 250, the testing methodology
continues for the
frequencies under test with the results being recorded.
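As an aside on how the recorded target frequency ranges 224 might be combined, the short Python sketch below merges overlapping or touching ranges into preferred hearing ranges. The merge rule is an assumption for illustration; the text states only that one or more of the target frequency ranges 224 may be combined to arrive at the preferred hearing range.

# Illustrative sketch: combining recorded target frequency ranges into preferred
# hearing ranges by merging ranges that overlap or touch (merge rule assumed).
def combine_target_ranges(target_ranges):
    """target_ranges: list of (low_hz, high_hz) tuples recorded during screening."""
    merged = []
    for low, high in sorted(target_ranges):
        if merged and low <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], high))
        else:
            merged.append((low, high))
    return merged

print(combine_target_ranges([(300, 500), (480, 620), (960, 1180)]))
# -> [(300, 620), (960, 1180)]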
Referring now to figure 8, a methodology for providing best sound
quality is
shown. A memory 252 provides various test sounds. A processor 254 utilizes a
sound
quality algorithm to determine the subjective assessment of sound quality
according to the
patient. The subjective assessment according to the patient may include a
completed
assessment of a degree of annoyance caused to the patient by an
impairment of wanted
sound. The subjective assessment according to the patient may also include a
completed
assessment of a degree of pleasantness caused to the patient by an enablement
of wanted
sound. That is, the subjective assessment according to the patient may include
a completed
assessment to determine best sound quality to the user.
From figure 7, the target frequency ranges, which form the left and right ear
preferred hearing ranges, are provided to a memory 256, which stores the
results. At block
266, a sample test is initiated with sample sounds of a man's voice being
provided from
block 258. At decision block 268, if the sound is pleasant and does not cause
a degree of
annoyance to the patient by any impairments of wanted sound, then the results
are recorded.
The results are recorded likewise if the results are unfavorable. At block
270, a second
sample test of a man's voice is provided. At decision block 272, the sound is
assessed. As
shown by block 274 and decision block 276, this testing assessment
continues for various
samples of the man's voice provided by block 258 as well as other testing
samples, including
a woman's voice provided by block 260 in the memory 252, street vehicles
provided at
block 262, and music provided at block 264. In all instances, the results are
recorded in
memory 256. In this way, the methodology completes an assessment of a degree
of
annoyance caused to the patient by an impairment of wanted sound through
a test of different
sounds, including human voices, animal sounds, vehicular sounds, and music,
for example.
Thereafter, the methodology modifies the left ear preferred hearing range and
the right ear
preferred hearing range, following separate ear testing, with a subjective
assessment of
sound quality according to the patient.
It should be appreciated, however, that the methodology may modify a left ear hearing range that is not the left ear preferred hearing range. Similarly, the methodology may modify a right ear hearing range that is not the right ear preferred hearing range. In these embodiments, the methodology thereafter modifies the left ear hearing range and the right ear hearing range, following separate ear testing, with a subjective assessment of sound quality according to the patient.
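A minimal Python sketch of the figure-8 assessment loop is given below for illustration only. The sample categories follow the text (a man's voice, a woman's voice, street vehicles, and music), while play_sample, rate_annoyance, and the 0-to-10 annoyance scale are assumptions standing in for the test sounds held in the memory 252 and the patient's subjective response.

# Minimal sketch of the best-sound-quality assessment of figure 8 (illustrative only).
def assess_sound_quality(hearing_ranges, samples, play_sample, rate_annoyance):
    """Record a subjective annoyance score for each sample within each hearing range."""
    results = []  # recorded results, analogous to the memory 256 in the text
    for hearing_range in hearing_ranges:          # e.g. left ear range, then right ear range
        for category, sample in samples.items():  # man's voice, woman's voice, vehicles, music
            play_sample(sample, hearing_range)
            score = rate_annoyance()              # assumed scale: 0 (pleasant) to 10 (annoying)
            results.append({"range": hearing_range, "category": category, "annoyance": score})
    return results

The recorded scores would then be used to modify each hearing range with the subjective assessment of sound quality according to the patient, as described above.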
Referring now to figure 9, one embodiment of a system 300 for aiding hearing
is
shown. As shown, the user U, who may be considered a patient requiring a
hearing aid, is
wearing the hearing aid 10 and sitting at a table T. The hearing aid 10 has a
pairing with the
proximate smart device 12 such that the hearing aid 10 and the proximate smart
device 12 may
determine the user's preferred hearing range for each ear and subsequently
program the
hearing aid 10 with the preferred hearing ranges. The proximate smart device
12, which
may be a smart phone, a smart watch, or a tablet computer, for example, is
executing a
hearing screening program. The display 14 serves as an interface for the user
U. As shown,
various indicators, such as indicators 302, 304, 306 show that the testing of
the left ear is in
progress at 965 Hz at 20 dB. The user U is asked if the sound was heard at the
indicator 306
and the user U may appropriately respond at soft button 308 or soft button
310. In this way,
the system 300 screens, via a speaker and a user interface associated with the
proximate
smart device 12, a left ear - and separately, a right ear - of the user U at
an incrementally
selected frequency between a frequency range of 50Hz to 5,000Hz, with detected
frequencies being re-ranged tested to better identify the frequencies and
decibel levels heard.
A frequency range of 5,000Hz to 10,000Hz is then tested. The system then
determines a left
ear preferred hearing range and a right ear preferred hearing range prior to
determining the
best sound quality, which is the hearing ranges, e.g., the preferred hearing
ranges, modified
with a subjective assessment of sound quality according to the patient. That
is, the left ear
hearing range and the right ear hearing range, following separate ear testing,
are applied with
a subjective assessment of sound quality according to the patient.
As shown, the proximate smart device 12 may be in communication with a server 320 having a housing 322. The smart device may utilize distributed
processing between the
proximate smart device 12 and the server 320 to at least one of screen the
left ear, screen the
right ear, determine the left ear preferred hearing range, and determine the
right ear preferred
hearing range. As previously mentioned, the processing to screen the left ear,
screen the
right ear, determine the left ear preferred hearing range, and determine the
right ear preferred
hearing range may be located on a smart device, a server, hearing testing
equipment, or any
combination thereof.
Referring now to figure 10, an illustrative embodiment of the internal
components of
the hearing aid 10 is depicted. By way of illustration and not by way of
limitation, the
hearing aid 10 depicted in the embodiment of figure 2 and figures 3A, 3B is
presented. It
should be appreciated, however, that the teachings of figure 5 equally apply
to the
embodiment of figure 4. As shown, with respect to figures 3A and 3B, in one
embodiment,
within the internal compartments 62, 82, an electronic signal processor 330
may be housed.
The hearing aid 10 may include an electronic signal processor 330 for each ear
or the
electronic signal processor 330 for each ear may be at least partially
integrated or fully
integrated. In another embodiment, with respect to figure 4, within the
internal compartment
122 of the body 112, the electronic signal processor 330 is housed. In order
to measure,
filter, compress, and generate, for example, continuous real-world analog
signals in the form of
sounds, the electronic signal processor 330 may include an analog-to-digital
converter
(ADC) 332, a digital signal processor (DSP) 334, and a digital-to-analog
converter (DAC)
336. The electronic signal processor 330, including the digital signal
processor embodiment,
may have memory accessible to a processor. One or more microphone inputs 338
corresponding to one or more respective microphones, a speaker output 340,
various
controls, such as a programming connector 342 and hearing aid controls 344, an
induction
coil 346, a battery 348, and a transceiver 350 are also housed within the
hearing aid 10.
As shown, a signaling architecture communicatively interconnects the
microphone
inputs 338 to the electronic signal processor 330 and the electronic signal
processor 330 to
the speaker output 340. The various hearing aid controls 344, the induction
coil 346, the
battery 348, and the transceiver 350 are also communicatively
interconnected to the
electronic signal processor 330 by the signaling architecture. The speaker
output 340 sends
the sound output to a speaker or speakers to project sound and in particular,
acoustic signals
in the audio frequency band as processed by the hearing aid 10. By way of
example, the
programming connector 342 may provide an interface to a computer or other
device and, in
particular, the programming connector 342 may be utilized to program and
calibrate the
hearing aid 10 with the system 300, according to the teachings presented
herein. The
hearing aid controls 344 may include an ON/OFF switch as well as volume
controls, for
example. The induction coil 346 may receive magnetic field signals in the
audio frequency
band from a telephone receiver or a transmitting induction loop, for example,
to provide a
telecoil functionality. The induction coil 346 may also be utilized to
receive remote control
signals encoded on a transmitted or radiated electromagnetic carrier, with a
frequency above
the audio band. Various programming signals from a transmitter may also be
received via
the induction coil 346 or via the transceiver 350, as will be discussed. The
battery 348
provides power to the hearing aid 10 and may be rechargeable or accessed
through a battery
compartment door (not shown), for example. The transceiver 350 may be
internal, external,
or a combination thereof to the housing. Further, the transceiver 350 may be a
transmitter/receiver, receiver, or an antenna for example. Communication
between various
smart devices and the hearing aid 10 may be enabled by a variety of wireless
methodologies
employed by the transceiver 350, including 802.11, 3G, 4G, Edge, WiFi, ZigBee,
near field
communications (NFC), Bluetooth low energy, and Bluetooth, for example.
The various controls and inputs and outputs presented above are exemplary and
it
should be appreciated that other types of controls may be incorporated in the
hearing aid 10.
Moreover, the electronics and form of the hearing aid 10 may vary. The hearing
aid 10 and
associated electronics may include any type of headphone configuration, a
behind-the-ear
configuration, or an in-the-ear configuration, for
example. Further,
as alluded, electronic configurations with multiple microphones for
directional hearing are
within the teachings presented herein. In some embodiments, the hearing aid
has an over-
the-ear configuration where the entire ear is covered, which not only
provides the hearing aid
functionality but hearing protection functionality as well.
Continuing to refer to figure 10, in one embodiment, the electronic signal
processor
330 may be programmed with a qualified sound range having a preferred hearing
range
which, in one embodiment, is the preferred hearing sound range corresponding
to highest
hearing capacity of a patient. In one embodiment, the left ear preferred
hearing range and
the right ear preferred hearing range are each a range of sound corresponding
to highest
hearing capacity of an ear of a patient between 50Hz and 10,000Hz. The
preferred hearing
sound range for each of the left ear and the right ear may be an about 300Hz
frequency to an
about 500Hz frequency range of sound. With this approach, the hearing capacity
of the
patient is enhanced. Existing audiogram hearing aid industry testing
equipment measures
hearing capacity at defined frequencies, such as 60Hz; 125Hz; 250Hz; 500Hz;
1,000Hz;
2,000Hz; 4,000Hz; 8,000Hz and existing hearing aids work on a ratio-based
frequency
scheme. The present teachings, however, measure hearing capacity at a small
step, which is a
variable increment, such as 5Hz, 10Hz, or 20Hz. Thereafter, one or a few, such
as three,
frequency ranges are defined to serve as the preferred hearing range or
preferred hearing
ranges. As discussed herein, in some embodiments of the present approach, a
two-step
process is utilized. First, hearing is tested in an ear between 50Hz and
5,000Hz at a variable
increment, such as a 50Hz increment, and between 5,000Hz and 10,000Hz at
200Hz
increments to identify potential hearing ranges. Then, in the second step, the
testing may be
switched to a more discrete increment, such as a 5Hz, 10Hz, or 20Hz increment,
to precisely
identify the preferred hearing range, which may be modified with a subjective
assessment of
sound quality according to the patient to provide best sound quality.
Further, in one embodiment, with respect to figure 4, the various controls 344
may
include an adjustment that widens the frequency range of about 200Hz,
for example,
to a frequency range of 100Hz to 700Hz or even wider, for example.
Further, the preferred
hearing sound range may be shifted by use of various controls 124. Directional
microphone
systems on each microphone position and processing may be included that
provide a boost to
sounds coming from the front of the patient and reduce sounds from other
directions. Such a

CA 03195489 2023-03-15
WO 2022/066223
PCT/US2021/029414
directional microphone system and processing may improve speech understanding
in
situations with excessive background noise. Digital noise reduction, impulse
noise
reduction, and wind noise reduction may also be incorporated. As alluded to,
system
compatibility features, such as FM compatibility and Bluetooth compatibility,
may be
included in the hearing aid 10.
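For context only, the following Python sketch illustrates a generic two-microphone delay-and-sum beamformer of the kind commonly used for such directional processing; the 2cm microphone spacing, 16kHz sample rate, and 6,000Hz test tone are assumptions, and the sketch is not the processing actually used in the hearing aid 10.

import numpy as np

# Generic delay-and-sum illustration: sound arriving from the front reaches
# both microphones together and adds coherently, while sound from the side
# reaches the rear microphone late and partially cancels. Spacing, sample
# rate, and the test tone are assumptions for illustration only.

FS = 16000           # sample rate (Hz)
SPACING_M = 0.02     # assumed microphone spacing (m)
C = 343.0            # speed of sound (m/s)

def delay_and_sum(front_mic, rear_mic):
    """Average the two channels; with no steering delay the array favors the front."""
    return 0.5 * (front_mic + rear_mic)

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone = np.sin(2 * np.pi * 6000 * t)
    # Front arrival: both microphones see the same signal.
    front_out = delay_and_sum(tone, tone)
    # Side arrival: the rear microphone is delayed by the travel time across
    # the spacing (about one sample at 16kHz); attenuation grows with frequency.
    side_delay = int(round(SPACING_M / C * FS))
    side_out = delay_and_sum(tone, np.roll(tone, side_delay))
    print("front RMS:", round(float(np.sqrt(np.mean(front_out ** 2))), 3),
          "side RMS:", round(float(np.sqrt(np.mean(side_out ** 2))), 3))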
The ADC 332 outputs a digital total sound (ST) signal that undergoes the
frequency
spectrum analysis. In this process, the base frequency (FB) and harmonics (H1, H2, ..., HN) components are separated. Using the algorithms presented hereinabove and having a converted base frequency (CFB) set as a target frequency range, the harmonics processing within the electronic signal processor 330 calculates a converted actual frequency (CFA) and differential converted harmonics (DCHN) to create a converted total sound
(CST), which is
the output of the harmonics processing by the electronic signal processor 330.
More particularly, total sound (ST) may be defined as follows:
ST = FB + H1 + H2 + ... + HN, wherein
ST = total sound;
FB = base frequency range, with
FB = range between FBL and FBH, with FBL being the lowest frequency value in the base frequency and FBH being the highest frequency value in the base frequency;
HN = harmonics of FB, with HN being a mathematical multiple of FB;
FA = an actual frequency value being examined;
HA1 = 1st harmonic of FA;
HA2 = 2nd harmonic of FA; and
HAN = Nth harmonic of FA, with HAN being a mathematical multiple of FA.
In many hearing impediment cases, the total sound (ST) may be at any frequency
range; furthermore, the two ears' true hearing ranges may be entirely different.
Therefore, the
hearing aid 10 presented herein may transfer the base frequency range (FB)
along with
several of the harmonics (HN) into the actual hearing range (AHR) by
converting the base
frequency range (FB) and several chosen harmonics (HN) into the actual hearing
range
(AHR) as one coherent converted total sound (CST) by using the following
algorithm
defined by the following equations:
Equation (1):
CFA = (FA x CFBL) / FBL
Equation (2):
M = CFA / FA
Equation (3):
CHAN = M x HN
wherein for Equation (1), Equation (2), and Equation (3):
M = multiplier between CFA and FA;
CST = converted total sound;
CFB = converted base frequency;
CHA1 = 1st converted harmonic;
CHA2 = 2nd converted harmonic;
CHAN = Nth converted harmonic;
CFBL = lowest frequency value in CFB;
CFBH = highest frequency value in CFB; and
CFA = converted actual frequency.
By way of example and not by way of limitation, an application of the
algorithm
utilizing Equation (1), Equation (2), and Equation (3) is presented. For this
example, the
following assumptions are utilized:
FBL = 170Hz
FBH = 330Hz
CFBL = 600Hz
CFBH = 880Hz
FA = 180Hz
Therefore, for this example, the following will hold true:
H1 = 360Hz
H4 = 720Hz
H8 = 1,440Hz
H16 = 2,880Hz
H32 = 5,760Hz
Using the algorithm, the following values may be calculated:
CFA = 635Hz
CHA1 = 1,267Hz
CHA4 = 2,534Hz
CHA8 = 5,068Hz
CHA16= 10,137Hz
CHA32 = 20,275Hz
To calculate the differentials (D) between the harmonics HN and the converted
harmonics (CH), the following equation is employed:
CHAN - HN = D.
This will result in differential converted harmonics (DCH) as follows:
DCH1 = 907Hz
DCH4 = 1,814Hz
DCH8 = 3,628Hz
DCH16 = 7,257Hz
DCH32 = 14,515Hz
In some embodiments, a high-pass filter may cut all differential converted
harmonics
(DCH) above a predetermined frequency. The frequency of 5,000Hz may be used as
a
benchmark. In this case the frequencies participating in converted total sound
(CST) are as
follows:
CFA = 635Hz
DCH1 = 907Hz
DCH4 = 1,814Hz
DCH8 = 3,628Hz
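By way of illustration only, the following Python sketch reproduces Equation (1), Equation (2), Equation (3), the differential D, and the 5,000Hz benchmark using the worked example above; small differences from the rounded values quoted in the text come only from rounding.

# Numeric sketch of Equation (1) through Equation (3), the differential
# converted harmonics, and the 5,000Hz benchmark, using the worked example
# above. Small differences from the rounded figures in the text are rounding.

FBL, FBH = 170.0, 330.0      # base frequency range (Hz)
CFBL, CFBH = 600.0, 880.0    # converted base frequency range (Hz)
FA = 180.0                   # actual frequency being examined (Hz)
CUTOFF_HZ = 5000.0           # benchmark frequency for the filter

CFA = FA * CFBL / FBL        # Equation (1)
M = CFA / FA                 # Equation (2)

harmonics = {1: 360.0, 4: 720.0, 8: 1440.0, 16: 2880.0, 32: 5760.0}
converted = {n: M * h for n, h in harmonics.items()}                 # Equation (3)
differentials = {n: converted[n] - h for n, h in harmonics.items()}  # CHAN - HN = D

kept = {n: d for n, d in differentials.items() if d <= CUTOFF_HZ}

print("CFA = %.0fHz, M = %.2f" % (CFA, M))
for n, d in differentials.items():
    print("DCH%d = %.0fHz %s" % (n, d, "(kept)" if n in kept else "(filtered out)"))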
The harmonics processing at the DSP 334 may provide the conversion for each
participating frequency in total sound (ST) and distribute all participating converted actual frequencies (CFA) and differential converted harmonics (DCHN) in the converted total sound (CST) in the same ratio as they participated in the original total sound (ST). In
some
implementations, should more than seventy-five percent (75%) of all the
differential
converted harmonics (DCHN) be out of the high-pass filter range, the harmonics
processing
may use an adequate multiplier (between 0.1-0.9) and add the created new
differential
converted harmonics (DCHN) to converted total sound (CST).
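A minimal Python sketch of this fallback follows; the rule used here for choosing an "adequate" multiplier within the 0.1 to 0.9 band is an assumption made for illustration only.

# Sketch of the fallback described above: if more than 75% of the differential
# converted harmonics (DCHN) fall outside the filter range, rescale them with a
# multiplier between 0.1 and 0.9 and add the new values to the converted total
# sound. Deriving the multiplier from the largest differential is an assumption.

def rescale_if_mostly_filtered(dch_values, cutoff_hz=5000.0):
    out_of_range = [d for d in dch_values if d > cutoff_hz]
    if len(out_of_range) / len(dch_values) > 0.75:
        # Pick a multiplier in the allowed 0.1-0.9 band that pulls the largest
        # differential back toward the benchmark frequency.
        m = min(0.9, max(0.1, cutoff_hz / max(dch_values)))
        return [d * m for d in dch_values]
    return [d for d in dch_values if d <= cutoff_hz]

print(rescale_if_mostly_filtered([907, 1814, 3628, 7257, 14515]))    # 2 of 5 out: keep in-range
print(rescale_if_mostly_filtered([6000, 7500, 9000, 12000, 11000]))  # 5 of 5 out: rescale all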
The processor may process instructions for execution within the electronic
signal
processor 330 as a computing device, including instructions stored in the
memory. The
memory stores information within the computing device. In one implementation,
the
memory is a volatile memory unit or units. In another implementation, the
memory is a non-
volatile memory unit or units. The memory is accessible to the processor and
includes
processor-executable instructions that, when executed, cause the processor to
execute a
series of operations. The processor-executable instructions cause the
processor to receive an
input analog signal from the microphone inputs 338 and convert the input
analog signal to a
digital signal. The processor-executable instructions then cause the processor
to transform
through compression, for example, the digital signal into a processed digital
signal having
the preferred hearing range. The transformation may be a frequency
transformation where
the input frequency is frequency transformed into the preferred hearing range.
Such a
transformation is a toned-down, narrower articulation that is clearly
understandable as it is
customized for the user. The processor is then caused by the processor-
executable
instructions to convert the processed digital signal to an output analog
signal and drive the
output analog signal to the speaker output 340.
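For illustration only, the following heavily simplified, offline Python sketch mimics that signal path: a digitized block is analyzed, its content is scaled toward an assumed preferred hearing range centred near 400Hz by resampling, and the result is returned for output; the resampling shortcut stands in for, and is not, the harmonics-based conversion performed by the electronic signal processor 330.

import numpy as np

# Heavily simplified, offline stand-in for the signal path described above:
# estimate the dominant frequency of a digitized block, scale the block so its
# content lands near an assumed preferred hearing range centred at 400Hz, and
# hand the result back for digital-to-analog conversion. The resampling trick
# (reading the block back at a different rate) is an illustrative substitute
# for the harmonics-based conversion, not the actual algorithm.

FS = 16000
PREFERRED_CENTER_HZ = 400.0   # assumed centre of the preferred hearing range

def dominant_frequency(block, fs=FS):
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def shift_into_preferred_range(block, fs=FS):
    m = PREFERRED_CENTER_HZ / dominant_frequency(block, fs)   # scaling factor
    x_old = np.arange(len(block))
    x_new = np.linspace(0, len(block) - 1, int(len(block) / m))
    return np.interp(x_new, x_old, block)       # every frequency is scaled by m

if __name__ == "__main__":
    t = np.arange(FS) / FS
    mic_block = np.sin(2 * np.pi * 180.0 * t)            # digitized microphone input
    out_block = shift_into_preferred_range(mic_block)    # processed digital signal
    print(dominant_frequency(mic_block), "Hz ->",
          round(float(dominant_frequency(out_block)), 1), "Hz")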
Referring now to figure 11, the proximate smart device 12 may be a wireless
communication device of the type including various fixed, mobile, and/or
portable devices.
To expand rather than limit the discussion of the proximate smart device 12,
such devices
may include, but are not limited to, cellular or mobile smart phones, tablet
computers,
smartwatches, and so forth. The proximate smart device 12 may include a
processor 370,
memory 372, storage 374, a transceiver 376, and a cellular antenna 378
interconnected by a
busing architecture 380 that also supports the display 14, I/O panel
382, and a camera 384.
It should be appreciated that although a particular architecture is explained,
other designs
and layouts are within the teachings presented herein.
The proximate smart device 12 includes the memory 372 accessible to the
processor
370 and the memory 372 includes processor-executable instructions that, when
executed,
cause the processor 370 to screen, via the speaker and the user
interface, a left ear of a
patient at an incrementally selected frequency between a frequency range of
50Hz to
5,000Hz at a variable increment, such as a 50Hz increment, at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a more discrete increment, such
as a 5Hz to 20Hz increment. Also, the processor-executable instructions cause
the processor
370 to screen, via the speaker and the user interface, the left ear of the
patient at an
incrementally selected frequency between a frequency range of 5,000Hz to
10,000Hz at a
variable increment, such as a 200Hz increment, at a decibel range of 10dB to
120dB, with
detected frequencies to be re-range tested at a more discrete increment, such
as a 5Hz to
20Hz increment.
The processor-executable instructions may also determine a left ear preferred
hearing
range, which is a range of sound corresponding to highest hearing capacity of
the left ear of
the patient between 50Hz and 10,000Hz. The processor-executable instructions
then cause
the processor 370 to screen, via the speaker and the user interface, a right
ear of the patient at
an incrementally selected frequency between a frequency range of 50Hz to
5,000Hz at a
variable increment, such as a 50Hz increment, at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a more discrete increment, such
as a 5Hz to
20Hz increment. Similarly, the processor 370 is caused to screen, via the
speaker and the
user
interface, the right ear of the patient at an incrementally selected frequency
between a
frequency range of 5,000Hz to 10,000Hz at a variable increment, such as a
200Hz
increment, at a decibel range of 10dB to 120dB, with detected frequencies to
be re-range
tested at a more discrete increment, such as a 5Hz to 20Hz increment. Then a right ear preferred hearing range is determined, which is a range of sound corresponding to highest hearing capacity of the right ear of the patient between 50Hz and 10,000Hz.
Also, the
processor-executable instructions may cause the processor 370 to, when
executed, utilize
distributed processing between the proximate smart device 12 and a server to
at least one of
screen the left ear, screen the right ear, determine the left ear preferred
hearing range, and
determine the right ear preferred hearing range.
The processor-executable instructions may then determine best quality sound
for the
left ear hearing range and the right ear hearing range, each of which may or
may not be
preferred hearing ranges. The instructions cause the processor, for the left
ear hearing range,
to complete an assessment of a degree of annoyance caused to the patient by an
impairment
of wanted sound through a test of different sounds. Then the processor-
executable
instructions cause the processor to create the qualified sound range by
modifying the left ear
hearing range with a subjective assessment of sound quality according to the
patient to
provide best sound quality. Similarly, for the right ear preferred hearing
range, the
processor-executable instructions cause the processor to create the qualified
sound range by
completing an assessment of a degree of annoyance caused to the patient by an
impairment
of wanted sound through a test of different sounds and then modifying the right
ear preferred
hearing range with a subjective assessment of sound quality according to the
patient to
provide best sound quality. This creates the qualified sound range.
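By way of illustration only, the following Python sketch shows one way a hearing range could be modified by such a subjective assessment; the 1-to-5 annoyance scale, the 50Hz sub-ranges, and the trimming rule are assumptions and are not the assessment defined herein.

# Illustrative sketch of modifying a measured hearing range with the patient's
# subjective ratings to form a qualified sound range. The 1 (not annoying) to
# 5 (very annoying) scale, the 50Hz slices, and the trimming rule are
# assumptions for illustration only.

def qualify_range(hearing_range_hz, annoyance_by_subrange):
    """Keep only the sub-ranges the patient rated as least annoying."""
    best = min(annoyance_by_subrange.values())
    kept = [sub for sub, score in annoyance_by_subrange.items() if score <= best + 1]
    low = max(hearing_range_hz[0], min(sub[0] for sub in kept))
    high = min(hearing_range_hz[1], max(sub[1] for sub in kept))
    return (low, high)

# Left ear hearing range of 300Hz to 500Hz, rated in 50Hz slices after the
# patient listened to a set of test sounds.
left_scores = {(300, 350): 2, (350, 400): 1, (400, 450): 1, (450, 500): 4}
print(qualify_range((300, 500), left_scores))   # -> (300, 450)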
After the hearing aid 10 is programmed, in operation, the teachings presented
herein
permit the proximate smart device 12 such as a smart phone to form a pairing
with the
hearing aid 10 and operate the hearing aid 10. As shown, the proximate smart
device 12
includes the memory 372 accessible to the processor 370 and the memory 372
includes
processor-executable instructions that, when executed, cause the processor 370
to provide an
interface for an operator that includes an interactive application for viewing
the status of the
hearing aid 10. The processor 370 is caused to present a menu for controlling
the hearing
aid 10. The processor 370 is then caused to receive an interactive instruction
from the user
and forward a control signal via the transceiver 376, for example, to
implement the
instruction at the hearing aid 10. The processor 370 may also be caused to
generate various
reports
about the operation of the hearing aid 10. The processor 370 may also be
caused to
translate or access a translation service for the audio.
Referring now to figure 12, one embodiment of the server 120 as a computing
device
includes, within the housing 322, a processor 400, memory 402, and storage 404
interconnected with various buses 412 in a common or distributed, for example,
mounting
architecture that also supports inputs 406, outputs 408, and network interface
410. In other
implementations, in the computing device, multiple processors and/or multiple
buses may be
used, as appropriate, along with multiple memories and types of memory.
Further still, in
other implementations, multiple computing devices may be provided and
operations
distributed therebetween. The processor 400 may process instructions for
execution within
the server 320, including instructions stored in the memory 402 or in storage
404. The
memory 402 stores information within the computing device. In one
implementation, the
memory 402 is a volatile memory unit or units. In another implementation, the
memory 402
is a non-volatile memory unit or units. Storage 404 includes capacity that is
capable of
providing mass storage for the server 320, including database storage capacity.
Various inputs 406 and outputs 408 provide connections to and from the server
320, wherein
the inputs 406 are the signals or data received by the server 320, and the
outputs 408 are the
signals or data sent from the server 320. The network interface 410 provides
the necessary
device controller to connect the server 320 to one or more networks.
The memory 402 is accessible to the processor 400 and includes processor-
executable instructions that, when executed, cause the processor 400 to
execute a series of
operations. The processor 400 may be caused to screen, via the speaker and the
user
interface, a left ear of a patient at an incrementally selected frequency
between a frequency
range of 50Hz to 5,000Hz at 50Hz increments at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a 5Hz to 20Hz increment. Also,
the
processor-executable instructions cause the processor 400 to screen, via the
speaker and the
user interface, the left ear of the patient at an incrementally selected
frequency between a
frequency range of 5,000Hz to 10,000Hz in 200Hz increments at a decibel range
of 10dB to
120dB, with detected frequencies to be re-range tested at a 5Hz to 20Hz
increment.
The processor-executable instructions may also determine a left ear preferred
hearing
range, which is a range of sound corresponding to highest hearing capacity of
the left ear of
the patient between 50Hz and 10,000Hz. The processor-executable instructions
then cause
the processor 400 to screen, via the speaker and the user interface, a right
ear of the patient at
an incrementally selected frequency between a frequency range of 50Hz to
5,000Hz at a
variable increment, such as a 50Hz increment, at a decibel range of 10dB to 120dB, with detected frequencies being re-range tested at a more discrete increment, such
as a 5Hz to
20Hz increment. Similarly, the processor 400 is caused to screen, via the
speaker and the
user interface, the right ear of the patient at an incrementally selected
frequency between a
frequency range of 5,000Hz to 10,000Hz at a variable increment, such as a
200Hz
increment, at a decibel range of 10dB to 120dB, with detected frequencies to
be re-range
tested at a more discrete increment, such as a 5Hz to 20Hz increment. Then a right ear preferred hearing range is determined, which is a range of sound corresponding to highest hearing capacity of the right ear of the patient between 50Hz and 10,000Hz.
Also, the
processor-executable instructions may cause the processor 400 to, when
executed, utilize
distributed processing between the server 320 and either the proximate smart
device 12 or
hearing testing equipment to at least one of screen the left ear, screen the
right ear, determine
the left ear preferred hearing range, and determine the right ear preferred
hearing range.
The processor-executable instructions may then determine best quality sound
for the
left ear preferred hearing range and the right ear preferred hearing
range.
As previously discussed, these processor-executable instructions operate on
left and right
hearing ranges as well as left and right preferred hearing ranges. The
instructions cause the
processor, for the left ear preferred hearing range, to complete an assessment
of a degree of
annoyance caused to the patient by an impairment of wanted sound through a test
of different
sounds. Then the processor-executable instructions cause the processor to
modify the left
ear preferred hearing range with a subjective assessment of sound quality
according to the
patient to provide best sound quality. Similarly, for the right ear preferred
hearing range, the
processor-executable instructions cause the processor to complete an
assessment of a degree
of annoyance caused to the patient by an impairment of wanted sound through a
test of
different sounds and then modify the right ear preferred hearing range with a
subjective
assessment of sound quality according to the patient to provide best sound
quality.
Referring now to figure 13, another embodiment of a system 430 for aiding hearing
is
shown. As shown, a user V, who may be considered a patient requiring a hearing
aid, is
utilizing a hearing testing device 434 with a testing/programming unit 432 and a
headset 436
having headphones 437 with a transceiver 438 for communicating with the
hearing testing
device 434. A push button 442 is coupled with cabling 440 to the headset 436
to provide an
interface for the user V to indicate when a particular sound, i.e., frequency
and decibel level, is heard. In this way, the system 430 screens, via a speaker in the headset
436 and a user
interface with the push button 442, a left ear - and separately, a right ear -
of the user V at an
incrementally selected frequency between a frequency range of 50Hz to 5,000Hz,
with
detected frequencies being re-range tested to better identify the frequencies
and decibel
levels heard. A frequency range of 5,000Hz to 10,000Hz is then tested. The
system then
determines a left ear preferred hearing range and a right ear preferred
hearing range.
Referring now to figure 14, the hearing testing device 434 depicted as a
computing
device is shown. Within a housing (not shown), a processor 450, memory 452,
storage 454,
and a display 456 are interconnected by a busing architecture 458 within a
mounting
architecture. The processor 450 may process instructions for execution within
the
computing device, including instructions stored in the memory 452 or in
storage 454. The
memory 452 stores information within the computing device. In one
implementation, the
memory 452 is a volatile memory unit or units. In another implementation, the
memory 452
is a non-volatile memory unit or units. The storage 454 provides capacity that
is capable of
providing mass storage for the hearing testing device 434. Various inputs and
outputs
provide connections to and from the computing device, wherein the
inputs are the signals or
data received by the hearing testing device 434, and the outputs are the
signals or data sent
from the hearing testing device 434. In the following description, it should
be appreciated
that various inputs and outputs may be partially or fully integrated.
By way of example, with respect to inputs and outputs, the hearing testing
device 432
may include the display 456, a user interface 460, a test frequency output
462, a headset
output 464, a timer output 466, a handset input 468, a frequency range output
470, and a
microphone input 472. The display 456 is an output device for visual
information, including
real-time or post-test screening results. The user interface 460 may
provide a keyboard or
push button for the operator of the hearing testing device 432 to provide
input, including
such functions as starting the screening, stopping the screening, and
repeating a previously
completed step. The test frequency output 462 may display the range to be
examined, such
as a frequency between 100Hz and 5,000Hz. The headset output 464 may output
the signal
under test to the patient. The timer output 466 may include an indication of
the length of
time the hearing testing device 432 will stay on a given frequency. For
example, the hearing
testing device 432 may stay 30 seconds on a particular frequency. The handset
input 468
may be secured to a handset that provides "pause" and "okay" functionality for
the patient
during the testing. The frequency range output 470 may indicate the test frequency range per step, such as 50Hz or other increment, for example. The microphone
input 472 receives
audio input from the operator relative to screening instructions intended for
the patient, for
example.
The memory 452 and the storage 454 are accessible to the processor 450 and
include
processor-executable instructions that, when executed, cause the processor 450
to execute a
series of operations. With respect to processor-executable instructions,
the processor-
executable instructions may cause the processor 450 to permit testing with the hearing testing device 432 to be conducted one ear at a time. The processor-executable instructions
may also cause
the processor 450 to permit the patient to pause the process in response to a
signal received
at the handset input 468. As part of the processor-executable instructions,
the processor 450,
between 50Hz and 5,000Hz, may be caused to start the hearing testing device
432 at 50Hz
by giving a 50Hz signal for a predetermined length of time, such as 20 seconds
to 30
seconds, starting at 10dB and stopping at 120dB. The processor-executable
instructions may
cause the processor 450 to receive a detection signal from the handset input
468 during
screening. Then, the processor-executable instructions cause the hearing
testing device 432 to test to the next frequency at a step, such as 100Hz, for example,
and continue the
screening process.
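For illustration only, the following Python sketch models that per-frequency loop: each tone is swept from 10dB toward 120dB until a simulated button press is received, the threshold is recorded, and the test steps to the next frequency; the 10dB level step and the simulated patient response are assumptions.

import random

# Illustrative model of the screening loop described above: hold each test
# frequency while raising the level from 10dB toward 120dB, record the level at
# which the button press is received, then step to the next frequency. The 10dB
# level step and the random simulated patient threshold are assumptions only.

def screen_ear(frequencies_hz, start_db=10, stop_db=120, step_db=10):
    thresholds = {}
    for freq in frequencies_hz:
        patient_threshold_db = random.choice([30, 40, 50, 60])   # simulated patient
        for level in range(start_db, stop_db + 1, step_db):
            # In the real device each tone would be held for 20 to 30 seconds
            # while waiting for a detection signal from the handset input 468.
            if level >= patient_threshold_db:                    # simulated button press
                thresholds[freq] = level
                break
        else:
            thresholds[freq] = None   # never heard within the tested levels
    return thresholds

print(screen_ear(range(50, 551, 50)))   # first 50Hz steps of the low band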
As part of the processor-executable instructions, the processor 450, between
5,000Hz
and 10,000Hz, may be caused to start the hearing test device 434 at 5,200Hz by
giving a
5,200Hz signal for a predetermined length of time, such as 20 seconds to 30
seconds starting
at 10dB and stopping at 120dB. The processor-executable instructions may cause
the
processor 450 to receive a detection signal from the handset input 468 during
screening.
Then, the processor-executable instructions cause the hearing test device 434
to test to the
next frequency at a step, such as 5,400Hz, for example, and continue the
screening process.
The processor-executable instructions may cause the screening for the
designated ear to be
complete at 5,400Hz at which time the entire process may start over for
another ear or
another patient. The system then determines a left ear preferred hearing range
and a right ear
preferred hearing range.
The processor-executable instructions may then determine best quality sound
for the
left ear preferred hearing range and the right ear preferred hearing range.
The instructions
cause the processor, for the left ear preferred hearing range, to complete an
assessment of a
degree of annoyance caused to the patient by an impairment of wanted sound
through a test of
different sounds. Then the processor-executable instructions cause the
processor to modify
the left
ear preferred hearing range with a subjective assessment of sound quality
according
to the patient to provide best sound quality. Similarly, for the right ear
preferred hearing
range, the processor-executable instructions cause the processor to complete
an assessment
of a degree of annoyance caused to the patient by an impairment of wanted
sound through a
test of different sounds and then modify the right ear preferred hearing range
with a
subjective assessment of sound quality according to the patient to provide
best sound quality.
Referring now to figure 15, illustrated conceptually is the software architecture
of a
testing equipment application 500 of some embodiments that may determine the
preferred
hearing ranges for patients. In some embodiments, the testing equipment
application 500 is
a stand-alone application or is integrated into another application, while in
other
embodiments the application might be implemented within an operating system
530.
Furthermore, in some embodiments, the testing equipment application 500 is
provided as
part of a server-based solution or a cloud-based solution. In some such
embodiments, the
application is provided via a thin client. That is, the application runs on a
server while a user
interacts with the application via a separate machine remote from the server.
In other such
embodiments, the application is provided via a thick client. That is, the
application is
distributed from the server to the client machine and runs on the client
machine.
The testing equipment application 500 includes a user interface (UI)
interaction and
generation module 502, management (user) interface tools 504, test procedure
modules 506,
frequency generator modules 508, decibels modules 510, notification/alert
modules 512,
report modules 514, database module 516, an operator module 518, and a health
care
professional module 520. The testing equipment application 500 has access to a
testing
equipment database 522, which in one embodiment, may include test procedure
data 524,
patient data 526, and presentation instructions 528. In some embodiments,
storages 524,
526, 528 are all stored in one physical storage. In other embodiments, the
storages 524, 526,
528 are in separate physical storages, or one of the storages is in one
physical storage while
the others are in a different physical storage.
Continuing to refer to figure 15, the system 300 identifies "the" sound that
is most
pleasant for the patient in all circumstances, such as single voice, crowd
voice, and concert
voice, for example. The system 300 is capable of combining various sounds
through base
and harmonic frequency manipulations creating a cardinal algorithm that is
most pleasant for
the patient. In fact, as presented herein, patients may be able to self-test
or have minimal
assistance during the testing. By way of example, with respect to the
subjective assessment,
the following values may be defined:
MNB1-K - Man voice Base Frequencies
MNH1-K - Man voice Harmonic Frequencies
WNB1-K - Woman voice Base Frequencies
WNH1-K - Woman voice Harmonic Frequencies
SNB1-K - Street Car sound Base Frequencies
SNH1-K - Street Car sound Harmonic Frequencies
CNB1-K - Classical Music sound Base Frequencies
CNH1-K - Classical Music sound Harmonic Frequencies
Pn - the most pleasant sound for the patient at the point of testing.
Now, "N=1" represents predominantly deep sound while "N=N" represents a
predominantly high frequency voice and "K" represents the number of frequency
components, such that:
MN = MNB1-K + MNH1-K
Therefore,
M1 -> P1
M2 -> P1
M3 = P1
P1 + W1 -> P2
P1 + W2 -> P2
P1B1 + P1B2 + P1B3 - P1B4 + P1H1 + P1H3 - P1H2 + W2B2 + W2B7 + W2B10 - W2B5 - W2B6 + W2H1 + W2H2 - W2H3 = P2
P2 + S1 -> P3
P2 + S2 -> P3
P2 + S3 -> P3
P2 + S4 -> P3
P2B2 + P2B3 + P2B4 - P2B5 - P2B7 + P2H2 + P2H3 - P2H1 + S4B1 + S4B3 - S4B2 + S4H2 + S4H3 - S4H1 = P3
P3 + C1 -> P4
P3 + C2 -> P4
P3 + C3 -> P4
P3B1 + P3B2 - P3B4 + P3H1 + P3H4 - P3H2 + C3B2 + C3B8 - C3B3 + C3H1 + C3H2 - C3H4 = P4
Therefore, P4 = PnB1-K + PnH1-K = BSQ, where the designated B1-K and H1-K may take values between 1 and K. The values in the cited equations are examples only.
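By way of illustration only, the following Python sketch mirrors the staged selection implied by the equations above: the patient's most pleasant pick is carried forward from man voices to woman voices, street sounds, and classical music, ending with the BSQ value; the candidate labels, the rating callback, and the fixed preference table standing in for the patient's interactive ratings are assumptions.

# Illustrative sketch of the staged preference test outlined above: the patient
# first picks the most pleasant man-voice rendering (P1), then the preferred
# combination with a woman voice (P2), street sounds (P3), and classical music
# (P4), which yields the BSQ value. The candidate labels and the fixed
# preference table used in place of the patient's ratings are assumptions only.

def pick_most_pleasant(candidates, rate):
    """Return the candidate the patient rates as most pleasant."""
    return max(candidates, key=rate)

def run_subjective_stages(rate):
    p = pick_most_pleasant(["M1", "M2", "M3"], rate)                 # -> P1
    for group in (["W1", "W2"], ["S1", "S2", "S3", "S4"], ["C1", "C2", "C3"]):
        p = pick_most_pleasant([p + "+" + c for c in group], rate)   # -> P2, P3, P4
    return p                                                         # BSQ reference

# Stand-in for the patient's interactive ratings during testing.
ratings = {"M3": 5, "M3+W2": 5, "M3+W2+S4": 5, "M3+W2+S4+C3": 5}
print(run_subjective_stages(lambda sound: ratings.get(sound, 1)))    # M3+W2+S4+C3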
The order of execution or performance of the methods and data flows
illustrated and
described herein is not essential, unless otherwise specified. That is,
elements of the
methods and data flows may be performed in any order, unless otherwise
specified, and the methods may include more or fewer elements than those disclosed herein. For
example, it
is contemplated that executing or performing a particular element before,
contemporaneously with, or after another element are all possible sequences of
execution.
While this invention has been described with reference to illustrative
embodiments,
this description is not intended to be construed in a limiting sense. Various
modifications
and combinations of the illustrative embodiments as well as other embodiments
of the
invention will be apparent to persons skilled in the art upon reference to
the description. It
is, therefore, intended that the appended claims encompass any such
modifications or
embodiments.
Representative drawing
A single figure that represents a drawing illustrating the invention.
Administrative status


Event history

Description Date
Inactive: Official letter 2024-03-28
Inactive: Official letter 2024-03-28
Letter sent 2023-04-14
Inactive: IPC assigned 2023-04-13
Inactive: IPC assigned 2023-04-13
Priority claim received 2023-04-13
Priority claim requirements determined compliant 2023-04-13
Letter sent 2023-04-13
Letter sent 2023-04-13
Application received - PCT 2023-04-13
Inactive: First IPC assigned 2023-04-13
Request for examination requirements determined compliant 2023-03-15
National entry requirements determined compliant 2023-03-15
All requirements for examination determined compliant 2023-03-15
Small entity declaration determined compliant 2023-03-15
Application published (open to public inspection) 2022-03-31

Abandonment history

There is no abandonment history

Maintenance fees

The last payment was received on 2024-03-13


Fee history

Fee type Anniversary Due date Date paid
Basic national fee - small 2023-03-15 2023-03-15
Registration of a document 2023-03-15 2023-03-15
Request for examination - small 2025-04-28 2023-03-15
MF (application, 2nd anniv.) - small 02 2023-04-27 2023-04-20
MF (application, 3rd anniv.) - small 03 2024-04-29 2024-03-13
Owners on record

The current and former owners on record are shown in alphabetical order.

Current owners on record
TEXAS INSTITUTE OF SCIENCE, INC.
Former owners on record
EKATERINA SOKOLOVSKAYA
GRIGORII SOKOLOVSKII
LASLO OLAH
SERGEY LOSEV
Former owners who do not appear in the list of owners on record will appear in other documents on file.
Documents



Document description   Date (yyyy-mm-dd)   Number of pages   Image size (KB)
Description 2023-03-14 27 1,462
Drawings 2023-03-14 10 233
Claims 2023-03-14 5 237
Representative drawing 2023-03-14 1 33
Abstract 2023-03-14 2 76
Maintenance fee payment 2024-03-12 1 27
Courtesy - Office letter 2024-03-27 2 188
Courtesy - Office letter 2024-03-27 2 188
Courtesy - Letter confirming PCT national phase entry 2023-04-13 1 596
Courtesy - Acknowledgement of request for examination 2023-04-12 1 420
Courtesy - Certificate of registration (related document(s)) 2023-04-12 1 351
National entry request 2023-03-14 15 830
International search report 2023-03-14 1 54
Maintenance fee payment 2023-04-19 1 27