Patent Summary 2234330

Third-Party Information Liability Disclaimer

Some of the information on this Web site has been provided by external sources. The Government of Canada is not responsible for the accuracy, currency or reliability of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Availability of the Abstract and Claims

Any discrepancy between the text and the image of the Claims and the Abstract depends on the date on which the document was published. The texts of the Claims and the Abstract are displayed:

  • when the application is open to public inspection;
  • when the patent is issued (grant).
(12) Patent Application: (11) CA 2234330
(54) French Title: JOUETS INTERACTIFS
(54) English Title: INTERACTIVE TOYS
Status: Deemed abandoned and beyond the time limit for reinstatement - pending response to the notice of rejected communication
Bibliographic Data
Abstracts

English Abstract


The interactive toy includes a speech synthesizer for converting digital data representative of speech into audible synthesized speech; an infra-red transceiver for wirelessly communicating infra-red messages over a field-of-view to a second toy; a microphone; and a selector for selecting between a transmission mode and a stand-alone mode. The toy is programmed so that, in the transmission mode, it receives a first infra-red signal from a second toy identifying a synthesized speech phrase generated by the second toy, supplies selected digital speech data representative of a reply synthesized speech phrase to the speech synthesizer in response to the first signal, and transmits a second infra-red signal to the second toy indicative of the selected reply phrase. In the stand-alone mode, the toy monitors the microphone for sound input, and supplies selected digital speech data to the speech synthesizer after the sound input has ceased. The toy also includes a motor, connected to a movable body part, which is actuated in timed relation to the synthesized speech, in order to mimic human mannerisms when speaking.

Claims

Note: The claims are shown in the official language in which they were submitted.


Claims
1. An interactive toy, comprising:
a memory for storing digital data representative of speech phrases;
a speech synthesizer, connected to the memory, for converting the digital data into audible synthesized speech phrases;
an infra-red transceiver for communicating infra-red signals over a field-of-view to a second toy;
a selector for selecting between a stand-alone mode and an infra-red transmission mode;
a microphone; and
processing means, connected to the selector, the microphone, the infra-red transceiver, the speech synthesizer, and the memory, for determining the selected mode and,
(i) in the event the stand-alone mode is selected,
(a) monitoring the microphone for sound input, and
(b) in the event that sound input is present, selecting digital speech data representative of a speech phrase and supplying such data to the speech synthesizer after the sound input has ceased; and
(ii) in the event the infra-red transmission mode is selected,
(a) receiving a first infra-red signal from the second toy indicative of a speech phrase audibly generated by the second toy,
(b) selecting digital speech data representative of a reply speech phrase in response to the first signal and supplying such data to the speech synthesizer, and
(c) transmitting a second signal to the second toy indicative of the selected phrase.

2. The toy according to claim 1, further including a motor connected to a movable body part, wherein said processing means is operative to actuate the motor in timed relation to the synthesized speech produced by the speech synthesizer.

3. The toy according to claim 2, wherein the movable body part is an eyelid.

4. The toy according to claim 2, wherein the movable body part is a jaw.

5. The toy according to claim 1, wherein the selector includes a switch and the processing means is operative to initiate a simulated conversation with the second toy upon actuation of the switch.

6. The toy according to claim 1, further comprising a proximity sensor, and wherein the processing means is operative to initiate a simulated conversation with the second toy upon stimulation of the proximity sensor.

7. The toy according to claim 6, wherein the proximity sensor is a magnetic proximity sensor.

8. The toy according to claim 1, further comprising a motion detector, and wherein the processing means is operative to initiate a simulated conversation with the second toy upon stimulation of the motion detector.

9. The toy according to claim 1, wherein, in response to the first infra-red signal from the second toy, the processing means is operative to substantially randomly select at least one reply speech phrase from a plurality of predetermined possible reply speech phrases.

10. The toy according to claim 1, further including means for placing the speech synthesizer in a low-power-drain sleep mode in the event no conversation initiating event or infra-red signal input has occurred within a pre-determined time period.

11. The toy according to claim 1, further including means for placing the speech synthesizer in a low-power-drain sleep mode in the event no sound input has occurred within a pre-determined time period.

12. An interactive toy, comprising:
a movable body part having a motor connected thereto;
a memory for storing digital data representative of speech phrases;
a speech synthesizer, connected to the memory, for converting the digital data into audible synthesized speech phrases;
an infra-red transceiver for communicating infra-red signals over a field-of-view to a second toy; and
processing means, connected to the motor, the microphone, the infra-red transceiver, the speech synthesizer, and the memory, for receiving a first infra-red signal from the second toy indicative of a speech phrase audibly generated by the second toy, selecting digital speech data representative of a reply speech phrase in response to the first signal and supplying such data to the speech synthesizer, actuating the motor in timed relation to the synthesized speech produced by the speech synthesizer, and transmitting a second signal to the second toy indicative of the selected phrase.

13. The toy according to claim 12, further including a microphone, and a selector for selecting a stand-alone mode, wherein in the event the stand-alone mode is selected the processing means is operative to monitor the microphone for sound input, select digital speech data representative of a speech phrase, and supply such data to the speech synthesizer after the sound input has ceased.

14. The toy according to claim 12, wherein the movable body part is an eyelid.

15. The toy according to claim 12, wherein the movable body part is a jaw.

16. The toy according to claim 12, further including a switch, wherein the processing means is operative to initiate a simulated conversation with the second toy upon actuation of the switch.

17. The toy according to claim 12, further comprising a proximity sensor, wherein the processing means is operative to initiate a simulated conversation with the second toy upon stimulation of the proximity sensor.

18. The toy according to claim 17, wherein the proximity sensor is a magnetic proximity sensor.

19. The toy according to claim 12, further comprising a motion detector, wherein the processing means is operative to initiate a simulated conversation with the second toy upon stimulation of the motion detector.

20. The toy according to claim 12, wherein, in response to the first infra-red signal from the second toy, the processing means is operative to substantially randomly select at least one reply speech phrase from a plurality of predetermined possible reply speech phrases.

21. The toy according to claim 12, further including means for placing the speech synthesizer in a low-power-drain sleep mode in the event no conversation initiating event or infra-red signal input has occurred within a pre-determined time period.

22. The toy according to claim 13, further including means for placing the speech synthesizer in a low-power-drain sleep mode in the event no sound input has occurred within a pre-determined time period.

Description

Note: The descriptions are shown in the official language in which they were submitted.


INTERACTIVE TOYS
Field of Invention
The invention generally relates to the art of toy-making, and more particularly to interactive toys, such as dolls, which simulate intelligent conversation therebetween.
Background of Invention
Talking dolls, i.e., dolls which emit human-like speech or sound typically in response to some physical stimuli, have been successfully manufactured and marketed for many years. However, a doll which simulates intelligent conversation between itself and a counterpart doll has not, to the applicant's knowledge, been successfully commercialized.
For example, U.S. patent no. 4,857,030, issued August 15, 1989 to Rose and assigned to Coleco Industries, Inc., discloses conversing dolls which comprise speech synthesizing systems and appear to intelligently converse with one another. These dolls employ radio frequency transceivers in order to signal, over a radio link, an indication of what particular synthesized phrase has been spoken by a first doll and to request a response which appears to be intelligent with respect to the synthesized speech of the first doll.
The above-mentioned conversing dolls suffer from a variety of deficiencies affecting their cost and performance. For example, the consumer must purchase two dolls, each of which is relatively expensive due to the incorporation of the radio transceiver devices. In addition, although the dolls may simulate human speech, the dolls themselves are static and do not realistically simulate human mannerisms when speaking, thereby depreciating the realism of play.
Accordingly, the invention seeks to provide a low cost, multi-functional, interactive doll capable of amusing children in a variety of ways. The invention also seeks to provide an interactive doll which mimics human mannerisms while simulating speech, to thereby enhance the realism of play. In addition, the invention seeks to provide imaginative ways of engaging the interactive capability of conversing dolls, especially in interfacing with the typical daily routine of a child's life.
Summary of Invention
According to one aspect of the invention, an interactive toy is provided which includes a speech synthesizer, connected to a memory, for converting digital data representative of speech into audible synthesized speech. The toy also includes an infra-red transceiver for wirelessly communicating infra-red messages over a field-of-view to a second toy. A data processing means is connected to the speech synthesizer and, in a transmission mode, is operative to receive a first infra-red signal from the second toy identifying a synthesized speech phrase generated by the second toy, supply selected digital speech data representative of a reply synthesized speech phrase to the speech synthesizer in response to the first signal, and transmit a second infra-red signal to the second toy indicative of the selected reply phrase.
In the preferred embodiment, the interactive toy includes a built-in microphone, and a selector for selecting between a stand-alone mode and the infra-red transmission mode. In the event the stand-alone mode is selected, the processing means is operative to monitor the microphone for sound input, and supply selected digital speech data to the speech synthesizer after the sound input has ceased.
In the preferred embodiment, the toy also includes a motor connected to a movable body part. The motor is connected to the processing means, which is operative to actuate the motor in timed relation to the synthesized speech produced by the speech synthesizer. In the preferred embodiment, the toy is a doll having movable eyelids and jaws. In this manner, the toy, especially in the form of the preferred doll, mimics human mannerisms when simulating conversation with another doll.
In the preferred embodiment, the interactive toy further includes a switch, and the processing means is operative to initiate a simulated conversation with the second toy upon actuation of the switch. In alternative embodiments, the conversation may be initiated by stimulating a magnetic proximity sensor or a motion detector.
Brief Description of Drawings
The foregoing and other aspects of the invention are discussed in greater detail below with reference to the following drawings, provided for the purpose of description and not of limitation, wherein:
Fig. 1 is a system block diagram of electrical circuitry employed in a preferred embodiment of the invention;
Fig. 2A is a cross-sectional view of a movable toy body part, specifically a movable eyelid mechanism in a retracted position, in accordance with the preferred embodiment;
Fig. 2B is a cross-sectional view of the movable eyelid mechanism in an extended position;
Fig. 2C is an exploded view of the movable eyelid mechanism;
Fig. 2D is a cross-sectional view of a movable toy body part, specifically a movable jaw mechanism, in accordance with the preferred embodiment;
Fig. 2E is a cross-sectional view of the movable jaw mechanism taken along line II-II in Fig. 2D;
Fig. 2F is an exploded view of the movable jaw mechanism;
Fig. 3 is a data diagram illustrating a sample simulated intelligent conversation between two interactive toys according to the preferred embodiment;
Fig. 4 is a flow chart illustrating the programming of a transmission mode according to the preferred embodiment wherein two interactive toys may simulate intelligent conversation therebetween;
Fig. 5 is a protocol diagram illustrating the preferred format of messages wirelessly communicated between interactive toys;
Fig. 6 is a flow chart illustrating a preferred method for determining whether a received infra-red signal is a valid message or not;
Fig. 7 is a diagram of two state tables illustrating a preferred mechanism for generating substantially non-repetitive simulated conversation between two interactive toys;
Fig. 8 is a flow chart illustrating the programming of a stand-alone mode according to the preferred embodiment wherein an interactive toy responds to voice stimuli presented by a user; and
Fig. 9 is a system block diagram of electrical circuitry employed in an alternative embodiment of the invention.
Detailed Description of Preferred Embodiments
Fig. 1 is a block diagram of electrical circuitry employed in a preferred embodiment of the invention. This electrical circuitry is preferably mounted in a cavity of the interactive toy so as to be hidden from view.
As shown in Fig. 1, the preferred embodiment employs a low cost, programmable, single integrated circuit speech synthesizer 10 having an on-chip memory for storing digital data representative of phrases of speech. The speech synthesizer 10 is preferably any of the W851XX family of speech synthesizers available from Winbond Electronics Corp. of Hsinchu, Taiwan, Republic of China. The speech synthesizer 10 provides a limited number of microprocessor-type instructions for program development and includes an interface to more powerful data processing capability such as provided by a full scale central processing unit or microprocessor. Alternative types of speech synthesizers which may be used include the TSP50C0x/1x family of speech synthesizers from Texas Instruments Incorporated, Dallas, Texas, U.S.A., which include a built-in full scale microprocessor. The latter device, however, is more expensive to procure than the preferred Winbond speech synthesizer.
As shown in Fig. 1, the speech synthesizer 10 is connected to a speaker 12 through an audio output line 14. The speech synthesizer 10 is also connected through output lines 16 and 18 to the base inputs of transistors 20 and 22, which are respectively connected to motors 24 and 26. The transistors 20 and 22 are installed in series with the motors 24 and 26 so as to control the supply of current thereto. The motors 24 and 26 are connected to movable body parts of the toy.
For example, as shown in Figs. 2A - 2C, motor 24 is connected to a movable eyelid mechanism 200 through a gear train 202. The mechanism 200 comprises a movable eyelid part 204 which is pivotally disposed about a shaft 206. The shaft 206 includes an integral extended arm 208 which rides against an outer edge 210 of the eyelid part 204. The shaft also features a protuberance 212 which is connected to a spring 214 anchored to the body of the toy. Actuation of the motor 24 for a brief period causes the shaft 206 to rotate over an angle corresponding to the distance of travel provided by the spring 214, and thereby cause the arm 208 to push the eyelid part 204 to an extended position as shown in Fig. 2B. When the motor 24 is turned off, the spring 214 retracts, causing the arm 208 to pull the eyelid part 204 back to a retracted position, as shown in Fig. 2A.
Motor 26, as shown in Figs. 2D - 2F, is connected to a movable jaw mechanism 220. The mechanism 220 comprises a pivot arm 222 which features an external jaw part 224 at a distal end thereof. The pivot arm 222 pivots about an axle 226 mounted to the body of the toy. The proximal end of the pivot arm 222 features a rectangularly shaped beam 228 having a recess or slot 230 therein. An eccentric stub shaft 232 is connected to the proximal end of the pivot arm 222 and rides against the slot 230. The eccentric stub shaft 232 is mounted to a gearbox 234 which is connected to an output gear 236 of motor 26. Actuating the motor 26 causes the eccentric stub shaft 232 to rotate and rock the pivot arm 222, thereby causing the jaw part 224 to open and close.
Motors 24 and 26 may be connected to alternative movable body parts, depending on the design or character of the interactive toy. For instance, if the interactive toy embodies the character of Dumbo the Flying Elephant (TM - Walt Disney Company), the movable body parts may be elephants' ears.
Referring back to Fig. 1, the speech synthesizer 10 is also connected through a data output line 30 and a data enable line 32 to a carrier oscillation circuit 34 which, in turn, is connected to an infrared emitting diode 36. The carrier oscillation circuit 34, as known in the art per se, produces a carrier binary pulse stream which is modulated in accordance with the data present on output line 30. The data enable output 32 controls whether or not the carrier oscillation circuit 34 produces a modulated carrier signal at its output. When enabled, the infrared emitting diode 36 radiates the modulated carrier signal of circuit 34 into space at infra-red (IR) frequencies over a predetermined field-of-view. That is, the radiation pattern produced by diode 36 is not omni-directional but, having a progressively decreasing radiation output at increasing angular displacements, resembles a substantially defined beam. In the preferred embodiment, infrared emitting diode 36 is an EL-1L7 GaAlAs diode manufactured by the Kodenshi Corp. of Kyoto, Japan, which radiates an output beam over an approximately forty (40) degree field-of-view.
The speech synthesizer is also connected to an infra-red receiver 38 which includes a built-in infra-red detector 40. The IR receiver 38, as known in the art per se, demodulates a modulated binary pulse stream such as produced by the carrier oscillation circuit 34, and produces the baseband signal at input line 42. The preferred IR receiver 38 is a model PIC-26043SM optic remote control receiver module (typically used as a remote control receiver in consumer electronic devices), which is also manufactured by the Kodenshi Corp. of Kyoto, Japan. Power to the IR receiver 38 is enabled and disabled by a switch 44 which is controlled by the speech synthesizer 10 via output line 46. Collectively, the carrier oscillation circuit 34 and the IR receiver 38 provide an infra-red transmission means for wirelessly communicating messages to a second interactive toy over a predetermined field-of-view.
In the preferred embodiment, infrared emitting diode 36 and infra-red photodetector 40 are mounted in the interactive toy such that the toy must face a second interactive toy in a natural direction in order to close a wireless communication loop therebetween. For instance, if both interactive toys resemble human figures, diodes 36 and 40 are preferably mounted facing outwards toward the front of the toy, e.g., in the abdomen or eye sockets. This ensures that, ignoring reflections, the interactive toys will only be able to wirelessly communicate, and hence simulate conversation with one another, when they are substantially facing one another, thereby mimicking the normal pose of two individuals talking to one another and enhancing the realism of play.
A microphone 48 is also connected to the speech synthesizer 10. Power to the microphone 48 is enabled and disabled via a switch 50 which is controlled by the speech synthesizer 10 through an output line 52.
Two momentary contact keys or push-buttons 54 and 56 are connected to the speech synthesizer 10 via trigger input lines 58 and 60. The preferred embodiment features two possible modes for the interactive toy, which are triggered by actuation of one of the momentary contact push-buttons 54 and 56. Push-button 54, when actuated, causes the interactive toy to enter into a "transmission" mode wherein two such interactive toys simulate intelligent conversation therebetween. Push-button 56, when actuated, causes the interactive toy to enter into a "stand-alone" mode wherein the user can directly interact with the toy, i.e., without requiring a second toy.
In the transmission mode, an initiating event, such as a second actuation of push-button 54, causes one of at least two interactive toys or dolls to randomly select a "dialogue" and play a "phrase" thereof. In this specification, the term "phrase" refers to a collection of synthesized speech data that is audibly produced by one toy, typically prior to a response by the counterpart toy. The term "dialogue" refers to a particular group of predetermined possible phrases audibly generated by at least two interactive toys in order to simulate an intelligent conversation. For example, referring to Fig. 3, toy 1 begins a simulated conversation corresponding to dialogue A by playing the phrase "Look at all these people!". As soon as the synthesized phrase is generated by toy 1, it sends an infra-red message to toy 2 identifying the particular phrase, #0100, that was audibly produced by toy 1. Based on this message, toy 2 selects and plays a predetermined reply phrase or one of a plurality of predetermined possible reply phrases. In the illustrated dialogue, this phrase is "Stand back, buddy. I'll protect you! I'll just fire up my laser gun!" As soon as the synthesized reply phrase is articulated by toy 2, it sends an infra-red message to toy 1 identifying the particular phrase, #1100, that was audibly produced by toy 2. Based on this message, toy 1 selects and plays a predetermined reply phrase or one of a plurality of predetermined possible reply phrases, and signals toy 2 accordingly. The process continues until the end of the dialogue is reached.
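
This exchange can be modelled as a simple message-driven loop. The following Python fragment is an illustrative sketch only, not the toy's firmware: the dialogue table contents and the print() stand-ins for the speech synthesizer and the infra-red link are assumptions made for the example.

    # Illustrative model of the Fig. 3 exchange. Phrase identifiers follow the
    # example above; the table contents and helper behaviour are hypothetical.
    DIALOGUE_A = {
        # received phrase id -> (reply phrase id, reply text); None means "initiate"
        None:   ("0100", "Look at all these people!"),
        "0100": ("1100", "Stand back, buddy. I'll protect you! "
                         "I'll just fire up my laser gun!"),
        "1100": None,   # end of dialogue
    }

    def take_turn(received_id):
        """Play a reply to the received phrase id and return the id that would be
        sent back over the infra-red link, or None when the dialogue is over."""
        entry = DIALOGUE_A.get(received_id)
        if entry is None:
            return None
        reply_id, reply_text = entry
        print(f"(synthesized speech) {reply_text}")   # stands in for the synthesizer
        return reply_id                               # stands in for the IR message

    msg = take_turn(None)        # toy 1 initiates with phrase #0100
    while msg is not None:
        msg = take_turn(msg)     # the counterpart toy replies to the received id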
In the preferred embodiment, each synthesized phrase is also associated with "action" data specifying how motors 24 and 26 are actuated in timed relation to the playing of a phrase by speech synthesizer 10. For example, using the notation "n" and "u" to respectively represent the turn-on and turn-off of eyelid motor 24, toy 1 could be programmed to move its eyelid for phrase #0100 as follows: "nLooku at nall theseu people!"
Fig. 4 is a flowchart illustrating the preferred programming of speech synthesizer 10 for the transmission mode. There are at least three events which correspond to major entry points in the flowchart. Event 80 corresponds to the actuation of push-button 54 in order to place the toy in the transmission mode. Event 82 corresponds to an event or stimulus, such as a second activation of push-button 54, which causes the toy to initiate a dialogue between it and a second toy. Event 84 corresponds to reception of an infra-red signal by the IR receiver 38.
When the transmission mode is actuated at event 80, the speech synthesizer 10 randomly selects a dialogue to be used in the event the present toy initiates simulated conversation with a second toy. At step 102, a dedicated speech synthesizer register used to implement a sleep countdown timer (hereinafter "sleep timer register") is reset to an initial value. At step 104, all inputs to the speech synthesizer are enabled. Steps 106 and 108 form a loop used to decrement the sleep timer register until the sleep timer countdown is finished. The sleep timer countdown is preferably set to approximately 60 seconds. If neither event 82 nor event 84 has occurred during this time, and the toy has not been placed in the stand-alone mode, then at step 110 all outputs of the speech synthesizer 10 are disabled and it is placed in a low-power-drain sleep mode.
When event 82, corresponding to initiation of a simulated conversation, occurs, switch 44 (Fig. 1) is opened at step 112 in order to disable the IR receiver 38 and hence all IR input to the speech synthesizer. This ensures that the following steps will not be prematurely interrupted by IR signal input.
At step 114, a selected phrase is output to the speech synthesizer 10 and motors 24 and 26 are actuated in timed relation to the synthesized speech in accordance with the associated actions. In the preferred embodiment, each phrase is stored as a plurality of linked speech components, and speech synthesizer 10 sets output lines 16 and 18 (Fig. 1), which control motors 24 and 26, at discrete points during the playback of the phrase, i.e., between the playback of the individual speech components. For example, the speech and action sequence "nLooku at nall theseu people!" comprises speech synthesizer instructions to: (a) set output line 18 to high; (b) play speech component "Look"; (c) set output line 18 to low; (d) play speech component "at"; (e) set output line 18 to high; (f) play speech component "all these"; (g) set output line 18 to low; and (h) play speech component "people". This procedure is necessitated by the single execution thread design of the preferred Winbond speech synthesizer; however, other types of speech synthesizers may enable a greater degree of parallelism in executing general purpose microprocessor and speech synthesis specific instructions.
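
A minimal sketch of this interleaving in Python is given below. The set_motor() and play() helpers, and the way the annotated phrase is broken into (marked, component) pairs, are assumptions for illustration; in the actual device the equivalent steps are individual speech synthesizer instructions on output line 18.

    # Sketch of playing an action-annotated phrase such as "nLooku at nall theseu people!"
    def set_motor(line, state):
        print(f"output line {line} -> {'high' if state else 'low'}")

    def play(component):
        print(f"play speech component: {component!r}")

    def play_annotated(sequence, motor_line=18):
        """Toggle the motor line around the marked components so the body part
        moves in timed relation to the synthesized speech."""
        for marked, component in sequence:
            if marked:
                set_motor(motor_line, True)    # turn the motor on ("n" marker)
            play(component)
            if marked:
                set_motor(motor_line, False)   # turn the motor off ("u" marker)

    # "nLooku at nall theseu people!" as (marked, component) pairs
    play_annotated([(True, "Look"), (False, "at"),
                    (True, "all these"), (False, "people!")])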
At step 116, the data enable line 32 (Fig. 1) is actuated in order to enable the carrier oscillation circuit 34. At steps 118 and 120, the identifier for the selected speech sequence is transmitted, as described in greater detail below, to a second or counterpart interactive toy. At step 122, the carrier oscillation circuit 34 is disabled and control passes to steps 102 - 110 discussed above to begin another sleep countdown period.
When event 84, corresponding to reception of an IR signal, occurs, step 124 checks whether in fact a valid message has been received, as described in greater detail below. If not, then control is passed back to step 104 to continue the sleep timer countdown. If a valid message is received, then, as described in greater detail below, step 126 selects a reply phrase in response to a phrase identifier transmitted by the counterpart interactive toy. Control is then passed to step 112 in order to uninterruptibly play the selected phrase and its associated actions, transmit the identifier of the reply phrase to the counterpart interactive toy, and restart the sleep timer countdown.
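
The overall transmission-mode flow can be approximated by the following sketch. The 60-second sleep countdown and the step/event numbering come from the description above; the callable parameters, polling interval and helper functions are assumptions, since the real logic runs as single-threaded speech synthesizer instructions rather than Python.

    import time

    SLEEP_COUNTDOWN_S = 60   # approximately 60 seconds, per the description

    def enter_sleep_mode():
        print("outputs disabled; low-power-drain sleep mode")   # step 110

    def transmission_mode(initiate_event, receive_ir, play_phrase, send_id, select_reply):
        """Rough model of Fig. 4: events 80/82/84 and steps 102-126."""
        deadline = time.monotonic() + SLEEP_COUNTDOWN_S          # steps 102-108
        while time.monotonic() < deadline:
            if initiate_event():                                 # event 82
                sent = play_phrase(None)                         # steps 112-114: play opener
                send_id(sent)                                    # steps 116-122: transmit id
                deadline = time.monotonic() + SLEEP_COUNTDOWN_S  # restart countdown
            msg = receive_ir()                                   # event 84 (None if no/invalid msg)
            if msg is not None:                                  # step 124: valid message
                reply = select_reply(msg)                        # step 126: choose reply
                play_phrase(reply)                               # step 114: play with actions
                send_id(reply)                                   # steps 116-122
                deadline = time.monotonic() + SLEEP_COUNTDOWN_S
            time.sleep(0.05)                                     # steps 106-108: countdown loop
        enter_sleep_mode()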
The preferred transmission protocol for communicating messages between interactive toys is illustrated in Fig. 5. As shown in Fig. 5(a), each message comprises an identifier frame 130, a space 132, and a data frame 134. The identifier frame 130 comprises a preamble 136 and ID data 138, as illustrated in Fig. 5(b). The preamble 136 is preferably a leading pulse train of specified length having a 50% duty cycle, as shown in Fig. 5(c), which serves to alert the speech synthesizer that ID data 138 is about to be transmitted. The ID data 138 which follows identifies which specific interactive toy the message is addressed to, thereby providing a means for discriminating amongst a number of interactive toys. Alternatively, the ID data 138 may be used to identify the particular toy sending the message. Alternatively still, the ID data 138 may be used as a protocol identifier indicating how the following TX data should be used. Fig. 5(d) shows the data frame 134, which preferably comprises the preamble 136 and TX data 140. The TX data 140 preferably identifies a recently generated speech phrase produced by the interactive toy which sent the message.
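
The framing can be sketched as follows. The text specifies only the overall structure (identifier frame, space, data frame, each frame led by a 50% duty-cycle preamble), so the byte widths, the preamble encoding and the validation rule below are assumptions made for illustration.

    PREAMBLE = b"\xAA\xAA"   # stand-in for the leading 50%-duty-cycle pulse train
    SPACE = b"\x00"          # stand-in for the space 132 between the two frames

    def build_message(toy_id: int, phrase_id: int) -> bytes:
        id_frame = PREAMBLE + bytes([toy_id])        # identifier frame: preamble + ID data
        data_frame = PREAMBLE + bytes([phrase_id])   # data frame: preamble + TX data
        return id_frame + SPACE + data_frame

    def parse_message(raw: bytes, my_id: int):
        """Return the received phrase id, or None if the message is malformed or
        addressed to another toy (roughly the validity check of step 124 / Fig. 6)."""
        if len(raw) != 7 or raw[:2] != PREAMBLE or raw[3:4] != SPACE or raw[4:6] != PREAMBLE:
            return None
        if raw[2] != my_id:
            return None
        return raw[6]

    msg = build_message(toy_id=2, phrase_id=0x0C)   # e.g. "I just played phrase #1100"
    assert parse_message(msg, my_id=2) == 0x0C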
Referring additionally to Fig. 6, step 124 of the Fig. 4 flowchart, which checks whether or not a valid message was received by IR receiver 38, is shown in greater detail. The process steps of Fig. 6 will be self-explanatory in view of the discussion above in relation to the transmission protocol illustrated in Fig. 5.
Fig. 7 illustrates a state table which is used by the preferred embodiment to select a reply speech phrase at step 126 of the Fig. 4 flowchart. The preferred state table resembles a data tree, wherein each node represents a speech phrase state. Two trees, one for each of a pair of conversing toys, are required to represent a dialogue. Each node of the data tree preferably has multiple leaves depending therefrom, with each leaf representing a possible branch from the current speech phrase state. Thus, continuing with the example dialogue A shown in Fig. 3, the initial state of toy 1 is labelled 0100. Toy 2 receives #0100 as input, causing it to go to state 1100. Toy 1 subsequently receives #1100 as input. At this point, toy 1 can randomly select between leaf (c), representing reply speech phrase "Stand back, buddy. I'll protect you! I'll just fire up my laser gun!", or leaf (c') (shown in stippled lines), representing an alternative speech phrase, such as "Yes. These are BIG people!". In this manner, the set of possible speech phrases for any given dialogue can be relatively easily structured to simulate substantially non-repetitive intelligent conversation between two interactive toys.
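
A sketch of this reply selection follows. The node labels come from the example above, but the table contents and the alternative leaf are hypothetical; the point is simply that each received phrase identifier indexes a list of possible replies, one of which is chosen substantially at random.

    import random

    # Possible replies branching off each received phrase id (one toy's tree).
    REPLY_TREE = {
        "0100": [("1100", "Stand back, buddy. I'll protect you! "
                          "I'll just fire up my laser gun!"),     # leaf (c)
                 ("1101", "Yes. These are BIG people!")],         # leaf (c'), alternative
    }

    def select_reply(received_id):
        """Substantially randomly select one reply phrase from the predetermined
        possibilities for the current speech phrase state (step 126)."""
        leaves = REPLY_TREE.get(received_id)
        if not leaves:
            return None                    # no further branches: end of dialogue
        return random.choice(leaves)       # (reply id, reply text)

    print(select_reply("0100"))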
Fig. 8 is a flowchart illustrating the preferred programming of speech synthesizer 10 for the stand-alone mode. There are at least two events which correspond to major entry points in the flowchart: event 150 corresponds to the user selection of the stand-alone mode; and event 152 corresponds to the presence of sound input at the microphone 48.
When event 150 occurs, switch 50 (Fig. 1) is closed at step 154 to enable microphone input. At step 156, a speech and action sequence is randomly selected and played by the speech synthesizer 10, as described above. At step 158, switch 44 (Fig. 1) is opened to disable the IR receiver 38 and all IR input to the speech synthesizer. This ensures that the following steps will not be interrupted by IR input, although actuating the transmission mode will immediately pass control to event 80 of Fig. 4. At step 160, the sleep timer countdown of preferably sixty seconds is started. If no intervening event occurs prior to the termination of the sleep timer countdown, then at steps 162, 164 and 166, switch 50 is opened to disable the microphone 48, switch 44 is closed to power the IR receiver 38 and enable IR input to the speech synthesizer, and the speech synthesizer is placed in the low-power-drain sleep mode.
However, if at event 152 sound input is sensed at the microphone, then the speech synthesizer 10 waits at step 168 for 1.5 seconds until the sound input ceases, before control is passed to step 156 where another speech and action sequence is randomly selected and played by the speech synthesizer 10, and another sleep countdown period is started. If desired, a unitary state table/tree such as shown in Fig. 7 may be employed to link sequential speech phrases played by the speech synthesizer in this mode in order to simulate cohesive speech by the interactive toy.
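
A rough sketch of the stand-alone flow is given below. The sixty-second sleep countdown and the 1.5-second pause come from the description; the phrase list, the sound_present() callable and the polling interval are assumptions for illustration.

    import random
    import time

    PHRASES = ["Hi! Want to play?", "What was that?", "Tell me more!"]   # hypothetical

    def stand_alone_mode(sound_present, timeout_s=60.0):
        """Model of Fig. 8: play a phrase, then respond to sound input until the
        sleep timer expires (events 150/152, steps 154-168)."""
        play = lambda: print(random.choice(PHRASES))    # stands in for step 156
        play()                                          # step 156: initial random phrase
        deadline = time.monotonic() + timeout_s         # step 160: start sleep countdown
        while time.monotonic() < deadline:
            if sound_present():                         # event 152: sound at microphone 48
                while sound_present():                  # wait for the sound input to cease
                    time.sleep(0.05)
                time.sleep(1.5)                         # step 168: 1.5 s delay
                play()                                  # another random speech sequence
                deadline = time.monotonic() + timeout_s # restart the sleep countdown
            time.sleep(0.05)
        print("microphone off, IR re-enabled, low-power-drain sleep")   # steps 162-166

    # Example: with no sound input the toy plays one phrase and then goes to sleep.
    stand_alone_mode(sound_present=lambda: False, timeout_s=2.0)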
As discussed above, pressing the transmission mode push-button 54 twice in succession causes the interactive toy of the preferred embodiment to initiate a simulated conversation with a second interactive toy. Fig. 9 shows an alternative embodiment of the electrical circuitry (of Fig. 1) comprising additional means for initiating the simulated conversation. The alternative electrical circuitry includes a speech synthesizer 170 which is connected to a magnetic proximity sensor 172 and a motion detector 174. In the alternative embodiment, stimulating either of these devices constitutes an occurrence of event 82 (Fig. 4), thereby causing the toy to initiate a dialogue between it and a second toy.
Magnetic proximity sensor 172 is preferably a TS560 dry-reed switch manufactured by Standex Electronics, Cincinnati, Ohio, U.S.A. This device is actuated when a permanent magnet is brought near it, and thus is capable of providing a changing edge input on input line 173. Preferably, the reed switch is mounted in one interactive toy and the permanent magnet is mounted in the counterpart interactive toy so that when the two toys are brought into proximity with one another the simulated speech is initiated. For example, when the interactive toys are dolls resembling human figures, the reed switch and counterpart permanent magnet may be mounted in the hands of the dolls so that the simulated conversation is initiated when the two dolls "shake hands". Alternative proximity sensors are available, for instance, from the SUNX company of Japan.
Motion detector 174 is well known in the art and available from a variety of sources. The motion detector preferably includes an enabling switch (not shown) used to arm the motion detector. The motion detector may also be used in the stand-alone mode to spontaneously trigger a pre-selected or randomly selected synthesized speech phrase from the doll. Thus, for example, when the motion detector is armed and stimulated, the interactive toy may be programmed to inform its child owner: "Intruder alert. Intruder alert. Someone has entered your room!".
Those skilled in the art will appreciate that numerous modifications and variations may be made to the preferred embodiments without departing from the spirit and scope of the invention.

Representative Drawing
A single figure which represents a drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that "Inactive:" events refer to events that are no longer in use in our new in-house solution.

For a better understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the descriptions of Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Inactive: Dead - No reply to s.30(2) Rules requisition 2007-01-26
Application not reinstated by deadline 2007-01-26
Deemed abandoned - failure to respond to maintenance fee notice 2006-04-10
Inactive: IPC from MCD 2006-03-12
Inactive: IPC from MCD 2006-03-12
Inactive: Abandoned - No reply to s.29 Rules requisition 2006-01-26
Inactive: Abandoned - No reply to s.30(2) Rules requisition 2006-01-26
Inactive: S.29 Rules - Examiner requisition 2005-07-26
Inactive: S.30(2) Rules - Examiner requisition 2005-07-26
Inactive: Correspondence - Transfer 2005-07-20
Letter sent 2003-04-29
Inactive: Applicant deleted 2003-04-29
All requirements for examination - determined compliant 2003-04-03
Request for examination received 2003-04-03
Requirements for request for examination - determined compliant 2003-04-03
Inactive: Inventor deleted 2000-05-05
Application published (open to public inspection) 1999-10-08
Inactive: Cover page published 1999-10-07
Inactive: Correspondence - Formalities 1998-11-24
Inactive: Correspondence - Formalities 1998-11-16
Inactive: Courtesy letter - Evidence 1998-10-19
Inactive: First IPC assigned 1998-07-17
Classification symbol modified 1998-07-17
Inactive: IPC assigned 1998-07-17
Inactive: Single transfer 1998-07-16
Inactive: Filing certificate - No RFE (English) 1998-06-17
Application received - Regular national 1998-06-16
Inactive: Applicant deleted 1998-06-16

Abandonment History

Abandonment Date Reason Reinstatement Date
2006-04-10

Maintenance Fees

The last payment was received on 2005-03-29

Note: If the full payment has not been received on or before the date indicated, a further fee may be payable, which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse a deemed expiry.

Patent fees are adjusted on the 1st of January of every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type    Anniversary    Due Date    Date Paid
Filing fee - standard 1998-04-08
Registration of a document 1998-07-16
MF (application, 2nd anniv.) - standard 02 2000-04-10 2000-03-17
MF (application, 3rd anniv.) - standard 03 2001-04-09 2001-01-26
MF (application, 4th anniv.) - standard 04 2002-04-08 2002-03-25
MF (application, 5th anniv.) - standard 05 2003-04-08 2003-02-24
Request for examination - standard 2003-04-03
MF (application, 6th anniv.) - standard 06 2004-04-08 2004-03-19
MF (application, 7th anniv.) - standard 07 2005-04-08 2005-03-29
Owners on Record

The current and past owners on record are displayed in alphabetical order.

Current Owners on Record
THINKING TECHNOLOGY INC.
Past Owners on Record
ALBERT CHAN
Past owners that do not appear in the "Owners on Record" listing will appear in other documentation within the file.
Documents


List of published and unpublished patent documents on the CPD.

If you have difficulty accessing content, you can call the Client Service Centre at 1-866-997-1936 or send an e-mail to the CIPO Client Service Centre.


Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Representative drawing 1999-09-28 1 10
Drawings 1998-11-23 10 198
Description 1998-04-07 14 586
Abstract 1998-04-07 1 26
Drawings 1998-04-07 10 211
Claims 1998-04-07 5 138
Filing Certificate (English) 1998-06-16 1 162
Reminder of maintenance fee due 1999-12-08 1 111
Reminder - Request for Examination 2002-12-09 1 113
Acknowledgement of Request for Examination 2003-04-28 1 174
Courtesy - Abandonment Letter (R30(2)) 2006-04-05 1 166
Courtesy - Abandonment Letter (R29) 2006-04-05 1 166
Courtesy - Abandonment Letter (Maintenance Fee) 2006-06-04 1 175
Correspondence 1998-06-22 1 31
Correspondence 1998-10-18 1 16
Correspondence 1998-11-15 3 81
Correspondence 1998-11-23 11 230
Fees 2003-02-23 1 33
Fees 2002-03-24 1 28
Fees 2000-03-16 1 30
Fees 2001-01-25 1 31
Fees 2004-03-18 1 33
Fees 2005-03-28 1 33