Summary of Patent 3068920

Third-Party Information Liability Disclaimer

Some of the information on this Web site has been provided by external sources. The Government of Canada assumes no responsibility for the accuracy, currency or reliability of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Availability of the Abstract and Claims

Whether differences appear in the text and image of the Claims and Abstract depends on when the document is published. The texts of the Claims and Abstract are displayed:

  • when the application is open to public inspection;
  • when the patent is issued (grant).
(12) Patent Application: (11) CA 3068920
(54) French Title: PROCEDE ET SYSTEME D'INDICATION DE REPONSE D'UN PARTICIPANT A UNE REUNION VIRTUELLE
(54) English Title: VIRTUAL MEETING PARTICIPANT RESPONSE INDICATION METHOD AND SYSTEM
Status: Examination
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06Q 10/10 (2023.01)
  • G06F 03/0481 (2022.01)
(72) Inventors:
  • JONES, ALEXANDER (United Kingdom)
  • JONES, MARIA FRANCISCA (United Kingdom)
(73) Owners:
  • MARIA FRANCISCA JONES
(71) Applicants:
  • MARIA FRANCISCA JONES (United Kingdom)
(74) Agent: AIRD & MCBURNEY LP
(74) Associate Agent:
(45) Issued:
(86) PCT Filing Date: 2018-06-13
(87) Open to Public Inspection: 2019-01-10
Request for Examination: 2023-12-13
Licence Available: N/A
Dedicated to the Public: N/A
(25) Language of Filed Documents: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Application Number: PCT/GB2018/051619
(87) PCT International Publication Number: WO 2019/008320
(85) National Entry: 2020-01-03

(30) Application Priority Data:
Application No. Country/Territory Date
1710840.8 (United Kingdom) 2017-07-05

Abstracts

French Abstract

The invention concerns a method for indicating emotive responses in a virtual meeting, the method comprising: creating or selecting avatar data defining one or more avatars to represent one or more corresponding users in response to input from the corresponding user or users; receiving one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting; generating an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting; receiving emotive input data from one or more users indicating an emotive response or body language of the user or users attending the virtual meeting; processing the avatar data using the emotive input data; and updating the display output of the virtual meeting to render the avatar or avatars for the user or users so as to display a respective emotive state as a function of the respective emotive input data.


English Abstract

A method of indicating emotive responses in a virtual meeting, the method comprising creating or selecting avatar data defining one or more avatars to represent one or more corresponding users in response to input from the one or more corresponding users; receiving one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting; generating an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting; receiving emotive input data from one or more users indicative of an emotive response or body language of the one or more users attending the virtual meeting; processing the avatar data using the emotive input data; and updating the output for display of the virtual meeting to render the one or more avatars for the one or more users to display a respective emotive state dependent upon the respective emotive input data.

Claims

Note: The claims are shown in the official language in which they were submitted.


CLAIMS

1. A system for indicating emotive responses in a virtual meeting, the system comprising:
at least one processor; and
a memory storing instructions, the instructions being executable by the at least one processor to:
create or select avatar data defining one or more avatars to represent one or more corresponding users in response to input from the one or more corresponding users;
receive one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting;
generate an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting;
receive emotive input data from one or more users indicative of an emotive response or body language of the one or more users attending the virtual meeting;
process the avatar data using the emotive input data; and
update the output for display of the virtual meeting to render the one or more avatars for the one or more users to display a respective emotive state dependent upon the respective emotive input data.

2. A system according to claim 1, wherein the instructions comprise instructions executable by the at least one processor to render the one or more avatars to display body language associated with the emotive input data.

3. A system according to claim 1 or claim 2, including instructions executable by the at least one processor to receive video data for a meeting, wherein the video data includes video images of one or more participants in a meeting, and the instructions executable by the at least one processor to generate the output for display comprise instructions executable by the at least one processor to generate the output for display as an augmented reality meeting with one or more avatars representing one or more users overlaid on the video data with the video images of the participants.

4. A system according to any preceding claim, including instructions executable by the at least one processor to store a predefined set of emotive states, wherein the instructions executable by the at least one processor to receive the emotive input data comprise instructions to receive the emotive input data as a selection from an output for display of a menu of the emotive states.

5. A system according to any preceding claim, including instructions executable by the at least one processor to receive interaction input from one or more users attending the virtual meeting to cause the avatars to perform required interaction, and to update the output for display of the virtual meeting to render the one or more avatars for the one or more users from which interaction data is received to display the required interaction.

6. A method of indicating emotive responses in a virtual meeting, the method comprising:
creating or selecting avatar data defining one or more avatars to represent one or more corresponding users in response to input from the one or more corresponding users;
receiving one or more user selections of meeting data defining one or more virtual meetings, a user selection comprising an indication that the user is attending the virtual meeting;
generating an output for display of a virtual meeting with one or more avatars representing one or more users attending the meeting using the avatar data and the meeting data corresponding to the virtual meeting;
receiving emotive input data from one or more users indicative of an emotive response or body language of the one or more users attending the virtual meeting;
processing the avatar data using the emotive input data; and
updating the output for display of the virtual meeting to render the one or more avatars for the one or more users to display a respective emotive state dependent upon the respective emotive input data.

7. A method according to claim 6, wherein the one or more avatars are rendered to display body language associated with the emotive input data.

8. A method according to claim 6 or claim 7, including receiving video data for a meeting, wherein the video data includes video images of one or more participants in a meeting, and the output is generated for display as an augmented reality meeting with one or more avatars representing one or more users overlaid on the video data with the video images of the participants.

9. A method according to any one of claims 6 to 8, including storing a predefined set of emotive states, wherein the emotive input data is received as a selection from an output for display of a menu of the emotive states.

10. A method according to any one of claims 6 to 9, including receiving interaction input from one or more users attending the virtual meeting to cause the avatars to perform required interaction, and updating the output for display of the virtual meeting to render the one or more avatars for the one or more users from which interaction data is received to display the required interaction.

11. A carrier medium carrying processor executable code for execution by a processor to carry out the method of any one of claims 6 to 10.

12. A non-transient storage medium storing processor executable code for execution by a processor to carry out the method of any one of claims 6 to 10.

Description

Note: The descriptions are shown in the official language in which they were submitted.


CA 03068920 2020-01-03
WO 2019/008320 PCT/GB2018/051619
VIRTUAL MEETING PARTICIPANT RESPONSE INDICATION
METHOD AND SYSTEM
FIELD OF THE INVENTION
[0001] The present invention relates to a method and system for
indicating a response
of a participant in a virtual meeting.
BACKGROUND INFORMATION
[0002] For business and social reasons, computer users often arrange
meetings, such
as formal business meetings or informal gatherings in a virtual environment on
a computer-
networked system. Such meetings save the costs of travel to meet in person and
save travel
time. They are also very convenient and enable meetings of diverse and
distributed people at
short notice.
[0003] Virtual meetings can also form the basis of a framework for social
interactions
between members of a group of users. The interface hosting a virtual meeting
can also be
used as a means of providing many ancillary functions to accompany the
meeting.
[0004] In a meeting where people do not meet in person, it is important
to try to make
the interaction between people in the virtual environment as natural as
possible.
SUMMARY OF THE INVENTION
[0005] One aspect of the invention provides a system for indicating
emotive
responses in a virtual meeting, the system comprising at least one processor;
and a memory
storing instructions, which instructions being executable by the at least one
processor to:
create or select avatar data defining one or more avatars to represent one or
more
corresponding users in response to input from the one or more corresponding
users; receive
one or more user selections of meeting data defining one or more virtual
meetings, a user
selection comprising an indication that the user is attending the virtual
meeting; generate an
output for display of a virtual meeting with one or more avatars representing
one or more
users attending the meeting using the avatar data and the meeting data
corresponding to the
virtual meeting; receive emotive input data from one or more users indicative
of an emotive
response or body language of the one or more users attending the virtual
meeting; process
the avatar data using the emotive input data; and update the output for
display of the virtual
meeting to render the one or more avatars for the one or more users to display
a respective
emotive state dependent upon the respective emotive input data.
[0006] Another aspect of the invention provides a method of indicating
emotive
responses in a virtual meeting, the method comprising creating or selecting
avatar data defining
one or more avatars to represent one or more corresponding users in response
to input from
the one or more corresponding users; receiving one or more user selections of
meeting data
defining one or more virtual meetings, a user selection comprising an
indication that the user
is attending the virtual meeting; generating an output for display of a
virtual meeting with one
or more avatars representing one or more users attending the meeting using the
avatar data
and the meeting data corresponding to the virtual meeting; receiving emotive
input data from
one or more users indicative of an emotive response or body language of the
one or more
users attending the virtual meeting; processing the avatar data using the
emotive input data;
and updating the output for display of the virtual meeting to render the one
or more avatars
for the one or more users to display a respective emotive state dependent upon
the respective
emotive input data.
[0007] Another aspect of the invention provides a carrier medium or a
storage
medium carrying code executable by a processor to carry out the method.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Figure 1 is a schematic diagram illustrating a system according to
one
embodiment;
[0009] Figure 2 is a flow diagram of a method using the system of figure
1 according
to one embodiment;
[0010] Figure 3 is a schematic illustration of a user interface for a
virtual conference
generated according to one embodiment;
[0011] Figure 4 is a schematic diagram of a meeting using an augmented
reality
conference display according to one embodiment;
[0012] Figure 5 is a schematic illustration of a user interface for an
augmented reality
conference display generated in the embodiment of figure 4;
[0013] Figure 6 is a schematic illustration of a user interface for a
social meeting
generated according to one embodiment; and
[0014] Figure 7 is a schematic diagram of a basic computing device for
use in one
embodiment.
DETAILED DESCRIPTION
[0015] In the following detailed description, reference is made to the
accompanying
drawings that form a part hereof, and in which is shown by way of illustration
specific
embodiments in which the inventive subject matter may be practiced. These
embodiments
are described in sufficient detail to enable those skilled in the art to
practice them, and it is to
be understood that other embodiments may be utilized and that structural,
logical, and
electrical changes may be made without departing from the scope of the
inventive subject
matter. Such embodiments of the inventive subject matter may be referred to,
individually
and/or collectively, herein by the term "invention" merely for convenience and
without
intending to voluntarily limit the scope of this application to any single
invention or inventive
concept if more than one is in fact disclosed.
[0016] The following description is, therefore, not to be taken in a
limited sense, and
the scope of the inventive subject matter is defined by the appended claims.
[0017] In the following embodiments, like components are labelled with
like
reference numerals.
[0018] In the following embodiments, data is described as being stored in
at least one
database. The term database is intended to encompass any data structure
(and/or
combinations of multiple data structures) for storing and/or organizing data,
including, but
not limited to, relational databases (e.g., Oracle databases, MySQL databases, etc.), non-relational databases (e.g., NoSQL databases, etc.), in-memory databases, spreadsheets, comma-separated values (CSV) files, eXtensible Markup Language (XML) files, text (TXT)
files, flat files, spreadsheet files, and/or any other widely used or
proprietary format for data
storage. Databases are typically stored in one or more data stores.
Accordingly, each database
referred to herein (e.g., in the description herein and/or the figures of the
present application)
is to be understood as being stored in one or more data stores. A "file
system" may control
how data is stored and/or retrieved (for example, a disk file system like FAT,
NTFS, optical
discs, etc., a flash file system, a tape file system, a database file system,
a transactional file
system, a network file system, etc.). For simplicity, the disclosure is
described herein with
respect to databases. However, the systems and techniques disclosed herein may
be
implemented with file systems or a combination of databases and file systems.
[0019] In the following embodiments, the term data store is intended to
encompass
any computer readable storage medium and/or device (or collection of data
storage mediums
and/or devices). Examples of data stores include, but are not limited to,
optical disks (e.g.,
CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.),
memory
circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or
the like.
Another example of a data store is a hosted storage environment that includes
a collection of
physical data storage devices that may be remotely accessible and may be
rapidly provisioned
as needed (commonly referred to as "cloud" storage).
[0020] The functions or algorithms described herein are implemented in
hardware,
software or a combination of software and hardware in one embodiment. The
software
comprises computer executable instructions stored on computer readable carrier
media such
as memory or other type of storage devices. Further, described functions may
correspond to
modules, which may be software, hardware, firmware, or any combination
thereof. Multiple
functions are performed in one or more modules as desired, and the embodiments
described
are merely examples. The software is executed on a digital signal processor,
ASIC,
microprocessor, or other type of processor operating on a system, such as a
personal
computer, server, a router, or other device capable of processing data
including network
interconnection devices.
[0021] Some embodiments implement the functions in two or more specific
interconnected hardware modules or devices with related control and data
signals
communicated between and through the modules, or as portions of an application-
specific
integrated circuit. Thus, the exemplary process flow is applicable to
software, firmware, and
hardware implementations.
[0022] A generalized embodiment provides a method and system for
indicating
emotive responses in a virtual meeting, in which avatar data defining one or
more avatars to
represent one or more corresponding users in response to input from the one or
more
corresponding users is created or selected and one or more user selections of
meeting data
defining one or more virtual meetings is received. A user selection comprises
an indication
that the user is attending the virtual meeting. An output is generated for
display of a virtual
meeting with one or more avatars representing one or more users attending the
meeting using
the avatar data and the meeting data corresponding to the virtual meeting.
Emotive input data
is received from one or more users indicative of an emotive response or body
language of the
one or more users attending the virtual meeting. The avatar data is processed
using the
emotive input data, and the output for display of the virtual meeting is
updated to render the
one or more avatars for the one or more users to display a respective emotive
state dependent
upon the respective emotive input data.
[0023] The virtual meeting can be any form of meeting in a virtual
environment, such
as a business meeting, a conference, a social meeting, a chat room, a virtual
shop etc. In other
words, any virtual situation where users generate avatars to be present in a
virtual situation
where other avatars are present. The display of an emotive state in the
virtual environment
enables interaction with other users via the avatars to indicate emotive
states of users. Hence,
the emotive state of the avatars can be manipulated simply to reflect the emotive state of the user, so as to interact with other users by body language and without requiring text or any other form of indication. Body language in avatars is the most natural form
of expression of
emotions to other users via the virtual environment.
[0024] The virtual meeting can be a 'pure' virtual meeting where all of
the images of
the participants are generated as avatars. Alternatively, the virtual meeting
may be an
augmented reality meeting in which video images of one or more participants in
a meeting
are displayed, and the augmented reality meeting has one or more avatars
representing one or
more users overlaid on the video data with the video images of the
participants. In this way,
those participants who are not part of the 'real' meeting can express
themselves and interact
using the body language of their avatars.
[0025] Interaction input can be received from one or more users attending
the virtual
meeting to cause the avatars to perform required interaction, and the output
for display of the
virtual meeting is updated to render the one or more avatars for the one or
more users from
which interaction data is received to display the required interaction. For
example, the
interaction can include the emotive interaction of a greeting, including
shaking hands, 'high
fiving', hugging or kissing.
[0026] The user interface can, in one embodiment, be provided as a
conventional web
site having a displayed output and a pointer device and keyboard input by a
user. In
alternative embodiments, the interface can be provided by any form of visual
output and any
form of input such as keyboard, touch screen, pointer device (such as a mouse,
trackball,
trackpad, or pen device), audio recognition hardware and/or software to
recognize sounds
or speech from a user, gesture recognition input hardware and/or software,
etc.
[0027] In one embodiment, the method and system can be used with the
method and
system disclosed in copending US patent application number , filed on the
same
date as this application and entitled "VIRTUAL OFFICE", the content of which
is hereby
incorporated by reference in its entirety. Thus, the virtual meeting can be
part of a virtual
office to allow users to control their avatars to interact with images of
items of office
equipment to cause the items of office equipment to perform office functions.
[0028] In one embodiment, the method and system can be used with the
method and
apparatus disclosed in copending US patent application number ,
filed on the same
date as this application and entitled "METHOD AND APPARATUS TO TRANSFER DATA
FROM A FIRST COMPUTER STATE TO A DIFFERENT COMPUTER STATE", the
content of which is hereby incorporated by reference in its entirety.
[0029] In one embodiment, the method and system can be used with the
method and
apparatus disclosed in copending US patent application number ,
filed on the same
date as this application and entitled "EVENT BASED DEFERRED SEARCH METHOD
AND SYSTEM", the content of which is hereby incorporated by reference in its
entirety.
[0030] In one embodiment, the method and system can be used with the
method and
apparatus disclosed in co-pending US patent application number US15/395,343,
filed 30th
December 2016 and entitled "USER INTERFACE METHOD AND APPARATUS", the
content of which is hereby incorporated in its entirety. The user interface of
US15/395,343
can provide a means by which the user interacts with the system for inputs and
selections.
[0031] In one embodiment, the method and system can be used with the
electronic
transaction method and system disclosed in copending US patent application
number US
15/395,487, filed 30th December 2016 and entitled "AN ELECTRONIC TRANSACTION
METHOD AND APPARATUS", the content of which is hereby incorporated in its
entirety.
[0032] Specific embodiments will now be described with reference to the
drawings.
[0033] Figure 1 illustrates a generalized system according to one
embodiment.
[0034] Figure 1 illustrates two client devices 100A and 100B, each for
use by a user.
Any number of client devices may be used. The client devices 100A and 100B can
comprise
any type of computing or processing machine, such as a personal computer, a
laptop, a tablet
computer, a personal organizer, a mobile device, smart phone, a mobile
telephone, a video
player, a television, a multimedia device, personal digital assistant, etc. In
this embodiment
each client device executes a web browser 101A and 101B to enable it to
interact with hosted
web pages at a server system 1000. In an alternative embodiment, the web
browser 101A and
101B can be replaced by an application running on the client devices 100A and
100B.
[0035] The client devices 100A and 100B are connected to a network, which
in this example is the internet 50. The network can comprise any suitable
communications network
for networking computer devices.
[0036] The server system 1000 comprises any number of server computers
connected
to the internet 50. The server system 1000 operates to provide the service
according to
embodiments of the invention. The server system 1000 comprises a web server
110 to host
web pages to be accessed and rendered by the browsers 101A and 101B. An
application
server 120 is connected to the web server 110 to provide dynamic data for the
web server
110. The application server 120 is connected to a data store 195. The data
store 195 stores
data in a number of different databases, namely a user database 130, an avatar
database 140, a
virtual world data store 150, a meeting database 160, and an emotional
response database
170. The user database 130 stores information on the user, which can include
an identifier,
name, age, username and password, date of birth, address, etc. The avatar
database 140 can
store data on avatars available to be created by users to represent themselves
and the user
generated avatars associated with the user data. The virtual world data store
150 stores data
required to create the virtual meeting environments. The meeting database 160
can store data
on specific meetings, including a meeting identifier, a meeting name,
associated users
attending the meeting (hence indirectly the avatars to be rendered in the
virtual meeting), an
identifier for any video stream to be rendered as part of an augmented reality
virtual meeting,
meeting date, meeting login information, etc. The emotional response database
170 can store
data indicative of a set of emotional responses that can be selected by a user
and used to
modify the rendered appearance of the avatars. The avatar data and processing
for rendering
in the virtual environment can be structured to allow each of the emotional
responses to be
applied. The emotional responses can be such things as: smile, laugh, cry,
greet by
handshake, hug or kiss, bored, frown, cross/angry, amazed, relaxed,
interested/a look of
intent, etc.
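By way of illustration only, the records held in the data store 195 might be typed as follows in TypeScript; every name, field and type here is an assumption made for illustration, not part of the disclosure.

```typescript
// Illustrative record types for the databases of figure 1 (assumed names/fields).

// The predefined emotional responses listed above, as a closed set.
type EmotiveState =
  | "smile" | "laugh" | "cry" | "greet-handshake" | "hug" | "kiss"
  | "bored" | "frown" | "angry" | "amazed" | "relaxed" | "interested";

interface UserRecord {
  id: string;           // identifier from the user database 130
  name: string;
  username: string;
  dateOfBirth?: string; // ISO date, optional
}

interface AvatarRecord {
  id: string;
  userId: string;                    // the user the avatar represents
  model: string;                     // reference into the avatar database 140
  emotiveState: EmotiveState | null; // last emotive input applied
}

interface MeetingRecord {
  id: string;
  name: string;
  attendeeUserIds: string[]; // indirectly, the avatars to render
  videoStreamId?: string;    // set only for augmented reality meetings
  date: string;
  loginInfo?: string;
}
```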
[0037] Figure 2 is a flow diagram of a process for indicating emotive
responses in a
virtual meeting using the system of figure 1 according to one embodiment.
[0038] In step S10 a user creates or selects an avatar to represent them
in a virtual
meeting. In step S11 a user selection of meeting data defining a virtual
meeting is received. A
user selection comprises an indication that the user is attending the virtual
meeting. In step
S12 an output for display of the virtual meeting is generated with an avatar
representing the
users attending the meeting using the avatar data and the meeting data
corresponding to the
virtual meeting. In step S13 emotive input data is received from the user
indicative of an
emotive response or body language of the user attending the virtual meeting.
In step S14 the
avatar data is processed using the emotive input data and in step S15 the
output for display of
the virtual meeting is updated to render the avatar for the user to display an
emotive state
dependent upon the emotive input data.
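Purely as a sketch, steps S10 to S15 could be expressed as a server-side flow along the following lines, reusing the record types sketched above; the service shape and method names are assumptions, not the patented implementation.

```typescript
// Illustrative flow for figure 2 (assumed API).
interface EmotiveInput { userId: string; meetingId: string; state: EmotiveState; }

class MeetingService {
  constructor(private avatars: Map<string, AvatarRecord>,
              private meetings: Map<string, MeetingRecord>) {}

  // S10/S11: the user has created or selected an avatar and indicates attendance.
  join(meetingId: string, userId: string): void {
    const meeting = this.meetings.get(meetingId);
    if (meeting && !meeting.attendeeUserIds.includes(userId)) {
      meeting.attendeeUserIds.push(userId);
    }
  }

  // S13/S14: receive emotive input and process the avatar data, then return the
  // updated set of avatars to render for the meeting display output (S12/S15).
  applyEmotiveInput(input: EmotiveInput): AvatarRecord[] {
    for (const avatar of this.avatars.values()) {
      if (avatar.userId === input.userId) {
        avatar.emotiveState = input.state; // S14: process the avatar data
      }
    }
    const meeting = this.meetings.get(input.meetingId);
    return meeting
      ? [...this.avatars.values()].filter(a => meeting.attendeeUserIds.includes(a.userId))
      : [];
  }
}
```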
[0039] Figure 3 is a schematic illustration of a user interface for a
virtual conference
generated according to one embodiment.
[0040] The display 200 includes a virtual conference area 201 to display
the virtual
conference and a reaction menu area 202 displaying user selectable menu items
to enable a
user to select to input an emotive response or body language to be applied to
their avatar in
the virtual conference for interaction with other attendees. The other
attendees will be able to
see the user's emotional reaction as applied to their avatar in the virtual
conference display
area enabling them to react accordingly, for example by changing the emotive
response
displayed by their own avatar or by taking some other action in the virtual
conference.
Although in this embodiment, the menu is illustrated as a text menu, the menu
could
comprise icons or images depicting various emotional states that the user can
select to modify
their avatar's appearance and behaviour to display the emotional response and
body language
according to the user's selection.
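A hypothetical client-side handler for such a menu selection might simply post the chosen emotive state to the server, which then updates the avatar for all attendees; the endpoint and payload shape below are assumptions.

```typescript
// Illustrative only: send the selected reaction menu item as emotive input data.
async function onReactionSelected(meetingId: string, userId: string,
                                  state: EmotiveState): Promise<void> {
  await fetch(`/meetings/${meetingId}/emotive-input`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, state }),
  });
}
```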
[0041] A menu could also be displayed to a user to allow a user to select
sounds or
music that the avatar could output in the virtual conference, e.g. selected wording or ready-made phrases like 'greetings', 'hey' or 'what's up?', or birthday or greeting messages that could be ready-made or composable by the user. These could be selectable in different accents, such as American or English, or even as an impersonation of a famous person.
[0042] There could be a translation option which translates and replays a
message,
such as part of what the user wants to say, e.g. speaking French when being romantic. This can be a pre-saved recording, or the system may translate what the user (avatar) has just said, although it may be a bit delayed. In one example, there is a prerecorded and saved message option, where the user is able to record a message and play it back via their avatar as, for example, a response to another avatar or guest user that they are meeting.
[0044] The display 200 includes outside the area 201 for the virtual
conference a
shared message area 203 that can be used to share messages with any other user
individually,
in groups or globally to the virtual conference attendees. Also, outside the
area 201 for the
virtual conference, a shared display area 204 is displayed. In this example,
it corresponds to a
virtual white board 203 in the virtual conference so that anything drawn on
the shared display
area will appear on the virtual white board 203.
[0045] In the virtual conference area 201 there are displayed avatars of
attendees of
the meeting. Four are seated. Two attendees 206 are shown greeting each other
by shaking
hands. To achieve this, the users corresponding to the avatars 206 have
selected a reaction
menu item to shake hands. One avatar 207 for a user is shown displaying anger.
One avatar
208 is shown smiling.
[0046] The virtual conference can be controlled to operate as a
conventional
conference, with each user of a client device being able to speak to input
audio for
transmission to the client devices of the other attendees. In one example,
documents can be
entered into the meeting by placing them on the table in the virtual display.
The location of the placement will affect who can see them. To show them to everyone, copies of the document may be placed before everyone. Documents can be dragged into a virtual filing cabinet 214 to file them, or the user can select to find a file in the virtual filing cabinet 214, or search the virtual filing cabinet 214 to cause a filing system to be searched to find
documents. Users can make their avatars move in the virtual conference and
when they leave
the conference, they can be shown exiting through a door 205.
[0047] The perspective of the virtual conference displayed for each attendee can vary
depending upon their assigned seating position around the table.
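As an illustrative calculation only, a per-attendee perspective could be derived from an assigned seat index around a round table:

```typescript
// Place seatCount seats evenly around a table and derive a camera position for
// the given seat, looking at the table centre. All values are illustrative.
function seatCamera(seatIndex: number, seatCount: number, radius = 2.5) {
  const angle = (2 * Math.PI * seatIndex) / seatCount;
  return {
    position: { x: radius * Math.cos(angle), y: 1.2, z: radius * Math.sin(angle) },
    lookAt: { x: 0, y: 1.0, z: 0 }, // table centre
  };
}
```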
[0048] Figure 4 is a schematic diagram of a meeting using an augmented
reality
conference display according to one embodiment.
[0049] In the foreground, a physical real world conference is taking
place around a
table with four participants. At the end of the table is a display 300
displaying participants
attending virtually using their avatars 301 and 302. The avatar 301 has been
controlled by its
respective user by an emotive input to reflect a happy or smiley face. The
avatar 302 has been
controlled by its respective user by an emotive input to reflect an angry or
annoyed face.
[0050] The augmented reality conference can be controlled to operate as a
conventional conference, with each user of a client device being able to speak
to input audio
for transmission to the client devices of the other attendees and to speakers
associated with
the display 300. In one example, documents can be entered into the meeting by
placing them
on the table in the virtual display 300. The location of the placement can affect who can see
them. To show them to everyone, copies of the document need to be placed
before everyone.
In one example, documents can be dragged into a virtual filing cabinet 304 to
file them. Users
can make their avatars move in the virtual conference and when they leave the
conference,
they can be shown exiting through a door 303. A video camera or webcam 305 is
provided to
provide a video feed of the real attendees to the remote or virtual attendees'
computers, as
shown in figure 5.
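A minimal sketch of the compositing for such an augmented reality view, assuming a browser canvas and a hypothetical drawAvatar renderer:

```typescript
// Draw each webcam frame of the real attendees, then overlay the avatars of the
// virtual attendees on top. drawAvatar is a hypothetical avatar renderer.
function renderARFrame(ctx: CanvasRenderingContext2D,
                       video: HTMLVideoElement,
                       avatars: AvatarRecord[],
                       drawAvatar: (ctx: CanvasRenderingContext2D,
                                    a: AvatarRecord, slot: number) => void): void {
  ctx.drawImage(video, 0, 0, ctx.canvas.width, ctx.canvas.height); // real attendees
  avatars.forEach((a, slot) => drawAvatar(ctx, a, slot));          // overlaid avatars
  requestAnimationFrame(() => renderARFrame(ctx, video, avatars, drawAvatar));
}
```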
[0051] Figure 5 is a schematic illustration of a user interface for an
augmented reality
conference display generated for a virtual attendee of the embodiment of
figure 4.
[0052] The display 350 includes an augmented reality conference area 310
to display
the augmented reality conference comprising a video stream of the physical
attendees and a
virtual conference segment conjoined. A reaction menu area 380 displays user
selectable
menu items to enable the user to select to input an emotional response or body
language to be
applied to their avatar in the augmented reality conference for interaction
with other
attendees. The other attendees will be able to see the user's emotional and
physical reaction
as applied to their avatar in the augmented reality conference display area
enabling them to
react accordingly, for example by changing the emotive response displayed by
their own
avatar or by taking some other action in the augmented reality conference.
Although in this
embodiment, the menu is illustrated as a text menu, the menu could comprise
icons or images
depicting various emotional states that the user can select to modify their
avatar's appearance
and behaviour to display the emotional response and body language according to
the user's
selection.
[0053] In one example, a user can select to share music data, which assists in displaying a user's mood or expression of emotion, or it can be used in response to another user's response, e.g. to play, share, save or enjoy the tune or song, e.g. a happy song to share with another user (avatar). A user's mood can be displayed by playing saved or selected music, e.g. sad music for feeling down, sad, lonely or blue, or happy music when they are feeling good. Also, in one example, a user is able to tune into a radio station and find a tune that is apt for the user's emotion at the time.
[0054] Also, in one example, a user is able to select and apply colours (chromotherapy, sometimes called colour therapy), e.g. virtual paint in different colours. A user may select to paint a virtual bedroom in a magical sparkly colour, or a deep dark colour, to show friends how the user is feeling in the user's virtual space.
[0055] The augmented reality conference can be controlled to operate as a
conventional conference, with each user of a client device attending the
virtual conference
segment being able to speak to input audio for transmission to the client
devices of the other
virtual attendees and to the speaker associated with the display 300 for the
physical (real)
attendees. In one example, documents that are physically entered into the real
conference can
be entered into the virtual conference by placing them on the table in the
virtual display
segment of the augmented reality conference. The location of the placement will affect who
can see them. To show them to everyone in the virtual segment of the augmented
reality
conference, copies of the document can be placed before every virtual
attendee. Documents
can be dragged into a virtual filing cabinet 304 to file them. Users can make
their avatars
move in the virtual segment of the augmented reality conference and when they
leave the
conference, they can be shown exiting through a door 303.
[0056] The display 350 includes a shared message area 360 that can be
used to share
messages with any other user individually, in groups or globally to the
augmented reality
conference attendees. Also, a shared display area 370 is displayed.
[0057] Figure 6 is a schematic illustration of a user interface for a
social meeting
generated according to one embodiment.
[0058] A display 400 includes a virtual meeting area 410 in which avatars
can be
displayed in a virtual environment. In this embodiment, avatar 403 has been
controlled by its
user to smile, avatar 402 has been controlled to laugh and the two avatars 401
in the
foreground have been controlled to greet each other by shaking hands.
[0059] A reaction menu area 404 displays user selectable menu items to
enable a user
to select to input an emotional response or body language to be applied to
their avatar in the
virtual meeting for interaction with other attendees. The other attendees will
be able to see the
user's emotional reaction as applied to their avatar in the virtual meeting
display area 410
enabling them to react accordingly, for example by changing the emotive
response displayed
by their own avatar or by taking some other action in the virtual meeting.
[0060] The display 400 includes outside the area 410 for the virtual
meeting a shared
message area 405 that can be used to share messages with any other user
individually, in
groups or globally to the virtual meeting attendees. Also, outside the area
410 for the virtual
meeting, a shared display area 406 is displayed. In this example, it
corresponds to a news
item shared between the two users represented by the avatars 402 and 403. The
message area
displays a private message exchange between avatar 403 (David) and avatar 402
(Steve)
related to the news item. The avatars' emotional responses have been adjusted by input from the associated users to reflect their interaction regarding the news item.
[0061] The system can be controlled to allow users to join and move
between
meetings that take place in different rooms. These rooms could be displayed
schematically as,
for example, a room map to allow a user to select to move from one room to
another to join
and leave a meeting. The rooms can represent different types of meetings e.g.
a games room
meeting, a coffee table meeting etc. Also users can set up meetings and invite
other users to
the meetings with the virtual location and time of the meeting being set by
the inviting user.
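An illustrative sketch of moving a user between such rooms, with assumed room and data shapes:

```typescript
// Illustrative only: leave the current room (if any) and join the target room.
interface Room {
  id: string;
  kind: "games" | "coffee-table" | "conference";
  attendeeUserIds: string[];
}

function moveUser(rooms: Room[], userId: string, toRoomId: string): void {
  for (const room of rooms) {
    room.attendeeUserIds = room.attendeeUserIds.filter(id => id !== userId);
  }
  rooms.find(r => r.id === toRoomId)?.attendeeUserIds.push(userId);
}
```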
[0062] In the displayed area of the meeting, identifiers of the avatars
can be displayed
or alternatively or in addition, a list of attendees can be displayed.
[0063] The virtual meeting using avatars could be in the environment
related to any
corresponding real world environment, such as in a shop, or in a gym.
[0064] In the embodiments described above, the user input to set the
emotional state
of the avatar is based on a simple menu selection. However, other forms of
user input can be
used. For example, a camera can be provided to take a picture or video of a
user's face and
possibly body and determine an emotional response of the user. Also, the user
could be
provided with the ability to input free text by typing or by recognition of
speech to describe
their emotional response to control their avatar. The picture or video of the user could also be used to capture the user's current clothing and to adapt the avatar to represent the different clothes worn by the user, e.g. outfits, a suit and tie, a dress, fancy dress, etc. This can be used to facilitate the user's ability to dress smartly or casually in a virtual meeting. A user can choose a dress to wear, or a suit and tie which can be changed for each meeting, e.g. a different colour tie.
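As a sketch of such an alternative input, free text (or a label produced by a hypothetical facial-expression classifier) could be mapped onto the predefined set of emotive states; the keyword table below is an assumption for illustration.

```typescript
// Map recognized words onto the predefined emotive state set (illustrative table).
const KEYWORDS: Record<string, EmotiveState> = {
  happy: "smile", joy: "smile", funny: "laugh", sad: "cry",
  tired: "bored", annoyed: "frown", furious: "angry", wow: "amazed",
};

function emotiveStateFromText(text: string): EmotiveState | null {
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (word in KEYWORDS) return KEYWORDS[word];
  }
  return null; // no recognizable emotive keyword; leave the avatar unchanged
}
```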
[0065] The avatar generated can be selected by the user to take any form.
For
example, the avatar could be an animal with the user's own features included, or any other character mixed with the user's own (i.e. human) features, which can be adapted.
[0066] This would suit different age groups, as the environment for the meeting can be chosen as desired by the user or group of users: groups of old and young people, e.g. a family or social group, such as a gran in Ireland meeting up virtually with a young grandchild in Australia to share a story and have a giggle. Users can choose casual dress
to suit or match
the virtual environment, or the virtual environment can change to match the
selected outfit.
Users can enjoy virtual accessories and items to meet their needs within the
virtual meeting,
which they could buy from a virtual shop, go into a virtual changing room, and
then they are
ready for the next virtual meeting.
[0067] A user can select for example from a menu whether to join another
virtual
meeting in another virtual meeting room.
[0068] In one example, the virtual meeting is in a virtual restaurant or
a social
gathering involving virtual food and/or drink.
BASIC COMPUTING DEVICE
[0069] Figure 7 is a block diagram that illustrates a basic computing
device 600 in
which the example embodiment(s) of the present invention may be embodied.
Computing
device 600 and its components, including their connections, relationships, and
functions, are meant to be exemplary only, and not meant to limit implementations of the
example
embodiment(s). Other computing devices suitable for implementing the example
embodiment(s) may have different components, including components with
different
connections, relationships, and functions.
[0070] The computing device 600 can comprise any of the servers or the
user device
as illustrated in figure 1 for example.
[0071] Computing device 600 may include a bus 602 or other communication
mechanism for addressing main memory 606 and for transferring data between and
among
the various components of device 600.
[0072] Computing device 600 may also include one or more hardware
processors 604
coupled with bus 602 for processing information. A hardware processor 604 may
be a general
purpose microprocessor, a system on a chip (SoC), or other processor.
[0073] Main memory 606, such as a random access memory (RAM) or other
dynamic
storage device, also may be coupled to bus 602 for storing information and
software
instructions to be executed by processor(s) 604. Main memory 606 also may be
used for
storing temporary variables or other intermediate information during execution
of software
instructions to be executed by processor(s) 604.
[0074] Software instructions, when stored in storage media accessible to
processor(s)
604, render computing device 600 into a special-purpose computing device that
is customized
to perform the operations specified in the software instructions. The terms
"software",
"software instructions", "computer program", "computer-executable
instructions", and
"processor-executable instructions" are to be broadly construed to cover any
machine-
readable information, whether or not human-readable, for instructing a
computing device to
perform specific operations, and including, but not limited to, application
software, desktop
applications, scripts, binaries, operating systems, device drivers, boot
loaders, shells, utilities,
system software, JAVASCRIPT, web pages, web applications, plugins, embedded
software,
microcode, compilers, debuggers, interpreters, virtual machines, linkers, and
text editors.
[0075] Computing device 600 also may include read only memory (ROM) 608
or
other static storage device coupled to bus 602 for storing static information
and software
instructions for processor(s) 604.
[0076] One or more mass storage devices 610 may be coupled to bus 602 for
persistently storing information and software instructions on fixed or
removable media, such
as magnetic, optical, solid-state, magnetic-optical, flash memory, or any
other available mass
storage technology. The mass storage may be shared on a network, or it may be
dedicated
mass storage. Typically, at least one of the mass storage devices 610 (e.g.,
the main hard disk
for the device) stores a body of program and data for directing operation of
the computing
device, including an operating system, user application programs, driver and
other support
files, as well as other data files of all sorts.
[0077] Computing device 600 may be coupled via bus 602 to display 612,
such as a
liquid crystal display (LCD) or other electronic visual display, for
displaying information to a
computer user. In some configurations, a touch sensitive surface incorporating
touch
detection technology (e.g., resistive, capacitive, etc.) may be overlaid on
display 612 to form
a touch sensitive display for communicating touch gesture (e.g., finger or
stylus) input to
processor(s) 604.
[0078] An input device 614, including alphanumeric and other keys, may be
coupled
to bus 602 for communicating information and command selections to processor
604. In
addition to or instead of alphanumeric and other keys, input device 614 may
include one or
more physical buttons or switches such as, for example, a power (on/off)
button, a "home"
button, volume control buttons, or the like.
[0079] Another type of user input device may be a cursor control 616,
such as a
mouse, a trackball, a cursor, a touch screen, or direction keys for
communicating direction
information and command selections to processor 604 and for controlling cursor
movement
on display 612. This input device typically has two degrees of freedom in two
axes, a first
axis (e.g., x) and a second axis (e.g., y), that allows the device to specify
positions in a plane.
Other input device embodiments include an audio or speech recognition input
module to
recognize audio input such as speech, a visual input device capable of
recognizing gestures
by a user, and a keyboard.
[0080] While in some configurations, such as the configuration depicted
in figure 7,
one or more of display 612, input device 614, and cursor control 616 are
external components
(i.e., peripheral devices) of computing device 600, some or all of display
612, input device
614, and cursor control 616 are integrated as part of the form factor of
computing device 600
in other configurations.
[0081] In addition to or in place of the display 612, any other form of user output device can be used, such as an audio output device or a tactile (vibrational) output device.
[0082] Functions of the disclosed systems, methods, and modules may be
performed
by computing device 600 in response to processor(s) 604 executing one or more
programs of
software instructions contained in main memory 606. Such software instructions
may be read
into main memory 606 from another storage medium, such as storage device(s)
610 or a
transmission medium. Execution of the software instructions contained in main
memory 606
causes processor(s) 604 to perform the functions of the example embodiment(s).
[0083] While functions and operations of the example embodiment(s) may be
implemented entirely with software instructions, hard-wired or programmable
circuitry of
computing device 600 (e.g., an ASIC, a FPGA, or the like) may be used in other
embodiments in place of or in combination with software instructions to
perform the
functions, according to the requirements of the particular implementation at
hand.
[0084] The term "storage media" as used herein refers to any non-
transitory media
that store data and/or software instructions that cause a computing device to
operate in a
specific fashion. Such storage media may comprise non-volatile media and/or
volatile media.
Non-volatile media includes, for example, non-volatile random access memory
(NVRAM),
flash memory, optical disks, magnetic disks, or solid-state drives, such as
storage device 610.
Volatile media includes dynamic memory, such as main memory 606. Common forms
of
storage media include, for example, a floppy disk, a flexible disk, hard disk,
solid-state drive,
magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other
optical
data storage medium, any physical medium with patterns of holes, a RAM, a
PROM, an EPROM, a FLASH-EPROM, NVRAM, flash memory, or any other memory chip or cartridge.
[0085] Storage media is distinct from but may be used in conjunction with
transmission media. Transmission media participates in transferring
information between
storage media. For example, transmission media includes coaxial cables, copper
wire and
fiber optics, including the wires that comprise bus 602. Transmission media
can also take the
form of acoustic or light waves, such as those generated during radio-wave and
infra-red data
communications. A machine readable medium carrying instructions in the form of
code can
comprise a non-transient storage medium and a transmission medium.
[0086] Various forms of media may be involved in carrying one or more
sequences of
one or more software instructions to processor(s) 604 for execution. For
example, the
software instructions may initially be carried on a magnetic disk or solid-
state drive of a
remote computer. The remote computer can load the software instructions into
its dynamic
memory and send the software instructions over a telephone line using a modem.
A modem
local to computing device 600 can receive the data on the telephone line and
use an infra-red
transmitter to convert the data to an infra-red signal. An infra-red detector
can receive the
data carried in the infra-red signal and appropriate circuitry can place the
data on bus 602.
Bus 602 carries the data to main memory 606, from which processor(s) 604
retrieves and
executes the software instructions. The software instructions received by main
memory 606
may optionally be stored on storage device(s) 610 either before or after
execution by
processor(s) 604.
[0087] Computing device 600 also may include one or more communication
interface(s) 618 coupled to bus 602. A communication interface 618 provides a
two-way data
communication coupling to a wired or wireless network link 620 that is
connected to a local
network 622 (e.g., Ethernet network, Wireless Local Area Network, cellular
phone network,
Bluetooth wireless network, or the like). Communication interface 618 sends
and receives
electrical, electromagnetic, or optical signals that carry digital data
streams representing
various types of information. For example, communication interface 618 may be
a wired
network interface card, a wireless network interface card with an integrated
radio antenna, or
a modem (e.g., ISDN, DSL, or cable modem).
[0088] Network link(s) 620 typically provide data communication through
one or
more networks to other data devices. For example, a network link 620 may
provide a
connection through a local network 622 to a host computer or to data equipment
operated by
an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the "Internet". Local network(s) 622 and the Internet use electrical, electromagnetic
or optical
signals that carry digital data streams. The signals through the various
networks and the
signals on network link(s) 620 and through communication interface(s) 618,
which carry the
digital data to and from computing device 600, are example forms of
transmission media.
[0089] Computing device 600 can send messages and receive data, including
program
code, through the network(s), network link(s) 620 and communication
interface(s) 618. In the
Internet example, a server might transmit a requested code for an application
program
through Internet, ISP, local network(s) 622 and communication interface(s)
618.
[0090] The received code may be executed by processor 604 as it is
received, and/or
stored in storage device 610, or other non-volatile storage for later
execution.
[0091] One aspect provides a carrier medium, such as a non-transient
storage medium
storing code for execution by a processor of a machine to carry out the
method, or a transient
medium carrying processor executable code for execution by a processor of a
machine to
carry out the method. Embodiments can be implemented in programmable digital
logic that
implements computer code. The code can be supplied to the programmable logic,
such as a
processor or microprocessor, on a carrier medium. One such embodiment of a
carrier medium
is a transient medium i.e. a signal such as an electrical, electromagnetic,
acoustic, magnetic,
or optical signal. Another form of carrier medium is a non-transitory storage
medium that
stores the code, such as a solid-state memory, magnetic media (hard disk
drive), or optical
media (Compact disc (CD) or digital versatile disc (DVD)).
[0092] It will be readily understood to those skilled in the art that
various other
changes in the details, material, and arrangements of the parts and method
stages which have
been described and illustrated in order to explain the nature of the inventive
subject matter
may be made without departing from the principles and scope of the inventive
subject matter
as expressed in the subjoined claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the Next-Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a better understanding of the status of the application/patent presented on this page, the Caveat section and the descriptions for Patent, Event History, Maintenance Fees and Payment History should be consulted.

Event History

Description Date
Request for declaration of small entity status received 2024-04-10
Maintenance fee payment determined compliant 2024-03-27
Inactive: Office letter 2024-03-27
Inactive: Delete abandonment 2024-03-27
Inactive: Office letter 2024-01-11
Inactive: Office letter 2023-12-20
Inactive: IPC assigned 2023-12-18
Inactive: First IPC assigned 2023-12-18
Inactive: IPC assigned 2023-12-18
Inactive: Reply received: maintenance fee + late fee 2023-12-13
Requirements for request for examination determined compliant 2023-12-13
All requirements for examination determined compliant 2023-12-13
Small entity declaration determined compliant 2023-12-13
Reinstatement requirements deemed compliant for all abandonment reasons 2023-12-13
Deemed abandoned - failure to respond to a maintenance fee notice 2023-12-13
Request for reinstatement received 2023-12-13
Request for declaration of small entity status received 2023-12-13
Deemed abandoned - failure to respond to a request for examination notice 2023-09-25
Letter sent 2023-06-13
Letter sent 2023-06-13
Inactive: IPC expired 2023-01-01
Inactive: IPC expired 2023-01-01
Inactive: IPC removed 2022-12-31
Inactive: IPC removed 2022-12-31
Common representative appointed 2020-11-07
Inactive: Cover page published 2020-02-17
Letter sent 2020-01-30
Inactive: First IPC assigned 2020-01-23
Letter sent 2020-01-23
Priority claim requirements determined compliant 2020-01-23
Request for priority received 2020-01-23
Inactive: IPC assigned 2020-01-23
Inactive: IPC assigned 2020-01-23
Application received - PCT 2020-01-23
National entry requirements determined compliant 2020-01-03
Application published (open to public inspection) 2019-01-10

Abandonment History

Abandonment Date Reason Reinstatement Date
2023-12-13
2023-12-13
2023-09-25

Maintenance Fees

The last payment was received on 2023-12-13

Note: If the full payment has not been received on or before the date indicated, a further fee may be payable, being one of the following:

  • reinstatement fee;
  • late payment fee; or
  • additional fee to reverse a deemed expiry.

Patent fees are adjusted on January 1 of each year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Fee History

Fee Type Anniversary Due Date Paid Date
Registration of a document 2020-01-03 2020-01-03
Basic national fee - standard 2020-01-03 2020-01-03
MF (application, 2nd anniv.) - standard 02 2020-06-15 2020-06-08
MF (application, 3rd anniv.) - standard 03 2021-06-14 2021-06-08
MF (application, 4th anniv.) - standard 04 2022-06-13 2022-06-13
MF (application, 5th anniv.) - small 05 2023-06-13 2023-12-13
Late fee (subsection 35(3) of the Act) 2023-12-13 2023-12-13
Late fee (subsection 27.1(2) of the Act) 2023-12-13 2023-12-13
2024-09-25 2023-12-13
Request for examination - standard 2023-06-13 2023-12-13
Owners on Record

The current and past owners on record are displayed in alphabetical order.

Current Owners on Record
MARIA FRANCISCA JONES
Past Owners on Record
ALEXANDER JONES
Past owners who do not appear in the "Owners on Record" list will appear in other documents on record.
Documents



Document Description Date (yyyy-mm-dd) Number of Pages Image Size (KB)
Abstract 2020-01-02 1 68
Drawings 2020-01-02 7 236
Description 2020-01-02 17 975
Claims 2020-01-02 3 118
Representative drawing 2020-01-02 1 14
Courtesy - Office Letter 2024-01-10 2 215
Reinstatement (RE) 2023-12-12 12 591
Maintenance fee + late fee 2023-12-12 12 611
Courtesy - Office Letter 2024-03-26 1 196
Small entity declaration 2024-04-09 6 173
Courtesy - Acknowledgement of payment of maintenance fee and late fee 2024-03-26 1 434
Courtesy - Letter confirming national entry under the PCT 2020-01-29 1 593
Courtesy - Certificate of registration (related document(s)) 2020-01-22 1 334
Commissioner's Notice - Request for examination not made 2023-07-24 1 519
Commissioner's Notice - Maintenance fee for a patent application not paid 2023-07-24 1 550
Courtesy - Abandonment letter (request for examination) 2023-11-05 1 550
Courtesy - Abandonment letter (maintenance fee) 2024-01-23 1 550
Courtesy - Office Letter 2023-12-19 2 215
National entry request 2020-01-02 7 341
Patent Cooperation Treaty (PCT) 2020-01-02 2 128
International search report 2020-01-02 2 48