Patent 3045478 Summary

(12) Patent Application: (11) CA 3045478
(54) English Title: SYSTEMS AND METHODS OF HOMECAGE MONITORING
(54) French Title: SYSTEMES ET METHODES POUR CAGE D'HABITATION
Status: Examination Requested
Bibliographic Data
(51) International Patent Classification (IPC):
  • A01K 29/00 (2006.01)
  • G16H 15/00 (2018.01)
  • G06T 7/20 (2017.01)
  • G06N 3/02 (2006.01)
(72) Inventors :
  • BERMUDEZ CONTRERAS, EDGAR JOSUE (Canada)
  • SUTHERLAND, ROBERT JAMES (Canada)
  • MOHAJERANI, MAJID (Canada)
  • SINGH, SURJEET (Canada)
(73) Owners :
  • NEUROCAGE SYSTEMS LTD. (Canada)
(71) Applicants :
  • BERMUDEZ CONTRERAS, EDGAR JOSUE (Canada)
  • SUTHERLAND, ROBERT JAMES (Canada)
  • MOHAJERANI, MAJID (Canada)
  • SINGH, SURJEET (Canada)
(74) Agent: SJOVOLD, SUZANNE B.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2019-06-06
(41) Open to Public Inspection: 2020-12-06
Examination requested: 2024-06-06
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract


Computer controlled systems and methods of an automated homecage monitoring system
predict a state of an animal and of its homecage. The prediction of the state of the animal
may be based on a pose estimate of the animal, and at least one sensor input from an at
least one sensor. The pose estimate may include a graph of connected nodes, the nodes
representing coordinates of the sensor input corresponding to predicted indicia on the
animal's body. The animal state may include a behavioral state of an animal, a social state
of an animal, a position state of an animal, a sleep state of an animal, and a biological
state of an animal. The automated homecage monitoring system may allow for animal
state data to be reported for an animal or animals in situ.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
We claim:
1. A method for quantifying animal activity, comprising:
- receiving an at least two video frames;
- determining at least two processed video frames from the at least two video frames by pre-processing;
- determining a background image from the at least two processed video frames;
- determining an at least two thresholded video frames by applying the background image as a threshold to the at least two video frames;
- determining a contour from the at least two thresholded video frames; and
- determining an animal motion flag if the contour is larger than a minimum area.
2. A method for quantifying animal activity, comprising:
- receiving at least two video frames;
- determining at least two processed video frames from the at least two video frames by pre-processing;
- determining an image background from the at least two processed video frames;
- determining a contour from the at least two thresholded video frames;
- determining a centroid of the contour; and
- determining an animal track from the centroid of each of the at least two thresholded video frames.
3. An animal behavior analysis system for an animal, comprising:
- a memory having:
- a first predictive model, the first predictive model for predicting a
pose estimate of the animal; and
- a second predictive model, the second predictive model for
predicting an animal state;
- at least one sensor, the at least one sensor for sensing the animal;
- a processor in communication with the memory and the at least one
sensor, the processor configured to:
- receive a sensor input from the at least one sensor;
- predict the pose estimate from the sensor input and the first
predictive
model; and
- predict the animal state based on the pose estimate, the sensor input
and the second predictive model.
4. The system of claim 3, wherein the predicting the pose estimate comprises:
- determining a position of an at least one indicia; and
- predicting the pose estimate based on the position of the at least one
indicia,
the sensor input, and the first predictive model.
5. The system of any one of claims 3 or 4, wherein the animal state includes
at least
one of a behavioral state of the animal, a social state of the animal, a
position state of
the animal, a sleep state of the animal, and a biological state of the animal.
6. The system of any one of claims 3 to 5, further comprising:
- a database in communication with the memory.
7. The system of claim 6, wherein the processor is further configured to:
- determine a report based on at least one of the animal state, the pose
estimate, the
sensor input, and the position of the at least one indicia; and
- store the report in the database;
wherein the report, the pose estimate, and the position of the at least one
indicia
correspond to a common timebase.
8. The system of claim 7, wherein the processor is further configured to
output the
report.
9. The system of any one of claims 7 or 8, wherein the processor is further
configured to
output the report to a display device.
10. The system of any one of claims 7 or 8, wherein the processor is further
configured
to output the report to a server by network communication.
11. The system of any one of claims 3 to 10, wherein the processor is further
configured
to predict the pose estimate and predict the animal state generally in real-
time.
12. The system of any one of claims 3 to 10, wherein the processor is further
configured
to predict the pose estimate and predict the animal state using an offline
process.
13. The system of any one of claims 7 to 11, wherein the processor is further
configured
to determine, for the report, from a common timebase, a start time, an end
time, and an
elapsed time.
14. The system of any one of claims 7 to 13, wherein the processor is further
configured
to determine, for the report, from the position of the at least one indicia
and the pose
estimate, an occupancy map having a movement path of the animal.
15. The system of any one of claims 7 to 14, wherein the processor is further
configured
to determine, for the report, a movement speed along the movement path of the
animal.
16. The system of any one of claims 7 to 11, wherein the processor is further
configured
to:
- determine an object position from the sensor input, and
- determine, for the report, from the object position, the position of the at
least one
indicia and the pose estimate, an interaction of the animal with the object.
17. The system of any one of claims 3 to 11, wherein the processor is further configured
to determine at least one husbandry variable, the at least one husbandry variable in the
homecage comprising a food supply level, a water supply level, a temperature, a
humidity value, a bedding quality metric, and a nesting quality metric from the sensor
input.
18. The system of any one of claims 7 to 11, further comprising:
- an actuator proximate to the animal; and
- wherein the processor is further configured to actuate the actuator if
the
report has a pre-determined actuation condition.
19. The system of claim 18 wherein the actuator is a haptic device.
20. The system of any one of claims 3 to 19 wherein the at least one sensor
comprises
at least one camera, at least one of a humidity sensor, at least one of a
temperature
sensor, and at least one of an ammonium sensor.
21. The system of claim 20 where the at least one camera has an at least one
infra-red
camera.
22. The system of any one of claims 3 to 21 wherein the first predictive model
is a deep
neural network.
23. The system of any one of claims 3 to 22 wherein the second predictive
model is a
recurrent neural network.
24. A method of animal state analysis for an animal, comprising:
- providing, at a memory, a first predictive model, the first predictive
model for
predicting a pose estimate of the animal;
- providing, at the memory, a second predictive model, the second
predictive
model for predicting a state of the animal;
- receiving, at a processor, a sensor input from an at least one sensor;
- predicting, at the processor, the pose estimate from the sensor input
and the first
predictive model; and
- predicting, at the processor, an animal state based on the pose estimate,
the
sensor input and the second predictive model.
25. The method of claim 24, wherein the predicting a pose estimate further
comprises:
- determining, at the processor, a position of an at least one indicia; and
- predicting, at the processor, the pose estimate based on the position of
the at
least one indicia, the sensor input, and the first predictive model.
26. The method of any one of claims 24 to 25, wherein the animal state
includes at least
one of a behavioral state of the animal, a social state of the animal, a
position state of
the animal, a sleep state of the animal, and a biological state of the animal.
27. The method of any one of claims 24 to 26, further comprising:
- determining, at the processor, a report based on at least one of the
animal state,
the pose estimate, the sensor input, and the position of the at least one
indicia,
- storing the report in a database, the database in communication with the
memory;
- wherein the report, the pose estimate, and the position of the at least
one indicia
correspond to a common timebase.
28. The method of claim 27, further comprising:
- outputting the report.
29. The method of claim 28, further comprising:
- outputting the report to a display device.
30. The method of claim 28, further comprising:
- outputting the report to a server by network communication.
31. The method of any one of claims 24 to 27, wherein the predicting, at the
processor,
the pose estimate and predicting, at the processor, the animal state is
performed
generally contemporaneously with the collection of sensor input.
32. The method of any one of claims 24 to 27, wherein the predicting, at the
processor,
the pose estimate and predicting the animal state is performed generally after
the
collection of sensor input.
33. The method of any one of claims 27 to 31, further comprising:
- determining, at the processor, for the report, from the common timebase,
a start
time, an end time, and an elapsed time.
34. The method of any one of claims 27 to 33, further comprising:
- determining, at the processor, for the report, from the position of the
at least one
indicia and the pose estimate, an occupancy map having a movement path of the
animal.
35. The method of any one of claims 27 to 34, further comprising:
- determining, at the processor, for the report, a movement speed along the
movement path of the animal.
36. The method of any one of claims 24 to 31, further comprising:
- determining, at the processor, an object position from the sensor input;
- determining, at the processor, for the report, from the object
position, the position
of the at least one indicia and the pose estimate, an interaction of the
animal with
the object.
37. The method of any one of claims 24 to 31, further comprising:
- determining, at the processor, at least one husbandry variable, the at least one
husbandry variable in the homecage comprising a food supply level, a water
supply level, a temperature, a humidity value, a bedding quality metric, and a
nesting quality metric from the sensor input.
38. The method of any one of claims 27 to 31, further comprising:
- if the report has a pre-determined actuation condition:
- actuating an actuator proximate to the animal.
39. The method of claim 38, wherein the actuator is a haptic device.
40. The method of any one of claims 24 to 39, wherein the at least one sensor
comprises at least one of an at least one camera, an at least one infra-red
camera, a
humidity sensor, a temperature sensor, and an ammonium sensor.
41. The method of any one of claims 24 to 40, wherein the first predictive
model is a
deep neural network.
42. The method of any one of claims 24 to 41, wherein the second predictive
model is a
recurrent neural network.
43. A system of generating a predictive model for predicting an animal state
for an
animal, comprising:
- a memory, the memory having:
- a plurality of sensor inputs; and
- a pose prediction model for predicting an animal pose from a
sensor
input;
- a processor configured to:
- generate a plurality of predicted animal poses associated with the
plurality of the sensor inputs, by, for a first each sensor input in the
plurality of sensor inputs:
- predicting, using a first predictive model, a predicted animal
pose from the first each sensor input;
- associating the predicted animal pose with the first each sensor
input;
- generate a plurality of behavior labels associated with the plurality of
the sensor inputs, by, for a second each sensor input in the plurality of
sensor inputs:
- associating a behavior label with the second each sensor input;
- generate a second predictive model based on the plurality of sensor
inputs, the plurality of predicted animal poses, and the plurality of
behavior labels.
44. The system of claim 43 wherein the processor is further configured to:
- display the sensor input to a user at a display device; and
- wherein the behavior label is received from the user using an input
device.
45. A method of generating a second predictive model, the second predictive
model for
predicting an animal state of an animal, comprising:
- providing, at a memory, a plurality of sensor inputs;
- providing, at the memory, a first predictive model for predicting animal
pose
from a sensor input;
- generating, at a processor, a plurality of predicted animal poses associated
with the plurality of the sensor inputs, by, for a first each sensor input in the
plurality of sensor inputs:
- predicting, using the first predictive model, a predicted animal pose
from the first each sensor input;
- associating the predicted animal pose with the first each sensor input;
- generating, at the processor, a plurality of behavior labels associated
with the
plurality of the sensor inputs, by, for a second each sensor input in the
plurality of sensor inputs:
- associating a behavior label with the second each sensor input;

- generating, at the processor, the second predictive model based on the
plurality of sensor inputs, the plurality of predicted animal poses, and the
plurality of behavior labels.
46. The method of claim 45, further comprising:
- displaying the sensor input to a user at a display device; and
- wherein the behavior label is received from the user using an input
device.

Description

Note: Descriptions are shown in the official language in which they were submitted.


SYSTEMS AND METHODS OF HOMECAGE MONITORING
Field
[1] The described embodiments relate to the monitoring of animal behavior
in a
homecage.
Background
[2] Due to their reduced size and similar brain architecture as primates,
rodents
have been widely used for studying a variety of complex behaviors. Recent
development in sophisticated tools for measuring and manipulating brain
activity (e.g.
optogenetics, two photon imaging, widefield mesoscale imaging, fiber
photometry, mini
endoscopes) together with the many transgenic lines and disease models
available in
rodents further enhance their usefulness in studies.
[3] Behavior monitoring systems are useful tools that allow scientists to
characterize
behavioral changes associated with ill health in rodents. Since mice are
crepuscular and
nocturnal animals (they are active at dusk and dawn and throughout the night),
assessing their signs of ill health, pain and distress by animal care staff is difficult.
Manual monitoring by animal care staff risks introducing observer bias in
monitoring
animal behavior, does not provide continuous monitoring, and lacks sensitive
monitoring
during dark periods when mice are most active. Further, the handling of
animals in
homecages may be stressful and may confound studies.
[4] Running wheels are widely used for homecage monitoring because of their
low
cost and ease of implementation for measuring the activity and/or changes in
circadian
rhythm, but their use has been shown to independently affect animal behavior
and
disease pathology.
[5] RFID tags for monitoring location are disadvantageous because of poor
spatial
resolution, and because the tag itself may cause discomfort to the animal.
[6] Existing video monitoring systems require human operators to identify animal
behaviors, and do not provide automatic prediction of animal behavior in their
homecage.

[7] Monitoring the animals in their homecage has several advantages. In studying
complex behavior, experimental designs that require direct animal-experimenter
interaction, where an animal is removed from its homecage environment and placed in
an unfamiliar apparatus (novel environment), are disruptive, time- and labor-consuming,
and require additional laboratory space. This disruption of removing the animal from
its homecage may influence its behavior, general well-being, and metabolism,
affecting the phenotypic outcome even if the data collection method is automated,
creating spurious findings.
Summary
[8] The present teachings are directed to systems and methods that can be
integrated in many settings, for monitoring and analyzing animal behavior in a
homecage environment. This automated homecage behavioral data acquisition allows
investigators to study the effects of experimental manipulations and a wide
range of
neurological diseases by accurate measurement of progressive behavioral
changes in
the same animal without compromising its welfare, as well as removing
experimenter
bias. In addition, the systems and methods may be used for short-term welfare
assessment (e.g. post-surgery monitoring) by enabling continuous monitoring,
even in
the dark phase where welfare assessment without disturbance to the cage is
difficult
and subjective. Further, video data collected with this system may be used to
automatically classify sleep-wake states of an animal in the home-cage.
[9] In a first aspect, some embodiments provide a method for quantifying
animal
activity, comprising: receiving at least two video frames; determining at
least two
processed video frames from the at least two video frames by pre-processing;
determining an image background from the at least two processed video frames;
determining an at least two thresholded video frames by applying the
background image
as a threshold to the at least two video frames; determining a contour from
the at least
two thresholded video frames; and determining an animal motion flag if the
contour is
larger than a minimum area.
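For illustration only, the following is a minimal sketch of the first aspect using OpenCV and NumPy. The median-based background estimate, blur kernel, threshold value, and minimum contour area are assumptions introduced for the example and are not taken from the claims.

```python
# Illustrative sketch of the first aspect (animal motion flag from background
# subtraction); parameter values are assumptions, not from the patent.
import cv2
import numpy as np

def detect_motion(frames, min_area=50.0, diff_thresh=25):
    # Pre-process: grayscale conversion and Gaussian blur.
    processed = [cv2.GaussianBlur(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), (5, 5), 0)
                 for f in frames]
    # Background image estimated as the per-pixel median of the processed frames.
    background = np.median(np.stack(processed), axis=0).astype(np.uint8)
    motion = False
    for frame in processed:
        # Threshold the difference between each frame and the background image.
        diff = cv2.absdiff(frame, background)
        _, thresholded = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Find contours in the thresholded frame.
        contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Set the animal motion flag if any contour exceeds the minimum area.
        if any(cv2.contourArea(c) > min_area for c in contours):
            motion = True
    return motion
```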
[10] In a second aspect, some embodiments provide a method for quantifying
animal
activity, comprising: receiving at least two video frames; determining at
least two
processed video frames from the at least two video frames by pre-processing;
determining an image background from the at least two processed video frames;
determining a contour from the at least two thresholded video frames;
determining the
centroid of the contour; and determining an animal track from the centroid of
each of the
at least two thresholded video frames.
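A similar hedged sketch of the second aspect follows; it reuses the assumed pre-processing and thresholding above and adds the centroid and track steps. Treating the largest contour as the animal is an assumption made for the example.

```python
# Illustrative sketch of the second aspect (animal track from contour centroids);
# thresholding parameters and largest-contour selection are assumptions.
import cv2
import numpy as np

def track_animal(frames, diff_thresh=25):
    processed = [cv2.GaussianBlur(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), (5, 5), 0)
                 for f in frames]
    background = np.median(np.stack(processed), axis=0).astype(np.uint8)
    track = []
    for frame in processed:
        diff = cv2.absdiff(frame, background)
        _, thresholded = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        # The largest contour is assumed to be the animal; its centroid is taken
        # from the image moments and appended to the animal track.
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            track.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return track
```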
[11] In a third aspect, some embodiments provide an animal behavior analysis
system, comprising: a memory having: a first predictive model, the first
predictive model
for predicting a pose estimate of the animal; and a second predictive model,
the second
predictive model for predicting an animal state; at least one sensor, the at
least one
sensor for sensing the animal; a processor in communication with the memory
and the
at least one sensor, the processor configured to:
receive a sensor input from an at
least one sensor; predict the pose estimate from the sensor input and the
first predictive
model; and predict an animal state based on the pose estimate, the sensor
input and
the second predictive model.
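The two-model pipeline of the third aspect can be pictured with the following structural sketch; the pose_model and state_model objects are hypothetical placeholders for the first and second predictive models and do not name any particular library.

```python
# Structural sketch of the two-model pipeline in the third aspect.
# pose_model and state_model are hypothetical stand-ins for the first and
# second predictive models (e.g. a deep neural network and a recurrent
# neural network); they are not a specific library API.
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Prediction:
    pose: List[Tuple[float, float]]  # graph-node coordinates of predicted indicia
    state: str                       # e.g. "sleeping", "grooming", "social"

def analyze(sensor_input: Any, pose_model: Any, state_model: Any) -> Prediction:
    # First predictive model: sensor input -> pose estimate.
    pose = pose_model.predict(sensor_input)
    # Second predictive model: pose estimate + sensor input -> animal state.
    state = state_model.predict(pose, sensor_input)
    return Prediction(pose=pose, state=state)
```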
[12] In at least one embodiment, the predicting the pose estimate may
comprise:
determining a position of an at least one indicia; and predicting the pose
estimate based
on the position of an at least one indicia, the sensor input, and the first
predictive model.
[13] In at least one embodiment, the animal state may include at least one of
a
behavioral state of an animal, a social state of an animal, a position state
of an animal,
a sleep state of an animal, and a biological state of an animal.
[14] In at least one embodiment, the system may further comprise: a database
in
communication with the memory.
[15] In at least one embodiment, the processor may be further configured to:
determine a report based on at least one of the animal state, the pose
estimate, the
sensor input, and the position of the at least one indicia; and store the
report in the
database; wherein the report, the pose estimate, and the position of the at
least one
indicia may correspond to a common timebase.
[16] In at least one embodiment, the processor may be further configured to
output
the report.
[17] In at least one embodiment, the processor may be further configured to
output
the report to a display device.

[18] In at least one embodiment, the processor may be further configured to
output
the report to a server by network communication.
[19] In at least one embodiment, the processor may be further configured to
predict
the pose estimate and predict the animal state generally in real-time.
[20] In at least one embodiment, the processor may be further configured to
predict
the pose estimate and predict the animal state using an offline process.
[21] In at least one embodiment, the processor may be further configured to
determine, for the report, from the common timebase, a start time, an end
time, and an
elapsed time.
[22] In at least one embodiment, the processor may be further configured to
determine, for the report, from the position of the at least one indicia and
the pose
estimate, an occupancy map having a displacement of the animal.
[23] In at least one embodiment, the processor may be further configured to
determine, for the report, the speed of the animal's movement.
[24] In at least one embodiment, the processor may be further configured to:
determine a position of an object from the sensor input, and determine, for
the report,
from the position of the at least one indicia and the pose estimate, an
interaction of the
animal with the object.
[25] In at least one embodiment, the processor may be further configured to
determine at least one husbandry variable, the at least one husbandry variable
in the
homecage may comprise a food supply level, a water supply level, a
temperature, a
humidity value, a bedding quality metric, and a nesting quality metric from
the sensor
input.
[26] In at least one embodiment, the system may further comprise: an actuator
proximate to the animal; and wherein the processor may be further configured
to
actuate the actuator if the report matches a pre-determined actuation
condition.
[27] In at least one embodiment, the actuator may be a haptic device.
[28] In at least one embodiment, the at least one sensor may comprise at least
one
camera, at least one of a humidity sensor, at least one of a temperature
sensor, and at
least one of an ammonium sensor.
[29] In at least one embodiment, the at least one camera may have an at least
one
infra-red camera.
[30] In at least one embodiment, the first predictive model may be a deep
neural
network.
[31] In at least one embodiment, the second predictive model may be a
recurrent
neural network.
[32] In a fourth aspect, some embodiments provide a method of animal state
analysis,
comprising: providing, at a memory, a first predictive model, the first
predictive model for
predicting a pose estimate of the animal; providing, at the memory, a second
predictive
model, the second predictive model for predicting a state of the animal;
receiving, at a
processor, a sensor input from an at least one sensor; predicting, at the
processor, the
pose estimate from the sensor input and the first predictive model; and
predicting, at the
processor, an animal state based on the pose estimate, the sensor input and
the
second predictive model.
[33] In at least one embodiment, the predicting a pose estimate may further
comprise:
determining, at the processor, a position of an at least one indicia; and
predicting, at the
processor, the pose estimate based on the position of an at least one indicia,
the sensor
input, and the first predictive model.
[34] In at least one embodiment, the animal state may include at least one of
a
behavioral state of an animal, a social state of an animal, a position state
of an animal,
a sleep state of an animal, and a biological state of an animal.
[35] In at least one embodiment, the method may further comprise: determining,
at
the processor, a report based on at least one of the animal state, the pose
estimate, the
sensor input, and the position of the at least one indicia, storing the report
in a
database, the database in communication with the memory; wherein the report,
the
pose estimate, and the position of the at least one indicia may correspond to
a common
timebase.
[36] In at least one embodiment, the method may further comprise: outputting
the
report.
[37] In at least one embodiment, the method may further comprise: outputting
the
report to a display device.
[38] In at least one embodiment, the method may further comprise: outputting
the
report to a server by network communication.
[39] In at least one embodiment, the predicting, at the processor, the pose
estimate
and predicting, at the processor, the animal state may be performed generally
contemporaneously with the sensor input collection.
[40] In at least one embodiment, the predicting, at the processor, the pose
estimate
and predicting the animal state may be performed generally after the sensor
input
collection.
[41] In at least one embodiment, the method may further comprise: determining,
at
the processor, for the report, from the common timebase, a start time, an end
time, and
an elapsed time.
[42] In at least one embodiment, the method may further comprise: determining,
at
the processor, for the report, from the position of the at least one indicia
and the pose
estimate, an occupancy map having a movement of the animal.
[43] In at least one embodiment, the method may further comprise: determining,
at
the processor, for the report, the speed of the animal's movement.
[44] In at least one embodiment, the method may further comprise: determining,
at
the processor, a position of an object from the sensor input; determining, at
the
processor, for the report, from the position of the at least one indicia and
the pose
estimate, an interaction of the animal with the object.
[45] In at least one embodiment, the method may further comprise determining,
at the
processor, at least one husbandry variable, the at least one husbandry
variable in the
homecage may comprise a food supply level, a water supply level, a
temperature, a
humidity value, a bedding quality metric, and a nesting quality metric from
the sensor
input.
[46] In at least one embodiment, the method may further comprise: if the
report
matches a pre-determined actuation condition: actuating an actuator proximate
to the
animal.
[47] In at least one embodiment, the actuator may be a haptic device.

[48] In at least one embodiment, the at least one sensor may comprise at least
one of
an at least one camera, an at least one infra-red camera, a humidity sensor, a

temperature sensor, and an ammonium sensor.
[49] In at least one embodiment, the first predictive model may be a deep
neural
network.
[50] In at least one embodiment, the second predictive model may be a
recurrent
neural network.
[51] In a fifth aspect, some embodiments provide a system of generating a
predictive
model for predicting an animal state, comprising: a memory, the memory having:
a
plurality of sensor inputs; and a pose prediction model for predicting animal
poses from
a sensor input; a processor configured to: generate a plurality of predicted
animal poses
associated with the plurality of the sensor inputs, by, for each sensor input
in the
plurality of sensor inputs: predicting, using the first predictive model, a
predicted animal
pose from the sensor input; associating the predicted animal pose with the
sensor input;
generate a plurality of behavior labels associated with the plurality of the
sensor inputs,
by, for each sensor input in the plurality of sensor inputs: associating a
behavior label
with the sensor input; generate the second predictive model based on the
plurality of
sensor inputs, the plurality of predicted animal poses, and the plurality of
behavior
labels.
[52] In at least one embodiment, the processor may be further configured to:
display
the sensor input to a user at a display device; and wherein the behavior label
may be
received from the user using an input device.
[53] In a sixth aspect, some embodiments provide a method of generating a
second
predictive model, the second predictive model for predicting an animal state,
comprising: providing, at a memory, a plurality of sensor inputs; providing,
at the
memory, a first predictive model for predicting animal poses from a sensor
input;
generating, at a processor, a plurality of predicted animal poses associated
with the
plurality of the sensor inputs, by, for each sensor input in the plurality of
sensor inputs:
predicting, using the first predictive model, a predicted animal pose from the
sensor
input; associating the predicted animal pose with the sensor input;
generating, at the
processor, a plurality of behavior labels associated with the plurality of the
sensor
inputs, by, for each sensor input in the plurality of sensor inputs:
associating a behavior
label with the sensor input; generating, at the processor, the second
predictive model
based on the plurality of sensor inputs, the plurality of predicted animal
poses, and the
plurality of behavior labels.
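As an illustration of the fifth and sixth aspects, the sketch below assembles training examples by pairing each sensor input with a predicted pose and a user-provided behavior label, then hands them to a fitting routine; pose_model, label_source, and fit_state_model are hypothetical placeholders, not part of the claims.

```python
# Sketch of generating the second predictive model from predicted poses and
# user-supplied behavior labels; all callables here are illustrative placeholders.
from typing import Any, Callable, Iterable, List, Tuple

def build_second_model(sensor_inputs: Iterable[Any],
                       pose_model: Any,
                       label_source: Callable[[Any], str],
                       fit_state_model: Callable[[List[Tuple[Any, Any, str]]], Any]) -> Any:
    examples = []
    for sensor_input in sensor_inputs:
        # Associate a predicted animal pose with each sensor input.
        pose = pose_model.predict(sensor_input)
        # Associate a behavior label with each sensor input (e.g. entered by a
        # user reviewing the sensor input on a display device).
        label = label_source(sensor_input)
        examples.append((sensor_input, pose, label))
    # Generate the second predictive model from inputs, poses, and labels.
    return fit_state_model(examples)
```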
[54] In at least one embodiment, the method may further comprise: displaying
the
sensor input to a user at a display device; and wherein the behavior label may
be
received from the user using an input device.
Brief Description of the Drawings
[55] A preferred embodiment will now be described in detail with reference to
the
drawings, in which:
FIG. 1 is a system view of an automated homecage monitoring system;
FIG. 2 is a block diagram of the microcontroller 112 in FIG. 1;
FIG. 3 is a block diagram of the server 104 in FIG. 1;
FIG. 4 is a software component diagram of an automated homecage monitoring
system;
FIG. 5 is a relationship diagram of an automated homecage monitoring system;
FIG. 6A is a cutaway top view of a homecage;
FIG. 6B is a front view of the homecage in FIG. 6A;
FIG. 6C is a cutaway top view of another homecage;
FIG. 6D is a front view of the homecage in FIG. 6C;
FIG. 7 is a cutaway top view of another homecage;
FIG. 8 is a sensor data diagram of an automated homecage monitoring system;
FIG. 9A is a method diagram for automated homecage monitoring;
FIG. 9B is a method diagram for automated homecage monitoring;
FIG. 10 is a data architecture diagram for automated homecage monitoring;
FIG. 11A is a graph diagram of an automated homecage monitoring system;
FIG. 11B is a graph diagram of an automated homecage monitoring system;
FIG. 12A is a front view of another homecage having an object;
FIG. 12B is a front view of another homecage having an actuator;
FIG. 12C is a front view of another homecage having a water tank;
FIG. 13 is a method diagram for automated homecage monitoring; and
FIG. 14 is a method diagram for automated homecage monitoring.
Description of Example Embodiments
[56] It will be appreciated that numerous specific details are set forth in
order to
provide a thorough understanding of the example embodiments described herein.
However, it will be understood by those of ordinary skill in the art that the
embodiments
described herein may be practiced without these specific details. In other
instances,
well-known methods, procedures and components have not been described in
detail so
as not to obscure the embodiments described herein. Furthermore, this
description and
the drawings are not to be considered as limiting the scope of the embodiments

described herein in any way, but rather as merely describing the
implementation of the
various embodiments described herein.
[57] It should be noted that terms of degree such as "substantially", "about"
and
"approximately" when used herein mean a reasonable amount of deviation of the
modified term such that the end result is not significantly changed. These
terms of
degree should be construed as including a deviation of the modified term if
this
deviation would not negate the meaning of the term it modifies.
[58] In addition, as used herein, the wording "and/or" is intended to
represent an
inclusive-or. That is, "X and/or Y" is intended to mean X or Y or both, for
example. As a
further example, "X, Y, and/or Z" is intended to mean X or Y or Z or any
combination
thereof.
[59] The embodiments of the systems and methods described herein may be
implemented in hardware or software, or a combination of both. These
embodiments
may be implemented in computer programs executing on programmable computers,
each computer including at least one processor, a data storage system
(including
volatile memory or non-volatile memory or other data storage elements or a
combination thereof), and at least one communication interface. For example
and
without limitation, the programmable computers (referred to below as computing

devices) may be a server, network appliance, embedded device, computer
expansion
module, a personal computer, laptop, personal digital assistant, cellular
telephone, smartphone device, tablet computer, a wireless device or any other computing
device capable of being configured to carry out the methods described herein.
[60] In some embodiments, the communication interface may be a network
communication interface. In embodiments in which elements are combined, the
communication interface may be a software communication interface, such as
those for
inter-process communication (IPC). In still other embodiments, there may be a
combination of communication interfaces implemented as hardware, software, or a
combination thereof.
[61] Program code may be applied to input data to perform the functions
described
herein and to generate output information. The output information is applied
to one or
more output devices, in known fashion.
[62] Each program may be implemented in a high level procedural or object
oriented
programming and/or scripting language, or both, to communicate with a computer

system. However, the programs may be implemented in assembly or machine
language, if desired. In any case, the language may be a compiled or
interpreted
language. Each such computer program may be stored on a storage media or a
device
(e.g. ROM, magnetic disk, optical disc) readable by a general or special
purpose
programmable computer, for configuring and operating the computer when the
storage
media or device is read by the computer to perform the procedures described
herein.
Embodiments of the system may also be considered to be implemented as a non-
transitory computer-readable storage medium, configured with a computer
program,
where the storage medium so configured causes a computer to operate in a
specific
and predefined manner to perform the functions described herein.
[63] Furthermore, the system, processes and methods of the described
embodiments
are capable of being distributed in a computer program product comprising a
computer
readable medium that bears computer usable instructions for one or more
processors.
The medium may be provided in various forms, including one or more diskettes,
compact disks, tapes, chips, wireline transmissions, satellite transmissions,
internet
transmission or downloads, magnetic and electronic storage media, digital and
analog
signals, and the like. The computer useable instructions may also be in
various forms,
including compiled and non-compiled code.

[64] As referred to herein, a homecage refers to any cage provided for
securing an
animal, such as an aquarium, a cage having bars, a box, a zoo cage, an aviary,
a
battery cage, a birdcage, or any other container as is known for securing an
animal.
The homecage may be made from any material such as clear plastic (such as
polycarbonate), wire mesh, wire bars, or any other material that can be used
to prevent an animal from passing through it. The homecage may be opaque, transparent or
translucent. The homecage may allow for air to pass into it, or may be
substantially
airtight. The homecage may have a separate air supply. The homecage may
contain
bedding, and a food and water supply for the animal.
[65] While a single homecage is shown for FIGs. 1-16, it is understood that the
homecage monitoring system would apply in similar fashion to one or more
homecages,
where each homecage has its own microcontroller. There may be many homecages
monitored in the system 100.
[66] Reference is first made to FIG. 1, showing a system view 100 of an
automated
homecage monitoring system. The automated homecage monitoring system has a
microcontroller 112, a server 104, a database 102, a network 108, and a
homecage
114. Optionally, the system view 100 may include a mobile device 106, a
computer
system 116, and report 110.
[67] The microcontroller 112 has a plurality of sensors to monitor homecage
114.
The microcontroller 112 may perform additional processing, or may send
collected
sensor data to the server 104 for processing via network 108. When the
microcontroller
112 is responsible for the additional processing, it may determine, from the sensor
data, information about the animal in the homecage 114, and provide monitoring and
prediction data to the server 104 via network 108 in addition to sensor data.
[68] Mobile device 106 and computer system 116 may be used by an end user to
access an application (not shown) running on server 104 over network 108. For
example, the application may be a web application, or a client/server
application. The
mobile device 106 and computer system 116 may each be a desktop computer,
mobile
device, or laptop computer. The mobile device 106 and computer system 116 may
be in
communication with server 104, and microcontroller 112. The mobile device 106
and
computer system 116 may display the web application, and may allow a user to
see
monitoring data from the homecage, or from more than one monitored homecage.
An administrator user may use the mobile device 106 or the computer system 116
to
access the server 104 and configure the homecage monitoring system. The user
at
mobile device 106 or computer system 116 may review monitoring data and apply
pose
estimate labels or state labels to sensor data from the homecage to "train"
the
automated homecage monitoring system. The server 104 receives the pose
estimate
labels or state labels and stores them in database 102.
[69] The server 104 may be a commercial off-the-shelf server, or another
server
system as are known. The server 104 may run a web application using an
application
server (not shown) accessible via network 108.
[70] The server 104 may generate and send an automated homecage monitoring
report 110 via network 108. The report 110 may be sent via email, SMS,
application-
based notification, etc. The users using mobile device 106 or computer system
116
may respond to the report 110 to review the monitoring data for a homecage 114
or a
group of homecages. The report 110 may be an alert sent to the users at mobile
device
106 or computer system 116 to warn them that intervention is required in one
or more
homecages 114 in a group of homecages.
[71] The server 104 may also provide a database including historical sensor
data
from the homecage 114, historical event data from the homecage 114, one or
more
predictive models for estimating animal poses, and one or more predictive
models for
predicting animal state.
[72] The server 104 may also store the sensor data from one or more
microcontrollers
associated with one or more homecages 114.
[73] Network 108 may be a communication network such as the Internet, a Wide-
Area
Network (WAN), a Local-Area Network (LAN), or another type of network. Network
108
may include a point-to-point connection, or another communications connection
between two nodes.
[74] Database 102 may be a relational database, such as MySQL or Postgres. The

database 102 may also be a NoSQL database such as MongoDB. The database may
store the reports and sensor data from the microcontroller 112 of each
homecage 114.

[75] Report 110 may be an HTML formatted, text formatted, or a file-based
report that
is provided by email, SMS, or application-based notification to a user. The
report 110
may include a URL link to the web application on server 104.
[76] The microcontroller 112 may be an embedded system such as an Arduino, a
field-programmable gate array (FPGA), or a small form-factor computer system such as a
Raspberry Pi. The microcontroller 112 may have or be connected to at least
one
sensor (not shown) for sensing the animal in the homecage 114, or
environmental
conditions in the homecage 114.
[77] For FIGs. 2-3, like numerals refer to like elements between the figures,
such as
the network unit 202, display 204, interface unit 206, processor unit 208,
memory unit
210, I/O hardware 212, user interface 214, power unit 216, and operating
system 218.
[78] Referring to FIG. 2, a block diagram 200 is shown of the microcontroller
112 from
FIG. 1. The microcontroller 112 has a network unit 202, a display 204, an
interface unit
206, a processor unit 208, a memory unit 210, i/o hardware 212, a user
interface engine
214, and a power unit 216.
[79] The network unit 202 may be a standard network adapter such as an
Ethernet or
802.11x adapter. The processor unit 208 may include a standard processor, such
as the
Intel Xeon processor or an Advanced RISC Machine (ARM) processor, for
example.
Alternatively, there may be a plurality of processors that are used by the
processor unit
208 and may function in parallel.
[80] The processor unit 208 can also execute a graphical user interface (GUI)
engine
214 that is used to generate various GUIs. The user interface engine 214
provides for
administration of the homecage monitoring, and the information may be
processed by
the admin unit 226. User interface 214 may implement an Application
Programming
Interface (API) or a Web-based application that is accessible via the network
unit 202.
The API may provide connectivity with the server by either push or pull
requests, i.e. the
microcontroller 112 may send data to the server as it is collected, or the
server may
contact the microcontroller via network unit 202 to pull data.
[81] Memory unit 210 may have an operating system 218, programs 220, a
communications unit 222, a camera unit 224, a prediction unit 234, an admin
unit 226, a
master clock 228, a sensor unit 230, and an actuator unit 232. Some features
and
functions provided by memory unit 210 may be performed entirely at the
microcontroller,
entirely at the server, or a combination of both the microcontroller and the
server. In the
case where features and functions are performed at the server, the server may
operate
on sensor data sent from the microcontroller to the server.
[82] The operating system 218 may be a Microsoft Windows Server operating
system, or a Linux-based operating system, or another operating system. In the case
that the microcontroller is an embedded system or an FPGA, it may run an
embedded
operating system, or no operating system at all.
[83] The programs 220 comprise program code that, when executed, configures
the
processor unit 208 to operate in a particular manner to implement various
functions and
tools for the microcontroller 200.
[84] Communications unit 222 provides functionality for sending and receiving
sensor
data and predictive models using network unit 202, as well as providing access
for users
to the admin unit 226 using network unit 202.
[85] Camera unit 224 receives data from the sensor unit 230, and may operate
to
provide image manipulation for collected sensor data. The camera unit 224 may
function to create image files for sending to the server via communications
unit 222.
The camera unit may also provide image manipulation as disclosed herein to
determine
monitoring data from image data collected by sensors in the homecage. Such
monitoring data may include the position of an animal in the homecage, the
path the
animal has taken in the homecage, the position of an object in the homecage,
the food
or water levels inside the homecage, or other visual based monitoring data.
[86] The prediction unit 234 operates to provide predictions based on the
sensor data
from sensor unit 230 and at least two predictive models. The prediction unit
234 may
determine a predictive model from historical sensor data and labels from the
database,
or historical sensor data and labels on the microcontroller 200. The
prediction unit 234
may have a first predictive model that can predict a pose estimate based on
sensor
data. The prediction unit 234 may have a second predictive model that can
predict an
animal state based on the pose prediction and sensor data. The prediction unit
234
may associate the predicted pose estimate and animal state with a report, and
may
send the report to the server. The report sent by the microcontroller 200 to
the server
may include sensor data and predictions from the prediction unit 234 that are
associated with a particular start and end time, and have a corresponding
common
timebase from master clock 228. The report sent by the microcontroller 200 to
the
server may include quantitative measures of the event, including start time,
end time,
elapsed time, frequency, etc. The prediction unit 234 may predict a pose
estimate
based on a position of an at least one animal feature in the sensor data. The
animal
state prediction by prediction unit 234 may include a behavioral state of an
animal, a
social state of an animal, a position state of an animal, a sleep state of an
animal, and a
biological state of an animal.
[87] The prediction unit 234 may provide an estimation of the remaining food
and
water supply levels or the current state of the bedding (e.g. clean, dirty,
flood, etc.) and
nesting in a homecage 114, and may provide an occupancy map of the homecage
114
showing an animal's movement path. The food and water supply levels, the
status of
the bedding and nesting and the occupancy map may be included in the report.
The
bedding and nesting status may be represented as quality metrics, and may be
represented on a scale of 0 to 100, a letter grade, etc. Specific quality
metrics may be
tracked related to the cleanliness of the bedding, or if a flood of the water
supply into the
homecage has occurred. The estimation of the current state of the homecage may
include at least one husbandry variable, the at least one husbandry variable
in the
homecage comprising a food supply level, a water supply level, a temperature,
a
humidity value, a bedding quality metric, and a nesting quality metric from
the sensor
input.
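As a rough illustration of the occupancy map mentioned above, the following sketch bins tracked centroid positions into a 2-D histogram over the cage floor; the bin count and cage dimensions are assumptions made for the example.

```python
# Illustrative sketch of an occupancy map as described for prediction unit 234:
# a 2-D histogram of tracked positions over the cage floor. Bin counts and
# cage dimensions are assumptions.
import numpy as np

def occupancy_map(track, cage_width, cage_height, bins=32):
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    # Each bin counts how often the animal's centroid fell inside it, giving a
    # coarse picture of where the animal spent its time along its movement path.
    hist, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, cage_width], [0, cage_height]])
    return hist
```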
[88] Admin unit 226 may provide user access to the microcontroller 200. This
may
allow an administrator user to configure the monitoring parameters for a
homecage 114.
Separately, a user may access the admin unit 226 to review monitoring data,
view a
real-time video feed of the homecage, review historical events, review video
data
associated with a historical event, etc.
[89] Master clock 228 is an internal clock for microcontroller 200 for
associating a
common timebase with data collected by sensor unit 230. This common timebase
may
ensure that the collected data can be referenced based on time, and each
different type
of sensor data collected can be precisely identified based on the time it is
collected.
The master clock 228 may be synchronized with a master clock on the server so
that
both the server and the microcontroller use a generally synchronized clock to
record
and process monitoring data. The server and the microcontroller may use a
networking
protocol for clock synchronization such as the Network Time Protocol (NTP).
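A minimal sketch of tagging each sensor sample with a timestamp on a common timebase is shown below; the dictionary layout is an assumption, and clock synchronization with the server (e.g. via NTP) is assumed to be handled by the operating system rather than by this code.

```python
# Sketch of timestamping sensor samples on a common timebase, as done by the
# master clock 228. NTP synchronization with the server is assumed to be
# handled by the operating system.
import time
from typing import Any, Dict

def timestamped_sample(sensor_name: str, value: Any) -> Dict[str, Any]:
    return {
        "sensor": sensor_name,
        "value": value,
        # Wall-clock time in seconds since the epoch; comparable across devices
        # once their clocks are synchronized.
        "timestamp": time.time(),
    }
```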
[90] The sensor unit 230 provides an interface to one or more sensors
connected to
microcontroller 200 using i/o hardware 212. Optionally, the sensor unit 230
may receive
sensor data from another source via a network connection using network unit
202. The
sensor unit 230 may pre-process sensor data, for example, it may add a video
filter to a
video sensor signal, or it may normalize an audio signal. In the case of image
data pre-
processed by sensor unit 230, filters may be applied such as a Gaussian blur,
averaging, color-grayscale conversions, and the like. The sensor unit 230 may
operate
to send sensor data to a server via communications unit 222 and network unit
202. The
sensor unit 230 may operate to determine the measurement frequency, and
bandwidth
of sensors connected to the microcontroller 200. The sensor unit 230 may be
modular,
and may allow for more than one sensor to be connected to the microcontroller
200,
and may allow for data to be collected from multiple sensors simultaneously.
Sensor
unit 230 may be compatible with multiple different types of sensors such as at
least one
camera including at least one infra-red camera (including both passive and
active),
humidity sensors, temperature sensors, microphone sensors, light sensors,
radio-
frequency identification (RFID) sensors, pressure sensors, accelerometer
sensors,
proximity sensors, ultrasonic sensors, vibration sensors, electrical current
and electrical
potential sensors, fluid flow sensors, and ammonium sensors.
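The modular, multi-sensor behaviour described for sensor unit 230 might be organized along the lines of the following sketch, in which each sensor registers a read function and a measurement frequency; the class and method names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of a modular sensor interface in the spirit of sensor unit 230:
# sensors of different types register a read function and a sampling frequency.
# The registry layout and names are illustrative assumptions.
import time
from typing import Any, Callable, Dict

class SensorUnit:
    def __init__(self):
        self._sensors: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, read: Callable[[], Any], hz: float) -> None:
        # Each sensor contributes its own read function and measurement frequency.
        self._sensors[name] = {"read": read, "hz": hz, "last": 0.0}

    def poll(self) -> Dict[str, Any]:
        # Collect a sample from every sensor whose sampling interval has elapsed,
        # allowing data from multiple sensors to be gathered in one pass.
        now = time.monotonic()
        samples = {}
        for name, s in self._sensors.items():
            if now - s["last"] >= 1.0 / s["hz"]:
                samples[name] = s["read"]()
                s["last"] = now
        return samples
```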
[91] The sensor unit 230 may log events based on the sensor data, including
string
pulling, animal movement paths, animal speed, novel object recognition, social

interaction, light-dark box, activity in the homecage 114, etc.
[92] Actuator unit 232 provides control of at least one actuator proximate to,
or inside
the homecage 114. The actuators may include linear actuators, pneumatic
actuators,
buzzers, speakers, thermal actuators, piezoelectric actuators,
servomechanisms,
solenoids, stepper motors, or the like.
[93] The prediction unit 234 may be configured to operate the actuator unit
232 based
on a prediction of pose estimate or animal state. The admin unit 226 may also
allow a
user to operate an actuator remotely. The actuators may also be actuated based
on
sensor data collected by sensor unit 230. For example, a researcher user may
configure the microcontroller 112 to wake an animal up with an actuator after
a
predetermined amount of sleep, and the microcontroller 112 may do so based on
a
predicted sleep state of an animal.
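A hedged sketch of this actuation logic follows; the report fields and the "wake after a predetermined amount of sleep" threshold are illustrative assumptions rather than the patent's exact mechanism.

```python
# Sketch of the actuation logic described for prediction unit 234 and actuator
# unit 232: actuate when a report matches a pre-determined condition (here, a
# hypothetical "wake after N minutes of sleep" rule).
from typing import Callable

def maybe_actuate(report: dict, actuate: Callable[[], None],
                  max_sleep_minutes: float = 60.0) -> bool:
    # The condition and report fields are illustrative assumptions.
    if report.get("state") == "sleeping" and \
       report.get("elapsed_minutes", 0.0) >= max_sleep_minutes:
        actuate()   # e.g. drive a haptic device proximate to the animal
        return True
    return False
```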
[94] Referring next to FIG. 3, there is shown a block diagram 300 of the
server 104
from FIG. 1. The server 104 has a network unit 302, a display 304, an
interface unit
306, a processor unit 308, a memory unit 310, i/o hardware 312, a user
interface engine
314, and a power unit 316.
[95] The server may perform many of the functions of the microcontroller 112
(see
FIG. 1), including the features of the camera unit 224, prediction unit 234,
admin unit
226, and master clock 228 (see FIG. 2). In the case where a feature of the
microcontroller is performed by the server 300, then the microcontroller may
function as
a "dumb" device where it accepts configuration parameters, sends collected
sensor data
to the server using the communications unit 222, and activates the at least
one actuator
of the actuator unit 232 in response to an instruction sent to the
microcontroller via
communications unit 222 (see FIG. 2).
[96] The operating system 318 may be a Microsoft Windows Server operating
system, or a Linux-based operating system, or another operating system. The
operating system 318 may also be a mobile operating system such as Google
Android or Apple iOS.
[97] The programs 320 comprise program code that, when executed, configures
the
processor unit 308 to operate in a particular manner to implement various
functions and
tools for the server 300.
[98] Communications unit 322 provides functionality for sending and receiving
sensor
data and predictive models using network unit 302, as well as providing access
for users
to the admin unit 326 using network unit 302.
[99] Admin unit 324 may be an application available to users via the
communications
unit 322. The server admin unit 324 may allow users to view monitoring data,
predictions, and configuration information for at least one of the homecage
monitoring
system in FIG. 2. Users may use the admin unit 324 to manage the configuration
of at
least one homecage monitoring system. The admin unit 324 may also allow for
the
configuration of reports from homecage monitoring systems, and event alerts
from the
homecage monitoring systems (see e.g. 110 in FIG. 1).
[100] The server 300 may store monitoring data from the at least one homecage
monitoring system, event data, report data, alert data, and sensor data in
database 326.
The database 326 may also store predictive models for pose estimate and animal
state.
The database 326 may be responsive to queries from the homecage monitoring
system
microcontrollers, and may send stored data to the microcontroller using
communications
unit 322 and network unit 302.
[101] The event unit 328 provides functionality for combining sensor data,
monitoring
data, quantitative data, and prediction data into a report that can be stored in database
in database
326. Responsive to user configuration, the event unit 328 can also send
reports to
users via email or SMS. The reports generated by event unit 328 may be sent at

regular intervals, or may be sent on-demand. The reports may be HTML
formatted, or
text formatted. The reports generated by event unit 328 may include a URL link
to a
web application running on the server 300.
[102] The alert unit 330 may send alerts to users via SMS or email responsive
to
configurable events happening at a homecage system. For example, an alert may
be
sent if food or water or bedding state for a particular homecage is below a
specified
threshold. The alert unit 330 may send alerts based on predicted poses or
predicted
animal states. The alerts may have different severity levels, for example,
there may be
an "info" level, an "warning" level, an "error" level, or a "severe" level.
[103] Referring now to FIG. 4, there is a software component diagram of an
automated
homecage monitoring system 400. The homecage monitoring system 400 has a
communications module 402, a camera module 404 having a camera or image
analysis
module 406, a master clock 408 having a plurality of timestamps 416 on a
common
timebase, a prediction module 418, an administrator module 410, at least one
sensor
module 412 having a sensor analysis module 414, and at least one actuator module
420
having an actuator analysis module 422.

[104] The communications module 402 sends reports and alerts. The reports and
alerts may be sent to a user via email, SMS or Twitter, may be sent to the
server, or
may be sent to both the user and the server.
[105] The camera module 404 acquires video using an attached camera sensor.
The
camera module 404 has an analysis sub-module that performs image processing on
the
video, including for example Gaussian blurring. Depending on requirements, the
camera or image analysis submodule 406 may run at the microcontroller, at
the server, or at both the microcontroller and the server. The image analysis
submodule may have a modular design allowing the user to modify the
video analysis algorithm.
[106] Each frame of video from the camera module 404 may be timestamped with
the
time of collection from the master clock 408 using timestamps 416. These
timestamps
are used in order to synchronize video with other systems such as, for
example,
electrophysiology, pupil camera, or other sensor data. The master clock module
408
synchronizes the modules by generating timestamps on a common timebase, and
associating the timestamps 416 with sensor data collected via the sensor
modules 412,
actions taken by actuator modules 420, predictions by prediction module 418,
and other
monitoring data.
[107] Prediction module 418 may receive a first predictive model and a second
predictive model from the server using communications module 402. The first
predictive
model and the second predictive model may be used to determine, for collected
sensor
data (including video data from the camera sensor), a predicted pose and a
predicted
animal state.
[108] Administrator module 410 may produce reports based on sensor data,
prediction
data, and other monitoring data. The reports may be sent by the administrator
module to
the server, to the user, or to both the server and the user.
[109] At least one sensor module 412 having a sensor analysis submodule 414
acquires sensor data from at least one sensor. The sensors may be inside the
homecage environment or may be proximate to the homecage. The sensor analysis
submodule may detect events based on sensor input. The sensor analysis
submodule
414 may pre-process collected sensor data, such as by averaging sensor data or

filtering sensor data.
[110] At least one actuator module 420 having an actuator analysis module 422
activates an actuator in a homecage or proximate to a homecage in response to
an
event, a prediction, or user input.
[111] Referring to FIG. 5, there is a relationship diagram 500 of the video
data
collection of the automated homecage monitoring system. The automated homecage

monitoring system may have at least one camera 502 connected to the
microcontroller
504. The microcontroller 504 may operate in a "dumb" or a "smart" mode.
[112] When operating in a "dumb" mode, the microcontroller 504 collects video
and
archives it with the server, where the server is responsible for the video
processing 508.
[113] When operating in a "smart" mode, the microcontroller 504 operates to
perform
simultaneous video acquisition and archiving 506 and video processing 508.
[114] Video acquisition and archiving 506 may include writing video frames 510

including timestamps 512 to a storage device at the microcontroller 504. The
acquisition and archiving 506 may also include sending the video data 510
(including
timestamps 512) to the server. The video acquisition and archiving 506 may
process
more than one camera at a time. For example (see FIGs. 6C and 6D) multiple
camera
sensors can be used for simultaneous acquisition of animal state from multiple

directions.
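A minimal sketch of such acquisition and archiving, assuming a Raspberry Pi camera and the picamera package, follows; the file names, segment length, and use of a monotonic clock as the common timebase are illustrative assumptions.

```python
# Sketch (assumes a Raspberry Pi camera and the picamera package): record
# H.264 segments while logging a master-clock timestamp for each segment.
import time
from picamera import PiCamera

with PiCamera(resolution=(1280, 720), framerate=30) as camera, \
        open("timestamps.csv", "w") as log:
    camera.start_recording("homecage_000.h264")
    for segment in range(1, 6):                  # five one-minute segments
        log.write(f"{segment - 1},{time.monotonic():.6f}\n")
        camera.wait_recording(60)
        # split_recording closes the current file and continues in a new one
        camera.split_recording(f"homecage_{segment:03d}.h264")
    camera.stop_recording()
```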
[115] Generally simultaneously, the video processing 508 may determine
tracking data
514 and write it to a storage device on the microcontroller 504. The tracking
data 514
may include an animal's path in the homecage. The tracking data 514 may also
include
event data and prediction data determined at the microcontroller. The tracking
data 514
may also be sent to the server.
[116] Referring to FIG. 6A, there is a cutaway top view of a homecage 600. The

homecage 600 has a base 610. Two sidewalls 604, a front wall 606 and a back
wall
608 extend from the base 610 to form an enclosure for animal 602. The
enclosure
may include ventilation holes in the front wall, the back wall, or the
sidewalls.
Alternatively the homecage may have a self-contained ventilation system. The
homecage 600 may be a variety of sizes relative to the animal, generally
offering
enough space for the animal to move around. The base 610, sidewalls 604, front
wall
606 and back wall 608 may be made from a variety of materials, including wood,
plastic,
or metal. The base 610, sidewalls 604, front wall 606 and back wall 608 may
have
various constructions including a solid piece of material, a mesh, or spaced-apart bars.
[117] Referring to FIG. 6B, there is a front view 620 of the homecage in FIG.
6A. The
homecage 620 has a top 626, with a microcontroller 622 disposed on it. The
microcontroller 622 may be connected to an at least one sensor 624 through the
top
626. The at least one sensor 624 may be inside the homecage as shown, or
instead
may be positioned outside and proximate to the homecage. The at least one
sensor
624 may include at least one camera, at least one infra-red camera (including
both
passive and active), humidity sensors, temperature sensors, microphone
sensors, light
sensors, radio-frequency identification (RFID) sensors, pressure sensors,
accelerometer sensors, proximity sensors, ultrasonic sensors, vibration
sensors,
electrical current and electrical potential sensors, fluid flow sensors, and
ammonium
sensors.
[118] Referring to FIG. 6C, there is a cutaway front view 640 of another
homecage.
The homecage 640 has a base 650. Two sidewalls 644, a front wall 646 and a
back
wall 648 extend from the base 650 to form an enclosure for animal 642. The
enclosure may include ventilation holes in the front wall, the back wall, or
the sidewalls.
Each sidewall 644 has a sensor 652 connected to it. The sensors 652 may
communicate wirelessly with microcontroller 662, or they may be wired. The
sensors
may be a variety of different types, including at least one camera, at least
one infra-red
camera (including both passive and active), humidity sensors, temperature
sensors,
microphone sensors, light sensors, radio-frequency identification (RFID)
sensors,
pressure sensors, accelerometer sensors, proximity sensors, ultrasonic
sensors,
vibration sensors, electrical current and electrical potential sensors, fluid
flow sensors,
and ammonium sensors. There may be multiple sensors disposed together at 652.
[119] Referring to FIG. 6D, there is a front view 660 of the homecage from
FIG. 6C.
The homecage 660 has a microcontroller 662 disposed on its top, at least one
sensor 652 disposed on a sidewall, and at least one sensor 654 connected to the
microcontroller 662 through the top. The sensors 652, 654 may function individually or in
combination to
generate sensor data about animal 642 or the homecage 660 environment.
[120] Referring to FIG. 7, there is a cutaway top view 700 of another
homecage. The
animal 702 may move about the homecage on floor 710 in the region bounded by
sidewalls 704, front wall 706, and back wall 708. The homecage monitoring
system
may track the path taken by the animal 702, including a track plot 712 around
the
homecage 700.
[121] Referring to FIG. 8, there is a sensor data diagram 800 of an automated
homecage monitoring system. The sensor data diagrams 802, 804, 806, 808 show
the
result of the animal motion flag method of FIG. 9A. The sensor data is shown for moving
806 and
non-moving 802 animals during the day, as well as moving 808 and non-moving
804
during the night. Sensor data 810 shows the location of the centroid
determined using
the method in FIG. 9B. Sensor data 812 shows the track plot generated from the

centroid determined in FIG. 9B.
[122] Referring to FIG. 9A, there is a method diagram 900 for automated
homecage
motion detection. The method 900 is operated by the microcontroller or the
server
based on video data from an at least one sensor. The method 900 is operable to
find an animal motion flag, a Boolean value indicating whether an animal has moved.
[123] At 902, at least two video frames are received. The at least two video
frames
may be indexed to the same timebase, and each frame may have an associated
timestamp. The video may be received in a variety of formats, for example, the

Raspberry Pi camera board v2.1 has an 8MP Sony IMX219 sensor that collects
video data at 30 fps using a h264 encoding.
[124] At 904, at least two processed video frames are determined from the at
least two
video frames by pre-processing. The processing may include applying a Gaussian
filter
to each video frame to reduce noise, resizing the video frame, and conversion
to a
grayscale colour profile.
[125] At 906, an image background may be determined from the at least two
processed video frames. The processing may involve calculating a weighted
running
average of the at least two frames.
[126] At 908, an at least two thresholded video frames are determined by
subtracting
the background image from each of the processed video frames to determine
motion
areas which contain values above a certain threshold.
[127] At 910, a contour is determined from each thresholded video frame. The
contour
operation provides a contour-detected area which, in the homecage, is
generally an
area representing the animal's position.
[128] At 912, an animal motion flag is determined if the contour is larger
than a
minimum area. The minimum area may be a configurable threshold.
[129] Referring to FIG. 9B, there is a method diagram 950 for automated
homecage
monitoring. The method 950 is operated by the microcontroller or the server
based on
video data from an at least one sensor. The method 950 may operate to
determine a
tracking path of an animal in a homecage.
[130] At 952, at least two video frames are received. The at least two video
frames
may be indexed to the same timebase, and each frame may have an associated
timestamp. The video may be received in a variety of formats; for example, the
Raspberry Pi camera board v2.1 has an 8 MP Sony IMX219 sensor that collects
video data at 30 fps using an H.264 encoding.
[131] At 954, at least two processed video frames are determined from the at
least two
video frames by pre-processing. The pre-processing may include resizing the
video
frame, conversion to grayscale colour profile, and the application of a
Gaussian blur.
[132] At 956, an image background is determined from the at least two
processed
video frames. The processing may involve calculating a running average of the
at least
two frames.
[133] At 958, an at least two thresholded video frames are determined by
applying the
background image as a threshold to the at least two video frames.
[134] At 960, a contour is determined from the at least two thresholded video
frames.
The contour operation provides a contour-detected area which, in the homecage,
is
generally the animal's position.
[135] At 962, a centroid is determined for the contour on each frame. The
centroid is
the arithmetic mean position of the contour, and generally represents the
midpoint of the
animal.
[136] At 964, a track plot of the animal around the homecage is determined
from the
line plot of the centroid values of each of the at least two thresholded video
frames.
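A minimal sketch of this tracking method, again assuming OpenCV and matplotlib with illustrative parameter values, follows; it reuses the same background-subtraction steps and adds the centroid and track plot.

```python
# Sketch of the tracking method of FIG. 9B: per-frame centroids of the
# largest contour joined into a track plot; parameters are illustrative.
import cv2
import matplotlib.pyplot as plt

def track_animal(video_path="homecage.h264", alpha=0.05, diff_thresh=25):
    cap = cv2.VideoCapture(video_path)
    background = None
    centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(
            cv2.cvtColor(cv2.resize(frame, (320, 240)), cv2.COLOR_BGR2GRAY),
            (21, 21), 0)
        if background is None:
            background = gray.astype("float")
            continue
        cv2.accumulateWeighted(gray, background, alpha)
        delta = cv2.absdiff(gray, cv2.convertScaleAbs(background))
        _, binary = cv2.threshold(delta, diff_thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            # Centroid (arithmetic mean position) of the largest contour
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    cap.release()
    if centroids:
        xs, ys = zip(*centroids)
        plt.plot(xs, ys)            # line plot of successive centroids
        plt.gca().invert_yaxis()    # image coordinates: origin at top-left
        plt.title("Track plot")
        plt.show()
    return centroids
```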
[137] Referring to FIG. 10, there is a data architecture diagram 1000 for
automated
homecage monitoring. The architecture 1000 shows a two-stage prediction, where
a
first prediction is made related to an animal's pose estimate. The pose estimate may
include the position of body features of an animal, such as eyes, nose, ears,
a tail, legs,
etc. A second prediction of an animal's state is made based on sensor data and
the
pose estimate. The animal's state may include a behavioral state of an animal,
a social
state of an animal, a position state of an animal, a sleep state of an animal,
and a
biological state of an animal.
[138] Data architecture 1000 shows a Deep Neural Network (DNN) architecture
that
uses a predictive model to predict an animal pose estimate. The pose estimate
includes the location of animal body features in sensor data. A pose estimate
is
generally described as a graph of parts, where a particular node in the graph
represents
a particular visual feature associated with a body feature. To produce a pose
estimate,
the graph is fit to an image using flexible interconnections. The DNN may be
DeepLabCut, developed by the Mathis Lab
(http://www.mousemotorlab.org/deeplabcut).
[139] A set of sensor data inputs 1002, such as video data having a plurality
of video
frames generally represented by image data, is provided as input. Each
image
1004 has at least a first convolutional layer 1006 applied to it. Each
convolutional layer
1006, 1008, 1010 represents a convolution or pooling operation applied to the
input,
and then the input data is passed to another convolutional layer, for example
a second
convolutional layer 1008, and then again, to a third convolutional layer 1010,
etc. The
outputs of each convolutional layer may be inputs to another convolution
layer.
[140] The DNN may have a predictive model having many convolutional or pooling

layers, and the processing of the image to produce a pose estimate may involve
the
application of many convolutional layers to the input data. The image may be
subsampled, i.e. the image may be subdivided and each portion of the image may be
analyzed by the DNN.
[141] The predictive model may be generated by training using a data set
having a set
of labels applied. During training, each layer is convolved to determine an
activation
map of an applied label associated with the activating input for the layer. As
a result,
the convolutional layer builds an activation map indicating when the filter activates
upon detecting a specific feature in the image input. Each convolutional layer in
the DNN
stacks its activation map along the depth of the network.
[142] In addition to the convolutional layers, the DNN may also have pooling
layers and
rectified linear unit layers as is known.
[143] The DNN has deconvolutional layers 1012 and 1014. The deconvolutional
layers
1012 and 1014 may be fully connected layers that provide high-level reasoning
based
on the output activations of the preceding convolutional layers.
[144] The DNN generates pose predictions 1016. The pose predictions 1016 are
used
to determine animal feature locations 1018. The pose predictions correspond to
the
locations (or coordinates) in a video frame of particular animal body parts or
indicia.
The pose predictions 1016 may also include, for example, the location of a
novel object
such as a ball introduced into the homecage. The DNN may function to produce
pose
predictions (including animal feature locations) for more than one animal, for
example,
two animals may be located within the homecage.
[145] The DNN may analyze the video input frame by frame. Each body part of
the
animal in the homecage may be tracked, and the DNN may produce a pose
prediction.
The DNN may be initially trained on a training set of labeled videos
(containing the
behaviors of interest). The initially trained DNN model may have further
training data
input until its pose prediction attains an acceptable accuracy metric.
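A hedged sketch of this workflow using DeepLabCut's high-level functions follows; exact signatures vary between DeepLabCut versions, and the project name, experimenter, and video paths are illustrative assumptions.

```python
# Sketch of training and applying a DeepLabCut pose model; paths and names
# are illustrative, and function signatures may differ between versions.
import deeplabcut

config = deeplabcut.create_new_project(
    "homecage-pose", "lab", ["/data/homecage_000.mp4"], copy_videos=False)

# Label frames with body parts (nose, ears, tail base, ...) to build the
# training set, then train and evaluate the network.
deeplabcut.extract_frames(config)
deeplabcut.label_frames(config)
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# Analyze new homecage video: writes per-frame x, y coordinates and a
# likelihood for each labeled body part.
deeplabcut.analyze_videos(config, ["/data/homecage_001.mp4"], save_as_csv=True)
```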
[146] The pose feature locations 1018 are used by a Recurrent Neural Network
(RNN)
1026 to learn and predict particular sequences of animal body parts or indicia
that
identify states (including behaviors) that are more complex.
[147] The RNN 1026 is a particular type of artificial neural network where
nodes are
connected into a directed graph based on a temporal sequence. RNN 1026 nodes
1020 may have an internal state that stores state data as part of processing a series
of inputs. The RNN 1026 may be a Long Short-Term Memory (LSTM) network in which a node
has an input gate, an output gate and a forget gate. Using the LSTM model, the
network may learn the dynamics of the body parts of an animal (or animals) that conform
to a particular behavior.
[148] The RNN model may be a network which is capable of learning sequential
information, and producing sequential predictions based on an input state. The
learning
of the RNN may classify the patterns of sequences of animal poses into
particular
behaviors of interest.
[149] The RNN 1026 has nodes 1020 and transitions 1022, and processes a series
of
pose predictions 1018 corresponding to video frames 1002 and other sensor
data.
[150] The nodes 1020 in the RNN are then used to produce a prediction of an
animal
state 1024. The animal state prediction may include a behavioral state of an
animal, a
social state of an animal, a position state of an animal, a sleep state of an
animal, and a
biological state of an animal.
[151] The transitions 1022 may be triggered based on changes in sensor data,
pose
predictions 1016 from the DNN, and pose feature locations 1018. The
transitions 1022
may trigger a state change of the RNN 1026, and may change a predicted output
animal state 1024.
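A minimal sketch of an LSTM-based classifier of the kind described in the preceding paragraphs, here using Keras with an illustrative sequence length, feature layout, and set of state labels, follows; the training data below is random placeholder data, not experimental data.

```python
# Sketch of an LSTM that classifies fixed-length sequences of pose features
# into behavior states; sizes, labels, and data are illustrative placeholders.
import numpy as np
import tensorflow as tf

SEQ_LEN = 90      # e.g. 3 s of poses at 30 fps
N_FEATURES = 16   # e.g. x, y coordinates of 8 tracked body parts
STATES = ["resting", "grooming", "rearing", "exploring"]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(STATES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder pose sequences and labels standing in for real training data.
x = np.random.rand(128, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, len(STATES), size=128)
model.fit(x, y, epochs=3, batch_size=32)

# Predicted animal state for one new pose sequence.
print(STATES[int(np.argmax(model.predict(x[:1])))])
```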
[152] A behavioral state of an animal may include particular activities, such
as actions
contemplated by a research study. These may include string pulling,
interaction of an
animal with a novel object introduced into the homecage, social interaction
with other
animals, animal interaction with a dark box, rearing, grooming, playing in the
homecage,
or other homecage activity. The behavior state may include other stereotyped
movements in which motor function can be evaluated. The animal's interaction
with a
novel object may be used to determine a metric of the animal's curiosity.
[153] The position state of an animal may include the animal's position and
orientation
within the homecage. The position state may be based on the location of the
animal's
body features or indicia.
[154] The sleep state of an animal may include the animal's sleep/wake cycle,
and any
potential intermediary states.
[155] The biological state may include food or water intake, respiration,
urination,
bowel movements, etc. The biological state may include a level of activity of
the animal,
for example when the animal is highly active, moderately active, or sedentary.
[156] The RNN 1026 may track quantitative state information for the animal
state. This
may include the number of transitions 1022 into a state (or node) 1020, the
length of
time spent in a state (or node) 1020, or other metrics.
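A minimal sketch of computing such quantitative state information from a sequence of timestamped state predictions follows; the example sequence is illustrative.

```python
# Sketch: count transitions into each state and the time spent in each state,
# given one predicted state per timestamp on the common timebase.
from collections import Counter, defaultdict

def state_metrics(timestamps, states):
    """timestamps: seconds on the common timebase; states: one label each."""
    transitions = Counter()
    dwell = defaultdict(float)
    for i in range(1, len(states)):
        dwell[states[i - 1]] += timestamps[i] - timestamps[i - 1]
        if states[i] != states[i - 1]:
            transitions[states[i]] += 1
    return dict(transitions), dict(dwell)

# Illustrative sequence: sleep -> wake -> sleep over five seconds.
t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
s = ["sleep", "sleep", "wake", "wake", "sleep", "sleep"]
print(state_metrics(t, s))
```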
[157] Referring to FIG. 11A, there is shown an example graph diagram 1100 of
an
automated homecage monitoring system. The example graph 1100 includes the
output
of an animal state as predicted by the automated homecage monitoring system of
FIG.
9A. This example graph shows a metric associated with an animal's sleep/wake
cycle.
It is understood that any of the animal states may be graphed, and more than one
predicted
state may be graphed on the same graph. The example graph 1100 shows a
predicted
animal state over a 24 hour period correlated to the number of changed pixels
in a
sensor input from a camera.
[158] The example graph 1100 may be referred to as an actogram. Existing
solutions
to monitor homecage activity and state often delivered actograms based on
running-
wheel statistics. The present homecage monitoring system allows researchers to

evaluate additional behavioral parameters beyond those captured using running
wheel
measurements.
[159] The graph 1100 may be included in the report generated by the automated
homecage monitoring system. The report may also include other metrics and
measurements from the homecage monitoring system.
[160] Referring to FIG. 11B, there is shown another example graph diagram 1150
of an
automated homecage monitoring system. The example graph diagram 1150 shows
sleep detection before, during and after a treatment for an animal in a
homecage. The
homecage monitoring system may function to track animal state such as
sleep/wake
cycles to determine research data for medical treatments. In the example
diagram
1150, bars denote the mean percentage amount of time that the animal spent
sleeping
during the light (white) and dark cycle (black) based on the predicted animal
state of the
homecage monitoring system. The error bars denote the standard error of the
mean
(SEM).
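A minimal sketch of producing such a summary with matplotlib follows; the sleep percentages below are placeholder values, not experimental data.

```python
# Sketch of a FIG. 11B-style summary: mean percentage of time asleep in the
# light and dark cycles with SEM error bars; the values are placeholders.
import numpy as np
import matplotlib.pyplot as plt

light = np.array([62.0, 58.5, 65.2, 60.1])  # % time asleep per day, light cycle
dark = np.array([21.3, 25.8, 19.9, 23.4])   # % time asleep per day, dark cycle

means = [light.mean(), dark.mean()]
sems = [light.std(ddof=1) / np.sqrt(light.size),
        dark.std(ddof=1) / np.sqrt(dark.size)]

plt.bar(["Light", "Dark"], means, yerr=sems, capsize=4,
        color=["white", "black"], edgecolor="black")
plt.ylabel("Time asleep (%)")
plt.show()
```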
[161] Referring to FIG. 12A, there is a front view of another homecage 1200
having an
object. The homecage 1200 has an animal 1202 disposed inside it, a
microcontroller
1222, at least one sensor 1224 connected to the microcontroller, and a novel
object
1226. The homecage monitoring system may collect sensor data to predict and
track
the position of the novel object 1226. The position of the novel object 1226
and the
position of the animal 1202, in addition to the pose estimates of the animal,
may be
used to predict the interactions of the animal 1202 with the novel object
1226. These
predicted interactions may be an animal behavioral state such as "exploring".
In
addition to the predicted animal state, metrics relating to the interaction of
the animal
1202 and the novel object 1226 may be determined, for example, the duration of
interaction, the number of interactions, and other particular metrics associated with the
behavior.
[162] Referring to FIG. 12B, there is a front view of another homecage 1230.
The
homecage 1230 has an animal 1232 disposed inside it, a microcontroller 1252,
at least
one sensor 1254 connected to the microcontroller, and an actuator 1256. The
actuator
1256 is connected to the microcontroller 1252, and may be activated based on
user
input, or based on a predicted behavior. The activation of the actuator 1256
may be
automatic based on a closed feedback loop and may activate based on a pre-
determined condition. The predetermined condition may be based on a predicted

animal state, a particular duration of a predicted animal state, or based on
another
metric determined by the homecage monitoring system.
[163] The actuator 1256 may be a haptic actuator, a linear actuator, a
pneumatic
actuator, a buzzer, a speaker, a thermal actuator, a piezoelectric actuator, a

servomechanism, a solenoid, a stepper motor, or the like.
[164] In one example, the actuator is a haptic actuator and the homecage
monitoring
system may be used for a sleep experiment. In such an example, the actuator
may be
used to wake an animal 1232 after a pre-determined sleep duration. The animal
state
prediction may be used in this example to automatically actuate the actuator
and
monitor the animal's subsequent behavior. The haptic actuator may use a DRV2605
driver and a vibration motor disc. The actuator 1256 may be attached to the outside
walls 1234 of the homecage 1230, close to the nest location. The actuator may
be
attached inside (not shown) the homecage 1230 as required by experimental
design.
The actuator 1256 may be connected to the microcontroller 1252 directly, and
the direct
connection may include both data and power signals. The actuator 1256 may
alternatively be wirelessly connected to the microcontroller 1252, and may be
battery
powered. This actuator may be used to gently wake up the animal 1232 in the
homecage. In the above example, the haptic vibration device may have a
vibrationally
dampening film (not shown) such as silicon disposed between it and the
homecage to
dampen the vibrations produced by this device so as to avoid disturbing the
neighboring
cages.
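A hedged sketch of driving such a haptic actuator over I2C from the microcontroller, assuming Adafruit's CircuitPython DRV2605 library, follows; the wiring, effect number, and pulse pattern are illustrative assumptions.

```python
# Sketch (assumes the adafruit_drv2605 library and I2C wiring): play a short
# vibration pattern, e.g. after a predicted sleep duration has elapsed.
import time
import board
import busio
import adafruit_drv2605

i2c = busio.I2C(board.SCL, board.SDA)
drv = adafruit_drv2605.DRV2605(i2c)

def gentle_wake(pulses=3):
    drv.sequence[0] = adafruit_drv2605.Effect(47)  # effect id chosen for illustration
    for _ in range(pulses):
        drv.play()
        time.sleep(0.5)
        drv.stop()
        time.sleep(0.5)

gentle_wake()
```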
[165] Referring to FIG. 12C, there is a front view of another homecage 1260
having a
water tank. The homecage 1260 has an animal 1262 inside it, a microcontroller
1282,
at least one sensor 1284 connected to the microcontroller 1282, and a water
tank 1286.
[166] The water tank 1286 has a plurality of gradations 1290 and a salient
floating
device 1288 for measuring the water level within. The microcontroller 1282 may
collect
sensor data such as video frames from the at least one sensor 1284, and use
the state
prediction method recited herein to determine a water level for an animal
1262.
[167] While a water tank 1286 is shown, it is appreciated that the tank 1286
could be
filled with another consumable such as food. In such a case, the state
prediction
method may determine a food level.
[168] A water level prediction, a food level prediction, or a bedding status prediction
(or other
husbandry variables) by the homecage monitoring system 1260 may trigger a
report or
an alert to a user. For example, when the water tank is below a threshold, the

homecage monitoring system 1260 may send an alert to a user to refill the
water.
[169] Referring to FIG. 13, there is a method diagram 1300 for automated
homecage
monitoring.
[170] At 1302, a first predictive model is provided at a memory, the first
predictive
model for predicting a pose estimate of the animal.
[171] At 1304, a second predictive model is provided at the memory, the second

predictive model for predicting a state of the animal.
[172] At 1306, a sensor input is received at a processor from an at least one
sensor.
[173] At 1308, the pose estimate is predicted at a processor from the sensor
input and
the first predictive model. Optionally, the pose estimate may include the
position of an
at least one indicia indicating the body parts or physical features of the
animal in the
homecage. The pose estimate may be a connected graph mapped onto the sensor
data and identifying the coordinates of the identified indicia as nodes
connected to the
other indicia identified on the animal.
[174] At 1310, an animal state is predicted at a processor based on the pose
estimate,
the sensor input and the second predictive model. The pose estimate may be
based on
the position of at least one indicia associated with an animal in the
homecage. The
predicted animal state may include at least one of a behavioral state of an
animal, a
social state of an animal, a position state of an animal, a sleep state of an
animal, and a
biological state of an animal. The animal state may be used to determine, at
the
processor, a report based on at least one of the animal state, the pose
estimate, the
sensor input, and the position of the at least one indicia. The report may be
stored in a
database that is in communication with a memory. The report, the pose
estimate, and
the position of the at least one indicia may correspond to a common timebase.
The
report may be output, including to a display device or to a server via network

communication. The prediction of pose estimate, and the prediction of animal
state may
be performed by the microcontroller generally contemporaneously with the
sensor input
collection. Alternatively, the prediction may be performed generally after the
sensor
input collection.
[175] Furthermore, generating the predicted animal pose may also include
determining metrics or other data associated with the animal state. This may
include,
for example, a start time (including a start time of a new animal state when
there is a
state transition), an end time (including an end time of the current animal
state), a
duration (or elapsed time), a frequency, any associated sensor data or pose
estimate.
Further, an occupancy map including an animal's movement path within the
homecage
(see e.g. FIG. 7) may be included in the report. The occupancy map may be
annotated
using the pose estimate, and any predicted animal states. An animal's speed
may be
determined and included in the report from its movement in the homecage. If a novel
object has been introduced into the homecage, its position, movement, and the animal's
interactions may be included in the report. The animal's food and water supply
levels
may be included in the report. The status of the bedding in the homecage may
be
included in the report.
[176] Depending on experimental design, an actuator may be connected to the
homecage monitoring system, and attached to the homecage (or positioned
proximate
to the homecage). From a predicted animal state, the actuator may be
activated. The
report may include information relating to the actuator activations.
[177] The prediction of pose estimate may be performed using a DNN as
described in
FIG. 10. The prediction of animal state may be performed using an RNN as
described
in FIG. 10.
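As a hypothetical sketch of how the two predictions might be chained, the class below keeps a window of recent poses and feeds it, with the current sensor input, to the second model; the predict_pose and predict_state methods are placeholders rather than a specific library API.

```python
# Hypothetical two-stage predictor mirroring FIG. 13: pose first, then state.
from collections import deque

class HomecagePredictor:
    def __init__(self, pose_model, state_model, window=90):
        self.pose_model = pose_model         # first predictive model (DNN)
        self.state_model = state_model       # second predictive model (RNN)
        self.history = deque(maxlen=window)  # recent poses on the common timebase

    def step(self, timestamp, sensor_input):
        pose = self.pose_model.predict_pose(sensor_input)          # stage 1
        self.history.append(pose)
        state = self.state_model.predict_state(list(self.history),
                                               sensor_input)       # stage 2
        return {"timestamp": timestamp, "pose": pose, "state": state}
```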
[178] Referring to FIG. 14, there is a method diagram 1500 for automated
homecage
monitoring.
[179] At 1501, a plurality of sensor inputs are provided at a memory.
[180] At 1502, a first predictive model for predicting animal pose from a
sensor input is
provided at the memory.
[181] At 1504, a plurality of predicted animal poses associated with the plurality of
sensor inputs is generated at a processor by, for each sensor input in the plurality of
sensor inputs: at 1506, using the first predictive model to predict a predicted animal
pose from the sensor input, and at 1508, associating the predicted animal pose with the
sensor input.
[182] At 1510, a plurality of behavior labels associated with the plurality of
the sensor
inputs are generated at the processor, by, for each sensor input in the
plurality of sensor
inputs: at 1512, associating a behavior label with the sensor input.
Optionally, the
sensor input is displayed to a user at a display device and the user submits
the behavior
label for the sensor input using an input device. This labelling by a user may
be
considered "supervised learning", where a human guides the training of a
predictive
model.
[183] At 1514, the second predictive model is generated based on the plurality
of
sensor inputs, the plurality of predicted animal poses, and the plurality of
behavior
labels.
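A minimal sketch of this training step follows, assuming the same Keras LSTM architecture sketched earlier; the pose sequences and behavior labels below are random placeholders standing in for the predicted poses and user-supplied labels.

```python
# Sketch of generating the second predictive model from (pose sequence,
# behavior label) pairs; shapes, labels, and data are illustrative.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 90, 16
LABELS = ["resting", "grooming", "rearing", "exploring"]

# Stand-ins for the plurality of predicted animal poses and behavior labels.
pose_sequences = np.random.rand(256, SEQ_LEN, N_FEATURES).astype("float32")
behavior_labels = np.random.randint(0, len(LABELS), size=256)

second_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])
second_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
second_model.fit(pose_sequences, behavior_labels,
                 validation_split=0.2, epochs=5, batch_size=32)
second_model.save("second_predictive_model.h5")
```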
[184] Various embodiments have been described herein by way of example only.
Various modifications and variations may be made to these example embodiments
without departing from the spirit and scope of the invention, which is limited
only by the
appended claims. Also, in the various user interfaces illustrated in the
figures, it will be
understood that the illustrated user interface text and controls are provided
as examples
only and are not meant to be limiting. Other suitable user interface elements
may be
possible.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status


Title Date
Forecasted Issue Date Unavailable
(22) Filed 2019-06-06
(41) Open to Public Inspection 2020-12-06
Examination Requested 2024-06-06

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $277.00 was received on 2024-06-06


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2025-06-06 $100.00
Next Payment if standard fee 2025-06-06 $277.00 if received in 2024; $289.19 if received in 2025

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2019-06-06
Maintenance Fee - Application - New Act 2 2021-06-07 $100.00 2021-06-29
Late Fee for failure to pay Application Maintenance Fee 2021-06-29 $150.00 2021-06-29
Maintenance Fee - Application - New Act 3 2022-06-06 $100.00 2022-10-26
Late Fee for failure to pay Application Maintenance Fee 2022-10-26 $150.00 2022-10-26
Maintenance Fee - Application - New Act 4 2023-06-06 $100.00 2023-03-23
Registration of a document - section 124 $100.00 2023-12-11
Request for Examination 2024-06-06 $1,110.00 2024-06-06
Maintenance Fee - Application - New Act 5 2024-06-06 $277.00 2024-06-06
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
NEUROCAGE SYSTEMS LTD.
Past Owners on Record
BERMUDEZ CONTRERAS, EDGAR JOSUE
MOHAJERANI, MAJID
SINGH, SURJEET
SUTHERLAND, ROBERT JAMES
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative Drawing 2020-11-17 1 8
Cover Page 2020-11-17 2 45
Abstract 2019-06-06 1 19
Description 2019-06-06 32 1,658
Claims 2019-06-06 9 286
Drawings 2019-06-06 16 243
Maintenance Fee Payment 2024-06-06 1 33
Request for Examination / Amendment 2024-06-06 14 526
Claims 2024-06-06 3 132