Patent 2873657 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract is posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2873657
(54) English Title: FEEDBACK-BASED LIGHTPAINTING, USER-INTERFACE, DATA VISUALIZATION, SENSING, OR INTERACTIVE SYSTEM, MEANS, AND APPARATUS
(54) French Title: PEINTURE LUMINEUSE FONDEE SUR LA RETROACTION, INTERFACE UTILISATEUR, VISUALISATION DES DONNEES, DETECTION OU SYSTEME INTERACTIF, MECANISME ET APPAREIL
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • G01D 21/00 (2006.01)
  • H05B 47/105 (2020.01)
  • A61B 3/028 (2006.01)
  • B44D 2/00 (2006.01)
  • G01S 7/02 (2006.01)
  • G01V 8/10 (2006.01)
  • G03B 15/16 (2021.01)
  • G03B 43/00 (2021.01)
  • G06F 3/00 (2006.01)
  • G06T 11/00 (2006.01)
  • G08B 13/196 (2006.01)
(72) Inventors :
  • UNKNOWN (Not Available)
(73) Owners :
  • MANN, WILLIAM STEPHEN GEORGE (Canada)
  • JANZEN, RYAN EDWARD (Canada)
(71) Applicants :
  • MANN, WILLIAM STEPHEN GEORGE (Canada)
  • JANZEN, RYAN EDWARD (Canada)
(74) Agent:
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2014-12-08
(41) Open to Public Inspection: 2016-06-08
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: None

Abstracts

English Abstract


A feedback-based metasensing system, means, and apparatus is proposed for data visualization, data entry, visual art, sensing, or the like. In one embodiment, the invention comprises an implement that may be swept through a space to make visible a phenomenon that is affected by the act of sweeping the implement through the space. For example, the implement may cause changes in a surveillance camera in the space, and these changes in the surveillance camera may in turn affect the color or intensity of light emitted from the implement. In some embodiments the implement consists of an array of light sources, so that sweeping the implement through the space makes visible the otherwise invisible sightfield of the surveillance camera. Other embodiments are useful for studying or visualizing other phenomenology, especially metaphenomenology (e.g. to sense sensing), as well as for simply creating visual art.


Claims

Note: Claims are shown in the official language in which they were submitted.


WHAT IS CLAIMED IS:
1. A feedback-based metasensing system for capturing and making visible a phenomenon, said system comprising:
  • a metasensor for capturing or synthesizing at least one time-exposed metameasurement;
  • at least one of: a phenomenon sensor; or an abakographic receiver;
  • an abakographic transmitter, said transmitter responsive to an output of said receiver or said sensor.
2. The system of claim 1, wherein said system further includes a processor, said processor responsive to an input from said sensor or said receiver.
3. The system of claim 2, wherein said system further includes a display for displaying said time exposure while the time exposure is being accumulated.
4. The system of claim 3, wherein said display is for being touched by an implement, said transmitter borne by said implement, said sensor being a proximity sensor responsive to proximity of said implement to said display.
5. The system of claim 4, wherein said display is a virtual display in a digital eye glass, said glass including a sensor for a position of said implement in relation to said virtual display.
6. A veillance visualization system including the features of claim 2, wherein said phenomenon is visual veillance, the transmitter of said visualization system including an array of light sources, said array of light sources responsive to a field of said camera.
7. A feedback-based metasensing system for capturing and making visible the eyesightfield of a human test subject, said metasensing system including:
  • a metasensor for capturing or synthesizing at least one time-exposed metameasurement;
  • at least one of: a phenomenon sensor; or an abakographic receiver;
  • an abakographic transmitter, said transmitter responsive to an output of said receiver or said sensor.
8. A feedback-based metasensing system for capturing and making visible the eyesightfield of a human test subject, said metasensing system including:
  • a metasensor for capturing or synthesizing at least one time-exposed metameasurement of a human eyesightfield, said metasensor for sensing, or knowing by position (e.g. by robotic means), a position of a light transmitter;
  • a phenomenon sensor comprising an eye test input unit;
  • an abakographic transmitter, said transmitter outputting a position of said light transmitter.
9. A feedback-based metasensing system for capturing and making visible the eyesightfield of a human test subject, said metasensing system including:
  • a central visual acuity tester to test a central field of visual acuity;
  • a peripheral acuity tester for testing a peripheral visual acuity, simultaneously or nearly simultaneously with said test of said central field of visual acuity;
  • a user-input for determining a user response to said central visual acuity tester and said peripheral visual acuity tester;
  • an aggregator for accumulating an agreement or confluence between a result of said central visual acuity tester and said peripheral acuity tester;
  • an outputter for outputting a visual acuity sightfield of said test subject.
10. A sensing system for capturing and making visible the sightfield of a surveillance camera, said sensing system including:
  • an implement for visualization of said sightfield, said implement having a plurality of cells, each cell including a sensor and effector, said sensor responsive to infrared light, and said effector emissive of visible light;
  • a metasensor for sensing said visible light;
  • an outputter for outputting time-exposed sensory data from said metasensor.
11. A children's or artist's lightpainting device including the features of claim 1, said phenomenon being drawing, said sensor being a drawing sensor.
12. The device of claim 11, where said sensor is an electrical contact sensor that senses when a drawing is made by way of completion of an electrical circuit to the drawing.
13. The device of claim 11, where said sensor is an acoustic sensor attached to a drawing implement, said sensor sensing when a drawing is made by way of listening to the sound of said drawing.
14. The device of claim 13, where said transmitter is modulated in color and intensity in response to the sound of said sensor, such as to impart a texture to the lightpainting that matches the expression of the drawing.
15. The device of claim 11, where said sensor is a pressure sensor in or on a drawing stylus.
16. A veillance sweeper for visualizing a field of a first camera, said veillance sweeper including:
  • a second camera, said second camera for capturing or synthesizing a time exposure;
  • a veillance field sensor;
  • an abakographic transmitter, said transmitter responsive to an output of said sensor.
17. The veillance sweeper of claim 16 where said transmitter is an array of light sources.
18. The veillance sweeper of claim 17 where said veillance sweeper further includes a processor, said processor responsive to said field sensor, said array of light sources responsive to an output of said processor.

19. The veillance sweeper of claim 16 where said field sensor is said first camera.
20. The veillance sweeper of claim 19 where said veillance sweeper further includes a signals intelligence unit, said processor responsive to an output of said signals intelligence unit.
21. A means for making lightpaintings or sightpaintings that express a phenomenology, said means including the steps of:
  • sensing a phenomenology;
  • adjusting a light source in response to the sensed phenomenology;
  • causing the light source to be moved, while the adjusting is taking place;
  • making a long exposure or synthesized long exposure photographic record of the light source.
22. The means of claim 21, further including the step of displaying the long exposure while it is being accumulated.
23. The means of claim 21, further including the step of causing the phenomenology to be responsive to an output of said light source.
24. The means for making lightpaintings or sightpaintings of claim 21, where said phenomenology is a degree of surveillance, said means of sensing said surveillance including the steps of reading data from a surveillance camera.

Description

Note: Descriptions are shown in the official language in which they were submitted.


Patent Application
Steve Mann and Ryan Janzen
for
Feedback-based lightpainting, user-interface, data visualization, sensing,
or interactive system, means, and apparatus
of which the following is a specification...
This application is related to and claims priority benefits under 35 USC 119(e), or Canadian equivalent, from U.S. Provisional Patent Application EFS ID: 18935530, Application Number: 61988290, Title of Invention: "Feedback-based Lightpainting, User-interface, Data Visualization, Or Interactive System, Means, And Apparatus", Inventor: Steve Mann, Patent attorney: Michael John Ries, filed on 04-MAY-2014, and the entire contents of the aforementioned application are expressly and hereby incorporated hereinto by reference.
FIELD OF THE INVENTION
The present invention pertains generally to a feedback-based sensing system, means, apparatus, or the like, for use as a user-interface, for data visualization, for data entry, measurement, system analysis, system characterization, visual art, or the like.
BACKGROUND OF THE INVENTION
Various sensors, such as cameras, produce one or more measurements, such as exposures from the camera, that comprise a time-integrated sensing of one or more phenomena, observable for measurement, data visualization, visual art, or the like.
Various tools, devices, and situations may be constructed to take advantage of
the
use of feedback-based sensing.
SUMMARY OF THE INVENTION
The following briefly describes the new invention.
The present invention is a sensing system that uses feedback to affect a
sensor in
response to a sensed or effected quantity, parameter, effect, or the like,
thus sensing
a capacity for sensing.
In some embodiments, more than one user can interact with cygraphs™ (Cybernetic Data Graphs) through computational sensorgraphy and data visualization so that the one or more people can share these cygraphs.
The invention can be used with HDR (High Dynamic Range) imaging, i.e., the combining of differently exposed abakographs of the same light vectors. HDR and multiple exposure computational photographic compositing was invented by S. Mann:
"The first report of digitally combining multiple pictures of the same scene to improve dynamic range appears to be Mann ["Compositing Multiple Pictures of the Same Scene", by S. Mann, Proc. 46th Annual Imaging Science & Technology Conference, May 9-14, Cambridge, Massachusetts, 1993]."
(quoted from "Estimation-theoretic approach to dynamic range enhancement using multiple exposures" by Robertson et al., JEI 12(2), p. 220, right column, line 26), herein both incorporated by reference, as non-patent publications, dictated by 37 C.F.R. 1.57, Rule 57(d).
See also U.S. Patents 5,828,793, entitled "Method and apparatus for producing digital images having extended dynamic ranges", and 5,706,416, entitled "Method and apparatus for relating and combining multiple images of the same scene or object(s)", herein both incorporated by reference, as patent publications, dictated by 37 C.F.R. 1.57, Rule 57(b).
A general description of realtime HDR video as a seeing aid, along with visual and video examples, may be found in the following popular press article describing the work of S. Mann: "Quantigraphic camera promises HDR eyesight from Father of AR" by Chris Davies, Slashgear, Sept. 12th 2012, http://www.slashgear.com/quantigraphic-camera-promises-hdr-eyesight-from-father-of-ar-12246941/
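For illustration only, the following is a minimal sketch of multi-exposure (HDR) compositing in the spirit described above, assuming the input images are already radiometrically linear and normalized; the weighting function, normalization, and function names are illustrative assumptions, not the method specified in the cited works.

```python
import numpy as np

def combine_exposures(images, exposure_times):
    """Combine differently exposed images (abakographs) of the same
    light vectors into one HDR estimate.

    images: list of float arrays scaled to [0, 1], assumed linear
    exposure_times: list of exposure times (seconds)
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Weight mid-range pixels most heavily; near-black and
        # near-saturated pixels carry little information.
        w = np.exp(-4.0 * (img - 0.5) ** 2)
        numerator += w * img / t      # scene quantity per unit time
        denominator += w
    return numerator / np.maximum(denominator, 1e-9)
```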
BRIEF DESCRIPTION OF THE DRAWINGS:
The invention will now be described in more detail, by way of examples which
in no way are meant to limit the scope of the invention, but, rather, these
examples
will serve to illustrate the invention with reference to the accompanying
drawings, in
which:
FIG. 1 illustrates a feedback-based metasensory system, means, or apparatus.
FIG. 2 illustrates the use of the feedback-based metasensory system, means, or
apparatus for surveillometry, in which a bank owner or employee can sense,
measure,
or visualize the degree of coverage of one or more of the bank's surveillance
cameras.
FIG. 3 illustrates an extramissive embodiment of a surveillometer in which an
extramissive phenomenizer is used to sense, measure, and capture for
visualization,
the surveillance coverage of an existing or future-planned surveillance
camera.
FIG. 4 depicts a Bugbroom™ bug sweeper for finding hidden (sur)veillance cameras and making their veillance visible.
FIG. 4a depicts a simplified analog vacuum tube-based and tungsten bulb embodiment of the Bugbroom™ bug sweeper with only a single lamp.
FIG. 4b depicts a Bugbroom™ bug sweeper being used to generate an ayinograph™ or ayinogram™.
FIG. 4c depicts an ayinogram™ of Fig. 4b.
FIG. 5 illustrates the use of the feedback-based lightpainting system, as a children's toy in which a child can draw patterns in light.
FIG. 6 illustrates an abakographic visualizer for visualizing radio waves as
stand-
ing waves.
FIG. 7 illustrates an abakographic display visualization system as a visual
art
generation medium.
FIG. 8a illustrates a system and process to measure the concentration of information-bearing sensitivity from a sensor, occurring at a remote location being sensed by that sensor, i.e. this system accurately measures veillance.
FIG. 8b depicts the final step of the veillance measurement system: scanning-
vixel principal component emission analysis.
FIG. 9a illustrates asymptotic sensory emission testing, as a new type of
vision
test and hearing test for human patients, and for manmade sensors, to
accurately
detect and render vision fields and hearing fields in 3D space.
FIG. 9b depicts visual field stimuli used in asymptotic sensory emission
testing.
FIG. 10 depicts a system to visualize sensing emissions in 3D augmediated reality, to "see sight" and "visualize vision".
FIG. 11 depicts a veillance field dosimeter, implemented with an electronic circuit, to measure exposure to inverse light: that is, measuring how much a user has "been seen".
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS:
While the invention shall now be described with reference to the preferred em-
bodiments shown in the drawings, it should be understood that the intention is
not
to limit the invention only to the particular embodiments shown but rather to
cover
all alterations, modifications and equivalent arrangements possible within the
scope
of the appended claims.
In various aspects of the present invention, references to "microphone" can
mean
any device or collection of devices capable of determining pressure, or
changes in
pressure, or flow, or changes in flow, in any medium, be it solid, liquid, or
gas.
Likewise the term "geophone" describes any of a variety of pressure
transducers,
pressure sensors, velocity sensors, or flow sensors that convert changes in
pressure
or velocity or movement or compression and rarefaction in solid matter to
electri-
cal signals. Geophones may include differential pressure sensors, as well as
absolute
pressure sensors, strain gauges, flex sensors on solid surfaces like
tabletops, and the
like. Thus a geophone may have a single "listening" port or dual ports, one on
each
side of a glass or ceramic plate, stainless steel diaphragm, or the like, or
may also
include pressure sensors that respond only to discrete changes in pressure,
such as
a pressure switch which may be regarded as a 1-bit geophone. Moreover, the
term
"geophone" can also describe devices that only respond to changes in pressure
or
pressure difference, i.e. to devices that cannot convey a static pressure or
static pres-
sure differences. More particularly, the term "geophone" is used to describe
pressure
sensors that sense pressure or pressure changes in any frequency range
whether or
not the frequency range is within the range of human hearing, or subsonic
(including
all the way down to zero cycles per second) or ultrasonic.
Moreover, the term "geophone" is used to describe any kind of "contact micro-
phone" or similar transducer that senses or can sense vibrations or pressure
or pressure
changes in solid matter. Thus the term "geophone" describes contact
microphones
that work in audible frequency ranges as well as other pressure sensors that
work
in any frequency range, not just audible frequencies. A geophone can sense
sound
vibrations in a tabletop, "scratching", pressing downward pressure, weight on
the
table, i.e. "DC (Direct Current) offset", as well as small-signal vibrations,
i.e. AC
(Alternating Current) signals.
FIG. 1 illustrates the feedback-based lightpainting system, means, or appara-
tus. A phenomenology 100 is either a pre-existing phenomenon, or is generated
by a
phenomenon generator as part of the system, means, or apparatus of the
invention.
The phenomenology 100 has a field 101 which may be a field of influence if the
phe-
nomenology is an input phenomenon, or a field of outfluence if the
phenomenology
is an output phenomenon. For example, the field 101 may be a field-of-view
("view-
field") of a camera, a field-of-coverage of a light source, or a lightfield,
sound field,
electromagnetic field, or the like.
An abakographic implement 110 is moved through the space in which phenomenology
100 exists or is generated. The implement 110 can be moved by a robotic
arm,
by a machine, or by human effort. In the latter case, implement 110 and the
system,
means, or apparatus of the invention may take the form of a user interface. In
this
user-interface example, implement 110 is designed with a nice ergonomic
handle, as
a tool for human use, in conjunction with a display means fed from an
abakographic
camera 190. The abakographic camera 190 can be fixed on a tripod, on the
ceiling
above a workspace or study space, on an easel or copystand, or the camera 190
may
also be part of the user's DEG (Digital Eye Glass). The camera 190 has a
coverage
field 191 (e.g. a field of view) that includes at least some of the space in
which the
phenomenon of phenomenology 100 exists or is generated.
The abakographic camera 190 records one or more abakographs such as abako-
graph 199, traced out by an abakographic transmitter 114. The transmitter 114
is
typically a light source such as a light bulb, LED (Light Emitting Diode), or
array
of light bulbs, LEDs, or the like, attached to the implement 110 or borne by
the
implement 110, or the implement 110 itself, illuminated, in response to a
processor
150, by way of one or more control signals such as control signal 159.
Processor 150 is responsive to an input from an abakographic receiver 113 or a

phenomenology sensor 111, or a combination thereof.
In some embodiments, the phenomenology is generated by an interplay between
sensor 111 and effector 112, whereas in other embodiments it is by an
interplay
between receiver 113 and transmitter 114, or a combination thereof, as may be
pro-
grammed in, caused by, or measured with, processor 150.
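As a concrete illustration of this feedback arrangement, the following is a minimal sketch of a processor loop in which the transmitter output tracks the sensed phenomenology. The read/drive functions are hypothetical placeholders (the patent does not specify a software implementation), and the simulated sensor value stands in for receiver 113 or sensor 111.

```python
import random
import time

def read_phenomenology_sensor():
    # Placeholder for receiver 113 / sensor 111: here we simulate a
    # field strength in [0, 1]; in hardware this would be an ADC read.
    return random.random()

def drive_transmitter(level):
    # Placeholder for control signal 159 driving transmitter 114.
    print(f"transmitter brightness: {level:.2f}")

def feedback_loop(gain=1.0, floor=0.05, steps=100, period_s=0.01):
    # Processor 150: light output follows the sensed phenomenology, so a
    # long exposure of the moving implement traces out the field.
    for _ in range(steps):
        sensed = read_phenomenology_sensor()
        drive_transmitter(min(1.0, floor + gain * sensed))
        time.sleep(period_s)

if __name__ == "__main__":
    feedback_loop()
```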
FIG. 2 illustrates a use case for the invention. Suppose that a bank manager
either recently installed a surveillance camera system, such as camera 200, in ATM
ATM
(Automatic Teller Machine) that gives rise to a surveillance phenomenology
100, or
is contemplating the installation of a surveillance camera system, and wishes
to pre-
visualize what the surveillance coverage might look like if surveillance
camera 200
were present. In the latter case, the pre-visualization is achieved through
temporary
installation of a phenomenizer 201, which may be connected (e.g. wirelessly)
to
processor 150. A satisfactory phenomenizer 201 is itself a small camera
temporarily
affixed to the ATM in a position and orientation roughly equal to that of
where the
proposed camera would later be installed. The phenomenizer 201 may
alternatively
be a light source that mimics the same field-of-view as camera 200 or
proposed
camera 200.
Alternatively, even if a camera 200 already exists, the phenomenizer 201 may
still be used, e.g. if the camera 200 is not working, the video feed from it
is not
conveniently accessible, or is otherwise less suitable for the desired
purpose.
With the invention, a bank employee or contractor can sense, measure,
visualize,
display, and communicate (e.g. by way of visual imagery and other
deliverables)
surveillometric data to a Board of Directors (e.g. at a board meeting or other
pre-
sentation), or to an insurance company, or to a courtroom, or even to the general
public (e.g. to discourage robbery by reminding would-be thieves that the bank has
has
excellent and total surveillance coverage).
Such surveillometric data can include photographic renderings that visually
show
the extent of coverage of the bank's surveillance camera system.
FIG. 3 illustrates an embodiment of the invention in which phenomenizer 201
is a light source such as a programmable lock-in data projector that may be
affixed
to, on, in, or next to an existing surveillance camera. or at a proposed
surveillance
camera location, so that a field-of-view and an extent of coverage of the
surveillance
camera may be visualized.
In the embodiment depicted in Fig. 3, the phenomenology effector 112 is the
phenomenizer 201, and the phenomenology sensor 111 is the abakographic
receiver
113. The abakographic receiver 113 is comprised of a linear array of 64 light
sensors
(receivers), each comprising one pixel of the linear array of receivers 301R,
302R,
303R, ..., 364R, affixed to the implement 110.
Implement 110 also contains processor 150, which has a 64 channel analog to
digital converter or other means of reading from receivers 301R, ... 364R.
In some embodiments of the invention, the processor is distributed, some parts
of
the processor residing in the implement 110, some parts of it residing in the
abako-
graphic camera 190, some parts of it residing in the phenomenizer 201, etc.
A satisfactory processor 150 is an Atmel AVR™ located in the implement 110,
together with an ARM™ Cortex™ processor located in the phenomenizer 201, the
two parts of processor 150 being linked by a wireless communications protocol.
Processor 150 is responsive to receivers 301R, ..., 364R. An abakographic
trans-
mitter 114 is comprised of transmitters 301T, 302T, 303T, ..., 363T, and 364T,
which
form a 64-pixel linear array of light sources.
In a simple embodiment of the invention, there is no portion of processor 150
residing in phenomenizer 201, and phenomenizer 201 is merely an infrared light
source
having the same field of view as proposed or existing camera 200. In this
simple
embodiment, a satisfactory phenomenizer 201 is a theatre light, such as an
ellipsoidal
reflector spotlight (ERS), with a rectangular gobo matching the aspect ratio
of the
proposed or existing camera 200, or where a series of four shutters are moved
in place
to match the field of view of a proposed or existing camera 200. A slide
projector or
the like may also be used for phenomenizer 201, where a blank slide is used to
project
a rectangular shape as a cone of light, matching the cone of light visible by
camera
200 or that would be visible by camera 200 when later present.
In this simple embodiment, an infrared filter or gel is placed over the stage
light or
projector, so that an infrared light source is projected to the space where
implement
110 is to be used. Here the receiver 113 of implement 110 comprises an array
of
64 cadmium sulfide photocells, i.e. Light Dependent Resistor (LDR) units, each

connected to the gate of a Field Effect Transistor (FET). There are 64
separate
FETs, each responsive to one of the LDRs, and each supplying current to one of
the
transmitters 301T, ..., 364T. The transmitters 301T, ..., 364T
emit light in only the
visible spectrum, and not in the infrared spectrum. A satisfactory light
source for
each of the transmitters 301T, ..., 364T is an LED. A satisfactory light
source of
phenomenizer 201 is a tungsten filament light bulb rich in the infrared
spectrum.
As depicted in Fig. 3, receivers 301R and 364R are outside the field of coverage, denoted as field 101, whereas receivers in between these, e.g. receivers 302R and 303R, are inside the field 101. In this sense, a certain range of receivers are illuminated by infrared light, and these corresponding LDRs become more conductive, and, in proportion to the amount of light received, conduct electricity to the corresponding transmitters 302T through 363T, but not to transmitters 301T and 364T that correspond to a field of zero phenomenology.
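The following is a minimal software sketch of this per-cell mapping, assuming a 64-channel analog-to-digital read of the LDR array and a 64-pixel LED strip; the function names, array sizes, and threshold are illustrative, and the patent's embodiment performs this mapping with analog FETs rather than software.

```python
NUM_CELLS = 64

def read_ldr_channels():
    # Placeholder for the 64-channel ADC reading receivers 301R..364R;
    # returns infrared levels normalized to 0.0..1.0.
    return [0.0] * NUM_CELLS

def write_led_strip(levels):
    # Placeholder for driving transmitters 301T..364T (visible light).
    pass

def update_implement(threshold=0.02):
    # Cells inside the infrared field conduct in proportion to received
    # light; cells outside the field (zero phenomenology) stay dark.
    ir = read_ldr_channels()
    write_led_strip([v if v > threshold else 0.0 for v in ir])
```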
As implement 110 is swept through the space it "paints" a picture of the
surveil-
lance coverage in the space. due to the fact that the transmitters emit
visible light
when and where there is phenomenology (surveillance).
The result is a long-exposure photograph that conveys some sense of the
surveil-
lance coverage of existing or proposed camera 200.
Since transmitter 114 transmits visible light to the abakographic camera 190,
and
the light received by receiver 113 is infrared light, the two kinds of light
do not
interfere with one another. and the visible lightpainting is made in some
sense of
infrared surveillance rays repeated through implement 110.
The resulting long-exposure photograph (abakograph) thus conveys a user-
selectable
sense of the surveillance coverage of the existing or proposed camera 200, in
which
the user can sweep out various slices within the surveillance field 101, to
highlight
various aspects of the surveillance.
In an alternate embodiment of the invention, phenomenizer 201 is a visible-
light
projector, and projects light in the same spectral band to which camera 190 is
sen-
sitive. Regarding certain artistic and visualization frameworks, such overlap
may be
acceptable, e.g. through the use of a designator test color such as green,
emitted by
phenomenizer 201, and a separate veillance field color such as red or white
emitted
by transmitter 114.
The result is a lightpainting of the surveillance field of phenomenizer 201 in
which
all "brush strokes" of implement 110 are visible, but where the color denotes
the
veillance field (e.g. brush strokes in green and white, but where the white
denotes
surveillance and the green denotes lack thereof).
Alternatively, a time-division multiplexer is used, in which, in one
embodiment,
alternate abakographic receive frames, and abakographic transmit frames, are
inter-
laced sequentially. During even-numbered (scotonic) gettings, a surveillance
field is
received. During odd-numbered (photonic) gettings, the surveillance field is
trans-
mitted (to camera 190).
In this sense, the scotonic (i.e. darkfield) gettings sense scotons emitted by
the
surveillance camera, and the photonic gettings "paint out" this sensed
scotonic cap-
ture, making visible the otherwise invisible surveillance field. Thus we might
think
of this process as "painting with darkness" or "darkpaintingTm", i.e. the
opposite of
lightpainting.
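A minimal sketch of such time-division multiplexing, alternating receive ("scotonic") and transmit ("photonic") frames, follows; the frame count, array size, and function names are illustrative assumptions rather than a specified implementation.

```python
def get_scotonic_frame():
    # Placeholder: receive/measure the surveillance field during a dark frame.
    return [0.0] * 64

def give_photonic_frame(field):
    # Placeholder: "paint out" the previously sensed field to camera 190.
    pass

def run_multiplexer(num_frames=1000):
    field = [0.0] * 64
    for frame in range(num_frames):
        if frame % 2 == 0:          # even-numbered: scotonic getting
            field = get_scotonic_frame()
        else:                       # odd-numbered: photonic getting
            give_photonic_frame(field)
```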
In still another embodiment, receiver 113 and transmitter 114 are the same element.
element.
It is well known that LEDs can both generate and measure light. In this sense an
an
LED can function as both a light meter and a light. Thus the array of transmit
pixels
made by transmitters 301T, 302T, ..., can be the receive elements 301R,
302R.....
During the scotonic getting, implement 110 is a one-dimensional lensless camera swept through space to measure surveillance flux, or surveillance field of view, or the like. During the photonic getting, implement 110 is a lensless data projector of sorts, "painting out" that which it detected or sensed.
FIG. 4 illustrates a Bugbroom™ bug sweeper for finding hidden surveillance or
sousveillance cameras and, more generally, for finding hidden surveillance or
sousveil-
lance and making the veillance visible.
A PixStix Abakographer™ is used as implement 110. It comprises a linear array
of LED lamps as transmitters 301T, 302T, 303T, ..., 307T, and 308T.
Transmitters
303T through 306T happen to fall within field 101. Additionally, there happens
to
be a mirror on the floor or a mirrorlike portion of the floor (e.g. perhaps a
puddle
of spilled water), which reflects some of the surveillance field 101 back to
the area of
transmitter 307T.
Therefore we wish to illuminate transmitters 303T through 307T, and extinguish
transmitters 301T, 302T, and 308T.
This is done by sequentially illuminating transmitter elements 301T, 302T, ...
of
transmitter 114. The transmitter elements 301T, ... are illuminated one-at-a-
time,
with a test color such as green.
Phenomenology receiver 111 includes a signals intelligence ("Sig. Int.") unit 411.
411.
Unit 411 comprises a large number of receivers that receive a variety of
different kinds
of signals, together with a learning algorithm, and various "lock-in-
amplifiers" or the
like. Signals Intelligence is a well-known field of research, and is taken as
a given
background prior-art. In this sense the invention may be practiced and used
within
existing signals intelligence frameworks.
When a test color is provided or not provided, we have a differential signals intel-
intel-
ligence, i.e. to determine whether or not the camera 200 is responsive to an
input
from transmitter 114. When surveillance camera 200 is responsive to an input
from
transmitter 114, we set a register in processor 150 within the implement 110.
Unit
411 may thus be part of implement 110.
Transmitters 301T, 302T, ..., etc., can be addressed as to their RGB (Red,
Green,
Blue) value. Initially transmitter 301T is set to emit green light, and a test
is made
regarding camera 200 as for response. If camera 200 responds to a transmitted
signal
of transmitter 301T, then a first register is set with regards to the signal
strength
of response, which may be binary, integer, or float. In the simplest case, if
we have
a binary test, we simply have one byte that records the bitwise results of
tests for
transmitters 301T, 302T, ..., 308T. Once determined, this byte is written to
transmitter
114, in the byte pattern, i.e. the binary pattern 00111110 (i.e. off, off,
white, white,
white, white, white, off).
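A minimal sketch of this sequential test and byte-pattern write follows; the helper functions for lighting one test lamp, checking the camera response via unit 411, and writing the pattern are hypothetical placeholders.

```python
NUM_LAMPS = 8

def illuminate_only(index, color="green"):
    # Placeholder: light transmitter `index` alone with the test color.
    pass

def camera_responds():
    # Placeholder: differential signals-intelligence check (unit 411),
    # True if surveillance camera 200 responds to the lit transmitter.
    return False

def write_pattern(byte_pattern):
    # Placeholder: drive transmitter 114 with the bitwise result,
    # e.g. 0b00111110 lights the middle lamps white, outer lamps off.
    pass

def sweep_test():
    pattern = 0
    for i in range(NUM_LAMPS):       # test lamps one at a time
        illuminate_only(i)
        if camera_responds():
            pattern |= 1 << i        # set the bit for responsive lamps
    write_pattern(pattern)
    return pattern
```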
In this way, another camera (not shown in Fig. 4) can record, through long
exposure photography, this pattern of lights from transmitter 114, as
implement 110
is waved back and forth throughout the space to "sweep for" video bugs.
FIG. 4a depicts a very simple vacuum-tube-based embodiment of the invention
that includes a single tungsten light bulb L1, driven by a push-pull
electronic amplifier
having a matched pair of 6BQ5 electronic amplification valves: electronic
amplifying
device 6BQ5-1, and device 6BQ5-2, depicted in Fig. 4a.
Unit 411 is simply an analog NTSC (National Television Standards Committee)
television receiver with "rabbit ears" style antenna 410, feeding into a 6CB6
radio fre-
quency amplifier, 6J6 type converter, 6CB6 intermediate frequency amplifiers,
12AU7
type video detector and vertical synchronization separator, and the like.
Ultimately
this signal is fed to differential driver 430 which drives devices 6BQ5-1 and
6BQ5-2
differentially (i.e. 180 degrees out-of-phase with each other).

A hidden television camera as might be concealed inside a stuffed animal or
clock
radio or smoke detector often broadcasts an NTSC signal for low cost
simplicity and
miniature design. In this case, unit 411 has simply a television receiver
circuit that
picks up this signal and the signal is amplified and fed to autotransformer T1
which
drives lamp L1 in proportion to the strength of the received signal. In this
case a
received wireless video transmission 420 from television transmitter antenna
412 will
drive lamp L1. By adjusting a gain control on unit 411 or driver 430, lamp L1
can
be made to glow a dull red colour when not visible to camera 200.
Due to video feedback, when lamp L1 is brought into a space where it is
visible
by camera 200, the lamp will glow brilliantly whenever the camera can "see
it". This
phenomenon, "surveilluminescence™", or veilluminescence™, can be observed by

the naked eye, or by a long exposure photograph.
A suitable long-exposure photograph can be captured on a sheet of film, in
which
case dim traces of light will be visible from the dull red glow, and these
will show as
bright traces where the camera can "see" the lamp Li. Trace 440U is a trace
from
an upward sweep of the lamp Li while a user sweeps the lamp up and down around

a room in which a hidden camera is suspected of being present. We can see that
the
video feedback is not intant, but, rather, it takes a fraction of a second to
"kick in"
before the lamp reaches full brightness about midway into the cone of the
sightfield,
of field 101. Almost as soon as the lamp exits field 101 it goes dark again,
back
to the dull red glow. Then when it is swept back down toward the floor, during
a
downsweep trace 440D, there is a brief hesitation before it "kicks in" to full
brightness
about halfway into the field 101, and finally extinguishes quickly when
exiting field
101 at the bottom.
Preferably lamp L1 is a high voltage and low wattage tungsten light bulb, so
that,
due to the resulting high impedance, Z, of Z = V²/P, where V is the voltage
and P
is the power, the lamp filament responds quickly, especially in the sense that
it has
minimum "afterglow". High impedance lamps have very thin filaments which means

that they respond and "despond" quickly to input voltage.
The brightness of a tungsten lamp is generally proportional to the voltage
raised
to the exponent of 3.5, so there is a continuous response that works well in
this surveil-
luminescent video feedback effect. LEDs (Light Emitting Diodes), neon
indicators,
or fluorescent lights can also be used, but there is additional difficulty
overcoming
the threshold they have, whereas tungsten lamps behave more continuously. In
using
LEDs, a threshold is provided to linearize or at least continuize the response
by way
of a computer system with a LUT (LookUp Table) to provide a dim glow that is
not
zero, under the threshold input level.
This basically amounts to a form of dynamic range management that optimizes
the dynamic range control for the particular lamp chosen.
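A minimal sketch of such a lookup-table (LUT) based dynamic range management for an LED follows, assuming an 8-bit drive value and a small non-zero floor below the threshold; the curve shape and constants are illustrative assumptions.

```python
def build_led_lut(threshold=0.1, floor=0.04, gamma=2.2, size=256):
    # Map a normalized drive level (0..1) to an 8-bit LED value such that
    # inputs below the threshold still produce a dim, non-zero glow,
    # approximating the continuous response of a tungsten filament.
    lut = []
    for i in range(size):
        x = i / (size - 1)
        if x <= threshold:
            y = floor                 # dim glow, never fully off
        else:
            y = floor + (1.0 - floor) * ((x - threshold) / (1.0 - threshold)) ** gamma
        lut.append(round(255 * y))
    return lut

# Example: a drive level below the threshold still yields a dim output.
LUT = build_led_lut()
```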
Let us, for the time being, consider the simple case of a tungsten lamp
operating
at the standard European 240 volt voltage, rather than the lower North
American 120
volt voltage. Comparatively speaking, a European 5 watt lamp has an impedance
of 11520 ohms, as compared with a North American 5 watt lamp which has the
impedance of 2880 ohms, both when operating at their intended design voltages.

(When cool the impedances are less.) Preferably a high voltage lamp of even
lower
wattage, such as a one-watt indicator lamp (which has an operating impedance
of
approximately 57600 ohms), works even better, and is still plenty bright
enough to
show up clearly in an abakograph or abakogram or sensing and 3D tracking
system.
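For reference, the quoted operating impedances follow directly from the relation Z = V²/P given above (a simple arithmetic check, not additional disclosure):

```latex
Z = \frac{V^2}{P}:\quad
\frac{(240\,\mathrm{V})^2}{5\,\mathrm{W}} = 11520\,\Omega,\quad
\frac{(120\,\mathrm{V})^2}{5\,\mathrm{W}} = 2880\,\Omega,\quad
\frac{(240\,\mathrm{V})^2}{1\,\mathrm{W}} = 57600\,\Omega
```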
As the lamp is moved further from the surveillance camera 200, there comes a
point where it does not reach full brightness, e.g. distant farfield trace
440F shows a
"sketchy" weak trace. Eventually, very far from the surveillance camera 200,
we have
not much more than the dull red glow of the filament.
We might wish to also ignore the dull red glow of the filament. and only
record
the video feedback. To achieve this effect we can use an orthochromatic film
in the
abakographic camera. Orthochromatic films are designed so that they do not
respond
to red light. This is normally done so that they can be handled in darkrooms
using
red "safelights" just like photographic print papers are handled.
The dull red glow usually arises because we have turned the amplifier gain up
so
high that the noise ("hiss" or "snow" in the TV signal) begins to light up the
lamp.
Thus we refer to an abakograph that ignores this background level as being
comprised
of "noise-gated" traces like trace 440N which only show when the lamp Li is
glowing
white.
In a more modern version of the invention, the abakographic camera is an electronic camera, and the noise-gate is implemented computationally. More generally, the surveillance camera 200 is traced in one getting and the abakographic camera is exposed in another getting. For example, an infrared lamp L1 operates a feedback loop with the surveillance camera 200 and its 3D position is tracked by a 3D vision system, and thus the abakograph or abakogram is rendered in a computer graphics environment such as Unity™.
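A minimal sketch of such a computational noise gate during long-exposure accumulation follows; the threshold, maximum-value accumulation, and array handling are illustrative assumptions.

```python
import numpy as np

def accumulate_noise_gated(frames, gate_level=0.15):
    """Accumulate a synthetic long exposure, ignoring the dim background
    glow (e.g. the lamp's dull red idle level) and keeping only pixels
    where the lamp is driven bright by video feedback.

    frames: iterable of float image arrays scaled to [0, 1]
    """
    exposure = None
    for frame in frames:
        gated = np.where(frame > gate_level, frame, 0.0)   # noise gate
        exposure = gated if exposure is None else np.maximum(exposure, gated)
    return exposure
```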
In another embodiment of the invention, a single lamp L1 is used, with a
diffuser
over the lamp so that its view in surveillance camera 200 subtends a larger
angle, thus
extending the range over which the surveilluminescent phenomenon will occur,
while
at the same time, holding the lamp and diffuser in such a way that the bare
(preferably
clear transparent) bulb is visible to the abakographic camera, thus ensuring
accuracy
and precision combined with further range away from surveillance camera 200.
In other embodiments, a linear array of lamps like lamp L1 is used, and
stepped
through sequentially, using a stepping relay to select lamps and direct the
signal to
each lamp in succession. In another embodiment, a robotic arm moves the array
of
lamps and the lamps are computer controlled, so that the space is swept
automatically.
In some embodiments, bug-sweeping robots are fitted with the apparatus. In
other
embodiments, the apparatus is affixed to vehicles such as autonomous vehicles
(for
example helicopters, quadcopters, or "drones").
In situations where the video transmission is scrambled or encrypted, it is
not nec-
essary to fully descramble or decrypt the video transmission, but, merely, to
provide
some form of differential decryption that will result in some form of
feedback.
Take for example a very simple case in which the television transmitter of the

surveillance camera 200 is an FM (Frequency Modulation) transmitter, and the
unit
411 is an AM (Amplitude Modulation) receiver. The receiver need merely respond
in
some proportional way, for the surveilluminescent phenomenon to occur.
Sweeping
the lamp Li throughout a room while sweeping a tuning dial on unit 411 through

various frequencies will result in a pickup of a signal when the receiving
frequency is
slightly off to one side of the transmit frequency and therefore the FM
transmitter
goes stronger or weaker into the band of interest and therefore video feedback
will
still occur on one particular side of the transmit frequency (the side that
results in
positive feedback).
More generally, when a TV signal is scrambled (e.g. by removing or inverting
sync pulses), the invention still works quite well, since it is not necessary
to decode
the video signal in order to get feedback to occur.
Within the world of digital TV, and encryption, the problem of differential
cryp-
tography is often a simpler problem to solve, and therefore solving bug
sweeping
surveilluminescence is simpler than solving the more general signal decryption
prob-
lem.
FIG. 4b depicts an embodiment of the invention used to generate an ayinograph™ or ayinogram™ (i.e. an abakograph or abakogram of a biological eye such as a human eye and associated human visual system). Here Unit 411 is an eye test sensor comprised of one or more of the following:
  • an EEG (Electroencephalogram) VEP (Visual Evoked Potentials) sensor for sensing from a brain of a user or subject with associated biological eye 400 under test;
  • a human interface unit for receiving input from a human subject under test;
  • an eyeshine sensor for sensing a retroreflective property of a biological eye under test.
Here field 101 is a sightfield of a biological eye, such as a human eye. An ayinogram is a recording of what a person can see, i.e. it allows people to see what they can see. Seeing sight itself is useful for various reasons, ranging from meta-artistic curiosity and research, to the practical elements of eyeglass design, to a new eye test that is more comprehensive than any other eye test known to humankind, and whose results are visible and comprehensible to nearly anyone. In contrast to a standard eye test result which only shows visual acuity in the foveal (central) region, the ayinogram and ayinograph show visual acuity over the entire field of sight, i.e. it shows foveal as well as peripheral visual acuity, and everywhere in between.
The ayinogram or ayinograph are generated through the use of a metasensor. A
metasensor is a sensor that senses sensing, or a device that sees sight, or
that visualizes
visualization. An example of a metasensor is an abakographic (e.g. long-
exposure
or integrated exposure or multi-exposure or simulated multi-exposure) camera
that
captures surveilluminescence or veilluminescent data. The metasensor captures
the
veilluminescent data, i.e. the metasensor senses sensing and captures the
sensing of
the sensing. A metameasurement is such data pertaining to measuring
measurement,
sensing sensing, seeing sight, visualizing vision, or the like, and
metameasurement can
also include such measurements as seeing hearing, visualizing hearing, or
visualizing
other sensing, for example.
The ayinogram of the invention can be generated by various ways. In one embodi-

ment, a PixStix Abakographer ("Veillance Wand™") is moved by a robotic arm,
and
held a certain distance from the subject's eyes. Eyes can be tested one-at-a-
time. For
example, let us consider a test of a subject's oculus dexter (right eye). The
oculus
sinister (left eye) is covered, while the test is done at various distances from the eye.
from the eye.
The robotic arm may be provided in an eye test booth set up in shopping malls,
or the
like, where a user can insert a coin, and do a quick self-test, for fun, or
for practical
use, or perhaps for an early warning of eye problems in which case a more
careful test
can be done in an eye doctor's office or optometrist's office, with and
without glasses,
etc., perhaps using a more sophisticated version of the invention.
In the photo booth-like apparatus, once the user is seated, and draws a black curtain across the doorway, and inserts a coin, the robotic arm swings in close to the eye, and the lights "chase" from outwards to inwards. Initially lamps L1 and L8 light up, and if the user can see them, a button is pressed by the user by pressing a button on unit 411. If no button press is detected by unit 411, lamps L2 and L7 are lit, and so on, until the user can see, through peripheral vision, the lamps. Then when the lamps are visible, and the user presses the button on unit 411, the lamps in between light up. So for example, if the subject user sees first in the periphery when lamps L3 and L6 are lit, then a processor illuminates a bar of light by turning on also the lights in between. Whereas only lamps L3 and L6 were lit during the test, during the next stage, which we call the abakographic stage, lamps L3, L4, L5, and L6 are lit, to make a bar of light, showing as a horizontal white bar L3 to L6.
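A minimal sketch of this chase-and-mark procedure follows; the lamp-driving and button-reading functions are hypothetical placeholders, and 0-based indices 0..7 stand for lamps L1..L8.

```python
NUM_LAMPS = 8

def light_only(indices):
    # Placeholder: light exactly these lamp indices on the wand.
    pass

def button_pressed_within(timeout_s=2.0):
    # Placeholder: True if the subject presses the button on unit 411
    # within the timeout, indicating the lit lamps are visible.
    return False

def chase_and_mark_bar():
    """Chase the lamps from the outside in (L1/L8, then L2/L7, ...) until
    the subject reports seeing them, then light the whole bar in between
    for the abakographic exposure."""
    for step in range(NUM_LAMPS // 2):
        left, right = step, NUM_LAMPS - 1 - step
        light_only([left, right])
        if button_pressed_within():
            bar = list(range(left, right + 1))   # e.g. L3..L6 -> 4-lamp bar
            light_only(bar)
            return bar
    return []                                     # nothing seen at this distance
```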
An abakographic camera, not shown in this drawing, then records the strip of
four lamps L3-L6 forming the white horizontal bar of light, which is also thus
the
bar of sight. The robotic arm moves to a new position, i.e. further from the
eye 400.
The process repeats. So this time, further from the eye, the lights will
likely be seen
easier or earlier. At some further distance out, lamps L1 and L8 might not be
visible
but lamps L2 and L7 might become visible, in which case six lamps L2 through
L7

are lit to mark the next bar of sight. Still further out, lamps L1 and L8
become
visible. Alternatively, the robotic arm can display two particular lamps such
as L1
and L8, and move the bar away until those lamps first become visible, at which
point
the subject presses the button on unit 411 and a computer monitoring the
system
receives this input and lights up all 8 lights for the outermost bar of sight.
In some embodiments of the invention, the measurement process is decoupled
from the "dusting" (abakographic image generation) process, e.g. the robot mea-

sures at about six different distances from the subject, and then generates an
in-
terpolated sightfield which is either rendered in computer graphics, or,
preferably,
rendered abakographically, with a dense array of lights that sweep out a nice
smooth
abakograph, using sub-pixel precision in rendering with anti-aliasing (e.g.
controlling
the end pixels with continuously varying light levels).
In other embodiments the decoupling works with a hand-held implementation,
i.e.
the linear array of lamps is moved by hand and held at just a few different
distances to
measure the sightfield at those points and then with a radar or sonar or other
sensor
on the light stick, its position is determined and the pattern generated, and
then
swept out by hand to generate a more accurate or at least more precise
abakograph
with nice smoothly-varying pattern. When done by hand this results in a nice
visually
appealing art form, which is called and marketed as a "Soul Portrait™", i.e.
it is
often said that the eye is the "window to the soul".
In either case, whether as an accurate scientifically valid eye test or eye
map
or visual sightfield map, or as a new kind of artistic portrait, the ayinogram
and
ayinograph are not necessarily limited to black and white. The colour
sensitivity
of the eye is tested, measured, and also displayed in this manner, when
desired.
Alternatively, when it is found that all colours have roughly the same
sightfield (as
is often the case in subjects with normal vision), colours are used to encode
varying
sensitivities, in a pseudocolour scale which embodies an HDR (High Dynamic
Range)
ayinograph or ayinogram.
To do this, multiple measurements are taken. In one embodiment, a getting of high contrast is made by turning off the room lights so the subject is in a dark space, and maximizing the light levels of the test lamps (initially L1 and L8, then L2 and L7, and so on). Under these conditions, the eye can see the lamps even when they are way out on the periphery of the subject's sightfield.
Next the room lights are turned on, and the measurement process is repeated. Finally the test lamps L1, etc., are dimmed way down, so that they are harder to see, thus rendering them only visible in the central visual field.
In this way, three complete sets of measurements are taken. This data can then be rendered or "dusted" (photographically generated) with the colour blue for peripheral vision, red for central (foveal) vision, and green to represent in-between vision.
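A minimal sketch of rendering the three measurement sets as a pseudocolour (HDR-style) ayinograph follows, with the colour-channel assignments described above; the array shapes and normalization are illustrative assumptions.

```python
import numpy as np

def render_pseudocolour(dark_room, lights_on, dim_lamps):
    """Combine three sightfield measurement sets into an RGB image:
    blue  = peripheral sensitivity (dark room, bright lamps),
    green = in-between sensitivity (room lights on),
    red   = central/foveal sensitivity (dim lamps).
    Each input is a 2D float array in [0, 1] over the sightfield."""
    rgb = np.stack([dim_lamps, lights_on, dark_room], axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```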
Finally the invention is also used in critical applications like insurance, driver testing, pilot testing, and the like. In these kinds of applications it is desired that subjects not cheat, either deliberately or inadvertently. For example, when asking a subject to look straight ahead, fixating on an object midway between lamps L4 and L5, and at the same time saying when the light L1 or L8 is first visible in a periphery, many subjects tended to look directly at L1 or L8. Thus it was necessary to invent a way to make the test foolproof. In one embodiment an eye tracker is used to detect cheating. But a better embodiment was developed to actually make cheating impossible. To do this, a central foveal display lamp LC is installed in the center of the light strip. This lamp is smaller and dimmer than the others, and requires the user to look right at it, in order to see it. In one embodiment LC is a seven-segment LED display, which displays a number. Then in the periphery, lamp L1 is flashed a certain number of times. The user must indicate when the number displayed on LC is the same as the number of times that lamp L1 is flashed. In this way cheating is impossible, and also both sides (left and right) of the peripheral vision can be measured. Then the light stick can be rotated 90 degrees to measure vision top-to-bottom, as well as at other angles (diagonals, 30 degrees, or in 15 degree increments, or the like).
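A minimal sketch of the cheat-proof matching test described above follows; the display, flash, and response functions are hypothetical placeholders.

```python
import random

def show_on_central_display(number):
    # Placeholder: show `number` on the small, dim seven-segment lamp LC,
    # which is only legible when the subject fixates on it.
    pass

def flash_peripheral_lamp(times):
    # Placeholder: flash lamp L1 in the periphery `times` times.
    pass

def subject_reports_match():
    # Placeholder: True if the subject indicates the two numbers match.
    return False

def run_cheatproof_trial():
    # The subject can only answer correctly by simultaneously fixating on
    # LC (central vision) and counting the peripheral flashes.
    central = random.randint(1, 9)
    flashes = random.randint(1, 9)
    show_on_central_display(central)
    flash_peripheral_lamp(flashes)
    reported_match = subject_reports_match()
    return reported_match == (central == flashes)
```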
In another embodiment, lamp LC is a single "pixel" RGB (Red, Green, Blue) LED (Light Emitting Diode). Lamp L1 is illuminated with a particular colour. This is done using a 144, 288, or 300 LED strip, with RGB addressable lights. The user must indicate when the colours match. In another embodiment a lamp central display, LCD, is shown in the center of a subject's field-of-view, and the subject must read off small print in the form of a traditional eye test, while AT THE SAME TIME seeing something in the periphery.
FIG. 4c depicts an ayinogram made with a PixStix™ array of 32 lamps. At close range to eye 400, which is the oculus dexter (right eye) of the subject, only one lamp is visible when the array of lamps is right against the eye. So the user presses a button on unit 411 when the lights chase all the way to the center and only one is lit, thus building a database entry, and also exposing the first bar of the ayinogram which only has one light in it. As the array of lamps is pulled out, two and then three lamps become visible, at each point, tracing out a sightfield in field 101, as the subject presses the button on unit 411 each time.
When lamp L12 is visible at a certain distance out, with the lights chasing from the subject's right-to-left, the subject presses the button on unit 411, duly marking this location in 3D space. Then the lights chase the other way from the subject's left-to-right, when eventually lamp L22 is visible. The subject then pushes the button again. Now the computer causes eleven lamps to be lit, from lamp L12 through lamp L22, making an exposure for the ayinograph, of 11 lamps lit.
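A minimal sketch of this step, deriving the bar of lamps to light from the two button-press indices, follows; the function name is illustrative.

```python
def bar_from_presses(first_seen, second_seen):
    """Given the lamp indices at which the subject pressed the button while
    the lights chased in from each side (e.g. L12 and L22), return the full
    run of lamps to light for the abakographic exposure."""
    lo, hi = sorted((first_seen, second_seen))
    return list(range(lo, hi + 1))

# Example from the text: presses at L12 and L22 light lamps 12..22 (11 lamps).
assert len(bar_from_presses(12, 22)) == 11
```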
Then as the light stick moves or is moved further out, chasing from lamp L1, L2, L3, etc., up to lamp L9, which finally the subject can now see, to the right side of the subject's field of view, and the subject pushes the button. The lights next chase the other way, starting with lamp 32, 31, 30, and so on, until the left side of the subject's field of view is next reached, which happens at lamp L24. The subject pushes the button at this point, to imprint the sightfield bar of 15 lamps.
The subject continues until the full 32 lamps are visible and then the process

terminates. Typically in modern ayinography, the number of lamps is 144 or 288

(two 144 pixel light strips end-to-end for a total of a two-metre long light
strip that
is 288 lights long).
The sensing embodiments and aspects of the invention may be referred to as the ayinometer™, giving rise to an eye test for the 21st Century.
In some sense this renders traditional optometry obsolete! Ayinometry lets people actually see sight and visualize vision. Brand names for related goods and services may include: Ayinograph™, Ayinogram™, Ayinometrist™, Ayinometry™.
"Ayinometer" = "ayin" (eye) + "or" (light) + "med" (gauge, measure, or indicator), which would be written right-to-left when carved into stone.
These words, coined by S. Mann, derive from the letter "ayin" of many ancient alphabets, which means "eye". The earliest known alphabet has 22 letters in it, and each letter was derived from a hieroglyph. Each letter, by itself, was a picture of the object it sounded like. In fact the word "alphabet" comes from the first two letters, "alpha" (or "aleph"), which means "ox", and "bet" which means "house" (and is actually a pictorial drawing of a house). Each letter had an important meaning, e.g. the fourth letter, "dalet" means "door", the 13th letter, "mem" means "water" (and evolved into our letter "M" which still looks a bit like the wavy line for the hieroglyph of water), and the 16th letter "ayin" means "eye". Each letter means something important and fundamental to all of human civilization. The letter "ayin", in early writing, looks like a human eye. This letter evolved into our letter "O", which still, after thousands of years, somewhat resembles the eye. To this day, "ayin" in Hebrew, Arabic and Maltese still means "eye" (Wikipedia, http://en.wikipedia.org/wiki/Ayin).
Summary explanation (how it works): Traditional optometers measure how the eye focuses, and, additionally, optometrists measure visual acuity in a central (foveal) visual field of view.
Optometry has remained constant for many years, with the modern computerized
machines giving much the same results as their early non-computerized
counterparts.
The ayinometer can totally revolutionize eye care, by making the ayinogram
the new standard-of-care for eye testing. The ayinogram is constructed by
actually
measuring what a person (or other animal) can see, throughout the entire field
of
vision. Ayinometrics is a complete eye test for the entire field of vision,
not just the
central field of vision.
Moreover, the ayinogram can be used to generate an ayinograph, which is a
visual-
ization of what a person can see. Unlike the Latin words and numbers on a
traditional
eye test result or prescription, the ayinograph is easy for the layperson to
understand.
It is this easily comprehensible eye map that will revolutionize optometry.
Subjects
can overlay their ayinographs with earlier maps, or they can compare with
others,
e.g. to understand their relative eye map and how it has evolved over time. If
taken
regularly, the ayinograph becomes a movie or motion picture such as an
animated .gif
file that makes it easy for people to understand how their eyesight has
evolved over
time with age.
With the growing population of elderly, there is a need for a new eye test
that is
easy for people to see, visualize, and immediately understand.
Example commercial applications: This invention was originally created for
visual
art to picture the otherwise hidden world of eyesight and vision. As a
commercial
product the ayinometer can be used by eye doctors, optometrists, insurance
profes-
sionals, departments of motor vehicles, airline pilot testing bureaus, and
others with
a need to measure, quantify, and communicate human eyesight.
Another embodiment of the invention is an eye test booth, kind of like a photo

booth. The coin or credit card operated ayinometry booth is for being
installed in
shopping malls where people can pay a small fee (one or two dollars) to get a
fun
eye test they can easily understand. If they have further concern, they can go
to
a professional to have their ayinogram and ayinograph done by a professional eye
specialist. One way to market the product initially is as fine art. Since ayinographs
are beautiful portraits, in some way, as art value, they can be marketed, under a
name like "Soul Portraits™", since the eye is often said to be the "window to the
soul". In this way, regulatory issues can be dealt with later, while making quick initial
profit from the mere art value of the invention, before it becomes widely
tested and
accredited and verified in the scientific community. A quick informal eye test
that is
fun, playful, as visual art, will be an immediate point-of-entry into the
marketplace.
Combining the ayinograph with biometrics, such as an iris or retinal scan, can provide a
very detailed eye test that can be fully automated and self-logging, so a person can
scan in one booth and "connect" with their previous ayinographs even while remaining
totally anonymous. For example, after paying cash in one booth and not entering any
identifying information, a subject may scan in another booth elsewhere, even in another
country, and can be alerted to a change in eyesight by way of the product or service
making the connection between the present ayinograph and one taken previously of
the same person elsewhere.
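A minimal sketch of such anonymous record linking follows, in Python, for illustration only. Hashing a fixed iris template into a pseudonymous key is an assumption (practical biometric matching is fuzzy rather than exact), and the registry class and its field names are hypothetical.

import hashlib

def pseudonymous_key(iris_template: bytes) -> str:
    """Derive an opaque identifier from a biometric template; no name or
    account is stored, only the hash of the template."""
    return hashlib.sha256(iris_template).hexdigest()

class AyinographRegistry:
    def __init__(self):
        self._records = {}  # pseudonymous key -> list of (date, ayinogram blob)

    def submit(self, iris_template: bytes, date: str, ayinogram: bytes):
        """File a new ayinogram and return the subject's prior history so the
        booth can flag any change in eyesight."""
        key = pseudonymous_key(iris_template)
        history = self._records.setdefault(key, [])
        history.append((date, ayinogram))
        return history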
Other potential markets include gaming, e.g. "sight games" in which people need
to see and recognize faces without getting caught staring. There is a useful training
and eyesight development aspect to fast-action games like this, where staring too long
results in a penalty. This game teaches people visual memory skills and helps
them
develop the capacity to aggregate large quantities of visual information in a
very brief
instant. These are the kinds of games that can help autistic children develop
social

skills, as well as help those with visual memory impairment.
Ayinography combined with eye-tracking gives realtime visualization of what
peo-
ple are looking at. In 3D environments such as Unity™, we can not only see and be
seen,
but we can also see seeing and be seen seeing.
In other embodiments, a passive sensing wand has an array of sensors and
emitters,
either separate, or combined such that the sensor is also the emitter. The sensor senses in one
"getting",
such as an infrared getting, and emits in another "giving", such as a visible
giving.
The getting and the giving are space-division multiplexed, time-division
multiplexed,
frequency-division multiplexed, or the like. In one embodiment, the getting is
in the
infrared, and the giving is in the visible spectrum of light. In this way,
the wand
senses the infrared light given off by a surveillance camera, and makes that
infrared
light visible by emitting visible light to an abakographic camera wherever
the wand
senses infrared light. In one embodiment, an array of 32 infrared sensors on
the wand
is interleaved with 32 visible light emitters, so that each infrared sensor
controls a
corresponding visible light emitter. In one embodiment, the emitters are LEDs
(Light
Emitting Diodes) that have near zero emission in the infrared part of the
spectrum
to which the sensors are responsive. In this way, the wand does not feed back on
itself, and
instead responds to the infrared illumination present in many surveillance
cameras.
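A minimal sketch of this getting-to-giving mapping follows, in Python, for illustration. The read_ir and set_led functions and the threshold value are hypothetical stand-ins for whatever ADC and LED-driver hardware the wand actually uses.

N_ELEMENTS = 32
IR_THRESHOLD = 0.2   # assumed normalized sensor reading above which we emit

def update_wand(read_ir, set_led):
    """One update cycle: sample every infrared sensor and light the paired
    visible LED in proportion to the sensed infrared intensity."""
    for i in range(N_ELEMENTS):
        ir_level = read_ir(i)                  # 0.0 .. 1.0
        brightness = ir_level if ir_level >= IR_THRESHOLD else 0.0
        set_led(i, brightness)                 # 0.0 = off, 1.0 = full visible output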
In another embodiment, the wand's emitters emit in both bands, including an
infrared band to which the wand's sensors are also sensitive. In this way, the
emitters and detectors comprise a form of eye detection, eye tracking, or eyeshine
(retroreflective) sensing that senses human or animal vision, and traces this
vision out
in visible light during a long exposure photograph or the like.
In some embodiments, the invention is made using nano or bio materials, or
holography. The ayinometer may be built as a holographic video HUD (Head Up
Display) for use in eyeglasses and automobile windshields, etc., as shown:
[Figure: MannGlass bionanoholography proof-of-concept test strip for Digital Eye Glass,
showing infrared sensors and blue emitters with 5-volt wiring to the next input strip
in sequence; 1 element per mm, N=256, total length 25.5 cm.]
FIG. 5 depicts an embodiment of the invention used as a children's toy. An
abako-
graphic surface 500 is a surface that can be written on using an abakographic
imple-
ment 110. Implement 110 is a form of stylus or similar writing instrument.
Attached
to the implement is an abakographic transmitter 114. A satisfactory
abakographic
transmitter is a tungsten light bulb, such as a 6-volt light bulb, with one
electrical
end (terminal) connected to a battery 520 and the other electrical end
(terminal) of
the light bulb connected to the implement 110, wherein implement 110 is made
of
electrically conductive material.
In one embodiment the implement 110 is a graphite pencil or graphite rod, and
the surface 500 is a brushed aluminium plate upon which the user can write
with
the implement 110. In another embodiment, the surface 500 is a sandbox filled
with
electrically conductive sand (such as wet sand, wetted with salt water), or
with metal
powder, such that the user can use implement 110 to draw in the sand or powder
or
other dust. A combination of these is also possible, e.g. an aluminium
plate covered
with conductive dust.
The writing experience thus resembles the ancient writings of Archimedes and
other Greek mathematicians who used dust or sand upon a floor as their writing

surface.
The writing surface 500 itself is also grounded, by ground connection 510, so
that
when the stylus such as implement 110 touches the writing surface, an electric
circuit
is completed through the tip of the stylus to illuminate the light bulb.
In this way, whatever is drawn on the dust is also "painted" with light to
abako-
graphic camera 190.
The gap between the conductive stylus and the surface 500 comprises both phe-
nomenon sensor 111 and phenomenon effector 112, in the sense that the stylus
generates the phenomenon (i.e. drawing in the sand) and senses the phenomenon

(i.e. when the drawing is taking place).
In this way the lightpainting will mimic what is drawn, such that the user can
see
what has already been captured in the lightpainting.
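In software terms, the toy's behaviour may be sketched as follows, in Python, purely for illustration; contact_closed, set_bulb, and the stroke log are hypothetical hooks standing in for the physical circuit of FIG. 5.

def trace_stroke(contact_closed, set_bulb, stroke_log):
    """Light the bulb exactly while the conductive stylus touches the grounded
    surface, so the abakographic camera records what is being drawn."""
    touching = contact_closed()   # circuit through the stylus tip is complete
    set_bulb(touching)            # transmitter 114 on only while drawing
    if touching:
        stroke_log.append("pen down")  # the lightpainting mirrors the drawing
    return touching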
FIG. 6 illustrates an abakographic visualizer for visualizing radio waves,
such as
RADAR (RAdio Detection And Ranging) waves. Phenomenology 100 is electromagnetic
waves from a RADAR device. A radar unit 600 emits radio waves toward sensor
111. Processor 150 receives the radio waves and retransmits a signal through
effector
112. In this sense sensor 111 and effector 112 comprise a transponder which
reflects
the radar waves back to radar 600. The transponder is attached to implement
110
such that moving implement 110 through the space "paints" out the radar waves
as
standing waves made visible in the space. Preferably, transmitter 114 is an array of
LEDs, such as a linear array, that spatializes the RADAR wave to make it

visible to camera 190. For example, a bargraph display is used in a simple
embodiment of the invention to make visible the actual radio wave (not merely
its
envelope).
In another embodiment, the transponder is replaced with any other object such
as the user's own body, or a housing of implement 110. In this simpler
embodiment,
the RADAR signal comes from the RADAR unit 600 to the processor 150.
A satisfactory RADAR unit is a Gunnplexer radar made from a Gunn diode,
transmitting at 24.360 GHz. Separate real and imaginary (in-phase and
quadrature)
signals may be sent from unit 600 to processor 150, which then displays these
signals
on a one-dimensional display comprised of a linear array of pixels in
transmitter 114
for "painting" across the space seen by camera 190.
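One possible processing step for this display is sketched below in Python. The array length, the full-scale value, and the choice to map only the in-phase channel onto the bargraph are assumptions made for the example.

import numpy as np

N_LEDS = 64  # assumed length of the linear array in transmitter 114

def bargraph(sample: float, full_scale: float) -> np.ndarray:
    """Map a signed sample (-full_scale..+full_scale) to LED on/off states,
    with the array midpoint representing zero."""
    centre = N_LEDS // 2
    span = int(round((sample / full_scale) * centre))
    leds = np.zeros(N_LEDS, dtype=bool)
    if span >= 0:
        leds[centre:centre + span] = True
    else:
        leds[centre + span:centre] = True
    return leds

def paint_waveform(i_samples, q_samples, full_scale=1.0):
    """Yield LED frames so that sweeping the wand paints the actual radio
    waveform (not just its envelope); here the in-phase channel sets the bar,
    and the quadrature channel could modulate colour in a richer design."""
    for i_val, _q_val in zip(i_samples, q_samples):
        yield bargraph(i_val, full_scale)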
FIG. 7 illustrates an abakographic display visualization system using abako-
graphic display 700, which may be a real or virtual display. In some
embodiments
display 700 is a real display such as a projection screen upon which a data
projector
790, in tandem with camera 190, displays the actual synthesized long exposure
from
camera 190, so that the user can see the lightpainting as it is being made.
A proximity sensor in implement 110 adjusts transmitter 114, in proportion to
an
aspect of the proximity.
A noise-gate feature displays frames of live updated video of the running
total
(photoquantimetric sum) of the abakographic exposure, in a way that is
interleaved
with capture from camera 190. For example, camera 190 captures in non-
overlapping
gettings (times of exposure sensitivity) and the display is pulsed between
these get-
tings, so as to reduce video feedback.
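A minimal sketch of this interleaving follows, in Python, for illustration; capture_frame and project are hypothetical stand-ins for the camera 190 and projector 790 interfaces, and the frame shape is assumed.

import numpy as np

def abakographic_loop(capture_frame, project, n_frames, shape=(480, 640)):
    """Accumulate a running photoquantimetric (linear-light) sum of exposures,
    alternating capture and display so the projected feedback never pollutes
    the getting it summarises."""
    running_total = np.zeros(shape, dtype=np.float64)
    for _ in range(n_frames):
        frame = capture_frame()        # getting: projector assumed dark here
        running_total += frame         # running total of the abakographic exposure
        project(running_total)         # giving: shown between gettings
    return running_total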
In other embodiments, the display 700 is a virtual display, and the user
"paints
with light" by moving around in 3D (3 dimensional) space, while viewing the
light-
paintings on a display 700 implemented in a digital eye glass. The digital eye
glass
may therefore include the proximity sensing function of implement 110 to
modulate
light source 114 in proportion to a proximity to a virtual plane in the 3D
world.
From the foregoing description, it will thus be evident that the present inven-

tion provides a design for a feedback-based lightpainting, data entry, data visualization,
sensing, measurement, and visual art system. As various changes can be made in

the above embodiments and operating methods without departing from the spirit
or
scope of the invention, it is intended that all matter contained in the above
descrip-
tion or shown in the accompanying drawings should be interpreted as
illustrative and
not in a limiting sense.
Variations or modifications to the design and construction of this invention,
within
the scope of the invention, may occur to those skilled in the art upon
reviewing
the disclosure herein. Such variations or modifications, if within the spirit
of this
invention, are intended to be encompassed within the scope of any claims to
patent
protection issuing upon this invention.
FIG. 8a illustrates a system and process to measure the concentration of
information-
bearing sensitivity from a sensor, occurring at a remote location being sensed
by that
sensor. This system is called scanning vixel principal component emission
analysis.
A stimulus signal is introduced at a sequence of points in the region of space
being
tested (8a-01) of arbitrary shape. For example, to test optical sensing
specifically, we
use a LASER (Light Amplification by Stimulated Emission of Radiation) device, LED
light source, light bulb, or other light emitter which illuminates an area of
radius
(8a-02) smaller than the expected distance between pixels. A sequence of
points
is illuminated. This sequence can be along a track, as shown by (T1) and (T2),
as a subset of the entire surface region under test (S1). A recording is
simultaneously
made of the sensor-under-test's response to the sequence of stimuli (8a-10, 8a-11). The
recording is then fed through a background noise subtracter (8a-12), and the
resulting
de-noised signal is fed into an eigen-analysis system, device, or process (8a-
13), that,
for example, performs PCA (principal component analysis). The output of the
PCA
is then fed to a classifier (8a-14) to determine the number of salient PCs
(principal
components) in the signal.
This process is replicated for the number of orthogonal dimensions in the
space
being tested. For example, when testing two-dimensional veillance, a second
sequence
of tests is performed (8a-15). Finally, the salient PC output metrics are
combined (8a-
16) to identify the number of salient linearly independent (non-degenerate)
sensor-
element (e.g. pixel) vectors activated by the region being tested; that is, loosely
speaking, the amount of information expressed in the sensor sensitivity impinging on
the region. This final step is illustrated in FIG. 8b.
FIG. 8b depicts the final step of scanning-vixel principal component emission
analysis. From the PC outputs (8b-01, 8b-02), the salient PC output metrics (8b-04, 8b-05)
are determined from a threshold (8b-03) and are combined to identify and estimate
the
number of salient linearly independent (non-degenerate) sensor-element (e.g.
pixel)
vectors activated by the region being tested (8b-06); that is, loosely speaking, the
amount of information expressed in the sensor sensitivity impinging on the region-
under-
test.
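An illustrative Python sketch of this analysis follows. The layout of the recording matrix (one row per stimulus point, one column per response sample), the SVD-based PCA, the threshold ratio, and the multiplicative combination across scan directions are assumptions made for the example, not the only possible implementation.

import numpy as np

def salient_component_count(recording, noise_floor, threshold_ratio=0.05):
    """Denoise the recording, run PCA, and count the salient principal components."""
    denoised = recording - noise_floor            # background noise subtracter (8a-12)
    centred = denoised - denoised.mean(axis=0)
    # PCA via singular value decomposition (eigen-analysis, 8a-13)
    singular_values = np.linalg.svd(centred, compute_uv=False)
    variances = singular_values ** 2
    # Classifier / threshold (8a-14, 8b-03): keep components above a fraction of the total
    salient = variances > threshold_ratio * variances.sum()
    return int(np.count_nonzero(salient))

def combined_veillance_estimate(recordings, noise_floors):
    """Combine salient-PC counts from each orthogonal scan direction (8a-16, 8b-06),
    giving a rough count of linearly independent pixel vectors activated by the
    region under test. Multiplying the per-direction counts is an assumption."""
    counts = [salient_component_count(r, n) for r, n in zip(recordings, noise_floors)]
    return int(np.prod(counts))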
FIG. 9a illustrates a system for asymptotic sensory emission testing, which is
a new type of vision test and hearing test for human subjects and for manmade
sensors, to accurately detect and render vision fields and hearing fields in
3D space.
This system, for example, can test the human eye's concentration of resolution
across
the entire field of view.
In particular, by testing emissions of information-sensitivity over a spatial
field,
this system can also test how a subject's visual resolution changes while
wearing
glasses, contact lenses, or while looking through a variety of optical devices
that may
blur, reflect, refract, or distort in various patterns, and can render that
emission field in
3D space.
Similarly, this system forms a new type of hearing test, which can determine
and render (e.g. display or visualize) a hearing emission field in 3D space,
even
when hearing is disturbed by devices that attenuate, reflect, refract, or distort
sound.
For example, the system can determine how the 3D sensory hearing emission
field
changes when a patient starts wearing a hearing aid, or changes hearing aids, or
or
wears attenuating ear plugs, as compared to wearing nothing at all.
In a vision test, parameters varied include:
• spatial position of stimuli
• spatial separation of stimuli
• size of stimuli
• shape of stimuli
• brightness of stimuli
• frequency of oscillation and waveform of oscillation of stimuli.
In a hearing test, parameters varied include:
• spatial position of stimuli
• time-separation of stimuli
• duration of stimuli
• frequency-separation of stimuli
• fundamental frequency of stimuli
• bandwidth of stimuli
• modulation of stimuli (including amplitude modulation and frequency modulation).
FIG. 9b illustrates visual field stimuli, when the asymptotic sensory emission

test is implemented as a visual field test. A test subject (human patient,
other
organism, other naturally occurring sensory process, or manmade sensor) (9b-
01) is
positioned in front of a display (stimulus device) (9b-03). The test subject's
field
of view (9b-02) is positioned such that it either directly faces the display,
or some
portion of it is reflected or refracted in order to fall on the display. A
crosshair or
other alignment symbol (9b-04) appears, and the test subject is directed to focus
on the centering point. A test symbol (9b-05) appears to determine through the
test-subject's response whether the center of the field-of-view was actually
centered
at (9b-04). A field-testing stimulus area is activated (9b-06), with stimulus
symbol
(9b-07). An additional stimulus symbol (9b-08) may be activated by the system
according to a random seed, and the test subject is directed to indicate, via the
response interface, what features of the stimulus region were observed.
Additional
features of the stimulus are varied, as listed above.
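One trial of such a visual-field test may be sketched as follows, in Python, for illustration only. The parameter ranges, the Stimulus fields, and the display and response stubs are hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Stimulus:
    x: float           # horizontal position, degrees from fixation
    y: float           # vertical position, degrees from fixation
    size: float        # angular size in degrees
    brightness: float  # normalized 0..1

def random_stimulus(rng: random.Random, field_radius_deg=60.0) -> Stimulus:
    """Draw one stimulus from assumed parameter ranges (see the lists above)."""
    return Stimulus(
        x=rng.uniform(-field_radius_deg, field_radius_deg),
        y=rng.uniform(-field_radius_deg, field_radius_deg),
        size=rng.uniform(0.1, 2.0),
        brightness=rng.uniform(0.05, 1.0),
    )

def run_trial(show_fixation, show_stimulus, get_response, rng) -> tuple:
    show_fixation()                 # crosshair (9b-04); subject fixates on centre
    stim = random_stimulus(rng)     # stimulus symbol (9b-07) or (9b-08)
    show_stimulus(stim)
    seen = get_response()           # subject indicates what was observed
    return stim, seen               # one point of the sensory emission field map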
FIG. 10 depicts a system to visualize sensing emissions in 3D augmediated
reality,
to "see sight" and "visualize vision".
Generalized signal flow of Veillance AR (Augmediated Reality) is based on vi-
sion and IMU (inertial measurement unit) data, to be able to track cameras and

participants' bodies for perspective rendering.
INITIAL CALIBRATION OF THE AR ENVIRONMENT: AR setup involves
the placement of cameras and the initial test and measurement of their
veillance flux.
We employ a combination of dome-enclosure cameras, bracket-mounted cameras,
and
handheld cameras operated by users.
To first detect veillance flux emitted by those cameras, we use a combination
of
veillametrics and field-of-view detection based on an array of LEDs with a
video
feedback loop, in a "video bug sweeper", analogous to the audio bug sweepers
used to
detect hidden microphones. Abakography is then used to render a 2D
visualization if
desired (e.g. Fig. 1a). Finally, to prepare for AR rendering, veillance flux
emitted by
each camera-under-test is vectorized, by marking its edges using a handheld
marker
beacon and a 3D depth sensor.
EGOGRAPHIC AND EXTROGRAPHIC AUGMEDIATED REALITY TO VI-
SUALIZE THE VEILLANCE FIELD: Rendering veillance flux in 3D, as well as mark-
ing and tracking it spatially, is performed in our system using both
egographic (body-
mounted, outward facing) depth sensors, and extrographic (environment-mounted)

sensors. For an egographic sensor we use the 3D depth-sensing Meta 1 glasses
(worn
on the head to track camera/beacon positions), which also serve as a see-
through AR
display. For an extrographic sensor, we have two implementations: one design
uses a
tablet computer programmed to optically recognize and track the motion of
cameras-
under-test directly for deducing the motion of veillance (Fig. 3); the other
design
uses a stationary 3D camera to track users' bodies in absolute position. The
relative
position vector from the egographic sensor is added to the absolute position
vector
from the extrographic body sensor, to give a final absolute position, during
the cali-
bration stage and during the real-time AR experience when tracking stationary
and
moving cameras. Once veillance-field calibration is complete, the AR ex-
perience can
begin. Veillance emissions from cameras are visualized along with markup
statistics
(Fig. 3). Users can also point their own head-worn cameras at others to
photograph
them (i.e. to shoot a photo and emit veillance flux). The system tracks the
position
of the head-worn cameras using the inertial measurement unit (IMU) in each set
of
Meta glasses. Augmediated reality (AR) veillance flux is rendered through each
user's
AR display from the perspective of his/her current position, rendered
stereoscopically
using Unity3D, orienting in space in real-time through a combination of IMU
readings
and optical tracking.
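The position-fusion step described above may be sketched as follows, in Python, for illustration. Rotations between the two sensors' coordinate frames are omitted for simplicity, and the example positions are invented.

import numpy as np

def camera_absolute_position(body_absolute: np.ndarray,
                             camera_relative_to_body: np.ndarray) -> np.ndarray:
    """Add the relative vector from the egographic (head-worn) sensor to the
    absolute vector from the extrographic body sensor, both 3-vectors in metres,
    to obtain the camera-under-test's absolute position for AR rendering."""
    return body_absolute + camera_relative_to_body

# Example: the user stands at (2, 0, 1) m; the head-worn sensor sees a camera
# 0.5 m ahead and 1.2 m above, relative to the user.
if __name__ == "__main__":
    print(camera_absolute_position(np.array([2.0, 0.0, 1.0]),
                                   np.array([0.5, 0.0, 1.2])))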
FIG. 11 depicts a veillance field dosimeter, implemented with an electronic
circuit, to measure exposure to inverse light: that is, measuring how much a
user has
"been seen".
The dosimeter measures the time-integrated veillance field, that is, the concen-
tration of sensitivity falling on the user from various sensors, including surveillance
cameras, sound recording devices, etc.
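A minimal sketch of the dose computation follows, in Python, for illustration; the sensitivity samples, their units, and the sampling interval are assumptions.

def veillance_dose(sensitivity_samples, dt_seconds):
    """Time-integrate the veillance field (total 'having been seen' exposure),
    approximating the integral of sensitivity over time by a Riemann sum."""
    return sum(s * dt_seconds for s in sensitivity_samples)

# Example: ten seconds sampled at 1 Hz under a camera of constant sensitivity.
print(veillance_dose([0.8] * 10, dt_seconds=1.0))  # ~8.0 unit-seconds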
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title Date
Forecasted Issue Date Unavailable
(22) Filed 2014-12-08
(41) Open to Public Inspection 2016-06-08
Dead Application 2016-12-08

Abandonment History

Abandonment Date Reason Reinstatement Date
2015-12-08 Failure to respond to sec. 37

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $200.00 2014-12-08
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
MANN, WILLIAM STEPHEN GEORGE
JANZEN, RYAN EDWARD
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

List of published and non-published patent-specific documents on the CPD.



Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Abstract 2014-12-08 1 23
Description 2014-12-08 28 1,521
Claims 2014-12-08 4 146
Drawings 2014-12-08 18 420
Representative Drawing 2016-05-11 1 13
Representative Drawing 2016-06-16 1 12
Cover Page 2016-06-16 2 54
Assignment 2014-12-08 2 118
Correspondence 2014-12-18 1 32
Correspondence Related to Formalities 2016-04-15 2 102