Patent Summary 3028749


Availability of the Abstract and Claims

Whether differences appear in the text and image of the Claims and Abstract depends on when the document is published. The texts of the Claims and Abstract are displayed:

  • when the application is open to public inspection;
  • when the patent is issued (grant).
(12) Patent Application: (11) CA 3028749
(54) French Title: SYSTEME, MECANISME ET APPAREIL DE SANTE, DE BIEN-ETRE ET DE CONDITIONNEMENT PHYSIQUE FONDES SUR LA KINESIOLOGIE INTEGREE, OU SIMILAIRE
(54) English Title: HEALTH, WELLNESS, AND FITNESS SYSTEM, MEANS, AND APPARATUS BASED ON INTEGRAL KINESIOLOGY, OR THE LIKE
Status: Deemed abandoned and beyond the time limit for reinstatement; awaiting a response to the notice of rejected communication
Bibliographic Data
(51) International Patent Classification (IPC):
  • A63B 69/00 (2006.01)
  • A61H 9/00 (2006.01)
  • A63F 13/46 (2014.01)
  • A63F 13/80 (2014.01)
  • E04H 4/00 (2006.01)
  • G06F 3/033 (2013.01)
  • G06N 20/00 (2019.01)
  • G16Z 99/00 (2019.01)
  • H04W 4/30 (2018.01)
  • H04W 88/02 (2009.01)
(72) Inventors:
  • MANN, STEVE (Canada)
(73) Owners:
  • STEVE MANN
(71) Applicants:
  • STEVE MANN (Canada)
(74) Agent:
(74) Associate agent:
(45) Issued:
(22) Filed: 2018-12-31
(41) Open to Public Inspection: 2020-06-30
Availability of licence: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data: N/A

Abstracts

Sorry, abstracts for patent document number 3028749 could not be found.

Claims

Note: Claims are shown in the official language in which they were submitted.

Sorry, claims for patent document number 3028749 could not be found.
Texts are not available for all patent documents. The range of dates covered is available in the Currency of Information section.

Description

Note: Descriptions are shown in the official language in which they were submitted.


BUREAU REGIONAL DE L'OPIC
TORONTO
CIPO REGIONAL OFFICE
DEC 31 2018
Health, wellness, and fitness system, means, and apparatus
based on integral kinesiology, or the like
Steve Mann
WearTech™ Mannlab™, 135 Churchill Avenue, Palo Alto, 94301
http://www.mannlab.com
http://www.weartech.org
2017 December 6
Abstract
A health, wellness, and fitness system, means, apparatus, or the like is described. In one embodiment, a spa or waterpark facility is provided in which participants receive wellness or fitness training, as well as education and understanding of the world in terms of hydraulic phenomena such as hydraulic head. In another embodiment, an immersive virtual reality system is used to provide biofeedback within an immersive environment such as a pool, spa, bath, or flotation tank. In some embodiments, the spa is designed so that the user does not need to undress, such as by way of a membrane separating the user from the hydraulic fluid. In some embodiments, a fluid is supplied under pressure, separated from the user, such as to provide massage or pressure or therapy for autism, such as, for example, an autistic user receiving a pressurized squeezement while entrained in biofeedback-based meditation. In other embodiments, an optionally robotic fitness training means, apparatus, and method embodies principles of integral kinesiology to dramatically improve fitness. A non-robotic version is also disclosed. Both versions work similarly and are therefore interchangeable, so as to allow widespread use. A user of the non-robotic version can upgrade to the robotic version, or, while traveling, a person accustomed to the robotic version can temporarily downgrade to the non-robotic version without substantial loss of integriness™ (integral fitness, i.e. integral kinesiological fitness). Moreover, the robotic version falls back to the non-robotic version in the event of power loss or malfunction, so that it can continue to operate and continue to provide integral kinesiological fitness training.
CA 3028749 2018-12-31

Priority claim: The author wishes to claim priority in regards to a United States provisional patent application entitled "Undigital Cyborg Craft: A Manifesto on Sousveillant Systems for Machine Integrity" filed 2016 December 01, or "12/02/2016", Application number 62/497,780, and CIPO (Canadian Intellectual Property Office) application entitled "Means, apparatus, and method for Humanistic Intelligence, Undigital Cyborg Craft, and Sousveillant Systems for Machine Integrity", Follow-up Number 16-12-1638, filed 2016 December 29th, as well as United States provisional patent application (eFiler) entitled "Computational Seeing Aid, and Sensory Computational Means, Apparatus, and Method", EFS ID 29863521, Application Number 62535977, Confirmation Number 1400, inventor Steve WILLIAM STEPHEN GEORGE MANN, submission at 05:44:34 EST on 24-JUL-2017.
1. Seeing and understanding the world
Many of my inventions pertain to helping people, i.e. in support of the IEEE
slogan "Advancing technology for
humanity".
The goal of Mannlab is to make the world a better place, and help people be,
become, and remain fit both
physically and mentally. For example, we can help people see and understand
their world, and this is one step
toward physical and mental fitness, wellbeing, independence, and quality-of-
life.
A good seeing aid helps people see, understand, remember, and share their
surroundings, and navigate the
realities around them.
Advances in sensing technology, combined with AI (Artificial Intelligence) and
machine learning, have
created devices and systems that sense our every movement in intricate detail.
Society is evolving toward a world
of surveillant systems: machines that sense us more intimately, yet reveal
less about their internal states and
functions. The modern "user-friendly" trend is to hide functionality and
machine state variables to "simplify"
operation for end-users. While this "dumbed down" (and "dumbing down")
technology trend is supported by
most of society, there is a risk that it undermines the natural scientific
curiosity and comprehensibility of some
end-users, leading them away from trying to understand our technological
world, toward a world of reduced fitness,
and reduced capacity to think independently, and to make great new
discoveries.
Surveillant systems work against users of intellect, excluding them from
participating fully in technological
progress, and possibly driving some users away from logical thinking and
toward technopaganist "witchcraft" and
insanity.
To reverse this harmful trend, Minsky, Kurzweil, and Mann proposed the use of
HI (Humanistic Intelligence)
to create a "Society of Intelligent Veillance". Accordingly, Sousveillant
Systems are systems that are designed to
reveal their internal states to end-users intelligently, to complete an HI
feedback loop.
Technologies like self-driving cars have the danger of reducing rather than
increasing human intellect and fitness.
Similarly, smart devices like automobiles in general, elevators ("electric
ladders"), escalators (electric stairs) and
the like reduce our physical fitness as well, making us weak, and unable to
survive in emergency conditions. For
example, in modern society, many people are too weak to climb a rope to escape
from a burning building in a
fire or other emergency. Many people are too weak, mentally, to fix their own
automobiles or smartphones in an
emergency: when their car breaks down in a remote area, or when their
smartphone breaks down and there is an
emergency.
With the reduction in both mental and physical fitness, we have a reduced
quality of life.
My invention aims to create better technology that makes us smarter and
fitter, i.e. both mental and physical
fitness, rather than stupider and weaker. Here is disclosed a number of
embodiments of an invention related to the
new emerging field of Sousveillant Systems.
The invention is based on technology that makes its internal state apparent to
the end user, as well as the
internal states of other systems, without appreciable delay or omission
("Feedback delayed is feedback denied").
In one embodiment, a Haptic Augmented Reality Computer Aided Design system
allows the user to create
content using a special kind of lock-in amplifier. In another embodiment, the
user's body, especially their core (e.g.
transversus abdominis, obliques, rectus abdominis, erector spinae, etc.)
facilitates a pointing device or cursor, in
the feedback loop of an interactive computational process such as a game. This
"PlankPoint™" or "CorePoint™"
technology gives rise to a new form of fitness based on Integral Kinesiology
such as absement or absangle with
respect to a target path or goal trajectory through a multidimensional virtual
reality or augmented reality space.
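The integral-kinesiology score mentioned above can be sketched numerically: absement is the time integral of displacement, so a user's deviation from a goal trajectory accumulates sample by sample. A minimal sketch, assuming 2-D position samples and a fixed sampling interval (the function names and numbers are illustrative, not from the patent):

```python
# Absement: the time integral of displacement (units: metre-seconds).
# Here we score a user against a goal trajectory by integrating the
# distance between the user's position and the target over time.

def absement(user_path, target_path, dt):
    """Trapezoidal integral of |user - target| over time.

    user_path, target_path: equal-length lists of (x, y) samples.
    dt: sampling interval in seconds.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    errors = [dist(u, t) for u, t in zip(user_path, target_path)]
    # Trapezoidal rule: average adjacent error samples, times dt.
    return sum((a + b) / 2 * dt for a, b in zip(errors, errors[1:]))

# A user holding a position 0.1 m off-target for 2 s accumulates
# 0.1 m * 2 s = 0.2 m*s of absement.
target = [(0.0, 0.0)] * 21
user = [(0.0, 0.1)] * 21                    # constant 0.1 m error
print(absement(user, target, dt=0.1))       # approximately 0.2
```

A lower absement means the user tracked the goal trajectory more closely for more of the time, which is why it rewards sustained form rather than momentary accuracy.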
In another embodiment, the device helps the wearer see and understand the
internal state of other systems,
such as electric circuits, where the thermal field of conduction is
superimposed on temperature fields and electric
fields.

In another embodiment, a safetiness field (safetyfield™) or dangerfield is
superimposed to show how safe or
dangerous a path or space or trajectory, or the like, is.
In another embodiment, a person such as a doctor or surgeon can wear the
seeing aid to see the electrical
potentials of the nerve conduction in a patient that the doctor is viewing,
with physiology and neurophysiology
overlaid together with other medical imaging data such as ultrasound images
and live video feeds.
In another embodiment, users can see in new ways, using sound, sonar, or the
like, combined with radar or
other imaging modalities.
2. Humanistic Intelligence: Surveillance AND Sousveillance together
2.1. Surveillance and Sousveillance
The word "surveillance" is a word coined during the French Revolution from the
prefix "sur" (meaning "over"
or "from above") and the postfix "veillance" (meaning "sight" or "watching")
[77]. The closest pure English word
is "oversight". See Table 1.
English                           French
to watch                          veiller
watching (sensing in general)     veillance
watching over (oversight)         surveillance
over (from above)                 sur
under (from below)                sous
undersight (to watch from below)  sousveillance
Table 1: English words with French translations.
Surveillance [118, 64, 30] (oversight) is not the only kind of veillance
(sight). Sousveillance (undersight) has also
recently emerged as a new discipline [72, 103, 37, 127, 9, 8, 35, 126, 21, 6,
4, 142, 115, 124, 65, 137, 43].
These veillances (sur and sous veillance) are broad concepts that go beyond
visual (camera-based) sensing,
to include audio sur/sousveillance, dataveillance, and many other forms of
sensing. When we say "we're being
watched" we often mean it in a broader sense than just the visual. For
example, when police are listening in
on our phone conversations, we still call that surveillance, even though it
involves more of their ears than their
eyes. So, more generally, "surveillance" refers to the condition of being
sensed. We often don't know who or what
is doing the sensing. When a machine learning algorithm is sensing us, that is
also surveillance. AI (Artificial
Intelligence) often involves surveillance. Some surveillance can be harmful
("malveillance"). Some can be beneficial
("bienveillance"), like when a machine senses our presence to automate a task
(flushing a toilet, turning on a light,
or adjusting the position of a video game avatar).
Systems that sense us using detailed sensory apparatus are called surveillant
systems. Systems that reveal
themselves to us, and allow us to sense them and their internal state
variables are called Sousveillant Systems.
2.2. Winning at AI is losing
In the physical world, people used to walk or run long distances, and hunt for
food, and the like, and remain
relatively fit in both mind and body. More recently, the invention of the
automobile, elevator, electric ladder,
electric stairs (escalator), television remote controls, etc., have resulted
in a more fat and lazy population with a
reduced level of physical fitness.
While our bodies deteriorate, our minds are also rotting away in a similar
fashion, through the widespread
adoption of "smart" devices that have the danger of creating smart cities for
stupid people.
A common goal among AI (Artificial Intelligence) researchers is to replicate
human intelligence through computation, and ultimately create another species having human rights and
putation, and ultimately create another species having human rights and
responsibilities. This creates a possible
danger to humanity, through what many researchers refer to as the singularity.
There is a race to see who will be first to create a truly intelligent
machine. This highly competitive research
is, in many ways, like a game. But what will be the prize?
It is very possible that we can only win the AI game by losing (our humanity).
2.3. Humanistic Intelligence
HI (Humanistic Intelligence) is a new form of intelligence that harnesses
beneficial veillance in both directions
(surveillance and sousveillance, not just surveillance). HI is defined by
Minsky, Kurzweil, and Mann as follows:

[Figure 13 diagram: a Human (circle, with Senses and Effectors) and a Machine (square, with Sensors and Actuators) intertwined by six signal flow paths, labeled "Humanistic Intelligence (HI)". S. Mann, 1998.]
Figure 13. The Six Signal Flow Paths of HI: A human (denoted symbolically by
the circle) has senses and effectors
(informatic inputs and outputs). A machine (denoted by the square) has sensors
and actuators as its informatic inputs and
outputs. But most importantly, HI involves intertwining of human and machine
by the signal flow paths that they share in
common. Therefore, these two special paths of information flow are separated
out, giving a total of six signal
flow paths.
"Humanistic Intelligence [HI] is intelligence that arises because of a human
being in the feedback loop of a
computational process, where the human and computer are inextricably
intertwined. When a wearable computer
embodies HI and becomes so technologically advanced that its intelligence
matches our own biological brain,
something much more powerful emerges from this synergy that gives rise to
superhuman intelligence within the
single `cyborg' being." [112]
HI involves an intertwining of human and machine in a way that the human can
sense the machine and vice-versa,
as illustrated in Fig. 13.
HI is based on modern control-theory and cybernetics, and as such, requires
both controllability (being watched)
and observability (watching), in order to complete the feedback loop. In this
way, surveillance (being watched) and
sousveillance (watching) are both required in proper balance for the effective
functioning of the feedback between
human and machine. Thus Veillance (Surveillance AND Sousveillance) is at the
core of HI. (See Fig. 14.)
Poorly designed human-computer interaction systems often fail to provide
transparency and immediacy of user-
feedback, i.e. they fail to provide sousveillance. As an example of such a
"Machine of Malice", an art installation
was created by author S. Mann to exemplify this common problem. The piece,
entitled "Digital Lightswitch"
consists of a single pushbutton lightswitch with push-on/push-off
functionality. The button is pressed once to turn
the light on, and again to turn the light off (each press toggles its state).
A random 3 to 5 second delay is added,
along with a random packet loss of about ten percent. Thus the button only
works 90 percent of the time, and,
combined with the delay, users would often press it once, see no immediate
effect, and then press it again (e.g.
turning it back off before it had time to come on). See Fig. 15.
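The failure mode of the "Digital Lightswitch" can be illustrated with a toy simulation; this is a sketch under assumed parameters (a 2-second user patience, exactly two presses), not part of the installation itself. With ten percent packet loss and a delay always longer than the user's patience, a user who presses twice leaves the light on only when exactly one press gets through, i.e. about 2 x 0.9 x 0.1 = 18% of the time:

```python
import random

# Simulation of the "Digital Lightswitch" machine of malice: each press
# toggles the light, but only arrives with probability 0.9, after a
# random 3-5 s delay. An impatient user presses again if nothing happens
# in time, often turning the light back off before it ever came on.

def press(light_state, loss=0.10):
    """One press: returns (new_state, delay), or (state, None) if lost."""
    if random.random() < loss:
        return light_state, None            # packet lost: press ignored
    return not light_state, random.uniform(3.0, 5.0)

def impatient_user(patience=2.0, presses=2):
    """User presses up to `presses` times; returns final light state."""
    light = False
    for _ in range(presses):
        light, delay = press(light)
        if delay is not None and delay <= patience:
            break                           # user saw the light respond
    return light

random.seed(0)
trials = 10_000
on = sum(impatient_user() for _ in range(trials))
print(f"light ended up on in {on / trials:.0%} of trials")  # around 18%
```

The user wanted the light on in every trial, yet it ends up on in only roughly one trial in five, which is exactly the broken sousveillance the installation demonstrates.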

[Figure 14 diagram: "Veillance is the core of Humanistic Intelligence". The human's senses and effectors and the machine's sensors and actuators are joined by surveillance and sousveillance signal flow paths.]
Figure 14. HI requires both Veillances: machines must be able to sense us, and
we must be able to sense them! Thus
veillance is at the core of HI. In this sense Surveillance is a half-truth
without sousveillance. Surveillance alone does
not serve humanity. Humans have senses and effectors at their informatic
inputs and outputs. Machines have sensors and
actuators at their informatic inputs and outputs. The signal flow paths that
connect them are surveillance (when we're
being watched or sensed by the machine) and sousveillance (when we're watching
or sensing the machine). HI (Humanistic
Intelligence) requires all six of these signal flow paths to be present [112].
When one of the six is blocked (most commonly,
Sousveillance), we have a breakdown in the feedback loop that allows for a
true synergy ("cyborg" state). To prevent this
breakdown, Sousveillant Systems mandate Observability (Sousveillance).
3. Why we need HI to undo the insanity of AI
Much has been written about equiveillance, i.e. the right to record while
being recorded [140, 141, 65, 87], but
here our focus is on the right to simply understand machines that understand
us, and not become stupider while
machines become smarter.
In the context of human-human interaction, the transition from surveillance to
veillance represents a "fair"
(French "Juste") sight and, more generally, fair and balanced sensing.
But our society is embracing a new kind of entity, brought on by AI (Artificial Intelligence) and machine learning.
(Artificial Intelligence) and machine learning.
Whether we consider an "Al" as a social entity, e.g. through Actor Network
Theory [116, 60, 14, 145], or simply
as a device to interact with, there arises the question "Are smart things
making us stupid?"[114].
Past technologies were transparent, e.g. electronic valves ("vacuum tubes")
were typically housed in transparent
glass envelopes, into which we could look to see all of their internals
revealed. And early devices included schematic
diagrams - an effort by the manufacturer to help people understand how things
worked.
In the present day of computer chips and closed-source software, manufacturers
take extra effort not to help
people understand how things work, but to conceal functionality: (1) for
secrecy; and (2) because they (sometimes
incorrectly) assume that their users do not want to be bothered by detail,
i.e. that their users are looking for an
abstraction and actually want "bothersome" details hidden [80].

[Figure 15 diagram: the "Digital Lightswitch" machine of malice. A random delay block is inserted in the signal flow path from machine to human, breaking sousveillance. S. Mann, 2013.]
Figure 15. Systems that fail to facilitate sousveillance are machines of
malice. Example: an art installation by author S.
Mann, consists of a pushbutton (push-on/push-off) light switch where a random
3 to 5 second delay is inserted along with a ten
percent packet loss. The parameters were adjusted to maximize frustration in
order to show a negative example of what
happens when we fail to properly implement a balanced veillance.
At the same time that these technologies are becoming more concealing and secretive,
they are also being equipped
with sensory capacity, so that (in the ANT sense) these devices are evolving
toward knowing more about us while
revealing less about themselves (i.e. toward surveillance).
Our inability to understand our technological world, in part through secrecy
actions taken by manufacturers,
and in part through a general apathy, leads to the use of modern devices
through magic, witchcraft-like rituals
rather than science [63]. This technopaganism [134] leads people to strange
rituals rather than trying to understand
how things work. General wisdom from our experts tells us to "reboot" and try
again, rather than understand what
went wrong when something failed [128]. But this very act of doing the same
thing (e.g. rebooting) over and over
again, expecting a different result is the very definition of insanity:
"Insanity is doing the same thing, over and over again, but expecting
different results." - Narcotics
Anonymous, 1981.
In this sense, not only do modern technologies drive us insane, they actually
require us to be insane in order
to function properly in the technopagan world that is being forced upon us by
manufacturers who conceal their products'
workings.
I propose as a solution, a prosthetic apparatus that embodies the insanity for
us, so that we don't have to. I
call this app "LUNATIC". LUNATIC is a virtual personal assistant. The user
places a request to LUNATIC and
it then "tries the same thing over and over again..." on behalf of the user so
that the user does not need to himself
or herself become insane. For example, when downloading files, LUNATIC starts
multiple downloads of the same
file, repeatedly, and notifies the user when the result is obtained. LUNATIC
determines the optimum number
of simultaneous downloads. Typically this number works out to 2 or 3. A single
download often stalls, and the
second one often completes before the first. If too many downloads of the same
file are initiated, the system slows
down. So LUNATIC uses machine learning to detect slowed connections and makes
a best guess as to the optimum
number of times to repeat the same tasks over and over again. This number is
called the "optimum insanity", and

is the level of insanity (number of repetitions) that leads to the most likely
successful outcome.
At times the optimum insanity increases without bound, typically when websites
or servers are unreliable or
erratic. LUNATIC is not performing a denial of service attack, but, rather, a
"demand for service". A side effect
is that when large numbers of people use LUNATIC, erratic websites will
experience massive download traffic, such
that LUNATIC disincentivises insanity.
In this sense, LUNATIC is a temporary solution to technopagan insanity, and
ultimately will hopefully become
unnecessary, as we transition to the age of Sousveillant Systems.
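The redundant-download behaviour described above can be sketched as follows. This is a minimal sketch: the `fetch` function is a hypothetical stand-in for a real download, and the fixed repetition count stands in for LUNATIC's machine-learned "optimum insanity":

```python
import concurrent.futures

# Sketch of LUNATIC's "demand for service": launch the same download
# a few times in parallel and keep whichever attempt finishes first.
# `fetch` is a placeholder; the repetition count (the "optimum insanity")
# is fixed at 2 here rather than learned from connection behaviour.

def fetch(url, attempt):
    # Stand-in for a real download; an unreliable attempt may stall or
    # raise, which is exactly why redundant attempts help.
    if attempt == 0:
        raise IOError("stalled download")   # simulate a stalled attempt
    return f"contents of {url}"

def lunatic_get(url, insanity=2):
    """Run `insanity` identical downloads; return the first success."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=insanity) as pool:
        futures = [pool.submit(fetch, url, i) for i in range(insanity)]
        errors = []
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()         # first attempt to succeed wins
            except IOError as e:
                errors.append(e)
        raise IOError(f"all {insanity} attempts failed: {errors}")

print(lunatic_get("http://example.com/file"))
```

The design choice matches the text: two or three parallel attempts usually suffice, because the second attempt frequently completes before a stalled first one, while many more attempts would just slow the connection down.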
4. Early example of Sousveillant Systems from 1974: Sequential Wave Imprinting
Machine
Here I will explain SWIM in two different variations:
5. SWIM
SWIM (Sequential Wave Imprinting Machine) is an invention that makes for
visual art as well as scientific
discovery of otherwise invisible physical phenomenology around us, such as
sound waves, radio waves, etc. It
uses multimediated reality (sensing, computation, and display) to turn
phenomena such as interference patterns
between multiple sound sources, into pictures "painted" by nature itself
(rather than from computer graphics).
This gives us a glimpse into the nature of the real world around us, i.e.
phenomena arising from physics (natural
philosophy).
SWIM also reveals the otherwise invisible capacity of a microphone or
microphone array to "hear", by "painting"
a picture of its metasensory (sensing of sensors) wave functions.
SWIM can also be a robotic mechanism for the precise scientific sensing of
sensors and the sensing of their
capacity to sense.
6. Introduction
The Latin phrase "Quis custodiet ipsos custodes?", by Roman satirist Juvenal
[57], translates to English as "Who
watches the watchers?". Juvenal's belief is that ethical surveillance is
impossible when the surveillers (custodes)
are corruptible.
In this paper we focus more on the apparatus (i.e. sensor technology) of
"watching", rather than on the
people/politics. Thus we don't care whether "watching" is surveillance
(oversight) [64, 49], or sousveillance
(undersight) [103, 127, 55, 74, 110, 42, 44, 143, 137, 6, 124, 65]. We treat
both veillances equally.
We also examine metaveillance (the sight of sight itself) [81]. Meta is a
Greek prefix that means "beyond".
For example, a meta conversation is a conversation about conversations, and
meta data is data about data.
Metaveillance is the veillance of veillance, and more generally, metaveillance
is the sensing of sensors and the
sensing of their capacity to sense.
Thus we might ask:
"Quis sensum ipsos sensorem?"
i.e. "Who senses the sensors?", or more generally, "How can we sense sensors,
and sense their capacity to sense?",
and how and why might this ability be useful?
"Bug-sweeping", i.e. the finding of (sur)veillance devices is a well-developed
field of study, also known as
Technical surveillance counter-measures (TSCM) [146, 47, 129]. However, to the
best of our knowledge, none of
this prior work reveals a spatial pattern of a bug's ability to sense.
6.1. Metaveillance and metaveillography
Metaveillance (e.g. the photography of cameras and microphones to reveal their
capacity to sense) was first
proposed by Mann in the 1970s [84, 69, 1] (see Fig. 16 and 17). Metaveillance
was envisioned as a form of visual
art [83] and scientific discourse [84], and further developed by Mann, Janzen,
and others [53, 85] as a form of
scientific measurement and analysis.
SWIM for phenomenological augmented reality using a linear array of light
sources, sequentialized through a
wearable computer system with a lock-in amplifier, was a childhood invention
of S. Mann [69, 1]. See Fig. 18.
A more modern version of this apparatus appears in Fig. 19.
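The lock-in amplifier at the heart of SWIM can be sketched in a few lines: multiply the noisy input by in-phase and quadrature copies of the reference frequency and average, which rejects everything except the component synchronized with the reference. A minimal numerical sketch (the frequencies, amplitudes, and noise level are illustrative, not taken from the apparatus):

```python
import math
import random

# Lock-in amplifier principle: a weak signal at a known reference
# frequency is recovered from much stronger noise by mixing with
# in-phase and quadrature references and averaging (low-pass filtering).

def lock_in(samples, f_ref, fs):
    """Return the amplitude of the component of `samples` at f_ref."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, x in enumerate(samples):
        t = k / fs
        i_sum += x * math.cos(2 * math.pi * f_ref * t)   # in-phase mix
        q_sum += x * math.sin(2 * math.pi * f_ref * t)   # quadrature mix
    # Averaging acts as the low-pass filter; factor 2 recovers amplitude.
    return 2 * math.hypot(i_sum / n, q_sum / n)

random.seed(1)
fs, f_ref, amp = 1000.0, 40.0, 0.05          # weak 0.05-amplitude tone
samples = [amp * math.sin(2 * math.pi * f_ref * k / fs)
           + random.gauss(0, 1.0)            # noise 20x the signal
           for k in range(20000)]
print(f"recovered amplitude: {lock_in(samples, f_ref, fs):.3f}")
```

In the SWIM, the recovered amplitude drives the light (or light array): the bulb brightens only where the camera's weak synchronized signal is present, so waving the array through space "paints" the metaveillance field.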

[Figure 16 images: at left, a metaveillance photograph showing light trails; at right, the experimental apparatus used to take it: a "rabbit ears" receive antenna, transmitter, surveillance camera, and television receiver.]
Figure 16. Early example of metaveillography (metaveillance photographs) using
the feedbackographic technique [88, 84].
Phenomenological augmented reality from the 1970s using video feedback with a
black and white television screen. Mann
observed that when a television was tuned to the frequency of a surveillance
camera's transmitter, it would glow more
brightly when visible to the camera. Thus waving the TV back and forth in
front of the camera in a dark room would trace
out the camera's metaveillograph, visible to the human eye or photographic
film (by way of a second camera loaded with
film). Modifying the video amplifier for various gain levels and photographing
the moving TV through color filters showed
spatial variation in the quantity of metaveillance: blackness indicates zero
metaveillance; dark blue indicates moderate
metaveillance, and red indicates strong metaveillance. Green indicates a
quantity between that of red and blue.
[Figure 17 images: at left, a metaveillance photograph; at right, the apparatus: a transmitter, camera, "rabbit ears" receive antenna, a wearable computer with lock-in amplifier, and a light bulb that brightens when seen by the camera.]
Figure 17. Improved version of the apparatus of Fig. 16, using a light bulb
instead of an entire television display [88, 84].
A wearable computer and videographic lock-in amplifier was designed
specifically to lock in on extremely weak television
signals. The light bulb goes from a dim red glow to a brilliant white whenever
it enters the camera's field-of-view, and then
the bulb brightness drops off again when it exits the camera's field of view.
Waving it back and forth in a dark room reveals
to the human eye, as well as to photographic film (picture at left) the
camera's metaveillance field. The glow from the light
bulb lags behind the actual physical phenomenon. Thus as we sweep back-and-forth, odd numbered sweeps (1st, 3rd, 5th,
7th, and 9th) appear near the top of the sightfield, whereas even sweeps
(2, 4, 6, 8) appear near the bottom.
6.2. Veillance games
A number of games have been built around the concept of metaveillance and
metaveillogrammetry, as illustrated
in Fig. 20 Veillance games are based on sensing of sensors, such as cameras or
microphones. Game themes such as
"spy versus spy" are played out in the realm of veillance, counterveillance,
and metaveillance. Some games play
directly to human vision whereas others use an eyeglass-based device to
capture and freeze the exposures into a
3D virtual world or the like. Some games use photographic media, which also
creates a new visual art form in and

[Figure 18 images: photographs of the SWIM linear light array in use, "painting" a camera's metaveillance field with light trails; the rightmost photograph shows the wearable augmented reality computer.]
Figure 18. Sequential Wave Imprinting Machine (SWIM) consisting of a linear
array of electric light bulbs connected to a
wearable computer and wearable lock-in amplifier. This functioned like a giant
"paintbrush" to create an augmented reality
world from the physical phenomenology of metaveillance [81, 69], "painting"
with the light to expose the human eye or
photographic film to the camera's metaveillance field. Rightmost: World's
first wearable augmented reality computer (built
by S. Mann in 1974) on exhibit at National Gallery in 201..

[Figure 19 image: photograph of the modern LED-based SWIM wand revealing the sightfield of a surveillance camera.]
Figure 19. Modern LED-based version of SWIM. Waving the wand back and forth
makes the sightfield (metaveillance) of the
surveillance camera or other vision sensor visible, using the methodology of
[78]. This allows us to see and better understand
the otherwise invisible Internet of Things (IoT) around us. The Internet of
Things has grown tremendously in recent years.
For a good summary of this development, see [25] and the earlier version of
the paper on arXiv [26]. In this figure, we
see the metaveillograph of a surveillance camera (left) as well as three
sensor-operated handwash faucets (right) that each
contain a 1024 pixel camera and vision system. Many washroom fixtures contain
low-resolution cameras [52, 51, 12] that
can be better understood by way of metaveillography.
Figure 20. Veillance games played with a camera mounted onto a toy gun. There
are at least two players: the shooter and
the defender. The shooter tries to shoot a recognizable picture of the
defender and wins points for doing so. The defender
enforces a "no photography" policy. The defender wins the game by catching the
shooter taking pictures, which is done by
waving the SWIM (Sequential Wave Imprinting Machine) back and forth in front
of the camera, to make its picture-taking
activity visible. The SWIM is visible in the middle picture. It is 48 cm long,
runs on three AA batteries.
of itself.
Note that the pictures in Fig. 1 to 20 are photographs, not computer graphics.
The word "photography" is a
Greek word that means "drawing or painting" ("graphy") with "light" ("photos"
or "phos"). Thus the Greek word
"photography" means "lightpainting" if we translate the word directly into
English. In this way, photography has
been regarded as "nature's pencil", as evident in the following quote:
"The plates of the present work are impressed by the agency of Light alone,
without any aid whatever
from the artist's pencil." [40, 144]
In a similar way, we aim to create new computational media that arise directly
from nature itself, using com-
puters to reveal natural philosophy (the physics of waves, sensing, etc.) and
thus make visible otherwise hidden
phenomenology.
6.3. Seeing and photographing radio waves and sound waves
In addition to seeing sight itself (i.e. metaveillance), SWIM has also been
used to see and photograph radio
waves and sound waves in near-perfect alignment with their actual situated
existence (unlike an oscilloscope, for
I
! fel
'f* S =
= . j 7
. tr.
-;
34 r Itt I
=
44111.1;
=
Figure 21. Photographs of radio waves and sound waves taken with SWIM. (left)
Custom modifications were made to a smartphone, and it was desired to see and
understand the radio waves from the phone and how they propagate through
space. Waving the wand back and forth allows the waves to be seen by the naked
eye, as well as be photographed. (right)
In the design of musical instruments it is helpful to be able to see the sound
waves from an instrument and see how
they propagate through space. Here a robotic mechanism was built to excite the
violin at various frequencies and their
harmonics, using a Pasco Fourier Synthesizer driving a robotic actuator that
keeps the strings vibrating continuously. A
robotic SWIM moves back-and-forth on a 10 foot long (approx. 3m long) optical
rail. The SWIM includes 1200 LEDs (Light Emitting Diodes) that make visible the complex-valued waveform (real, i.e. in-
phase component in red, and imaginary, i.e.
quadrature component in green).
example, which does not display waveforms situated at their natural physical
scale and position). See Fig. 21.
6.4. Grasping radio waves and sound waves
In addition to merely seeing radio waves and sound waves, we can also reach
out and touch and feel and grasp
these otherwise intangible waves. This is done using a mechanical form of the
SWIM, as shown in Fig. 22.
See Fig. 105.
6.5. Representing complex-valued electric waves using color
A method of representing spatially varying complex-valued electric waves was
proposed by Mann [69], in which
the color at each point encodes the phase in a perceptually uniform Munsell
colorspace, and the amplitude as the
overall quantity of light. An example of Mann's method also appeared as cover
art for the book, depicting the
Fourier operator (i.e. the integral operator of the Fourier transform as a two-
dimensional function in which one
dimension is time and the other dimension is frequency). See Fig. 24, and the
following Matlab fragment:
% fourieroperator2dat.m Steve Mann 1992 Jan 20
% creates the Fourier operator W = exp(j*2*pi*[f><t]) (outer product)
f = (-(M-1)/2:(M-1)/2)*frac; % freq range for the given block
t = (n - (N+1)/2)/N; % time span 1 second: (-.5,.5) second
W = exp(j*2*pi*f(:)*t(:).'); % faster as a one-liner
mag = abs(W); % magnitude of W
o = pi/180; % one degree, in radians
p = 180 - angle(W)/o; % angle in degrees, shifted to all positive
g_to_r_indices = find (p < 144 & p >= 0); % green to red crossfade
r(g_to_r_indices) = p(g_to_r_indices)/144; % RED
g(g_to_r_indices) = (144-p(g_to_r_indices))/144; % GREEN
Figure 22. Mann's early 1970s apparatus for seeing, touching, grasping, and
feeling electromagnetic radio waves. An XY
plotter was arranged to freely move from left-to-right ("X"), while the up-
down movement ("Y") was driven by the output
of a lock-in amplifier tuned to a desired radio frequency of interest. A light
bulb was placed where the pen of the XY plotter
would normally go. An antenna was placed on the moving part of the XY plotter.
In this way the antenna moves together
with the light bulb to trace out and make visible the otherwise invisible
electromagnetic wave, while also allowing the user
to grasp and hold the pen holder that also houses the light bulb.
Figure 23. Tactile Sequential Wave Imprinting Machine (TSWIM) uses a
mechanical actuator to move a single light source
up-and-down while the device is waved side-to-side. The user can feel the
mechanical movement and thus feel the wave.
Here we can see and touch and grasp and hold electromagnetic radio waves as
they pass through various media. Leftmost:
wave propagation in air. Second from left: radio wave propagation through thin
wood. Third: radio wave propagation
through thick wood. Fourth: through copper foil. Fifth: through flesh. Note
the differences in amplitude which can also be
felt as well as seen.
[Figure 24 image: two panels. Top: "Sine and Cosine Waves", plotted against time from -T/2 to +T/2. Bottom: "The Fourier Operator as a Time-Frequency Distribution", with the real part and imaginary part each plotted against time.]
Figure 24. Fourier operator (integral operator of the Fourier transform) as a
spatially and temporally varying complex-
valued wave function. Here color is used to represent the complex-valued
electric waves, allowing us to see both the real
and imaginary parts superimposed together. In the corresponding plots, solid
lines indicate the real parts, and dotted lines
indicate the imaginary parts. Reproduced from [69].
b(g_to_r_indices) = zeros(size(g_to_r_indices)); % BLUE
r2b_ind = find (p >= 144 & p < 288); % r to b crossfade
r(r2b_ind) = (288-p(r2b_ind))/144;
g(r2b_ind) = zeros(size(r2b_ind));
b(r2b_ind) = (p(r2b_ind) - 144)/144;
b2g_ind = find ( p >= 288 & p < 360 ); % b to g crossfade
r(b2g_ind) = zeros(size(b2g_ind));
g(b2g_ind) = (p(b2g_ind) - 288)/72;
b(b2g_ind) = (360-p(b2g_ind))/72;
R = floor(r.*mag*255.99); % scale by magnitudes; from black to red
G = floor(g.*mag*255.99); % scale by magnitudes; from black to green
B = floor(b.*mag*255.99); % scale by magnitudes; from black to blue
In what follows, we use this method to show the spatially varying complex-
valued electric waves from a transducer
moved through space to sample a complex-valued wave function or meta wave
function.
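The two Matlab fragments above can be collected into a single self-contained sketch. The following NumPy translation is our own rough rendering, not the author's original code; the grid sizes M and N and the omission of the `frac` scale factor are our choices:

```python
import numpy as np

def complex_to_rgb(W):
    """Encode phase as hue via three linear crossfades, as in the Matlab
    fragment (green-to-red over [0,144) degrees, red-to-blue over [144,288),
    blue-to-green over [288,360)), and magnitude as overall brightness."""
    mag = np.abs(W)
    p = 180.0 - np.degrees(np.angle(W))           # phase in degrees, [0, 360)
    r, g, b = (np.zeros_like(p) for _ in range(3))
    i = (p >= 0) & (p < 144)                      # green-to-red crossfade
    r[i], g[i] = p[i] / 144, (144 - p[i]) / 144
    i = (p >= 144) & (p < 288)                    # red-to-blue crossfade
    r[i], b[i] = (288 - p[i]) / 144, (p[i] - 144) / 144
    i = (p >= 288) & (p < 360)                    # blue-to-green crossfade
    g[i], b[i] = (p[i] - 288) / 72, (360 - p[i]) / 72
    to8bit = lambda c: np.floor(c * mag * 255.99).astype(int)
    return to8bit(r), to8bit(g), to8bit(b)

# Fourier operator W = exp(j*2*pi*f*t') as an outer product over a grid.
M, N = 65, 64
f = np.arange(-(M - 1) / 2, (M - 1) / 2 + 1)      # frequency axis
t = (np.arange(1, N + 1) - (N + 1) / 2) / N       # time axis spanning (-.5, .5)
W = np.exp(1j * 2 * np.pi * np.outer(f, t))
R, G, B = complex_to_rgb(W)                       # three M x N integer planes
```

The three planes can be stacked into an RGB image to reproduce a rendering in the style of Fig. 24.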
7. Veillogrammetry versus Metaveillogrammetry
It is useful to define the following basic concepts. Thus we proffer the
following veillance taxonomy:
• Surveillance is the purposeful sensing by an entity in a position of authority (typically a government or an organization within its own space, such as a convenience store monitoring its own premises);
• Sousveillance is the purposeful sensing by an entity not in a position of authority (typically an individual or small group);
• Veillance is purposeful sensing. It may be sur-veillance or sous-veillance. For the purposes of this paper, we focus on the mathematics, physics, and visual art of veillance, and thus make no distinction between surveillance and sousveillance. Thus we use the term "veillance" rather than "surveillance" when we wish to ignore the political elements of sensing, and concentrate exclusively on the mathematics and physics of sensing.
• Veillography is the photography (i.e. capture) by way of purposeful sensing, such as the use of surveillance or sousveillance cameras to capture images, or such as the photography of radio waves and sound waves and similar phenomena as illustrated in Fig. 21. Our experimental setup for this is shown in Fig. 64.
• Veillogrammetry is quantified sensing (e.g. measurement) performed by purposeful sensing. For example, video from a bank robbery may be used to determine the exact height of a bank robber, through the use of photogrammetry performed on the surveillance video. Likewise, veillogrammetry with a microphone moved through space can be used to quantify the sound field distribution around a musical instrument in order to study the instrument's sound wave propagation.
• Metaveillance is the veillance of veillance (sensing of sensors). For example, police often use radar devices for surveillance of roadways to measure speed of vehicles so that they can apprehend motorists exceeding a speed limit. Some motorists use radar detectors. Police then sometimes use radar detector detectors to find out if people are using radar detectors. Radar detectors and radar detector detectors are examples of metaveillance, i.e. the sensing (or metasensing) of surveillance by radar.
• Metaveillography is the photography of purposeful sensing, e.g. photography of a sensor's capacity to sense, as illustrated in Figures 1 to 5. Our experimental setup for metaveillography is shown in Fig. 64.
• Metaveillogrammetry is the mathematical and quantimetric analysis of the data present in metaveillography.
Comparing the veillography setup of Fig. 25 with the metaveillography setup of Fig. 26, the difference is that in Fig. 25 a signal sensor (receiver) moves with the SWIM, and the reference to the lock-in amplifier remains fixed at a stationary location, whereas in Fig. 26 the reverse is true: a transmitter that feeds the lock-in amplifier reference moves with the SWIM, and the signal input comes from a stationary sensor fixed in the environment.
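In both setups, the lock-in amplifier recovers the in-phase (real) and quadrature (imaginary) components of its signal input at the reference frequency. A minimal numerical sketch of that demodulation, for readers unfamiliar with lock-in detection (our own illustration; the sample rate, test tone, and sign convention are assumptions, not the actual instrument's design):

```python
import numpy as np

def lock_in(signal, fs, f_ref):
    """Emulate a lock-in amplifier: multiply the signal by in-phase and
    quadrature reference sinusoids at f_ref and average over the record
    (a crude low-pass filter), yielding one complex sample whose magnitude
    and angle are the signal's amplitude and phase at f_ref."""
    t = np.arange(len(signal)) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    q = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return complex(i, -q)

fs = 1_000_000                                   # 1 MHz sampling (assumed)
t = np.arange(100_000) / fs                      # 0.1 s: 4000 whole cycles
sig = 0.5 * np.cos(2 * np.pi * 40_000 * t - np.pi / 4)  # 40 kHz test tone
z = lock_in(sig, fs, 40_000)                     # |z| = 0.5, angle(z) = -45 deg
```

As the SWIM is waved, this complex output is sampled at each position, so the lights trace out the real and imaginary parts of an otherwise invisible wave.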
[Figure 25 schematic: a signal generator (SIG. GEN.) drives the source; a reference sensor fixed near the source feeds the REF. input of the lock-in amplifier (LIA); a signal sensor attached to the SWIM lights on the wand, moving along the movement path, feeds the SIG. input; the SWIM computer drives the lights, producing veillography.]
Figure 25. Here is the experimental setup that is used to generate the
photographs of radio waves and sound waves in Fig. 6.
A moving sensor (receive antenna for radio waves, or a microphone for sound
waves) is attached to the linear array of lights
(SWIM LIGHTS) and moves with it. This sensor feeds the signal input of a lock-
in amplifier. The reference input to the
lock-in amplifier comes from a reference sensor fixed in the environment (not
moving), near the radio signal source or sound
source.
We make the argument that veillography and metaveillography are inverses of
each other, and that veillogram-
metry and metaveillogrammetry are also inverses of each other.
7.1. Experimental comparison of veillography and metaveillography
Here we produce two photographs of acoustic interference patterns due to two
transducers. The first photograph is a picture (veillograph) of sound waves coming from two identical
fixed (non-moving) ultrasonic transducers
transmitting at 40,000 cycles per second, captured by a third identical moving
transducer (used here as a micro-
phone) in a plane defined by the central axis of the speakers. A diagram
showing the experimental apparatus is
shown in Fig. 63.
The second photograph is a picture (this time a metaveillograph) in which the
roles of the transmitters (speakers)
and receiver (microphone) are reversed.
These two photographs are shown in Fig. 28, directly above one another for easy comparison (since the sound waves travel left-to-right or right-to-left).
We chose to use ultrasonic transducers (the exact transducers used in most
burglar alarms and ultrasonic
rangefinders) because they work equally well as microphones or speakers.
What we discovered is that the two pictures are visually indistinguishable from one another. We see the same
interference fringes (interference patterns) from the pair of transducers
whether they function as speakers or
microphones. As an array of speakers we see the sound waves coming from them.
As an array of microphones, we
[Figure 26 schematic: a transmitter on the SWIM wand, driven by a generator, feeds the REF. input of the lock-in amplifier (LIA); the SIG. input comes from a stationary receiver ("BUG Rx") fixed in the environment; the SWIM computer drives the SWIM lights along the movement path, producing metaveillography.]
Figure 26. Here is the experimental setup that is used to generate the
photographs of Figures 1 to 5. It functions much like
a "bug sweeper" but in a much more precise way, driving the linear array of
light sources (SWIM LIGHTS) that is waved
back-and-forth. For Figures 1 and 2, the array is a single element (just one
light source). For Figures 3 to 5, the transmitter is the light source itself. Alternatively, as we shall see, the TRANSMITTER can be a loudspeaker (for audio "bug sweeping"), or a transmit antenna (to detect and map out receive antennae).
see the metaveillance wavefunctions [81], and both appear identical.
Moreover, storing the data from the lock-in amplifier into an array, using a 24-bit analog-to-digital converter, allowed us to compare precise numerical quantities, and to conclude experimentally that veillogrammetry and metaveillogrammetry are inverses of one another, i.e. that the two image arrays give precisely the same quantities.
Fig. 29 shows a comparison between these two experimental setups:
1. a transmitter array sending out sound waves that are sensed with a single receiver (veillogrammetry); and
2. a receiver array (microphone array) metasensed [81] with a single
transmitter (metaveillogrammetry).
Here the coefficients of correlation between sensing and metasensing were
found to be 0.9969 for the real parts,
and 0.9973 for the imaginary parts.
We also tested the situation of just one transmitter and one receiver (i.e.
array size of 1 element). With single
transmit and single receive, the correlation coefficients were found to be
0.9988 for the real part and 0.9964 for the
imaginary part.
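The near-unity correlations are what acoustic reciprocity predicts: the Green's function between two points is symmetric, so the field of two fixed transmitters sampled by a moving receiver equals the summed outputs of two fixed receivers excited by a moving transmitter. A simulation sketch (the geometry, monopole model, and noise level are our assumptions, not the experimental values):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi * 40_000 / 343.0                  # 40 kHz in air (c = 343 m/s)
srcs = np.array([[0.0, 0.05], [0.0, -0.05]])    # the two fixed transducers
pts = np.stack([np.linspace(0.2, 1.0, 1000), np.zeros(1000)], axis=1)

def green(src, pts):
    """Monopole free-field Green's function exp(jkr)/r from src to points."""
    r = np.linalg.norm(pts - src, axis=1)
    return np.exp(1j * k * r) / r

# Veillogrammetry: the fixed pair transmits; the moving unit receives.
veil = green(srcs[0], pts) + green(srcs[1], pts)
# Metaveillogrammetry: the moving unit transmits; the fixed pair receives.
# By reciprocity G(a, b) = G(b, a), the noiseless fields are identical.
meta = green(srcs[0], pts) + green(srcs[1], pts)

# Independent measurement noise on the two acquisitions:
veil_m = veil.real + 0.01 * rng.standard_normal(1000)
meta_m = meta.real + 0.01 * rng.standard_normal(1000)
rho = np.corrcoef(veil_m, meta_m)[0, 1]         # just under 1, as in the text
```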
7.2. Using SWIM for engineering design
In one of our "spy versus spy" game scenarios we wished to design a microphone
array. Being able to see the
metaveillance of the microphone array helped us design it better. Fig. 30
shows a metaveillograph of an example
[Figure 27 diagrams: "Experimental Setup for Veillography" — a stationary 40 kHz transducer array (transmitters) driven by a signal generator, and a moving 40 kHz transducer (receiver, model MA40S4R) with an RGB LED feeding the lock-in amplifier (L.I.A., set to 10 ms and 10 mV); a control and data-logging computer links the plotter control (X, Y, Z coordinates) to the R, G, B coordinates shown on the video display. Also shown: the transducers' equivalent circuit schematic and their frequency response from roughly 20 to 100 kCPS.]
Figure 27. Apparatus (with equivalent circuit schematic and frequency
response) for the experimental comparison between
veillography and metaveillography. Here is shown the apparatus connected for
veillography, with a stationary array of trans-
mitters (speakers) and a moving receiver (microphone). This corresponds to the
top picture in Fig. 63. For metaveillography
(bottom picture of Fig. 63), the connections between the stationary transducer
array and the moving transducer are simply
swapped.
[Figure 28 images. Top: two fixed transmitters with a receiver on a moving platform, labeled "Veillograph / Veillogrammetry". Bottom: two fixed receivers with a transmitter on a moving platform, labeled "Metaveillograph / Metaveillogrammetry".]
Figure 28. Top: photograph of a sound wave emanating from two speakers.
Bottom: photograph of a veillance wave function
"emanating" from two microphones (i.e. a photograph of the capability of two
microphones to "hear").
[Figure 29 scatter plot, titled "Comparison of veillogrammetry and metaveillogrammetry": the real part from two transmitters and one receiver (horizontal axis, -0.4 to 0.4) plotted against the real part from the receiver array with a single transmitter (vertical axis, -0.4 to 0.4), by relative sample index down column 892 of 10,000 columns.]
Figure 29. Comparison between double transmit and single receive
(veillogrammetry) and double receive and single transmit
(metaveillogrammetry).
microphone array of six microphones.
7.3. The Art of Phenomenological Reality
We have, in some sense, proposed a new medium of human creative expression
that is built upon nature itself,
i.e. natural philosophy (physics). In this new medium, nature itself "paints"
a picture of an otherwise invisible
reality.
For example, consider a microphone like we often use when we sing or speak at
a public event. There is an
inherent beauty in its capacity to "hear", and in that beauty there is a truth
in the physical reality inherent in it.
Its capacity to "listen" is something that we can photograph, as its veillance
wave function [81], which is a
complex-valued function. See Fig. 31 and 32.
As this function evolves over time, the veillance waves move outwards from the
microphone as the sound waves
move inwards towards it. The two move in opposite directions, i.e. in the same
way that holes and electrons move
in opposite directions in semiconductors.
This movement is merely a phase change, and therefore when we capture a number
of photographs over time, we
can animate the phase change, to produce a new kind of visual art that also
forms the basis for scientific exploration,
as well as practical engineering. For example, we discovered that there was a
defect in the microphone, as can be
seen in Fig. 31. A dark band is visible in the colored rings. The dark band emanates outwards, pointing up and to the right, at about the 2-o'clock position, i.e. about 30 degrees up from the central axis. Thus we can
see the immediate usefulness of this new form of visual art and scientific
sensing. Consider, for example, use in
quality control, testing, sound engineering, and diagnostics, to name but a
few of the many possible uses of SWIM.
Visualization of sound is commonly used in virtual environments [125, 58, 10],
and with SWIM, we can directly
visualize actual measurements of sound waves.
In our case, we were able to find defects in the microphones we were using,
and replace them with new microphones that did not have the defect. Fig. 33 is a metaveillograph of two new Shure SM58 microphones we purchased and tested. The SM58 microphone is free of the defects that were visible
in some of the other brands we
tested.
7.4. SWIM summary
We have presented the SWIM (Sequential Wave Imprinting Machine) as a form of
visual art and scientific
discourse.
Figure 30. Using SWIM to assist in sound engineering design. For a game we
wished to design an ultrasonic listening device
that was very directional. By being able to visualize the metaveillance
function (capacity of the microphone array to listen)
spatially, we were able to come up with an optimum number of array elements
and optimum spacing between them. Here
we see a preference for an odd number of elements, i.e. that five elements
perform better than six, especially in the near-field
where the central beam emerges stronger and sooner (sometimes "less is more").
Figure 31. Metaveillograph of a microphone's capacity to listen to a 7040
cycles per second tone from the speaker at the
right: Visualizing hidden defects in sensors. This microphone has a defect in
its phase response, as well as a weakness in a particular direction. Here the MR (Multimediated Reality) eyeglass
is worn by a participant able to see in three
dimensions the relationship between cause and effect in real time, even though
the photograph only shows a 2D slice through
the 3D space.
As a form of visual art, it can be used for Games, Entertainment, and Media.
As a scientific tool, it can be used
for engineering, design, testing, and understanding the world around us.
We have shown examples of veillance and metaveillance, and have also shown that they are, in some sense,
inverses of each other (i.e. when we swap roles of transmitter and receiver),
and we determined experimentally
that this reciprocity holds true to a correlation of better than .995 for the
specific cases of a transducer array of
length 1, 2, 5, and 6.
Thus SWIM, and phenomenological augmented reality, can be used for
engineering, design, testing, art, science,
games, entertainment, and media.
8. Revisiting Sousveillant Systems from 1974: Sequential Wave Imprinting
Machine
revisited
This section describes some unpublished aspects of a wearable computing and
augmented reality invention by
author S. Mann for making visible various otherwise invisible physical
phenomena, and displaying the phenomena
in near-perfect alignment with the reality to which they pertain. As an
embodiment of HI (Humanistic Intelli-
gence), the alignment between displayed content and physical reality occurs in
the feedback loop of a computational
or electric process. In this way, alignment errors approach zero as the
feedforward gain increases without bound.
In practice, extremely high gain is possible with a special kind of
phenomenological amplifier (ALIA = Alethio-
scopic/Arbitrary Lock-In Amplifier / "PHENOMENAmphfierTm") designed and built
by the author to visualize
veillance.
An example use-case is for measuring the speed of wave propagation (e.g. the
speed of light, speed of sound,
etc.), and, more importantly, for canceling the propagatory effects of waves
by sampling them in physical space
Figure 32. Metaveillograph of a microphone, where we can see its capacity to
hear a speaker (at the right) emitting a 3520
cycles per second tone.
with an apparatus to which there is affixed an augmented reality display.
Whereas standing waves, as proposed
by Melde in 1860, are well-known, and can be modeled as a sum of waves
traveling in opposite directions, we shall
now come to understand a new concept that the author calls "sitting waves",
arising from a product of waves
traveling in the same direction (Fig. 34), as observed through a
phenomenological augmented reality amplifier, in
a time-integrated yet sparsely-sampled spacetime continuum. See Fig. 35.
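The "sitting wave" is easy to reproduce numerically: multiply a traveling wave by a stationary reference oscillation at the same frequency and integrate over time, as the lock-in process does; the resulting spatial pattern does not move. A sketch with arbitrary wavelength and frequency (our values, not the 1974 apparatus parameters):

```python
import numpy as np

lam, freq = 0.5, 40_000.0                  # wavelength (m), frequency (Hz)
k, w = 2 * np.pi / lam, 2 * np.pi * freq
x = np.linspace(0.0, 1.0, 201)             # positions along the SWIM sweep
t = np.arange(4000) / 400_000.0            # 0.01 s: a whole number of cycles

# A traveling wave at each position, multiplied by the stationary reference
# and averaged over time (the lock-in integration over many periods):
travel = np.cos(w * t[None, :] - k * x[:, None])
ref = np.cos(w * t[None, :])
sitting = (travel * ref).mean(axis=1)      # = (1/2) cos(k x), independent of t
```

The traveling factor cos(2wt - kx) averages to zero over whole periods, leaving only the stationary term, which is why the wave appears to "sit" still.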
8.1. Metawaves: Veillance Wave Functions
In quantum mechanics, a wavefunction is a complex-valued function whose
magnitude indicates the probability
of an observable. Although the function itself can depict negative energy or
negative probability, we accept this as
a conceptual framework for understanding the observables (magnitude of the
wavefunction).
In veillance theory, consider a metawavefunction, ψ_μ, as a complex-valued function whose magnitude indicates the probability of being observed. For example,

⟨ψ_μ|ψ_μ⟩ = ∫ ψ_μ ψ_μ* dt,    (1)
(where * indicates complex-conjugation) grows stronger when we get closer to a
camera or microphone or other
sensor that is sensing (e.g. watching or listening to) us. Note that the
complex metawavefunction itself can be
negative and it can even be (and usually is) complex! This is different from
the veillance flux concept we reported
elsewhere in the literature [55, 54, 53], which is a real-valued vector
quantity, indicating the capacity to sense.
At first, the metawavefunction may seem like a strange entity, because it is
not directly measurable, nor is its
amplitude, i.e. it does not depict a quantum field, or any kind of energy
field for that matter.
Figure 33. Metaveillograph of two Shure SM58 microphones. Here we can clearly
see their metainterference pattern. Note
also the X-Y trace on the CRO (Cathode Ray Oscillograph) that is stacked on top of the lock-in amplifier. The CRO trace
shows the real (in-phase) versus imaginary (quadrature) components of the lock-
in amplifier output during a portion of the
speaker print head's movement.
Cameras and microphones and other sensors don't EMIT energy, but, rather, they
sense energy. Cameras sense
light energy (photons). Microphones sense sound energy.
Thus ⟨ψ_μ|ψ_μ⟩ does not correspond to any real or actual measurement of any
energy like sound or light, but,
rather, it is a metaquantity, i.e. a sensing of a sensor, or a sensing of the
capacity of a sensor to sense!
The word "meta" is a Greek word that means "beyond", and, by way of examples,
a meta-conversation is a
conversation about conversations. A meta joke is a joke about jokes. Metadata
(like the size of an image or the
date and time at which it was taken) is data about data. Likewise
metaveillance (metasensing) is the seeing of
sight, or, more generally, the sensing of sensing (e.g. sensing sensors and
sensing their capacity to sense).
Thus the space around a video surveillance (or sousveillance) camera, or a
hidden microphone, can have,
associated with it, a metawavefunction, ψ_μ, in which ⟨ψ_μ|ψ_μ⟩ increases as we get closer to the camera or microphone, and, for a fixed distance from the camera or microphone, ⟨ψ_μ|ψ_μ⟩ typically increases when we're right in front of
it, and falls off toward the edges (e.g. as many cameras have lens aberrations
near the edges of their fields of view,
[Figure 34 image: "Standing waves" (Melde, 1860) compared with "Sitting waves" (Mann, 1974).]
Figure 34. Sitting waves as compared with standing waves.
[Figure 35 plots: left, "A standing wave"; middle and right, "A sitting wave and its photographs", each shown at four points in time against space (positions x0, x1, x2), with the sheared time axis at slope 1/c.]
Figure 35. Left: a standing wave at four points in time. Middle and Right: a
sitting wave at four points in time. Whereas
the standing wave stands still only at the nodal points, (e.g. elsewhere
varying in amplitude between -1 and +1), the
sitting wave remains approximately fixed throughout its entire spatial
dimension, due to a sheared spacetime continuum
with time-axis at slope 1/c. The effect is as if we're moving along at the
speed, c, of the wave propagation, causing the
wave to, in effect, "sit" still in our moving reference frame. Right: four
frames, F1 ... F4 from a 36-exposure film strip of
a 35-lamp Sequential Wave Imprinting Machine, S. Mann, 1974. Each of these
frames arose from sparse sampling of the
spacetime continuum after it was averaged over millions of periods of a
periodic electromagnetic wave.
and microphones "hear" best when facing directly toward their subject).
If we are a long way away from a camera, our face may occupy less than 1 pixel
of its resolution, and be
unrecognized by it. By this, I mean that a person looking through the camera
remotely, or a machine learning
algorithm, may not be able to recognize the subject or perhaps even identify that it is human. As we get close enough that we occupy a few pixels, the camera may begin to recognize that
there is a human present, and as we
get even closer still, there may be a point where the camera can identify the
subject, and aspects of the subject's
activities.
[Figure 36 photo illustration: a transmitter with transmit antenna and a receiver with receive antenna ("rabbit ears"); a linear array of computer-controlled lights producing a photographic imprint, or an imprint on the retina by Persistence of Exposure (PoE); a wearable computer; and an oscilloscope to monitor the PoE.]
Figure 36. Photo illustration of SWIM, visualization of electromagnetic radio
waves as "sitting waves".
Likewise with a microphone. From far away it might not be able to "hear" us.
By this I mean that a remote
person or AI listening to a recording or live feed from the microphone might
not be able to hear us through the
microphone.
Thus the metawavefunction ψ gives us a probability of being recognized or heard, or the like.
Let's begin with the simplest example of a metaveillance wave function, namely
that from a microphone, in the
one-dimensional case, where we move further from, or closer to, the microphone
along one degree-of-freedom.
The subscript is dropped when it is clear by context that we are referring to a metawavefunction rather than an ordinary wavefunction as we might find in quantum mechanics, or the like.
Consider an arbitrary traveling metawave function ψ(x, t) whose shape remains constant as it travels to the right or left in one spatial dimension (analogous to the BCCE of optical flow[50]). The constancy-of-shape simply means that at some future time, t + Δt, the wave has moved some distance along, say, to x + Δx. Thus:

ψ(x, t) = ψ(x + Δx, t + Δt).    (2)
Expanding the right hand side in a Taylor series, we have:

ψ(x + Δx, t + Δt) = ψ(x, t) + ψ_x Δx + ψ_t Δt + h.o.t.,    (3)

where h.o.t. denotes higher order terms. Putting the above two equations together, we have:

ψ_x Δx + ψ_t Δt + h.o.t. = 0.    (4)
If we neglect higher order terms, we have:

Δx/Δt = −ψ_t/ψ_x = c,    (5)

where the change in distance, Δx, divided by the change in time, Δt, is the speed, c, of the traveling wave.
In the case of a surveillance camera, or a microwave motion sensor (microwave
burglar alarm), c is the speed of
light. In the case of a microphone (or hydrophone), c is the speed of sound in
air (or water).
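Equation (5) can be checked numerically. The sketch below assumes a rightward-traveling sound wave (the 343 m/s speed and 1000 CPS tone are illustrative values, not taken from this document) and recovers c from finite-difference partials of ψ:

```python
import math

c = 343.0                   # speed of sound in air, m/s (illustrative)
w = 2.0 * math.pi * 1000.0  # a 1000 CPS tone
k = w / c                   # wavenumber

def psi(x, t):
    # rightward-traveling wave of constant shape
    return math.cos(w * t - k * x)

# central finite-difference partials at an arbitrary point
x0, t0, h = 0.123, 0.456, 1e-7
psi_x = (psi(x0 + h, t0) - psi(x0 - h, t0)) / (2.0 * h)
psi_t = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2.0 * h)

# eq. (5): the ratio of partials recovers the propagation speed
speed = -psi_t / psi_x
assert abs(speed - c) < 1e-3 * c
```

For a microwave motion sensor the same check holds with c set to the speed of light.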
More generally, waves may travel to the left, or to the right, so we have:

(∂/∂t ± c ∂/∂x) ψ = 0.    (6)
Multiplying these solutions together, we have:

(∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) ψ = 0,    (7)

which gives:

∂²ψ/∂t² = c² ∂²ψ/∂x².    (8)
This is the wave equation in one spatial dimension, as discovered by Jean-Baptiste le Rond d'Alembert in 1746, due to his fascination with stringed musical instruments such as the harpsichord[28], which Euler generalized to multiple dimensions:

∇²ψ = (1/c²) ∂²ψ/∂t²,    (9)
where ∇² is the Laplacian (Laplace operator, named after Pierre-Simon de Laplace, who applied it to studying gravitational potential, much like earlier work by Euler on velocity potentials of fluids [34]). This further generalizes to the Klein-Gordon generalization of the Schrödinger wave equation:

∇²ψ − (1/c²) ∂²ψ/∂t² = (mc/ħ)² ψ,    (10)

for a particle of mass m, where ħ = h/2π is the reduced Planck constant.
More generally, we can apply a wide range of wave theories, wave mechanics,
wave analysis, and other con-
temporary mathematical tools, to metawaves and veillance, and in particular,
to understanding veillance through
phenomenological augmented reality [84].
8.2. Broken timebase leads to spacebase
Waves in electrical systems are commonly viewed on a device called an
"oscillograph"[45, 59]. The word originates
from the Latin word "oscillare" which means "to swing" (oscillate), and the
Greek word "graph" which means
drawing or painting. A more modern word for such an apparatus is
"oscilloscope"[61, 39] from the Latin word
"scopium" which derives from the Greek word "skopion" which means "to look at
or view carefully" (as in the
English word "skeptic" or "skeptical"). The oscillograph or oscilloscope is a
device for displaying electric waves
such as periodic electrical alternating current signals.
In 1974 author S. Mann came into possession of an RCA Cathode Ray
Oscillograph, Type TMV-122, which
was, at the time, approximately 40 years old, and had a defective sweep
generator (timebase oscillator). Since it
had no timebase, the dot on the screen only moved up-and-down, not left-to-right, so it could not draw a graph of any electrical signal, until Mann decided to wave the oscillograph itself back and forth, left-to-right, to be able to see a two-dimensional graph. In certain situations, this proved to be a very useful way of viewing certain
be a very useful way of viewing certain
kinds of physical phenomena, when the phenomena could be associated with the
position of the oscilloscope. This
was done by mounting a sensor or effector to the oscilloscope. In one such
experiment, a microphone was mounted
to the oscilloscope while it was waved back and forth in front of a speaker,
or vice-versa. In another experiment,
an antenna was mounted to the oscilloscope while it was waved back and forth
toward and away from another
antenna. With the appropriate electrical circuit, something very interesting
happened: traveling electric waves
appeared to "sit still". The circuit, sketched out in Fig. 37, is very simple:
a simple superheterodyne receiver is
implemented by frequency mixing with the carrier wave, e.g. cos(ωt) of the
transmitter. In one embodiment the
frequency mixer comprises four diodes in a ring configuration, and two center-
tapped transformers, as is commonly
used in frequency mixers. When one of the two antennae (either one) is
attached to an oscilloscope with no sweep
(no timebase), while the other remains stationary, the oscilloscope traces out
the radio wave as a function of space
rather than of time.
If the transmitted wave is a pure unmodulated carrier, the situation is very
simple, and we can visualize the
carrier as if "sitting" still, i.e. as if we're moving at the speed of light,
in our coordinate frame of reference, and
the wave becomes a function of only space, not time. The wave begins as a
function of spacetime:
ψ(x, t) = cos(ωt − kx); wavenumber k = ω/c.    (11)
In this case the received signal, r (x , t) is given by:
Figure 37. Chalkboard sketch of a simple experiment: a transmitter Tx is shown at the left, transmitting a wave, cos(ωt − kx), in the spacetime continuum. A receiver, Rx, shown further to the right, picks up the wave and feeds it to one or more mixers. In this case, two are shown, one for the in-phase component, i.e. the real part of the signal, and the other for the quadrature component, i.e. the imaginary part of the signal. A local oscillator supplies cos(ωt) to the mixer for the real part, and sin(ωt) to the mixer for the imaginary part. Let us consider the real part: The result (by simple trigonometric identity) is shown as: cos(ωt − kx)cos(ωt) = ½(cos(2ωt − kx) + cos(kx)). This output from the mixer is then fed to a lowpass filter, LPF, from which emerges ½cos(kx). Transmitter Tx and receiver Rx may be antennae, or they may be transducers such as a speaker and microphone. For example, transmitter Tx may be a loudspeaker transmitting a periodic waveform, such as a tone at 5000 CPS (Cycles Per Second), with receiver Rx being a microphone. In other embodiments, transmitter and receiver Tx and Rx are hydrophones, for an underwater SWIM, in which case underwater sound waves, sonar, or the like, are visualized using a waterproof underwater SWIM wand.
cos(ωt − kx) cos(ωt) = ½ cos(2ωt − kx) + ½ cos(kx).    (12)
Half the received signal, r, comes out at about twice the carrier frequency,
and the other half comes out in the
neighbourhood of DC (near zero frequency). The signal we're interested in is
the one that is not a function of
time, i.e. the "sitting wave", which we can recover by lowpass filtering the
received signal to get:
s(x) = ½ cos(kx).    (13)
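The mixing and lowpass filtering of equations (12) and (13) can be simulated in a few lines; here the lowpass filter is idealized as a time average over whole carrier periods (the 100 CPS carrier and sound-speed values are illustrative):

```python
import math

w = 2.0 * math.pi * 100.0  # carrier frequency (illustrative)
c = 343.0                  # propagation speed (illustrative)
k = w / c                  # wavenumber

def sitting_wave(x, n_periods=200, steps_per_period=64):
    """Mix the received wave cos(wt - kx) with the local oscillator
    cos(wt), then lowpass by averaging over whole carrier periods."""
    T = 2.0 * math.pi / w
    n = n_periods * steps_per_period
    total = 0.0
    for i in range(n):
        t = i * T / steps_per_period
        total += math.cos(w * t - k * x) * math.cos(w * t)
    return total / n

# The 2w term averages away, leaving the sitting wave of eq. (13).
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(sitting_wave(x) - 0.5 * math.cos(k * x)) < 1e-6
```

The result is a function of space only: the wave "sits" still, exactly as traced out by the moving oscilloscope.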
This operation of multiplication by a wave function was performed at audio
frequencies using a General Radio
GR736A wave analyzer, and at other times, using a lock-in amplifier, and at
radio frequencies using four diodes in
a ring configuration, and two center-tapped transformers, as is commonly done,
and at other times using modified
superheterodyne radio receiving equipment.
A drawback of some of these methods is their inability to visualize more than
one frequency component of the
transmitted wave.
8.3. Metawaves
When the transmitter is stationary (whether it be an antenna, or a speaker, or
the like) and the receiver (e.g. a
receiving antenna, or a microphone) is attached to the oscilloscope, the
device merely makes visible the otherwise
invisible sound waves or radio waves. But when these two roles are reversed,
something very interesting happens:
the apparatus becomes a device that senses sensors, and makes visible their
sensory receptive fields. In the audio
case, this functions like a bug sweeper, in which a speaker is moved through
the space to sense microphones,
but unlike other bug sweepers the apparatus returns the actual underlying
veillance wavefunction, as a form of
augmented reality sensory field, and not just an indication that a bug is
present.
Now consider the case in which the transmitted signal is being modulated, or is otherwise a signal other than a pure wave cos(ωt). As an example, let's consider ψ(x, t) = cos(ωt − x) + cos(5(ωt − x)), so the received signal is:

r(x, t) = ½ cos(x) + ½ cos(x − 2ωt) + ½ cos(5x − 4ωt) + ½ cos(5x − 6ωt),    (14)
which when lowpass filtered, only gives us the fundamental. Thus a wave
analyzer or modern lock-in amplifier
such as Stanford Research Systems SR510 cannot be used to visualize such a
wave. A more traditional lock-in
amplifier, such as Princeton Applied Research PAR124A, will visualize
harmonics, but in the wrong proportion,
i.e. since the reference signal is a square wave, higher harmonics are under-
represented (note that the Fourier series
of a square wave falls off as 1/n, e.g. the fifth harmonic comes in at only 20
percent of its proper strength).
Thus existing lock-in amplifiers are not ideal for this kind of visualization
in general.
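The harmonic weighting described above is easy to verify numerically. In this sketch an ideal lock-in's DC output is modeled as the time average of the measured harmonic times the reference waveform; a sine reference rejects the fifth harmonic entirely, while a polarity-reversing (square-wave) reference passes it at only one fifth of its proper strength:

```python
import math

def lockin_dc(harmonic, reference, n=100000):
    """DC output of an ideal lock-in: the average over one period of
    cos(harmonic * theta) multiplied by the reference waveform."""
    total = 0.0
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        total += math.cos(harmonic * theta) * reference(theta)
    return total / n

sine = lambda th: math.cos(th)                          # modern LIA
square = lambda th: 1.0 if math.cos(th) >= 0 else -1.0  # polarity reversal

# A sine reference ignores the fifth harmonic entirely...
assert abs(lockin_dc(5, sine)) < 1e-6
# ...while a square-wave reference passes it at only 1/5 strength.
ratio = lockin_dc(5, square) / lockin_dc(1, square)
assert abs(ratio - 0.2) < 0.01
```

This is the 20-percent figure quoted above: the square wave's Fourier series falls off as 1/n, so the fifth harmonic of the measured signal is under-represented by a factor of five.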
8.4. A Lock-in amplifier designed for Metaveillance
The approach of Mann was therefore to invent a new kind of lock-in amplifier
specifically designed for augmented
reality visualizations of waves and metawaves.
Whereas a common ideal of lock-in amplifier design is the ability to ignore
harmonics, in our application we wish
to not only embrace harmonics, but to embrace them equally. If we were to turn
on our sensing of harmonics, one
at a time, we would be witnessing a buildup of the Fourier series of our
reference signal. For the square wave, each
harmonic we add to our reference signal, allows more and more of the measured
signal harmonics through, but
colored by the coefficients of the Fourier series representation of the square
wave. Figure 38 illustrates a comparison
of the the reference signal waveforms of the PAR124A lock-in amplifier with
the modified lock-in amplifier of the
Sequential Wave Imprinting Machine (SWIM)[69].
9. A new kind of Lock-in Amplifier
9.1. System architecture
The system architecture for SWIM displaying sound waves, radio waves, etc., is
illustrated in Figure 39, where
the SWIM COMP (SWIM computer) shown in Figure 39 is implemented by way of a
ladder of comparators as
illustrated in Fig. 40 for up to 10 light sources, and Fig. 41 beyond (e.g.
more typically on the order of 1000 light
sources).
An embodiment specific to radio waves is illustrated in Figure 42. An
alternative embodiment is illustrated in
Figure 43.
This system depicted in Figs. 34, 35, 36, 37, 38, 39, 40, 41, 42, and 43,
allows us to see with the naked eye, or on
film or video or capture on a sensor array, sound waveforms coming from
musical instruments, radio waves coming
from cellphones, motion sensors, and various other things that produce fields
such as electromagnetic radiation fields, soundfields, etc.
SWIM can also be used as a new kind of bug-sweeper, e.g. to see not just
fields, but to see also the capacity to
sense fields. In this way SWIM can sense field sensors, and sense their
capacity to sense. Figure 44 shows SWIM
as a bug sweeper or the like, doing metasensing. Other bug sweepers of the
prior art can find hidden microphones,
hidden cameras, hidden sensors, etc, but do not let us see their soundfields
or lightfields or otherwise show the
intricate nature of their sensory capacity. SWIM provides an augmented reality
overlay of the capacity of a sensor
to sense.
Another innovation of SWIM is the capacity to visualize not just a wave, but
an entire Fourier series of a wave,
i.e. to see and visualize the harmonic nature and structure of a wave. For
example, SWIM can trace out the
waveform of a trumpet playing a note such as A440, and then trace out the
waveform of a flute playing the same
note. These two waveforms will appear different due to their different
harmonic structure. The ability of SWIM to
do this is based on the invention of a new kind of lock-in amplifier. This new
kind of lock-in amplifier is illustrated
in Figure 45.
[Figure 38: four rows of paired plots. Left column: partial sums with 1, 2, 3, and 8 Fourier coefficients; right column: partial sums with 1, 2, 3, and 8 SWIM ref. coefficients. Horizontal axes: Space, 0 to 4.]
Figure 38. Left: A modern LIA (Lock In Amplifier) ignores all but the
fundamental. Older LIAs use polarity reversal and
are thus sensitive to increasing harmonics on a 1/n basis where n = 1,3,5,
.... This is why older LIAs often work better
with the SWIM (Sequential Wave Imprinting Machine)[69], as long as they're
modified to compensate for weaker higher
frequency components of the waveform being visualized. Right: Reference
waveforms of Mann's "Alethioscope" have equal
weightings of all harmonics. As we include more harmonics, instead of
approaching a square wave, we approach a pulse
train. Early SWIM used a pulse train as its reference signal. This made time
"sit still" (like a strobe light on a fan blade)
for a true and accurate AR (Augmented Reality) visualization without
distortion.
The embodiment shown here is functionally equivalent to the Mann-modified PAR124A. There are three INStrumentation AMPlifiers, indicated as "INS. AMP.". Each of these is an instrumentation op amp with selectable input impedance: 10^9 ohms to match the "Hi Z" on the PAR124A, which is 1 Gigohm input impedance, and a "Lo Z" setting suitable for use with a standard 600 ohm microphone (e.g. to be able to feel the capacity of a microphone to listen). The PAR124A input impedance was selected by switches and by replaceable modules. Four input impedances of the embodiment shown here are: 1 Gigohm; 1 Megohm; 600 ohms; and 50 ohms.
The reference input has one amplifier for a gain that is adjustable from ×1 to ×10,000 in a 1, 2, 5 sequence, with a calibrated vernier for gains in between. The signal input has two amplifiers, cascaded (on either side of an equalizer) for a gain that is adjustable from ×1 to ×100,000,000, almost matching the original 23-position rotary switch of the PAR124A, adjustable from nanovolts to millivolts. The AD8429 from Analog Devices, with a
[Figure 39: block diagram. A SIGNAL source with a REFERENCE (ACTOR) SENSOR; SIG. GEN., REF. AMP., and SIG. AMP. feed the LIA, whose outputs (θ, R, X) feed the SWIM COMP (Sequential Wave Imprinting Machine), which drives the SWIM LIGHTS of the SWIM WAND, waved along a MOVEMENT PATH to display the WAVEFORM.]
Figure 39. Mann's SWIM (Sequential Wave Imprinting Machine) for making radio
waves, sound waves, etc.,
visible by shearing the spacetime continuum at the exact angle that presents
the speed of light or speed
of sound at exactly zero: SWIM works with a wide variety of phenomena, to
provide a phenomenological augmented
reality. A signal source might comprise some kind of signal generator, which
produces signal directly, or the signal generator
might drive a signal actor such as an actuator (e.g. loudspeaker) or signal
transducer (light source) or signal conveyor like
an electrode or antenna. Such a signal source, such as a radio transmitter,
sound source such as a musical instrument, or
other wave source that emits a periodic waveform or periodic disturbance of
some kind, sends waves outwards at a speed
of wave propagation, c, which is the speed of light in the case of radio
waves, or the speed of sound in the case of sound
waves (e.g. speed of sound in saltwater if we're visualizing underwater sound
waves from a hydraulophone in the ocean,
or speed of sound in air if we're visualizing the sound waves from a trumpet or
clarinet). A reference sensor or reference
signal is derived from the sound source. If the sound source is a radio
transmitter or loudspeaker, we might connect to it
directly. But if we don't have access to it (e.g. if it is a surveillance
system that is sealed against our access, or if it is an
instrument that doesn't have an electrical connection), we simply place a
reference sensor near it. The SWIM apparatus
typically or often uses two inputs, one being a reference input from a
stationary reference sensor, and the other being a signal
input from a moving signal sensor. The signal sensor is moved back and forth
together with a linear array of light sources.
For example, the signal sensor may be a microphone or an antenna attached to a
linear array of lights. The reference
sensor supplies the reference input of a LIA (Lock-In Amplifier or Lock-In
Analyzer). In some embodiments, the individual
sensors may also have associated or built-in amplifiers. The LIA has one or
more outputs such as "X" representing the real
part of a demodulated homodyne wave, "Y" representing the imaginary part (i.e.
in "quadrature" with "X") of the wave,
"R" representing √(X² + Y²), and θ representing arctan(Y/X). One or more of
these outputs is connected to a SWIM comp
(computer) or system that drives a sequence of lights such as a long row of
LEDs (Light Emitting Diodes). A typical number
of such LEDs is 1000 or 2000, driven sequentially in proportion to the voltage
at "X" or the like. A simple embodiment of
the SWIM comp is a ladder network of comparators, such as one or more LM3914
chips set to "dot" mode. Typically the
bottom LED comes on when X = 0 volts, and the top LED comes on when X = 5
volts, and for 1000 LEDs we have 5
millivolts per next LED up the ladder. For each 1000 LEDs, 100 LM3914 chips
are used. Preferably, though, SWIM comp is
FPGA-based or ASIC based or a general-purpose computer that can also generate
axis labels, alphanumerics, and the like,
so that a WAVEFORM can be plotted together with tick-marks, and indicia
overlaid thereupon. The row of lights is waved
back and forth along a MOVEMENT PATH, either by hand, or by robot, so that
people can see the WAVEFORM with
the naked eye, or photograph it with a camera, onto film or a computer image,
or capture it into a VR (Virtual Reality)
world for viewing and exploring therein.
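A minimal sketch of the dot-mode SWIM comp mapping described in the caption (assuming the 0 to 5 volt full scale and 1000-LED figures given there; a real LM3914 ladder does this with comparators and a resistor divider):

```python
def swim_comp(x_volts, n_leds=1000, full_scale=5.0):
    """Dot-mode ladder: return the index of the single lit LED for a
    given lock-in output voltage.  With 1000 LEDs over 0..5 volts,
    each LED corresponds to a 5 millivolt step."""
    if x_volts <= 0.0:
        return 0                 # bottom LED
    if x_volts >= full_scale:
        return n_leds - 1        # top LED
    return int(x_volts / (full_scale / n_leds))

assert swim_comp(0.0) == 0       # bottom LED at 0 volts
assert swim_comp(5.0) == 999     # top LED at full scale
assert swim_comp(0.005) == 1     # one 5 mV step up the ladder
```

As the wand is waved, the single lit LED traces the waveform out in space, exactly as a moving oscilloscope dot would.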
Figure 40. In one embodiment of the SWIM, the output of a lock-in amplifier is
supplied to an LM3914 chip connected to
LEDs which are attached to the receiver, and waved back and forth as a unit.
In one embodiment, a baseband output
of a frequency mixer in a homodyne Doppler radar set is connected to the LED
bargraph formed using the LM3914 chip.
Figure 41. More typically the output of the mixer, homodyne receiver, lock-in amplifier, or the like, is supplied to a ladder network of LM3914 chips connected to hundreds of LEDs that are attached to the receiver, and waved back and forth as a unit.
Typically the SWIM wand is modular, each module comprising ten LM3914 chips,
each chip driving 10 LEDs for a total
of 100 LEDs per module. The modules are joined together with connectors on
each end, so that, for example, 10 modules
joined together provide 1000 LEDs, giving an augmented reality display of 1000
by infinity pixels.
noise level of 1 nV/√Hz, was found to perform quite well for this purpose. Each amplifier has rudimentary equalization: a switchable gentle high-cut filter, by way of a 3-position switch: wide-open (to 200k CPS), 50k CPS, and 5k CPS.
9.2. NARLIA Circuit Components
Again, with reference to Figure 45, the EQUALIZER helps to clean up the signal
by allowing the user to filter
out any 60 CPS or 120 CPS powerline hum or buzz, as well as introduce further
low-cut and high-cut filters.
The PURIFIER helps with the generation of a reference signal from the
reference input ("REF. INPUT."). The
reference input often comes from a noisy signal. The PURIFIER is not merely a
lowpass filter, but, rather, it
determines the pitch period of the input, and provides a pure sine wave having
the same frequency and strength
(e.g. amplitude) and relative phase as the input.
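A minimal sketch of the PURIFIER's behaviour follows. The pitch-measurement method used here (spacing of positive-going zero crossings) is an assumption for illustration; the document describes what the PURIFIER does, not how, and a real implementation must also be robust to noise:

```python
import math

def purify(samples, rate):
    """Measure the pitch period from positive-going zero crossings and
    return the frequency and amplitude for a clean replacement sine.
    (Illustrative sketch; not the PURIFIER's actual circuit.)"""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    spans = [b - a for a, b in zip(crossings, crossings[1:])]
    period_s = sum(spans) / len(spans) / rate
    # amplitude of the equivalent pure sine, from the RMS of the input
    amp = math.sqrt(2.0 * sum(s * s for s in samples) / len(samples))
    return 1.0 / period_s, amp

rate = 8000
tone = [math.sin(2.0 * math.pi * 100.0 * i / rate) for i in range(800)]
freq, amp = purify(tone, rate)
assert abs(freq - 100.0) < 1.0
assert abs(amp - 1.0) < 0.05
```

The returned frequency and amplitude are then used to synthesize the pure sine that replaces the noisy reference input.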
The REFERATOR (reference waveform generator) generates a reference waveform in
which all of the harmonics
are exactly equal in strength. It has an 11-position rotary switch on the
front of it, and when the switch is set to
1, it operates in the identity mode, i.e. its input is the same as its output.
When the switch is set to 2, its output
is the superposition of two sine waves, or cosine waves, one that is at twice
the frequency of the other. When it is
set to a number, it outputs a sum of sine waves or cosine waves at each
frequency. When it is set to INF. (infinity),
what we get is just a stream of short pulses, one pulse for each period.
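The REFERATOR's equal-strength harmonic sum can be sketched directly; as the harmonic count grows, the waveform concentrates into one narrow pulse per period rather than approaching a square wave:

```python
import math

def referator(theta, n_harmonics):
    """Sum of cosines with every harmonic at exactly equal strength,
    as selected by the REFERATOR's rotary switch."""
    return sum(math.cos(m * theta) for m in range(1, n_harmonics + 1))

# Setting 1 is the identity (a single cosine).  As harmonics are
# added, the energy piles up at theta = 0 (one pulse per period).
peak = referator(0.0, 20)          # on the pulse
off_peak = referator(math.pi, 20)  # half a period away
assert abs(peak - 20.0) < 1e-9
assert abs(off_peak) < 1.0
```

In the INF. (infinity) setting this limit is the pulse train used as the reference signal, which is what lets all harmonics of the measured wave through equally.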
The REFERATOR must produce two outputs, one which is the Hilbert transform of
the other. This means,
of course, that one output is a sum of sines, and the other output is a sum of
cosines. By convention, let us say
that the first one, call it g(t), is for the cosines and is the one that is
fed to the upper mixer, and the second one,
call it h(t), is for the sum of sines and is the one fed to the lower mixer.
The first one is buffered output "REF.
MON. I." and the second one is buffered output "REF. MON Q.". Some of these
connections are shown in blue,
but for simplicity in the diagram, not all are shown. A reference signal is
derived or generated, and ideally not
Figure 42. SWIM used to visualize radio waves from a transmitter Tx emitting a radio wave of the form cos(ωt − kx), at frequency ω and wavenumber k, as a function of space x and time t. A receiver Rx feeds a mixer such as a four-diode ring modulator, which is also supplied by a local oscillator L.O. that generates a good strong signal cos(ωt), strong enough to keep the diodes conducting even in the presence of a weak signal at Rx. The output of the mixer is fed to the linear array of light sources in the SWIM, which includes a lowpass filter to filter out the high sum frequency, and admit only the low difference frequency cos(−kx) = cos(kx). This difference frequency is a function only of space, not time.
merely a pure sine wave, but, rather, a signal that contains all the
frequencies of interest, up to a certain number
of harmonics. In the mode of receiving and displaying radio waves, the
reference signal is sensed and derived from
a carrier frequency of interest, through a PLL (phase-locked loop). In the
metaveillance mode, the reference signal
is generated rather than derived. A generated reference signal is transmitted,
and a highly sensitive scanner is
used to detect minute changes that are due to that reference signal. The most
sensitive apparatus for doing so is
a lock-in amplifier. Such amplifiers can typically amplify up to a billion
times gain. The PAR124A amplifier has
a full-scale range from 1 volt down to 1 nanovolt, and with a further
transimpedance stage can sense down to the
femtoamp range.
9.3. Historical context
This work is based on work by S. Mann, referred to as Phenomenological
Augmented Reality [84], and the
NARLIA (Naturally Augmented Reality through a modified Lock-In Amplifier)
project, conducted in the 1970s,
based on a specially modified PAR124A/126 LIA (Lock In Amplifier). The
standard PAR124A amplifier was
originally manufactured by Princeton Applied Research (PAR) in the early
1960s.
The PAR124A is widely regarded as the best lock-in amplifier ever made, and
others have attempted (with
various degrees of success) to imitate it.
"Since the invention of the lock-in amplifier, none has been more revered and
trusted than the PAR124A
by Princeton Applied Research. With over 50 years of experience, Signal
Recovery (formerly Princeton
Applied Research) introduces the 7124. The only lock-in amplifier that has an
all analog front end
separated, via fiber, from the DSP main unit." [SignalRecovery.com main page
of website, accessed
June 2016]
Signal Recovery states that the PAR124A from more than 50 years ago is still
the most revered and trusted
Figure 43. Doppler-shift reflection bounce embodiment: Here the transmitter Tx
and receiver Rx are both stationary.
The SWIM itself (including the wearer's body in a wearable embodiment of the invention) acts as an antenna of sorts, in
the sense that waves from the transmitter Tx are emitted in various
directions, including directions toward the SWIM wand
of lights. Those waves bounce off the SWIM wand and scatter in various
directions, including directions back toward the
receiver Rx. Some of the transmitted wave from the transmitter (in this
example transmitting at 10.525 gigahertz), is used
as the reference signal in a form of homodyne detector, lock-in amplifier,
analyzer, mixer, or the like, so as to produce a
baseband signal that is a function of space rather than time. In this case the
spatial frequency is twice that of the setup
depicted in Fig. 42.
amplifier, and they go on to claim that their new 7124 product matches the
performance of the 124A. The 7124
presently sells for approximately $13,000 US.
In a paper published just last year, a comparison was made between the older
and newer amplifier, which found
that the older amplifier performed better. The paper begins:
"The noise of photoconductive detector is so weak that the PAR 124A lock-
amplifier is main test facility
despite of discontinuation by long-gone manufacturer for decades. The paper
uses 124A and 7124 lock-in
amplifier system to test noise and response signal of several photoconductive
detectors..." [139]
Whereas modern lock-in amplifiers (including the SR124 and SR7124) work by
sine wave multiplication, the
PAR124A worked by rapidly reversing the polarity of the input signal. This is
equivalent to multiplying the
input signal by a square wave. This allows odd harmonics through, thus
creating a kind of comb-filter in the
frequency domain. The PAR124A has a symmetry adjustment to make the square
wave perfectly symmetrical. In
the Mann modification to the PAR124A, the symmetry is deliberately offset to
one extreme or the other, so that
the square wave is highly asymmetrical, thus allowing even harmonics to come
through as well. This modification
unfortunately also creates a strong DC offset that must be corrected
immediately after the mixer stage.
Once performed, the result is a lock-in amplifier in which the reference
signal is essentially a stream of short
pulses, thus allowing all harmonics (even and odd) of a signal to be
represented in the output, which is connected
to a linear array of light sources, or a haptic actuator (or both) that is/are
waved back and forth. Each light
source is connected to a comparator responsive to the output of the specially
modified PAR124A amplifier, and
the comparators of all the light sources are fed by a ladder network of
voltage dividers so that each light represents
[Figure 44: block diagram. The SWIM WAND carries a REFERENCE ACTOR and SWIM LIGHTS displaying the SWIM WAVEFORM as the wand is waved along a MOVEMENT PATH near the BUG; a SIGNAL SENSOR and SIG. AMP. feed the signal input, and the REF. GEN. feeds the reference input, of the LIA, whose output drives the SWIM COMP.: SWIM as bug SWEEPER.]
Figure 44. Metaveillance: SWIM as a bug sweeper or other device to sense
sensing, i.e. to sense sensors and
sense their capacity to sense. In this case, the reference signal comes from
something attached to the SWIM wand, such
as a loudspeaker or a transmitter. This "Reference actor" can be an actuator
(e.g. speaker), or a transducer that has no
moving parts, or a direct conveyance like an antenna that simply conveys
electricity directly without transduction. A SWIM
wand comprises the reference actor together with the SWIM lights, typically
a 2 dimensional or 1 dimensional array of
lights such as LEDs. The SWIM wand may also bear various other forms of user-
interface such as keyer, tactile actuator,
etc., so the user can feel veillance waves pressing against the hand, for
example, while approaching a hidden microphone.
The reference actor is either self-oscillating, or is driven by a reference
generator. The reference generator (or a signal from
the self-oscillating reference actor) is connected to the reference input of
the LIA. The signal input from the LIA comes
from the bug itself, or from a stationary signal sensor placed near the bug's
suspected vicinity, its location being refined as
we learn more about where the bug might be. In some situations we have access
to the bug, e.g. we've found it, or it might
be of our own doing (e.g. simply our own microphone we wish to visualize). In
other cases we might not want to touch
it, e.g. if it needs to be dusted for fingerprints and we don't wish to
disturb it. In either case, the bug itself or the signal
sensor is connected to the signal input of the LIA.
a consecutive voltage range of the output. The phase-coherent detection
includes an in-phase ("real") output and
a quadrature ("imaginary") output, one of which drives the comparators, and
both of which control the overall
brightness of the lights for best visualization of the phenomenon being
studied, i.e. the light brightness varies in
proportion to the magnitude of an electromagnetic radio wave. In this way, the
lights get brighter when the signal
gets stronger. There is a control for this effect, i.e. when set to zero the
lights stay at full brightness always, and
when set to maximum the lights vary from full to zero.
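The ladder-and-comparator mapping just described can be sketched in software. The following Python sketch is illustrative only: the array size, voltage range, and the name and behavior of the depth control are assumptions, not the actual circuit values of the modified PAR124A.

```python
import math

def swim_led_states(i_out, q_out, n_leds=32, v_min=-5.0, v_max=5.0, depth=1.0):
    """Map lock-in outputs to a linear LED array.

    The in-phase output i_out drives the comparators: the ladder network
    divides [v_min, v_max] into consecutive voltage ranges, and the LED
    whose range contains i_out is selected. The magnitude sqrt(I^2 + Q^2)
    scales overall brightness; depth=0 leaves the lights at full
    brightness always, depth=1 lets them swing from full down to zero.
    """
    step = (v_max - v_min) / n_leds          # one comparator threshold per LED
    index = int((i_out - v_min) // step)
    index = max(0, min(n_leds - 1, index))   # clamp to the array
    magnitude = math.sqrt(i_out**2 + q_out**2)
    brightness = (1.0 - depth) + depth * min(magnitude / v_max, 1.0)
    return [brightness if k == index else 0.0 for k in range(n_leds)]
```

With the depth control at zero the selected light stays at full brightness; at maximum, a stronger signal makes the selected light brighter, as described above.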
9.4. A simple illustrative example using sound waves
The PAR124A (including the custom modifications) covers a frequency range up
to 200,000 cycles per second,
which works well for grasping, touching, holding, and exploring sound waves
(See Figure 46). For radio waves,

[Figure 45 block diagram: NARLIA (Naturally Augmented Reality Lock-In Amplifier). The signal input passes through a parametric equalizer (switch-selectable A, B, or A-B; amplification 1-10,000 in a 1,2,5 sequence; selectable DC or AC, slow or fast) into the lock-in stages (X and Y low-pass filters, postamplifier with gain and bias, buffered outputs for X, Y, R, and phase angle, plus signal and reference monitors and an overdrive indicator). The reference input feeds a pitch/period detector and a "purifier" that generates a pure sinewave with period and amplitude matching the input, followed by a "referator" that generates a reference waveform with the selected harmonics (11-position harmonic selector: even, odd, or both, up to infinity; g(t) and its Hilbert transform h(t) are available). Analog voltages from 0 to 5 volts, derived from X, Y, and R, drive the S.W.I.M. (a linear array of LEDs and a T-SWIM tactile actuator), affect lamp brightness, and drive the DC servomotor inputs. An RF adapter (frequency mixer with a 0-25 GHz RF signal generator) brings radio-frequency inputs down to the NARLIA input range. See "MODEL 124A FUNCTIONAL BLOCK DIAGRAM", Fig. 3-1 on page 12 of the PAR124A instruction manual, Princeton Applied Research Corporation, 1970.]
Figure 45. A functional equivalent to the NARLIA (Naturally Augmented Reality
through a modified Lock-In Amplifier)
based on Mann's modifications to the PAR124A lock-in amplifier.
which go beyond this frequency, a radio frequency signal generator and
frequency mixer are used to bring the radio
signals down into the audio range.
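The mixing step can be sketched numerically. This Python sketch uses scaled-down stand-in frequencies (not the actual hardware values) to show the product-to-sum identity at work: multiplying the radio signal by a local oscillator yields a difference frequency down in the audio range.

```python
import math

# Product-to-sum at work: cos(a)cos(b) = (1/2)[cos(a-b) + cos(a+b)].
# Frequencies are scaled-down stand-ins, not actual hardware values.
fs = 200_000                       # sample rate
n = 2_000                          # 10 ms of samples
f_rf, f_lo = 30_000, 29_000        # "radio" signal and local oscillator
mixed = [math.cos(2 * math.pi * f_rf * k / fs) *
         math.cos(2 * math.pi * f_lo * k / fs) for k in range(n)]

def amplitude_at(signal, f, fs):
    """Amplitude of the component at frequency f (single-bin DFT)."""
    m = len(signal)
    c = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(c, q) / m

# The difference component at 1000 cycles/sec (audio range) has amplitude
# ~0.5; nothing remains at the original 30,000 cycles/sec. A low-pass
# filter then discards the 59,000 cycles/sec sum component.
```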
Figure 46 shows a simple teaching example with NARLIA, in which the speed of
sound can be calculated. Here
the goal is to make tangible a microphone's capacity to hear. An excitation
source (in this case, at a frequency of
5000 cycles per second) is generated by a speaker that travels together with
the SWIM or T-SWIM. This functions
as a tangible embodied "bug sweeper", in which the results of the bug sweeper
are experienced and spatialized
(perfect spatial alignment of the virtual overlay with reality) in real time.
The speed of sound has been canceled,
in effect, by the heterodyne nature of the lock-in amplifier, resulting in a
"frozen" wave that appears to sit still
rather than travel. We call this a "sitting wave" [81] (distinct from the
concept of a standing wave). This example

[Figure 46 annotations: There are 21 cycles of this sound wave over its 1.5 metre distance of travel. This moving speaker emits a 5000 CPS (Cycles Per Second) tone, which this microphone hears. Each cycle is 150cm/21 = 7cm long. A row of green lights moves with the speaker and displays the output of the lock-in amplifier fed from the microphone. Measured speed of sound = 0.07 metres/cycle * 5000 cycles/second = 350 m/s. (True value at 27 deg. C is 347 m/s.)]
Figure 46. Sensing sensors and their capacity to sense: SWIM (Sequential Wave
Imprinting Machine) used to show the
capacity of a microphone (hand-held here) to hear, by way of a speaker affixed
to a linear array of lights that waves back and
forth on a robotic arm. The waveform is visible by persistence-of-exposure,
either by the human eye, or by photographic
exposure. Here we can see and determine the speed of sound, since we have a
"sitting wave" [81], as distinct from the concept
of a standing wave. Since the wave is spatialized at a true and accurate
physical scale, we can simply count the cycles, and
divide the total distance by this number, then multiply by the frequency
(cycles per second) to compute the speed of sound.
The wave appears "frozen" in spacetime, as if the speed of sound were zero, so
we can see it.
shows simply a single frequency component of the wave, which is all that a
conventional lock-in amplifier can do.
But with NARLIA we can touch and hold and experience waves of various shapes
and compositions.
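The arithmetic behind this measurement can be stated as a short sketch (Python here for illustration; the function name is ours):

```python
def speed_from_sitting_wave(total_distance_m, cycle_count, freq_hz):
    """Count the cycles imprinted over the sweep distance, divide the
    distance by that count to get the wavelength, then multiply by the
    tone frequency to recover the propagation speed."""
    wavelength_m = total_distance_m / cycle_count
    return wavelength_m * freq_hz

# Figure 46: 21 cycles over 1.5 m of travel at 5000 cycles per second.
v = speed_from_sitting_wave(1.5, 21, 5000)
# About 357 m/s; the figure rounds the wavelength to 7 cm, giving
# 0.07 * 5000 = 350 m/s.
```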
9.5. From timebase to spacebase
A more recent reproduction of this early experiment is illustrated in Figure 47, with an oscilloscope-based implementation. An LED implementation is shown in Fig. 48. An Android-based version was also created.
Fig. 48 shows the multicomponent nature of this embodiment of the invention,
where we see the fundamental
together with the third harmonic of the waveform.
The specialized augmented-reality lock-in amplifier aspect of the invention
can be made as an analog amplifier
using a stream of short pulses as the reference or it can be made as a digital
augmented reality lock-in amplifier or

Figure 47. We're often being watched by motion sensors like the microwave
sensor of Fig. 47. Left: When we try to look at
the received baseband signal from the sensor, as a function of time
(artificially spatialized), it is difficult to understand and
has little meaning other than a jumble of lines on the screen. Center: When we
shut off the timebase of the oscilloscope,
and wave it back and forth, we see the very same waveform but displayed
naturally as a function of space rather than time.
Right: Stephanie, age 9, builds a robot to move SWIM back and forth in front of the sensor. As a function of space the
displayed overlay is now in perfect alignment with the reality that generated
it. This alignment makes physical phenomena
like electromagnetic fields more comprehensible and easy to see and
understand.
Figure 48. The SWIM's multicomponent/arbitrary-waveform lock-in amplifier. Square wave visualized using the multicomponent reference signal cos(ωt) + cos(3ωt), making visible the first two terms of its Fourier series expansion, resulting in a phenomenological augmented reality display of cos(ωt) + (1/3)cos(3ωt) on 600 LEDs in a linear array rapidly swept back and forth on the railcar of an optical table. This suggests expanding the principles of compressed sensing [16, 33] to metaveillance.
Inset image: use beyond veillance, e.g. atlas of musical instruments (trumpet
pictured) and their waveforms. Amplifiers in
picture: SR865 (drifty with screen malfunction); SR510; and PAR124, not used
at this time.
as part of an all-encompassing augmented reality wearable computer system, in which the reference is constructed mathematically from samples of sine functions, cosine functions, exp(2πf√−1 t) functions, etc.
In the case of an analog amplifier system, we can construct the reference from
a series of signal generators at
different frequencies. A suitable reference generator for demonstration
purposes is a Pasco Fourier Synthesizer.
When used as the reference input, we have the ability to adjust the spectral
properties (phases and magnitudes),
i.e. the relative sensitivity to each component of an input signal.
Suppose for example the input signal is a square wave. We can now adjust which
components of the square

Figure 49. Phenomenological augmented reality visualization of a waveform with only the 8th and 9th harmonics present, thus displaying a beat frequency phenomenon.
wave we wish to visualize. This is useful for teaching purposes, and forms
part of the augmented reality classroom
teaching experience.
Students are taught using phenomenological augmented reality which makes
fundamentals of physics exciting.
For example, we can also switch off the fundamental and see, for example, the
8th and 9th harmonic of a square
wave, as shown in Fig. 49.
In this example, a square wave signal generator was used as the input to the special analog augmented reality teaching lock-in amplifier, to show students, in a very hands-on way, the concept of beat frequencies. With the Pasco Fourier Synthesizer, various frequency components are adjusted with various amplitudes and phases, to teach these fundamental principles writ large across the front of a large room. This can be seen and appreciated by large audiences, as well as large groups of students.
In a digital implementation, we can create a similar effect.
Now normally a cosine wave is expressed in the form cos(2πωt − θ), where θ is the phase shift.
If we slowly vary the phase, the wave will "crawl" rather than sit still. This is useful to simulate slowed-down waves, where the spacetime continuum is sheared slightly off the angle at which the propagatory speed is exactly zero. In such coordinates the speed of sound or speed of light, or the like, is drastically reduced but not exactly zero. This allows teaching principles of physics using augmented reality, in a world where the speed of light or sound is really low, thus making visible the otherwise invisible and exciting world of physical phenomenology.
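A minimal sketch of this phase crawl, in Python (the sweep length and cycle count are illustrative values, not those of any particular apparatus):

```python
import math

# The "crawling wave": one SWIM sweep rendered over space, with a phase
# theta that advances slowly between sweeps so the frozen wave drifts
# instead of sitting still.
def frame(theta, n=100, cycles=4.0):
    """Wave amplitude at n points along one sweep of the light array."""
    return [math.cos(2 * math.pi * cycles * x / n - theta) for x in range(n)]

still = frame(0.0)    # theta fixed: the wave sits still sweep after sweep
later = frame(0.5)    # theta advanced 0.5 rad: the same wave, shifted in
                      # space, i.e. a slow crawl at a drastically reduced
                      # apparent propagation speed
```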
However, in this form, the waves change shape as they move through space, as
shown in Fig. 50.
This test was made using the following test procedure, which I developed to test, demonstrate, and teach the way a lock-in amplifier can be simulated to operate within an augmented reality environment: use two signal generators, one to connect to the signal input and another to connect to the reference input, in the following steps:
1. set a signal generator to a specific frequency such as f=440 cps, and
connect the signal generator to the

Figure 50. A digital array of lock-in amplifiers was implemented by way of
FPGA-based architecture. Here is the result of
a square wave of varying phase passing through the lock-in amplifier. The
phase angles (left-to-right) were 0, 45, 90, 135,
and 180 deg. Notice how the wave only looks good for 0 or 180 deg.
reference input;
2. set a signal generator to a slightly different frequency, such as f=439
cps, and connect it to the signal input;
3. observe the output on a CRO (cathode ray oscilloscope) or the like.
Alternatively, if the lock-in amplifier has an internal oscillator, set it to "internal" and dial in a frequency that is slightly different from the frequency of the input signal. In this way, the amplifier will operate in an unusual way (not the way it is normally designed to operate, i.e. it will indicate an "unlock" warning), but will trace an output similar to a SWIM effect.
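The two-generator procedure can also be simulated numerically. This Python sketch illustrates the principle, with an arbitrary sample rate and a crude moving-average low-pass standing in for the lock-in's output filter:

```python
import math

# Two-generator test: reference at 440 cps, signal at 439 cps. The
# multiplier output, low-passed, retains the 1 cps difference frequency,
# tracing the slowly drifting SWIM-like pattern on the CRO.
fs = 44100
n = fs                             # one second of samples
ref = [math.cos(2 * math.pi * 440 * k / fs) for k in range(n)]
sig = [math.cos(2 * math.pi * 439 * k / fs) for k in range(n)]
product = [r * s for r, s in zip(ref, sig)]

# Moving-average low-pass over ~10 ms windows: keeps the 1 cps difference
# term (amplitude 1/2) and suppresses the 879 cps sum term.
w = 441
lp = [sum(product[k:k + w]) / w for k in range(0, n - w, w)]
# lp traces roughly 0.5*cos(2*pi*t): one slow cycle over the second.
```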
As can be seen from the CRO photographs in Fig. 50, with a square wave input, the reference frequency components don't move together in unison, and thus the shape of the wave is distorted.
To solve this problem, I propose something different from merely an array of lock-in amplifiers. I propose a special array of lock-in amplifiers in which there are references of ever increasing frequency whose phase also moves faster, i.e. references of the form cos(2πkf(t − φ)), k = 1, 2, 3, ..., so that the higher frequency components move along faster in phase to keep up with the larger number of cycles in these higher components. In this way the wave moves along together.
In other words, the lock-in amplifier has linear phase and functions as an LTI
(Linear Time Invariant) device
in which all frequency components of the amplifier are delayed equally by a
fixed constant amount.
Here is a GNU Octave ("a programming language for scientific computing")
script that creates a square wave
signal input and carries out the steps of generating an envelope to simulate
decaying strength as we move one
transducer further from another transducer (e.g. with one being a microphone
and the other being a speaker, or
other similar arrangement), and generating a reference comprised of cosines
that move together with the signal, so
as to be linear-time-invariant (LTI).
%swimulator_cosines.m
format compact
o=pi/180; %deg.
%p=90*o/2/pi; %p=1/4; %phase
p=0; %phase is zero
t=linspace(0,100,100000);
tm=linspace(0,98,100000); %moving frame of reference
distance=linspace(1,2,length(t)); % distance from sound source, or the like.
env=1./distance.^2; % envelope (decays further from source)
%reference signal components: cosines tell the truth
r1=cos(2*pi*1*(t-p));
r2=cos(2*pi*2*(t-p));
r3=cos(2*pi*3*(t-p));
r4=cos(2*pi*4*(t-p));
r5=cos(2*pi*5*(t-p));
r6=cos(2*pi*6*(t-p));
r7=cos(2*pi*7*(t-p));
r8=cos(2*pi*8*(t-p));

r9=cos(2*pi*9*(t-p));
s1=cos(2*pi*1*tm);
s=sign(s1); % signal input is a square wave
%s=cumsum(s); % signal input is a triangle wave
s=s.*env; % include inverse square law falloff amplitude envelope
r=r1; % fundamental of reference
subplot(611); plot(r); title("Fundamental of reference input");
subplot(612); plot(s); title("Signal input");
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(613); plot(pf);
title("SWIM with first harmonic only");
r=r1+r3;
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(614); plot(pf);
title("SWIM with first 2 odd harmonics");
r=r1+r3+r5;
%r=r1+r2+r3+r4+r5; % same result if evens are included
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(615); plot(pf);
title("SWIM with first 3 odd harmonics");
r=r1+r2+r3+r4+r5+r6+r7+r8+r9;
%r=sign(r1);
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(616); plot(pf);
title("SWIM with harmonics 1 to 9");
%title("SWIM with harmonics 1 to infinity");
The result of this Octave script is shown in Fig. 51.
This result works for any arbitrary phase of input signal. For example, let's
consider the case where the signal
is phase-shifted 90 degrees.
To do this, change:

[Figure 51 plots, six panels each spanning 0 to 100000 samples: Fundamental of reference input; Signal input; SWIM with first harmonic only; SWIM with first 2 odd harmonics; SWIM with first 3 odd harmonics; SWIM with harmonics 1 to 9.]
Figure 51. Multi-component lock-in amplifier with various frequency components
selectable to teach the principle of
operation of the augmented reality lock-in amplifier array.
s1=cos(2*pi*1*tm);
to
s1=sin(2*pi*1*tm);
in the above Octave script. The result appears in Fig. 52.
Now if we use sines instead of cosines, as below:
format compact
o=pi/180; %deg.
p=0;
t=linspace(0,100,100000);
tm=linspace(0,98,100000); %moving frame of reference
distance=linspace(1,2,length(t)); % distance from sound source, or the like.
env=1./distance.^2; % envelope (decays further from source)

[Figure 52 plots, six panels each spanning 0 to 100000 samples: Fundamental of reference input; Signal input; SWIM with first harmonic only; SWIM with first 2 odd harmonics; SWIM with first 3 odd harmonics; SWIM with harmonics 1 to 9.]
Figure 52. Multi-component lock-in amplifier with phase-shifted signal input.
%reference signal components: sines tell lies
r1=sin(2*pi*1*(t-p));
r2=sin(2*pi*2*(t-p));
r3=sin(2*pi*3*(t-p));
r4=sin(2*pi*4*(t-p));
r5=sin(2*pi*5*(t-p));
r6=sin(2*pi*6*(t-p));
r7=sin(2*pi*7*(t-p));
r8=sin(2*pi*8*(t-p));
r9=sin(2*pi*9*(t-p));
s1=cos(2*pi*1*tm);
s=sign(s1); % signal input is a square wave
s=s.*env; % include inverse square law falloff amplitude envelope

r=r1; % fundamental of reference
subplot(611); plot(r); title("Fundamental of reference input");
subplot(612); plot(s); title("Signal input");
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(613); plot(pf);
title("SWIM with first harmonic only");
r=r1+r3;
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(614); plot(pf);
title("SWIM with first 2 odd harmonics");
r=r1+r3+r5;
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(615); plot(pf);
title("SWIM with first 3 odd harmonics");
r=r1+r2+r3+r4+r5+r6+r7+r8+r9;
p=r.*s; %product
P=fft(p);
P(2+10:length(P)-10)=0;
pf=ifft(P);
pf=real(pf);
subplot(616); plot(pf);
title("SWIM with harmonics 1 to 9");
we obtain a result that distorts the shape of the waveform.
See Fig. 53.
Thus I propose a reference signal comprised of the sum-of-cosines in the previous Octave script for the real part, and the same sum-of-cosines shifted 90 degrees with p=1/4 for the imaginary part, thus completing the design of a complex-valued lock-in amplifier that can wonderfully capture and display augmented reality waveforms.
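A minimal numerical sketch of such a complex-valued lock-in follows (Python for illustration). Here the 90-degree shift is applied per component, i.e. each cosine is paired with its own quadrature counterpart, which is one reading of the p=1/4 proposal; the equivalent complex exponential is used for compactness.

```python
import math
import cmath

# Complex-valued lock-in sketch: for each harmonic k, demodulate with the
# complex reference exp(-j*2*pi*k*t), whose real part is the cosine and
# whose imaginary part is its 90-degree-shifted (quadrature) counterpart.
# The magnitude of each complex output is independent of the input phase.
def complex_lockin(signal, fs, harmonics=(1, 3, 5)):
    n = len(signal)
    amps = []
    for k in harmonics:
        ref = [cmath.exp(-2j * math.pi * k * i / fs) for i in range(n)]
        amps.append(2 * sum(s * r for s, r in zip(signal, ref)) / n)
    return amps  # one complex amplitude per harmonic

fs, n = 1000, 1000
# Square wave with an arbitrary 0.7 rad phase shift:
square = [math.copysign(1.0, math.sin(2 * math.pi * i / fs + 0.7))
          for i in range(n)]
amps = complex_lockin(square, fs)
# The magnitudes recover the Fourier coefficients 4/(k*pi) for k = 1, 3, 5,
# regardless of the 0.7 rad shift.
```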
9.6. Wearable SWIM
Oscillographs were too heavy to swing back and forth quickly (an RCA Type TMV-122 weighs 40 pounds, or approx. 18 kg). So in 1974, Mann invented the SWIM (Sequential Wave Imprinting Machine). The SWIM, waved back-and-forth quickly by hand or robot, visualizes waves, wave packets (wavelets), chirps, chirplets, and metawaves, through PoE (Persistence of Exposure) [69]. See Fig. 54 and http://wearcam.org/swim
10. Phenomenological AR bots and drones
Constrained to linear travel, SWIM is useful as a measurement instrument (Fig. 55). Over the years the author

[Figure 53 plots, six panels each spanning 0 to 100000 samples: Fundamental of reference input; Signal input; SWIM with first harmonic only; SWIM with first 2 odd harmonics; SWIM with first 3 odd harmonics; SWIM with harmonics 1 to 9.]
Figure 53. Multi-component lock-in amplifier with multicomponent sine wave
reference.
built a variety of systems for phenomenological augmented reality, including
some complex-valued wave visualizers
using X-Y oscilloscope plots as well as X-Y plotters (X-Y recorders) replacing
the pens with light sources that
move through space. In one embodiment an X-Y plotter is connected to the real
and imaginary (in-phase and
quadrature) components of the author's special flatband lock-in amplifier and
pushed through space to trace out
a complex waveform in 3D while a light bulb is attached where the pen normally
would go on the plotter.
More recently, Mann and his students reproduced this result using a spinning SWIM on a sliderail to reproduce gravitational waves, making visible an otherwise hidden world of physics. See Fig. 56.
Camera metaveillance: Another variation on phenomenological augmented reality
is to map out veillance
from a surveillance camera, as metaveillance (seeing sight). For this, a light
source is connected to an amplifier to
receive television signals, and indicate the metaveillance by way of video
feedback. In one embodiment the light
source is a television display (see Fig. 57 (top row)). In another embodiment,
the light source is a light bulb (see
Fig. 57 (bottom row)).
ARbotics (AR robotics) can also be applied to vision (Fig. 58). Here we map
out the magnitude of the metawave
function, where the phase can be estimated using Phase Retrieval via Wirtinger
Flow [19].

Figure 54. Miniaturized wristworn SWIM: Metaveillance for everyday life. Left:
Invented and worn by author S.
Mann. Wristworn SWIM makes visible the otherwise invisible electromagnetic
radio waves from a smartphone (heterodyned
4x/8x as if 20,000MCPS). Right: array of LEDs on circuitboard made in
collaboration with Sarang Nerkar. We find a
listening device concealed inside a toy stuffed animal (Okapi). Visualized
quantities are the real part of measured veillance
wave functions. Magnitudes of these indicate relative veillance probability
functions.
Figure 55. Left: Sliderail SWIM to teach veillance wave principles. A speaker emits a 10050 cycles/sec. tone. The microphone's metawave has 11 cycles in a 15 inch run. Teaching speed-of-sound calculation: 15 in. * 10050 cycles/sec / 11 cycles = 13704.54... in./sec. = 348.09... m/s. At 25 deg. C, theoretical speed of sound = 346.23 m/s (0.5% measurement err.). The real part of the veillance wavefunction is shown, but SWIM can also display magnitude (steady increase toward microphone). Right: "Bugbot" (bug-sweeping robot) finds a live microphone hidden in a bookshelf and visualizes its veillance waves in a 7-dimensional (3 spatial + RGB color + time) spacetime continuum. (Green=strongest; redshift=toward; blueshift=away.)
11. Storage SWIM
The combination of SWIM and Digital Eye Glass, such as EyeTap, can be used as a phenomenological apparatus to help people see better, e.g. to be able to see radio waves, sound waves, and other propagatory waves, in coordinates in which the speed of propagation is zero or reduced, so that the waves can be seen sitting still or moving slowly enough to see nicely.
An example of such propagatory wave visualization is nerve action potential wave visualization. For example, we can see a compound nerve action potential (CNAP) traveling along the arm (e.g. ulnar nerve) of a human subject. See Fig. 59.
The SWIM can be used to visualize many phenomena which are repeatable. In one
embodiment, a pulse
generator such as the Grass SD9 nerve stimulator is used to deliver a stream
of electrical impulses to electrodes
connected to a portion of the body of a patient or subject, such as the arm of
a subject. At one or more different
places on the subject's body, receive electrodes receive the electrical signal
and convey the combined nerve action
potential as a spatially dependent voltage to a SWIM. In one embodiment, a
stream of pulses is delivered to

Figure 56. Complex-valued "gravlet" wavefunction visualized on a robotic SWIM
that spins while moving back and forth.
Data[2] from LIGO[3] was used with its Hilbert transform, noting the result is
a chirplet[89, 90, 91, 92, 17, 113, 36, 18]
("gravitational signal" rather than "gravitational wave"). SWIM explores periodic realtime data at any scale from atomic to cosmic, as well as displays arbitrary data.
the subject at a high enough rate so as to create a reasonable persistence-of-
exposure (PoE) in the human visual
system of one or more persons trying to see the phenomena. In this case the limiting factors are the refractory period of nerve action potentials, and also the amount of pain that the subject can withstand from the stream of pulses (a higher frequency of pulses of otherwise equal individual strength and duration results in more discomfort to the subject).
To mitigate this effect, a storage-SWIM is preferable. A storage SWIM is to a
storage oscilloscope as a regular
SWIM is to a regular oscilloscope.
The storage SWIM system acquires and stores ("captures") the waveform of a CNAP (Compound Nerve Action Potential) and allows it to be visualized on the SWIM after it has already been captured. The storage SWIM system is useful for phenomenology that is difficult to repeat at high frequency, or for phenomenology that may be sensitive (e.g. to pain in a subject).
In a storage SWIM, data are recorded. For example, data may be recorded by a
well-known method such as
an "inching" study, where nerve signals are recorded every inch along the arm
of a patient. The distance between
recordings is also often varied. Figs. 11 and 61 show the results of a typical
"inching" study of S. Mann's right
arm with data captured at 2 centimeter increments. Data were recorded by a
Natus recorder.
Here the data were recorded while moving the source of stimulus along the arm
in 2cm increments, gradually
increasing the voltage until the shape of the waveform remained roughly
constant. In areas where the nerve is more

[Figure 57 diagrams: a transmitter and television camera with its sightfield, a "rabbit ears" receive antenna, a television receiver, and (bottom) the PHENOMENAmp.]
Figure 57. Repetition of Figures 16 and 17, said somewhat differently: Metaveillance by video feedback: Phenomenological augmented reality overlay. Top row: a TV (TeleVision) receiver is moved around in a dark room. A back-and-forth sweeping motion works best. As the TV receiver moves in and out of the camera's sightfield, the TV glows brightly when in view of the camera, due to video feedback, leaving an integrated exposure on photographic film. Here we see multiple black and white recordings presented on a pseudocolor scale for HDR (High Dynamic Range) of the overlay.
Bottom row: the TV receiver can be replaced with a single light bulb (center
image) or a linear array of light bulbs that
sweep out an overlay of the camera's sightfield on top of visual reality
(rightmost) giving a phenomenological augmented
reality.
superficial, less voltage is required to reach steady-state waveshape, whereas in areas where the nerve is deeper, more voltage is needed, and since the maximum output level of the apparatus was 400 volts, the pulses needed to be lengthened in duration to obtain a full response.
Once the recordings are made, they can be visualized using the SWIM, and
therefore only a limited number
of electrical shocks need to be delivered, so that the result can be
visualized on the SWIM, as illustrated in
Fig. 62, without the subject needing to endure the continuous discomfort of a
steady stream of electrical pulses.
The position tracker in some embodiments is a standard linear potentiometer,
approximately 1 metre long, and
having a total resistance of approximately 13 ohms. The linear potentiometer
comprises a resistance wire made
of resistance metal such as "nichrome" (nickel and chromium), similar in some
ways to heating element wire or
stainless steel wire, or music wire as might be used on "strings" based
musical instruments such as guitar, violin,
piano, or the like ("piano wire").
Such wire is less conductive than commonly used copper wire, and is therefore
more resistive, thus forming the
basis for a variable resistor in which a slider attached to the SWIM moves
back and forth along the wire between
one end of the wire connected to 0 volts and the other end of the wire
connected to 5 volts.
The slider (e.g. the "center" wire of the potentiometer) is connected to an
analog input of the microcontroller.
An analog output of the microcontroller is connected to the SWIM. A suitable
analog output is the A14 pin of
a Teensy 3.1 or 3.2 for true analog output. Typically, the data are scaled
according to a simple affine scaling, of

Figure 58. (Top image) Wearable camera system with augmented reality eyeglass
meets video surveillance.
(Bottom row) Drone meets video surveillance. Surveilluminescent light sources
glow brightly when within a surveillance
camera's field-of-view, resulting in augmented reality overlays that display
surveillance camera sightfields [78]. The overlays
occurs in a feedback loop, so alignment is near perfect and instantaneous,
because it is driven by fundamental physics rather
than by computation. Metawavefunction sampling is random and sparse, but
recoverable [20].
the form y = ax + b, where the slope a and intercept b are chosen to cause perfect spatial alignment between the recording and the physical reality in which the data are visualized.
Here for example, we have 13 ohms connected to 5 volts consuming about 385
milliamperes, and indicating
across about 100 cm, of which only about 38 cm are used, so the slope, a,
needs to be about 1/0.38 = 2.63, and
the intercept, b, depends on the position of the potentiometer with respect to
the subject.
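This affine mapping can be sketched as follows (Python for illustration; the function name and the clamping behavior are our assumptions, while the constants follow the text):

```python
def sample_index(position_volts, n_samples, a=1 / 0.38, b=0.0, v_ref=5.0):
    """Map the potentiometer slider voltage (0..v_ref over the full wire)
    to an index into the stored SWIM table, via y = a*x + b. Only about
    38 cm of the ~1 m wire is swept, hence a = 1/0.38 ~ 2.63; b depends
    on where the potentiometer sits relative to the subject (0 here)."""
    x = position_volts / v_ref              # fraction of full wire length
    y = a * x + b                           # fraction of the used span
    i = int(y * (n_samples - 1))
    return max(0, min(n_samples - 1, i))    # clamp to the recording
```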
Indexing into the array therefore happens over a 38cm or so distance as the
SWIM moves back and forth to
render the waveform visible in perfect alignment with the reality in which it
was recorded. The portion of interest
in the waveform happens over a time interval of about 4 divisions times
3ms/div (3 milliseconds per division),
which is about 12 milliseconds.
The nerve conduction velocity of S. Mann's right ulnar nerve is approximately
53 meters/second, as we can see
Figure 59. Visualization of compound nerve action potentials along the ulnar
nerve of S. Mann's left arm, using the SWIM
(Sequential Wave Imprinting Machine).
from Fig. 60 that in the first 8 measurements, which thus span 2 × 8 = 16 cm, the
wave has moved along by about
one division, i.e. by about 3 ms in time. Speed = distance/time = 16 cm / 3 ms =
53.3333... m/s.
Thus the apparatus of the invention should display the roughly 12-millisecond-long
region of interest over a
distance of about 53.3333... m/s * (12/1000) s = 0.639999... meters, i.e.
approximately 64 cm.
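The velocity and display-length arithmetic above can be restated as a short calculation (all values taken directly from the example):

```python
# Nerve conduction velocity from the inching study, and the physical
# length needed to display the region of interest in alignment with the arm.

distance_m = 8 * 0.02     # first 8 measurements at 2 cm spacing = 16 cm
time_s = 0.003            # the wave moved about one division = 3 ms
velocity = distance_m / time_s       # about 53.33 m/s

roi_s = 4 * 0.003         # region of interest: 4 divisions at 3 ms/div
display_length = velocity * roi_s    # about 0.64 m, i.e. 64 cm

print(velocity, display_length)
```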
This is actually a bit long for the run length of the SWIM along the arm (roughly
arm's length), but in practice the
CNAP is blurred out, whereas an individual neuron action potential is much
tighter, spanning approximately the right distance
to be conveniently shown phenomenologically (i.e. in perfect
alignment with physical reality).
More generally, the storage SWIM operates according to the following steps:
= capture data from a physical process by moving a transducer (e.g. a
sensor, effector, antenna, electrode,
or the like) along a trajectory while recording the sensor output (or the
output of a sensor affected by
the effector) together with the position of the sensor or effector.
Preferably the data vary along at least one
spatial dimension in a meaningful way, and preferably the transducer (sensor
or effector/actuator/output-
device) is moved along this dimension. Data are recorded in samples comprising
at least one sample from
at least one transducer together with its position, thus resulting in a
SWIM table that is at least 2
columns wide. For example, it might be a 1000 by 2 array. The data may also be
complex, i.e. the table can
be, for example, 1000 by 3, such as real, imaginary, and position being the 3
columns of data. Data can also
be multidimensional, etc.;
= display or "play back" the data by moving an array of one or more light
sources (SWIM indicator) attached
to a positioner. A positioner is a position sensing device, or a position
indicating device, transponder, tracker,
or the like;
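A minimal sketch of the capture step above, in Python; `read_sensor` and `read_position` are hypothetical placeholders for the actual transducer and positioner reads:

```python
# Build a SWIM table of (value, position) rows, at least 2 columns wide.

def capture_swim_table(read_sensor, read_position, n_samples=1000):
    """Record n_samples rows of (sensor value, position)."""
    return [(read_sensor(), read_position()) for _ in range(n_samples)]

def capture_complex_swim_table(read_x, read_y, read_position, n_samples=1000):
    """Complex (e.g. lock-in X/Y) data widens the table to 3 columns:
    (real, imaginary, position)."""
    return [(read_x(), read_y(), read_position()) for _ in range(n_samples)]
```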
The second step above (playback) is done by performing the following steps:
Figure 60. Right ulnar nerve: Inching study of right arm of S. Mann captured
at 2cm increments along the arm; there
are 20 recordings, thus spanning a distance (20-1)*2cm = 38cm (approximately
15 inches).
Motor NCS R Ulnar - Rec ADM
Figure 61. Inching study of right arm of S. Mann captured at 2cm increments
along the arm. Pulse width was 0.1 milliseconds
until the stimulator voltage reached 400 volts, at which point the pulse width
was increased to 0.2 milliseconds since the
output voltage cannot be increased past 400 volts. This was the point along
the nerve at which it became less superficial.
= read the position of the positioner, and scale this position into an
index for the SWIM array. The scaling
can be quite simple such as an affine scaling y=ax+b. Compute "a" by
converting time or array index into
space. This can be done, for example, by computing the propagatory speed of
the wave, while also knowing
the sampling rate of the recording. Convert units of speed or sample number
into units of distance;
= interpolate into the array to obtain the array element corresponding to
the physical location of the SWIM
indicator;
= illuminate the SWIM indicator in proportion to the row element or
elements in the array, corresponding to,
but not including, the row element in the array that indicates position.
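The playback steps above (read the positioner, affine-scale, interpolate, illuminate) can be sketched as a single loop iteration; `read_position` and `set_brightness` are hypothetical placeholders for the microcontroller's analog input and output:

```python
# One iteration of SWIM playback: read position, scale, interpolate, light.

def playback_step(swim_table, read_position, set_brightness, a, b):
    """Read the positioner, affine-scale it into array coordinates,
    linearly interpolate the SWIM array, and drive the indicator."""
    x = read_position()                                  # positioner reading
    y = max(0.0, min(a * x + b, len(swim_table) - 1.0))  # clamp to the array
    i = min(int(y), len(swim_table) - 2)                 # base index
    frac = y - i                                         # fractional part
    value = (1 - frac) * swim_table[i] + frac * swim_table[i + 1]
    set_brightness(value)                                # e.g. analog output
    return value
```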
For example, suppose we wish to display interference waves from two speakers,
for use in teaching a simple
physics lesson.
Let us place two speakers at the bottom of a display field in front of a
classroom or the like. The two speakers
are connected in parallel to a signal generator so they produce identical
sound waves and they are both pointed
up. Ideally they are fixed so they do not move around. A microphone is the
sensor and it moves around in front of
the two speakers. The SWIM array comprises a single multicolor LED, i.e. a "1-
pixel" display so that this "array"
has one element.
Figure 62. Visualization of CNAP using SWIM. A position sensor or tracker is
attached to a linear array of light
sources. The positioner (position sensor or position tracker) is waved back
and forth together with the SWIM. The SWIM
displays an element of an array of SWIM values. Here in this example there are
1000 SWIM values, numbered from 0
to 999. The 1000 SWIM values represent the CNAP waveform. In this figure we
see the position of the SWIM happens
to be about 49.2 percent of the way along its trajectory from a position of
beginningmost (0 percent) to endmost (100
percent). The positioner is connected to a processor or microprocessor such as
an Atmel AVR, Arduino, Teensy 3.1 or 3.2,
"Microcontroller", or the like, which reads the position and then indexes into a
SWIM array by interpolating the values in the
SWIM array to return an estimate of the SWIM quantity at the corresponding
position in the array. The SWIM quantity
is the voltage in the array, here running from 0 (minimum voltage) to 5 volts
(maximum voltage). The SWIM quantity is a
number from 0 to 1023, i.e. a 10-bit quantity correspondingly scaled to
run from 0 to 5 volts, which is written to the SWIM to
"display" the sample, making it visible to one or more people watching the
apparatus.
Let us suppose, for example, that we are using an RGBA (Red Green Blue Amber)
LED that has four elements
inside it so that it can assume the colours red, green, blue, or amber.
Such an LED has 8 connections to it, one pair for each element. Let us put
diodes in series with each element,
i.e. 4 diodes total, to protect against reverse polarity.
Now there are connected 4 pairs of wire/cord, i.e. four cords, one for each
color. The cords for red and green
are connected back-to-back, i.e. the red is forward polarity and the green
reverse. This pair we call the real cord,
or the "channel 1" cord.
Similarly, the yellow and blue are connected back to back, with the yellow
being connected forward and the blue
backwards. This pair we call the imaginary cord, or the "channel 2" cord.
The channel 1 cord is connected to the "X" output of a lock-in amplifier or
multicomponent array of lock-in
amplifiers and the channel 2 cord is connected to the "Y" output of the lock-
in amplifier or array of amplifiers.
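The back-to-back wiring above means the sign of each lock-in output selects which LED of a pair lights. A minimal sketch of the mapping, using hypothetical normalized drive levels:

```python
# Map complex lock-in outputs (x = real/"X", y = imaginary/"Y") to drive
# levels for the RGBA LED pairs: positive real lights red, negative real
# lights green; positive imaginary lights amber, negative imaginary blue.

def complex_to_led(x, y):
    """Return (red, green, amber, blue) drive levels in [0, 1]."""
    red   = max(x, 0.0)     # forward polarity on the real ("channel 1") cord
    green = max(-x, 0.0)    # reverse polarity on the real cord
    amber = max(y, 0.0)     # forward polarity on the imaginary cord
    blue  = max(-y, 0.0)    # reverse polarity on the imaginary cord
    return red, green, amber, blue

print(complex_to_led(0.7, -0.2))  # -> (0.7, 0.0, 0.0, 0.2)
```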
Now the SWIM array (1 pixel light source) is attached to a microphone, and
this SWIM+transducer assembly
is attached to a mover. The mover can be a robotic arm that moves it around,
or it can be a human user who
Figure 63. Robots for teaching and education: Robotic system for the
visualization of interference fringes between two
speakers. Here the speakers are 40,000 CPS (cycles per second) transducers,
wired in parallel, fed with a 40 k CPS signal,
while a third such transducer functions as a microphone to pick up the signal.
The third transducer is behind the RGBA
LED. The plotter is placed upwards on its front side, so that it is visible to
the class. Here the image is visible by way of
persistence-of-exposure to a long exposure photograph.
waves it around. A suitable mover is an X-Y plotter such as HEWLETT PACKARD
7015A X-Y RECORDER, in
which case the SWIM+transducer assembly is placed where the plotter pen would
normally be located. See Fig 63,
where the interference pattern between two speakers is visible with the colour
indicating the phase, approximately
(according to the colour pattern), and the quantity of light indicating the
magnitude. The time constant of the
lock-in amplifier was 1 millisecond at 12dB/octave.
In other embodiments, there is provided a processor for specialized trajectory
of the X-Y plot so that the light
source moves along contours of constant phase, thus actually tracing out the
interference pattern. For this purpose,
I introduce and propose the "0-pen"™ programming language, a language for X-Y
pen plotters for use with lock-in
amplifiers and augmented reality.
Once the interference pattern is sensed, it can be captured and stored
and re-rendered more quickly,
e.g. by a faster robotic mechanism (no longer limited by the 1 msec time
constant required of the amplifier), or
displayed on a television screen, for example, in perfect alignment. Placing
the two speakers near the bottom of a
TV screen while viewing the interference pattern can then be performed,
likewise.
Humanistic Intelligence is about making computation work naturally in our
world. For this invention I coined
the term "Natural User Interface" [70]. Ten years later, Microsoft's founder
and CEO Bill Gates said of this inven-
tion: "One of the most important current trends in digital technology is the
emergence of natural user interface,
or NUI ... It's exciting to think about the many ways a natural user interface
will be used, including by people
with little knowledge of technology, to tap into the power of computing and
the Internet." [46]. An example of this
is the "natural machine" based on neuroscience, such as our InteraXon Muse
product. It functions in a way that
is as natural as if it were part of our body.
(Four figures, left to right: Human; Humachine; Humanistic Intelligence (HI); Coveillance.)
Our mind and body work together as a system. Our body responds to our mind,
and our mind responds to our
body, as shown in the leftmost figure that is captioned "Human". Some
technologies like the bicycle also work this
way. The technology responds to us, and we respond to it. After riding a
bicycle for a while, we forget that it is
separate from our body. We begin to internalize it as part of our
proprioceptive reflex, and "become one" with it.
See second figure from the left, "Humachine".
There is a symmetry in both the completeness and the immediacy with
which the bicycle responds to
us, and with which we respond to the bicycle. It senses us in a very direct way,
as we depress the pedals or squeeze the
brake handles. We also sense it in a very direct and immediate way. We can
immediately feel the raw continuous
unprocessed, and un-delayed manner in which it responds to us, as well as the
manner in which it responds to its
environment. In our work, we extend this concept to include humans and
bicycles or cars with various kinds of
sensors, actuators, and computation, where we can sense the machine as well as
it can sense us. This is known
as "Humanistic Intelligence (HI)", as shown in the third figure from the left.
HI forms the basis for the proposed
work.
Sensors are becoming pervasive, but they don't always play well together. I've
been wearing a computer vision
system since my childhood when I invented and built it, and have come to
notice that (1) my vision system is
often blinded by active surveillance cameras, and (2) my vision system tends
to also blind those cameras. Soon
every car will have an active (eg radar, sonar, LIDAR) vision system in it,
and these will blind each other and be
blinded by active surveillance systems. (This has not been a problem in
buildings where surveillance equipment
is all part of one system, or with experimental self-driving cars when there's
only one on the road in any given
area.) Much of the world we live in has been interconnected with technologies
that are surveillant and centralized
in nature, but this information flow is often one-sided. This information
asymmetry is like a body that has only
(or primarily) a working afferent nervous system (i.e. one conducting only inward
toward the central nervous system),
but a broken efferent (outward-conducting) nervous system. Information
asymmetry is a hallmark of a surveillant
society in which information is collected, but not revealed to those it is
collected from.
The invention addresses this information asymmetry through HI, by creating HI
systems that scale and work
together cooperatively to help each other, so that everyone benefits through
better sensing. This is what we call
Coveillance [103, 81, 120, 119] (rightmost of the four figures above).
11.1. Coveillance-based sensing and meta sensing methodology
Methodology for smart city sensing has traditionally relied on surveillance,
which has traditionally been done
with cameras hidden in dark smoked acrylic enclosures to hide their direction
of gaze. Police departments have
traditionally been less open about their surveillance. However, one embodiment
of the invention is based on
coveillance, in which sensing systems work together, using machine learning to
detect active vision systems and
adapt to (and actually benefit from) their emissions. This is done using a
lock-in amplifier referenced to surrounding
"noise", thus turning this interference into a reference signal, as
illustrated in the figure below:
(Figure: Coveillance and Metaveillogrammetry. U = Urban illumination, u = urban sensing; V = Vehicular illumination, v = vehicular sensing; W = Wearable illumination, w = wearable sensing; SYSU + MannLab Scientific Outstruments Model SO1024 Lock-In Amplifier and metavision display.)
Here a blind or partially-sighted pedestrian W(1) has a smart cane, but, more
importantly, is wearing a computer
vision system (HDR EyeTap active LIDAR system [94, 62]). Smart cars on the
road such as vehicle V(2) have vision
systems as well, within a smart city with urban sensors such as sensor U(3).
Each of these urban (u), vehicular
(v), and wearable (w) sensors are also associated with various light sources
that help a sighted or partially sighted
person see with visible light that is also part of the vision system's active
illumination, using LightspaceTm [66, 67].
Additionally some non-visible active "illumination" is used, including
infrared light, RADAR and SONAR. Light
sources are denoted U for Urban, V for Vehicular, and W for Wearable (capital
letters for light sources, and
lowercase letters for corresponding sensors). Now {U, V, W} assist, rather
than interfere, with each other, by
turning noise into signal!
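The lock-in principle invoked here can be illustrated numerically: multiply the incoming signal by in-phase and quadrature copies of the reference and average, recovering amplitude and phase. All values below are hypothetical, and the average over whole periods stands in for the amplifier's low-pass filter:

```python
import math

fs = 10000.0              # sample rate, Hz
f = 40.0                  # reference frequency, Hz
n = 10000                 # one second: an integer number of periods
amp, phase = 0.5, 0.3     # unknown signal parameters to be recovered

I = Q = 0.0
for k in range(n):
    t = k / fs
    sig = amp * math.cos(2 * math.pi * f * t + phase)   # noisy input in practice
    I += sig * math.cos(2 * math.pi * f * t)            # in-phase ("X") channel
    Q += sig * math.sin(2 * math.pi * f * t)            # quadrature ("Y") channel
I, Q = 2 * I / n, -2 * Q / n      # averaging acts as the low-pass filter

print(round(math.hypot(I, Q), 3), round(math.atan2(Q, I), 3))  # -> 0.5 0.3
```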
Streetlights are emerging as a backbone for smart cities, providing both
illumination and sensing. Urban
sensor U(3) takes in its surroundings as ray of light u0, while illuminating
the environment as U0 which also
bears active illumination as a carrier signal that can be locked-in on, using
the multichannel multispectral lock-
in amplifier/image-sensing server depicted at the right (in actuality, in a
server rack underground). A control
center for the smart city is presented by way of the metavision display system depicted in the upper right (described
depicted in the upper right (described
in [96, 82]).
The key methodology here is to use the interplay between these various input
channels, {u, v, w}, and output
channels, {U, V, W}, as well as the channel capacity, fed into a machine
learning system, for smart cities, smart
cars, and "smart people" (e.g. also using "wearables" such as our InteraXon
Muse).
11.2. Shared Vision
There are two fundamental kinds of sensing ("watching" or otherwise):
surveillance (oversight) [64, 49]; and
sousveillance (undersight) [103, 127, 55, 74, 110, 42, 44, 143, 137, 6, 124,
65]. Both veillances are equally important
and required in a sensory or meta-sensory system to achieve feedback and
control (Humanistic Intelligence), i.e. we
need a world that senses us, and at the same time, we need to be able to sense
the world around us. The invention
works by sensing in the wide sense, as a phenomenon that makes the world
"smart", as well as a phenomenon that
also makes us (ourselves) "smart".
In some embodiments of the invention, metaveillance (the sight of sight
itself) [81] plays an important role. Meta
is a Greek prefix that means "beyond". For example, a meta conversation is a
conversation about conversations,
and meta data is data about data. Metaveillance is the veillance of veillance,
and more generally, metaveillance is
the sensing of sensors and the sensing of their capacity to sense.
Metaveillance answers the question: "How can we sense sensors, and sense their
capacity to sense?", and how
and why might this ability be useful in and of itself, as well as how might it
be useful to help us innovate and
create new kinds of sensors?
"Bug-sweeping", i.e. the finding of (sur)veillance devices is a well-developed
field of study, also known as
Technical surveillance counter-measures (TSCM) [146, 47, 129]. However, to the
best of our knowledge, none of
this prior work reveals a spatial pattern of a bug's or other sensor's ability
to sense.
11.3. Metaveillance and metaveillography
Metaveillance (e.g. the photography of sensors such as cameras and microphones
to reveal their capacity to
sense) was first proposed by Mann in the 1970s [84, 69, 1]. Metaveillance was
envisioned as a form of scientific
visualization [83] and scientific analysis [84], and further developed by
Mann, Janzen, and others [53, 85] as a form
of accurate scientific measurement.
11.4. Humanistic Intelligence
Humanistic Intelligence is machine learning done right, i.e. where the machine
senses the human (surveillance)
and the human senses the machine (sousveillance), resulting in a complete
feedback loop.
We see here a kind of self-similar (fractal) architecture, from the cells and
neurons in our body, to the microchips
in our wearable computers, to the buildings, streets, cities, the whole world,
and the universe:
Adapted from Figure 1 of "Can Humans Being Clerks make Clerks be Human?", S. Mann, ITTI, 43(2), pp. 97-106, 2001.
Some embodiments of the invention create a scientific foundation for phase-
coherent confluences of sensors, so
entities can work together to help each other sense. Active vision is a form
of lock-in amplifier, and in this sense,
some embodiments of the invention create a shared reference signal among
entities. This turns smart cities, smart
cars, and "smart people" (people wearing technology that used to only be
"worn" by cars and buildings) into a
cooperative collective, i.e. an array of lock-in amplifiers working together
toward "shared vision" (realtime sharing
of sensory data for collaborative navigation, wayfinding, and collective
intelligence).
11.5. Objectives
The general objective is basic (fundamental) research breakthroughs in sensing
and metasensing to improve
people's quality of life, to better transportation technologies, and to
facilitate new breakthroughs in transportation
through new sensing and meta-sensing technology. This invention assists
everyone, including the visually impaired,
as well as those (both humans and machines) that have a need to see, and, more
generally sense. In the broadest
way, this work is of use in "wearables", all manner of surveillance,
sousveillance, coveillance, and sensing in general,
as well as sensory intelligence and sensor systems for smart cars, smart
roads, and smart cities. The invention, with
coveillance and metaveillance (veillance of veillance), is of direct benefit
to other scientists and engineers needing
to test or quantify existing sensor systems, as well as develop new sensing
systems.
Metaveillance, in some embodiments, is used to measure the efficacy of
surveillance systems, computer vision
systems, smart cameras, smart roads, smart cities, and smart cars.
Another objective is a fundamental scientific breakthrough regarding a new
kind of lock-in amplifier array and
multiplexer that is miniaturized for automotive use. It aggregates harmonics
for multispectral correlation and
propagatory cancellation [81], i.e. for use with sensing and meta-sensing.
This system also works for anti-fragile
sensing (i.e. sensing that actually benefits from interference). A key
objective is collaborative sensing using
phase-coherent superposimetric imaging.
In some embodiments of the invention there are two major applications (use-
cases) for sensing and meta-sensing:
Enterprise sensing and meta-sensing, i.e. to be used commercially. Examples
include technologies to be used
in the manufacture of motor vehicles, and in gaining insight into the
manufacture of motor vehicles. For example,
the work on sensing and meta-sensing helps auto manufacturers design better
sensors for automobiles.
Customer-facing sensing and meta-sensing, which itself is informed by, and
assisted by the above Enterprise
sensing and meta-sensing effort.
More generally, embodiments of the invention help both the transportation
industry and users of goods and
services provided by the transportation industry.
We begin with a specific example vehicle, the Ford Focus electric, which can
then be generalized.
Figure 64. Left: here is the experimental setup that is used to capture
photographs of radar and sonar waves from the Ford
smart electric vehicle that will be equipped with a number of radar and sonar
systems. A moving sensor (receive antenna
for radio/RADAR waves, or a microphone for sound/SONAR waves) is attached to
the linear array of lights that form a
Sequential Wave Imprinting Machine (SWIM) [82], denoted as "SWIM LIGHTS", and
moves with it. This sensor feeds
the signal input of a lock-in amplifier. The reference input to the lock-in
amplifier comes from a reference sensor fixed in
the environment (not moving), near the radio signal source or sound source.
Right: Here is the setup used to generate
metaveillograms and metaveillographs. It functions much like a "bug sweeper"
but in a much more precise way, driving the
linear array of light sources (SWIM LIGHTS) that is waved back-and-forth. The
array, in some embodiments, is a single
element (just one light source), and for sensing of cameras, the transmitter
is the light source itself. In other embodiments,
the TRANSMITTER is the light source itself, or a loudspeaker (for audio "bug
sweeping", i.e. to test vehicle-mounted
SONAR sensors and arrays of SONAR sensors), or a transmit antenna (to detect
and map out receive antennae).
11.6. Veillogrammetry versus Metaveillogrammetry
It is useful to define the following basic concepts and veillance taxonomy:
= Surveillance is purposeful sensing by an entity in a position of
authority;
= Sousveillance is purposeful sensing by an entity not in a position of
authority;
= Veillance is purposeful sensing. It may be sur-veillance or sous-
veillance.
= Veillography is observational sensing, e.g. the photography (i.e.
capture) by way of purposeful sensing,
such as the use of surveillance or sousveillance cameras to capture images, or
such as the photography of
radio waves (e.g. radar sensors in automotive systems) and sound waves (e.g.
sonar sensors in automotive
systems) and similar phenomena. Our experimental setup for this is shown in
Fig. 64.
= Veillogrammetry is quantified sensing (e.g. measurement) performed by
purposeful sensing. For example,
video from a camera in a smart streetlight is used to determine the exact size
and trajectory of a car, through
the use of photogrammetry performed on the surveillance video, which is useful
to allocate automatically the
best parking space for it (based on nearness, size of the car, safe stopping
and turning distance, etc.). Likewise,
veillogrammetry with a microphone moved through space is used to quantify the
sound field distribution
around an automobile motor in order to study the motor's sound wave
propagation.
= Metaveillance is the veillance of veillance (sensing of sensing or
sensing of sensors). For example, police
often use radar devices for surveillance of roadways to measure speed of motor
vehicles so that the police
can apprehend motorists exceeding a specified speed limit. Some motorists use
radar detectors. Police then
sometimes use radar detector detectors to find out if people are using radar
detectors. Radar detectors and
radar detector detectors are examples of metaveillance, i.e. the sensing (or
metasensing) of surveillance by
radar.
= Metaveillography is the photography of purposeful sensing, e.g.
photography of a sensor's capacity to
sense. Our experimental setup for metaveillography is shown in Fig. 64.
= Metaveillogrammetry is the mathematical and quantimetric analysis of the
data present in metaveillog-
raphy.
Comparing the setup of Fig. 64(left) with that of Fig. 64(right), the
difference is that in Fig. 64(left), a signal
sensor (receiver) moves with the Sequential Wave Imprinting Machine (SWIM)
[82], and the reference to the lock-in
amplifier remains fixed at a stationary location, whereas with Fig. 64(right)
the reverse is true: a transmitter that
feeds the lock-in amplifier reference moves with the SWIM, and the signal
input comes from a stationary sensor
fixed in the environment. We confirm that veillography and metaveillography
are inverses of each other, and that
veillogrammetry and metaveillogrammetry are also inverses of each other.
11.7. Experimental comparison of veillography and metaveillography
Some embodiments of the invention comprise a SONAR sensing apparatus for use
on vehicles, as well as for use
in smart cities (i.e. on a road or parking lot).
A diagram of this embodiment is shown in Fig. 63.
We begin our experimental setup with an array of ultrasonic transducers (the
transducers typically used in most
burglar alarms and ultrasonic rangefinders) because they work equally well as
microphones or speakers.
We construct a mobile lock-in amplifier for use in a vehicle. We construct an
apparatus that stores data from
the vehicle-mounted 12-volt lock-in amplifier into an array, using a 24-bit
analog to digital converter, allowing us
to compare precise numerical quantities, and to determine experimentally the
degree to which veillogrammetry
and metaveillogrammetry are inverses of one another, i.e. that the two image
arrays give approximately the same
quantities.
We construct vehicle-mounted radar and sonar sensing arrays. For example, we
construct and test an array of
sonar sensors and sonar emitters. We develop an automotive SWIM and use it to
assist in the sound engineering
design of such systems.
For the sonar, in beamforming mode, we will create an ultrasonic listening
device that is highly directional. By
being able to visualize the metaveillance function (capacity of the microphone
array to listen) spatially, we determine
the optimum number of array elements and optimum spacing between them, for
automotive applications.
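The element-count and spacing trade-off described above can be explored with a conventional array-factor calculation, sketched below; the 8-element and 4 mm figures are hypothetical illustration values, not parameters taken from this document:

```python
import cmath, math

def array_factor(n, d, theta, freq=40000.0, c=343.0):
    """Normalized magnitude response of n elements spaced d metres apart,
    for a 40 kHz plane wave arriving at angle theta (radians off broadside)."""
    k = 2 * math.pi * freq / c     # acoustic wavenumber
    s = sum(cmath.exp(1j * k * d * i * math.sin(theta)) for i in range(n))
    return abs(s) / n              # 1.0 at broadside (theta = 0)

# Sweeping n and d shows beam narrowing and the onset of grating lobes:
print(array_factor(8, 0.004, 0.0))   # -> 1.0 (broadside)
```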
There are numerous embodiments of the invention for automotive sensors as
examples of veillance and metaveil-
lance, as well as to show the transmit and receive arrays as inverses of each
other (i.e. when we swap roles of
transmitter and receiver), and determine experimentally the degree to
which this reciprocity holds true.
SWIM, and phenomenological augmented reality, are used for engineering,
design, testing, and scientific analysis
leading us to new forms of automotive sensor design and integration.
11.8. On the importance of sensing and meta-sensing
Vehicles are increasingly being equipped with a wider range of sensors, and thus
sensing is of growing importance.
Some embodiments of the invention allow us to analyze a very ubiquitous
sensor, namely the camera that many
vehicles use to help the driver see what is behind them.
One such study is a metaveillographic and metaveillogrammetric [82] study of
the existing rearview camera on
a Ford Focus electric vehicle. This study involves construction of a "smart
parking lot" apparatus consisting of
smart city lighting, including a SWIM (Sequential Wave Imprinting Machine)
that is used to generate metaveillo-
graphic [82] image data and metaveillogrammetric [82] data, as illustrated
below:
(Figure: a Ford Focus electric vehicle fitted with a roof rack and adjustable boom carrying an abakographic camera; veillance flux is shown from the backup camera and from the abakographic camera.)
Initially we tap into the existing camera and derive an NTSC television signal
from the camera for feeding into
an existing SYSU × MannLab Model 1024-SO Scientific Outstrument™ lock-in
amplifier, powered by an electric
power inverter (12 volts DC to 120 VAC). In embodiments, an experimental
broadband back-up camera is built
and used for capture of meta-sensing information. Our experimental apparatus
also includes a way to mount it to
the roof of a Ford smart electric vehicle using a roof rack and HSS (hollow
structural steel) frame that is assembled
using a Dynasty 350 TIG (Tungsten Inert Gas) electric arc welder. The HSS
frame includes a sliding member
with a 1/4-20 thread mount for the abakographic camera, for which a SONY RX100-VI
camera is used due to its
light weight and its ability to capture at shutter angles greater than 360
degrees. The vehicle is driven forward in
a slow steady movement, for this study. In alternative embodiments, an
"inching" standard for electric vehicles is
CA 3028749 2018-12-31

developed. The inching standard follows the way in which inching studies are
now done in neuroscience, to trace
electrical currents in the human body [15]. We create a similar system for
tracing electric currents and magnetic
fields in electric vehicles and in their sensory ("nervous") systems.
Other embodiments use narrowband (i.e. phase-coherent) sensing and meta-
sensing technologies for transporta-
tion, starting with the "MobLIA" (Mobile Lock-In Amplifier), a lock-in
amplifier specifically designed for the
transportation industry. Other embodiments comprise a backup camera that
implements phase-coherent HDR
(High Dynamic Range) sensing. HDR is a form of sensing originally invented by Mann¹, and now used in more
than 2 billion smartphones. The next step in the evolution of HDR is
narrowband (phase-coherent) HDR. To make
this breakthrough tractable, we begin with a simple 1-pixel camera to focus
our work on the sensory aspects rather
than on pixel density. In other embodiments, we also use the inching standard
in the testing of the mobile lock-in
amplifier and the narrowband camera sensor.
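The multiple-exposure principle that narrowband HDR builds upon can be sketched in a few lines. The following is an illustrative sketch only, not part of the claimed apparatus; the function name and the hat-shaped certainty weighting are expository assumptions. Each reading is normalized by its exposure time, and readings are combined with weights that distrust values near the noise floor or saturation:

```python
import numpy as np

def combine_exposures(readings, exposure_times, full_scale=1.0):
    """Estimate scene radiance (for one pixel) from multiple exposures.

    Each reading is normalized by its exposure time; readings near the
    sensor's noise floor or saturation receive low confidence weights.
    Illustrative sketch only; real HDR pipelines also linearize the
    camera response curve.
    """
    readings = np.asarray(readings, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    # Hat-shaped certainty: trust mid-range values, distrust extremes.
    weights = 1.0 - np.abs(2.0 * readings / full_scale - 1.0)
    weights = np.clip(weights, 1e-6, None)
    return np.sum(weights * readings / times) / np.sum(weights)
```

For the 1-pixel camera above, this reduces to combining a handful of scalar readings, which is what makes the sensory aspects easy to study in isolation from pixel density.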
Other embodiments are directed to development of sonar arrays for sensing and
meta-sensing, thus laying the
foundation for a fundamentally deep understanding of related sensors. Lidar,
radar, and sonar are all related,
but sonar is the simplest to implement due to its narrowband nature in simple
sensors that operate nicely around
40kHz center frequency. Some embodiments, especially for teaching purposes,
use single-element sensing systems,
as well as multi-element systems.
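As a worked example of why a 40 kHz center frequency is convenient (an expository calculation, assuming sound in air at roughly 346 meters/second, consistent with the figure used later in this disclosure):

```python
SPEED_OF_SOUND = 346.0  # m/s in air, approximately, at room temperature

def wavelength(frequency_hz, speed=SPEED_OF_SOUND):
    """Acoustic wavelength: lambda = c / f."""
    return speed / frequency_hz

# A 40 kHz transducer therefore works with waves of roughly 8.65 mm,
# short enough for useful ranging yet narrowband enough for simple
# phase-coherent (lock-in) detection with a single element.
```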
Other embodiments allow us to deeply understand motors using sensing and meta-
sensing. Some embodiments
comprise a single-phase SWIM (Sequential Wave Imprinting Machine) specifically
designed for analysis of electric
motors. Other embodiments comprise a 3-phase SWIM.
Some embodiments use these SWIMs in a series of photographic and
photogrammetric experiments to char-
acterize the powertrain of an electric vehicle. This provides a way to sense
and understand vehicle powertrain
performance that will help us, as well as others, make fundamental new
discoveries regarding powertrain systems.
Other embodiments include a powertrain simulator which allows us to study
powertrain sensing and meta-
sensing. The simulator uses a real physical drivetrain connected to a rotary
SWIM wheel which we call the
SWIMulator™, as illustrated below:
[Figure: The SWIMulator™ powertrain simulator: a meta-sensing wheel with a 3-phase motor, a locked rotor (locked center), a spinning "stator" (spinning outer), and a "motator" (meta-motor). The SWIM comprises 10 LM3914 bargraph chips cascaded, driving 100 LEDs (10 connected to each chip). Pin assignments: Pin 1 = brown = N.C.; Pin 2 = red = +8 to 12 V; Pin 3 = orange = ladder; Pin 4 = yellow = input; Pin 5 = green = ground; 500 ohm resistors at the SWIMput and SWIM reference ends.]
1"The first report of digitally combining multiple pictures of the same scene
to improve dynamic range appears to be Mann. [Mann
1993]" in "Estimation-theoretic approach to dynamic range enhancement using
multiple exposures" by Robertson eta!, JET 12(2), p220,
right column, line 26.
This gives rise to the concept of trochography: the study of wheels, and
trochogrammetry: the scientific
measurement of phenomenology associated with wheels.
In one embodiment, rotational SWIMs are used to characterize a powertrain, so
we can see the relationship
between rotational magnetic field and travel of a vehicle, or, alternatively,
in analysis of aircraft by way of SWIM-based propellers or the like, where we see the relationship between rotation
and advancement forward.
Human vision is very sensitive to perturbations in symmetry, and this allows
us to see small defects
in wheels, e.g. to see if a nut is loose or there is another defect in the
wheel or rim or tire or the like.
Moreover, in other embodiments of the invention, transducers are bonded to the
solid matter of a blade or the
like, and used with rotational rattletale, where we can see small rotational
defects.
An example of the trochogrammetric embodiment is shown below:
[Figure: Trochogrammetric embodiment: a Mount and Motor spin three Blades which leave Red, Green, and Blue SWIMtrails.]
Here a Base (in this case a Christmas tree) supports a rotary SWIM (in this
case used as a Christmas tree ornament,
e.g. by way of its star pattern) which has three SWIM Blades, which, when
spinning, leave, by way of Persistence
of Exposure (PoE) on human vision or photographic film or other imaging
devices, SWIMtrails. Here there are 3
Blades, but we may also have two at 180 degrees opposing, or two at 90 degrees with counterweights to SWIM out a 2-phase motor, or just one with a counterweight to SWIM a single-phase motor or the like. In this case we have 3 to SWIM out a 3-phase motor with the blades at 120-degree angles from one another.
The motor we're SWIMming is
a computer fan motor. A satisfactory motor to demonstrate this principle is
the Noctua NF-A14, which operates from 6 to 30 volts DC, from which 3-phase control is generated to run the 3 phases of the motor. When connected
to the first coil of the motor, the 3 SWIMs trace out the pattern of that
coil. When connected to the second coil,
they then trace out that coil. The 3 coils are traced sequentially, giving the
total picture. Alternatively, we can
use RGB (Red Green Blue) SWIMs, so we can see with just one SWIM, all three
phases spatialized 120 degrees
apart. In this case there is just one blade and it ideally requires a counterweight.
Figure 65. Modular EyeTap eye glass for teaching and research purposes.
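The three drive phases spaced 120 degrees apart, as used to run the fan motor above, can be sketched as follows (illustrative only; the function name and the 50 Hz default are expository assumptions, not part of the claimed apparatus):

```python
import numpy as np

def three_phase(t, freq_hz=50.0, amplitude=1.0):
    """Return the three drive waveforms, spaced 120 degrees apart.

    Phase k is delayed by k * 120 degrees, matching three SWIM blades
    mounted at 120-degree angles from one another.
    """
    w = 2.0 * np.pi * freq_hz
    return tuple(amplitude * np.sin(w * t - k * 2.0 * np.pi / 3.0)
                 for k in range(3))
```

At every instant the three phases sum to zero (a balanced drive), which is why a 3-blade SWIM at 120-degree spacing traces the phases out symmetrically.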
11.9. EyeTap freeze-frame visualization
Alternatively, a DEG (Digital Eye Glass) such as the EyeTap is used to visualize this pattern as an augmented reality overlay ("sample and hold").
A modular EyeTap was constructed using MELLES GRIOT MINIATURE STAGE components with a corresponding miniature rail. Subsequently the rail was 3D printed. See
Fig. 65.
This can be brought to the world as an open-source hardware teaching tool,
upon which various research groups
can build.
This simple device includes a processor (Raspberry Pi) with analog inputs,
analog output, video inputs and
outputs, etc. The lock-in amplifier is implemented in the processor, and
waves are acquired, stored, and displayed
in perfect alignment with their corresponding reality, allowing the wearer, by way of persistence of vision, to explore and understand their physical reality.
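A minimal sketch of such a processor-implemented lock-in amplifier follows, under the assumption of a sampled input signal and a sinusoidal reference (the function name is an expository assumption; a practical implementation would replace the plain averaging with a proper low-pass filter):

```python
import numpy as np

def lock_in(signal, reference_freq, sample_rate):
    """Digital lock-in amplifier: in-phase (X) and quadrature (Y)
    components of `signal` at `reference_freq`.

    Multiplying by quadrature references and averaging plays the role
    of the mixer plus low-pass filter of an analog lock-in amplifier.
    """
    signal = np.asarray(signal, dtype=float)
    t = np.arange(len(signal)) / sample_rate
    ref_i = np.cos(2.0 * np.pi * reference_freq * t)
    ref_q = np.sin(2.0 * np.pi * reference_freq * t)
    # The factor of 2 recovers the amplitude of an input tone.
    x = 2.0 * np.mean(signal * ref_i)
    y = 2.0 * np.mean(signal * ref_q)
    return x, y
```

The (X, Y) pair is the complex-valued quantity that, elsewhere in this disclosure, is stored, displayed, or mapped to colour.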
We now have a method of teaching physics in which students can see in a set of
spacetime coordinates in which
the speed of light, speed of sound, or the like, is zero or is greatly
reduced, making physical phenomenology visible.
As an example, consider a new method of teaching neurosurgery in which
cadavers can be brought to "life" with
played back neuron action potential waveforms, combined with actual data from
ultrasound. Instead of the usual
so-called "4-D" ultrasound, we add 3 new dimensions by embedding the 4D
(spacetime continuum) volume in a
3D or 4D exploratory space, thus 7D or 8D ultrasound, as illustrated in Figs. 66 and 67.
Thus we can see that my invention has use and application in various areas of
work such as surgery, neurosurgery,
and the like.
Ideally we have a multi-electrode membrane attached to the forearm of the
patient, through which ultrasound
may be passed. With an ultrasound array of transducers, operating as a lock-in
amplifier array, we can see into the
body of the patient, while performing the surgery. Overlaid on that
information is also the neuron action potential
so that the surgeon can see neurophysiological information superimposed on
physiology in perfect alignment,
through the EyeTap Digital Eye Glass.
In another embodiment, there is a thermal camera in the EyeTap so that the
wearer can see heatfields or thermal
Figure 66. Waveform visualization in cadaver lab.
fields or coldfields, and also the EyeTap allows the wearer to see electricity
in a circuitboard, overlaid. The wearer
can see the flow of electrical signals in the circuit superimposed with
thermal dissipation.
There is typically some physical phenomenon that is otherwise invisible, such
as the electric field of an electric
wave, or the magnetic field of an electromagnetic wave. The invention, in some embodiments, includes a phenomenalizer, which captures phenomena into electrical or otherwise measurable signals. The phenomenalizer typically feeds into a stabilizer that transforms the phenomena into spacetime coordinates in which they are stable, such as a "sitting wave".
Some embodiments include a bufferator to buffer and store the physical
phenomena in a way that allows the
phenomena to be explored and displayed faster than the time-constant of the
phenomenalizer and stabilizer formed
by a live system such as a lock-in amplifier.
Figure 67. 8D ultrasound for performing the Guo Technique (thread carpal
tunnel release) of image-guided ultrasound
surgery. Here a 2D display medium is swept through 3D space while it plays
back an image sequence, and the result is a 4D
ultrasound object embedded in an exploratory 4D spacetime continuum, within
the augmented reality world of the EyeTap
Digital Eye Glass.
12. Tactile-SWIM
A variation of the SWIM is T-SWIM (Tactile - Sequential Wave Imprinting
Machine), a naturally augmented
tactile reality system for making otherwise intangible electromagnetic radio
waves, sound waves, metawaves, etc.
graspable. T-SWIM uses a haptic actuator driven by a special type of lock-in
amplifier that synchronizes with
traveling waves to transform them into coordinates in which they are sitting
still. This creates an illusion that
the speed of light, or sound, etc. is equal to zero so that otherwise
imperceptible waves and metawaves can be
explored both visually and haptically. We discuss the design of T-SWIM and its
potential in creating an embodied
understanding of the transmission and surveillance phenomena around us. The
result is a Natural Augmented
Tactile Reality, a new human augmentation framework for sensory,
computational, and actuatorial augmentation
that lets us experience important phenomena that surround us yet are otherwise
invisible. Examples include the
ability to physically grasp and touch and feel electromagnetic radio waves,
gravitational waves, sound waves, and
metawaves (e.g. the sensing of sensors).
T-SWIM combines visual and haptic feedback to allow users to both see and
touch/grasp/feel waves in their
environment giving rise to a new form of tactile Augmented Reality where our
sense of touch is physically realized
in perfect alignment with the phenomena of the reality around us.
See explanation in http://wearcam.org/kineveillance.pdf as well as [69, 79].
Alignment is automatic and im-
mediate due to natural physics, without any need to sense or implement
registration between the real and virtual
worlds.
T-SWIM consists of two primary components: a modified lock-in amplifier and a
graspable linear actuator with
[Figure: T-SWIM tactile actuator, comprising an LED, a linear potentiometer, a servomotor, a finger loop, and a thumb loop.]
Figure 68. T-SWIM's tactile actuator (closeup).
a combined receive antenna or other sensor attached to the actuator (Figure
68). The actuator along with the
sensor borne by it (e.g. the receive antenna) is held and moved throughout the
space surrounding a wave source.
As the actuator passes through the various crests and troughs of a sitting
wave [81], the actuator pulls the user's
fingers apart and together. Furthermore, an LED mounted on the device travels
with the user's finger to provide
a corresponding visual representation of the wave pattern, when viewed by way
of a camera or the human eye.
When the actuator is slid back and forth along the wave with sufficient speed,
persistence of exposure [81] results
in the user's seeing a complete sitting wave.
The user can explore various wave patterns as they change with varying distances from the source in all dimensions.
Figure 69. Demonstrating the attenuation of radio wave propagation through various materials. From left to right: unobstructed, thin wood, thick wood, human hand, copper foil.
The device itself can be held in a number of ways, resulting in distinct
haptic/tactile sensations: by using
the finger loops, by placing the thumb on the rail and feeling the pressure of
the motor as it attempts to follow the
wave, or by holding the actuator in the hand and feeling the inertial forces
on the whole device as the actuated
components move. Further, users can interact with the source itself, placing hands or objects in the path to the antenna to dampen or reflect the wave (Figure 69).
12.1. Visonobakgraphy and the SONARadio Effect
The new combined multimedia audiovisual/tactile sensing modalities of the
present invention can be put to
practical use.
Consider, for example, a radar (RAdio Direction And Ranging) or lidar (LIght
Direction And Ranging) set. In
practice such radio or light sets tend to emit electromagnetic radiation
(radio waves or light) and receive a response
back, reflected off objects. Radar and lidar differ from sonar in that with
the latter, sound waves are used, i.e.
bounced off objects, and those sound waves tend to sometimes set the objects
in motion (vibration). Let us define
two classes of system, sonar, and emdar (ElectroMagnetic Direction And
Ranging, i.e. radar or lidar).
Consider a Doppler or pulse Doppler radar or lidar set, i.e. one that embodies
a homodyne receiver, phase-
coherent detector, lock-in amplifier, or the like, in conjunction with
physical observables such as sound and light.
One such system is Mann's "Doppler Danse" apparatus of the 1970s which was
used in various artistic endeavours
in which human movement was rendered visible by way of sound and light driven
by an output from such a set.
In one such example, radar sounds from body movement are presented through a
loudspeaker responsive to an
output of a Doppler radar set, as illustrated in Fig 70. The ratio of the
speed-of-light to the speed-of-sound is
roughly equal to the ratio of radar frequencies to audio frequencies. Roughly
speaking, the speed of light (i.e.
the speed of electromagnetic radio wave propagation) is about a million times
faster than the speed of sound, i.e.
approxmiately 300,000,000 meters/second as compared with roughly 346
meters/second.
And moreover, audio frequencies are typically in the 20cps (Cycles Per Second)
to 20 KCPS (Kilo Cycles Per
Second) range, whereas radio waves are in the 20 MCPS (20 mega cycles per
second) to 20 GCPS (20 giga cycles
per second) range.
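As a worked example of this mapping from radio frequencies to audio frequencies (an expository calculation, using the approximate speed of light given above), the two-way Doppler shift seen by a monostatic radar set is:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, approximately

def doppler_shift(target_speed, carrier_freq, wave_speed=SPEED_OF_LIGHT):
    """Two-way Doppler shift for a monostatic radar: f_d = 2 v f / c."""
    return 2.0 * target_speed * carrier_freq / wave_speed

# A person walking at 1.5 m/s in front of a 10 GCPS ("X-band") set
# produces a baseband tone of about 100 CPS, comfortably inside the
# audio range of a large woofer or subwoofer.
```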
It so happens that normal human body movement of walking, running, or dancing,
causes a Doppler shift on
a typical 10 GCPS ("X-band") radar set that is in a nicely audible range,
typically audible on a large woofer or
subwoofer, thus forming the basis of Mann's "Doppler Danse" set. Typically
sound and light are controlled as a
form of artistic effect. A problem with the Doppler Danse system is that some
objects in the scene, such as walls
made of drywall, furniture made of thin veneer, cardboard boxes, etc., vibrate
when struck with the sound waves
from the woofer or subwoofer, and these vibrations are picked up by the radar
set, amplified, and cause further
vibrations.
In this sense the Doppler Danse setup causes feedback in the presence of
objects that move when subjected to
sound waves. I call this effect the SONARadio or SONARadar (feedback) effect,
which is illustrated in Fig 71.
Such an apparatus, using this effect that I discovered, gives rise to a new
kind of imaging that allows us to see
through or into walls, cardboard boxes, furniture, etc., in new ways. See Fig
72. Thus the flaw or defect in the
Doppler Danse system is used as a desirable feature in a new form of imaging,
i.e. a new imaging modality, where
sound-responsive objects image brightly and objects that are less sound-
responsive image darkly. Additionally, the
colors of the light source indicate the nature of the sound vibrations.
More generally, any acoustic excitation may be applied geophonically,
hydrophonically, microphonically, or
ionophonically (i.e. in solid, liquid, gas, or plasma) to give rise to any
measurable vibration or motion using
any vibrometer, motion sensor, or the like, not just a radar motion sensor.
The motion sensing can be done
radiographically, photographically, videographically, or even sonographically,
i.e. by another sonar set operating
at another frequency. In the latter case, a satisfactory sonar set is a sonar
Doppler burglar alarm running at 40
KCPS or simply two 40 KCPS transducers connected to a lock-in amplifier such
as an SR510 or a Mannlab/SYSU amplifier (the latter allowing multiple harmonics to be output simultaneously).
In this case, sound waves are used
to vibrate the subject matter in the scene, and sound waves are also used to
"see" that vibration.
In alternative embodiments, a lock-in camera is used (i.e. each pixel behaves
like a lock-in amplifier to "see"
the change due to a known sound stimulus). To the extent that a camera can be
used for seeing motion, e.g.
through "motion magnification" or the like, some embodiments of the invention
use a software-implemented image
processing algorithm in place of the individual lock-in amplifier or homodyne
receiver. Other embodiments use a
hardware-based sonar vision system in which sound stimulus causes motion that
is imaged directly in vision, no
longer requiring the mechanical scanning. Other embodiments of the invention
use an electronically-steered radar
set in which an antenna array is used for beamforming to direct the motion-
sensing beam along the target subject
matter, while stimulating it with known sound.
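The beam-steering delays for such an electronically-steered array can be sketched as follows (an illustrative sketch for a uniform linear array; the function name and parameters are expository assumptions, not part of the claimed apparatus). Delaying element n by n·d·sin(θ)/c aligns the wavefronts from direction θ so that they add coherently:

```python
import numpy as np

def steering_delays(n_elements, spacing_m, angle_deg, wave_speed=3.0e8):
    """Per-element time delays steering a uniform linear array toward
    `angle_deg`, measured from broadside.

    For sonar rather than radar, pass wave_speed=346.0 (speed of
    sound in air) instead of the speed of light.
    """
    theta = np.radians(angle_deg)
    n = np.arange(n_elements)
    return n * spacing_m * np.sin(theta) / wave_speed
```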
Thus there are many embodiments of the sonaradio or more generally sonar-
vision system.
Such embodiments of the invention may have practical utility.
Suppose, for example, a police officer wishes to see and understand a
residence, and in particular, the walls,
windows, and door(s) to the residence. Consider the front door, and door-
frame, and surrounding wall, for example.
The officer wishes to know where the door is strong or stiff, versus where it
is weak or compliant or has more
"give". Suppose, for example, the officer wishes to be able to see, in his
EyeTap, if there are any loose panels in
the door that have some "give" and could be easily pushed out with his fist or
foot, or battering ram, so that he
could reach in and turn the door handle, or otherwise compromise the door.
In one embodiment, the officer has a smart baton with a built-in T-SWIM so
that he can feel, and more
generally, sense (see, hear, etc.) where the door is more or less compliant. The baton contains a tactor (vibrotactile
The baton contains a tactor (vibrotactile
transducer) which causes the door to vibrate when it is touched to the door.
The lock-in amplifier of the invention
picks up these vibrations and indicates their strength in a phase-coherent
fashion, with the output of the LED fed
to an RGBA LED inside the wand, such that the wand glows in proportion to the
compliance of the door at the
[Figure: block diagram of the DOPPLER DANSE RADAR SET (acoustic radar system): a TRANSMITTED WAVE from Tx hits a TARGET; the RECEIVED WAVE at Rx feeds a MIXER, LPF, and AMP, producing a BASEBAND WAVE driving a LOUDSPEAKER; the RADAR ANTENNA sits on a ROTATOR.]
Figure 70. The Doppler Danse system in its simplest embodiment emits a
TRANSMITTED WAVE from transmitter Tx
to hit a TARGET such as a person-in-motion. Suppose that the TARGET is moving
away from the DOPPLER DANSE
RADAR SET. The RECEIVED WAVE reflected off the TARGET will contain frequency
components that are shifted down
in frequency. Some of the transmitted wave is used as a reference signal in a
MIXER with the output of a receiver Rx.
The result is lowpass filtered by lowpass filter LPF, resulting in a BASEBAND
WAVE that is output through a 1/4 inch
phone jack, so that it can be connected to a guitar amplifier or bass
amplifier or the like. The baseband signal here is at
a negative frequency which in some embodiments is made discernable by having
two outputs, "real" and "imaginary". A
LOUDSPEAKER allows the TARGET person to hear his or her own Doppler signal.
When the output is complex, colored light is used to distinguish negative from positive frequencies. In other embodiments, audiovisual output is given for different
reasons or mappings. The TARGET can comprise multiple persons scanned out by a
RADAR ANTENNA on a ROTATOR.
[Figure: acoustic feedback imaging apparatus: the DOPPLER DANSE RADAR SET (Tx, Rx, MIXER, LPF, AMP) drives a LOUDSPEAKER whose SOUND WAVES excite TARGET DRYWALL between STUDS; a SOUND TO LIGHT CONVERTER drives a RED BULB, GREEN BULB, and BLUE BULB; the RADAR ANTENNA sits on a ROTATOR.]
Figure 71. Acoustic feedback imaging based on the SONARadio feedback effect.
Here a span of TARGET DRYWALL
between two STUDS is subjected to SOUND WAVES from a LOUDSPEAKER such as a
woofer or subwoofer or sonar
sending device. The TARGET DRYWALL is set in motion by the SOUND WAVES, and
that motion causes a Doppler
shift of the TRANSMITTED WAVE that manifests itself in the RECEIVED WAVE
received from the TRANSMITTED
WAVE being reflected off the TARGET DRYWALL. The output of the DOPPLER DANSE
RADAR SET is also fed to
a SOUND TO LIGHT CONVERTER driving three light bulbs, a RED BULB, GREEN BULB,
and BLUE BULB. These
bulbs shine on the TARGET DRYWALL and render it in a color and light level
associated with the degree and nature of
acoustic feedback present in the overall system. When the apparatus of this
invention is built into a wand, the wand may be
waved back and forth across a wall, and will light up the wall in areas that
don't have STUDS behind them. The resulting
image of a long-exposure photograph will show the STUDS as dark, and the
TARGET DRYWALL as brightly colored.
Additionally, if there is black mould growing behind the drywall, or if there
are places where rats and mice have damaged
the wall inside, these areas show up in different colors. Alternatively, as is
more typical of a radar set, the set spins on a
rotator and the light bulbs are replaced by a pinspot or colored lasers, to
"paint" the wall with light in a color and quantity
indicative of what is hidden inside the wall.
[Figure: scene comprising STUDS and DRYWALL, PLYWOOD VENEER FURNITURE, a CEMENT PILLAR, a CORRUGATED CARDBOARD BOX, and a BEAM, scanned by a RADAR ANTENNA and PINSPOT on a ROTATOR.]
Figure 72. Feedbackographic imaging of variously acoustically-responsive
objects and subject matter. A PINSPOT lights
up objects according to the nature of their acoustic feedback, using the
SONARadio feedback effect. The CEMENT
PILLAR shows up dark (black or almost black) because it responds very little
to the sound stimulus. The CORRUGATED
CARDBOARD BOX shows up brightly, especially in areas of the box that are weak
or loose, versus the reinforced corners
that show up a little darker. The plywood veneer furniture also shows up
brightly since it feeds back strongly. The wall in
the background shows up brightly where the drywall is free to vibrate, and
darker where the drywall is secured by studs.
The entire scene is visible in this way to everyone looking at it without the
need for special eyeglasses or other devices.
Thus we have a true phenomenological augmented reality (i.e. a "Real
Reality™").
[Figure: Inspecting deterioration inside walls: seeing through drywall with a toothbrush. The transducer of a TOOTHTUNES TOOTH BRUSH, driven by the SINE OUT of a LOCK-IN AMPLIFIER, excites a BATHROOM WALL of DRYWALL over WOOD STUDS (some of ROTTED WOOD, with BLACK MOLD GROWTH near the SHOWER STALL). A MIC feeds inputs A and B/I; outputs X and Y drive an XY to RGB CONVERTER and RGB LED, observed by a CAM and the EYETAP PROC., with a POSITION SENSOR tracking the brush.]
Figure 73. Seeing through drywall with a toothbrush. "Toothtunes" toothbrushes
have been mass-produced, giving rise
to a low-cost source of vibration transducers. A small hole is drilled into
the brush to access the two wires going to the
transducer. A long flexible wire is attached thereto, and connected to the
SINE OUT of a LOCK-IN AMPLIFIER. A
suitable lock-in amplifier is the one designed by Mannlab (Steve Mann design)
in collaboration with SYSU, which is the
only lock-in amplifier capable of outputting multiple harmonics at the same
time. Here the hard part of the brush (not the
soft bristles) is pressed against the wall, causing it to vibrate. A
microphone picks up these vibrations and is connected to
the signal input of the lock-in amplifier, through inputs A and B/I, where the
lock-in amplifier is set to the "A-B" setting.
The output of the lock-in amplifier is converted to RGB colors using an XY to
RGB converter to drive an RGB LED
(Light Emitting Diode) that is mounted onto the TOOTHTUNES TOOTH BRUSH. The
brush is slid along the wall in a
systematic fashion and the LED color and light output varies in a way that
shows differences in material properties within
the wall, making visible the wood studs behind the drywall, and also their
condition, e.g. showing differences between studs
in good condition and those that are rotting out from water getting in behind
and to the left of the bathroom shower.
particular point at which the wand is touched.
Alternatively, suppose a home inspector wishes to take a look at the condition
of the insides of the walls of a
home. Fig. 73 depicts a phenomenological augmented reality system made using
low cost technology of a common
musical toothbrush available in department stores. This is a good teaching
project for students and hobbyists, as
we (my children and I) discovered that the low-cost toothtunes toothbrushes
can be used to stimulate walls and
other surfaces into vibration. The toothbrush was designed to vibrate the
teeth so that a person can hear music
Figure 74. SONEMAR (Sonar, ElectroMagnetic, visuobakographic) image. Moving a
light and sound transducer back
and forth allows us to see the acoustical material properties of the wall. The
studs and other materials inside the wall are
visible by way of reduced light output because the wall vibrates less when
stimulated by sound. Areas where the drywall is
attached to the rotted out stud exhibit a phase-shift in the complex-valued
signal quantity, and this phase shift is visible as
a color change from blue to green. Thus areas of rot show up as green.
through bone conduction. As such it makes a good acoustic impedance match to
other solid materials like walls,
and building materials such as drywall.
A camera, shown as "CAM." in Fig. 73, is set up on a tripod or placed on the
bathroom counter facing the wall
where the black mold is forming at the edge of the drywall that adjoins the
SHOWER STALL. This part of the
BATHROOM WALL is made of DRYWALL and has WOOD STUDS, some of which are of
ROTTED WOOD.
The picture, seen-through-the-drywall-darkly, as shown in Fig. 74, is captured in
a darkened room, e.g. by turning
off the bathroom lights, while moving the toothbrush that has colored lights
attached to it. The colored lights
indicate the strength of the return from the lock-in amplifier, according to
the XY to RGB converter illustrated in
Fig. 75. The outputs X=augmented reality, and Y=augmented imaginality, of any
phase-coherent detector, lock-in
amplifier, homodyne receiver, or the like, are therefore used to overlay a
phenomenologically augmented reality
upon a field of vision or view, thus showing a degree of acoustic response as
a visual overlay.
The invention is not limited to phase-coherent detection, i.e. the system can
also operate quite nicely with-
out phase-coherent detection, e.g. using magnitude-only detection, or using
feedback, as in some of the earlier
mentioned implementations.
More generally, embodiments of the invention may construct images in a variety
of different ways, which may be
combined into a single image of multiple channels. For example, in one
embodiment, a first image plane may arise
from an acoustic feedback, a second image plane from a magnitude response, a
third and fourth image plane from
a phase-coherent complex-valued detection at one stimulus frequency, a fifth
and sixth plane at another frequency,
and so on.
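Such a multi-channel image can be assembled, for example, as follows (an illustrative sketch; the function and argument names are expository assumptions). Each phase-coherent, complex-valued plane contributes a real/imaginary pair of channels:

```python
import numpy as np

def stack_planes(feedback, magnitude, coherent_by_freq):
    """Stack differently-acquired image planes into one multi-channel
    image: feedback plane, magnitude plane, then a real/imaginary
    pair for each stimulus frequency."""
    planes = [feedback, magnitude]
    for z in coherent_by_freq:  # one complex-valued plane per frequency
        planes.append(np.real(z))
        planes.append(np.imag(z))
    return np.stack(planes, axis=-1)
```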
[Figure: XY to RGB conversion chart. The positive real axis (0 deg.) maps to red, positive imaginary (90 deg.) to yellow, negative real (180 deg.) to green, and negative imaginary (270 deg.) to blue, with 45 deg. rendered as orange. Brightness increases with magnitude (e.g. dim red at X=5 V, bright red at X=10 V, brightest orange at X=10 V, Y=10 V), with black at the origin.]
Figure 75. Complex color-mapper. The complex color-mapper converts from a
complex-valued quantity, typically
output from a homodyne receiver or lock-in amplifier or phase-coherent
detector of the invention, into a colored light source.
Typically more light is produced when the magnitude of the signal is greater.
The phase affects the hue of the colour. For
example, a strong positive real signal (i.e. when X=+10 volts) is encoded as
bright red. A weakly positive real signal, i.e.
when X=+5 volts, is encoded as a dim red. Zero output (X = 0 and Y = 0)
presents itself as black. A strong negative
real signal (i.e. X = ¨10 volts) is green, whereas weakly negative real (X =
¨5 volts) is dim green. Strongly imaginary
positive signals (Y = 10v) are bright yellow, and weakly positive-imaginary (Y
= 5v) are dim yellow. Negatively imaginary
signals are blue (e.g. bright blue for Y = ¨10v and dim blue for Y = ¨5v).
More generally, the quantity of light produced
is approximately proportional to a magnitude, R = √(X² + Y²), and the color to a phase, θ = arctan(Y/X). So a signal equally positive real and positive imaginary (i.e. θ = 45 degrees) is dim orange if weak, bright orange if strong (e.g. X=7.07 volts, Y=7.07 volts), and brightest orange if very strong, i.e. X=10v and
Y=10v, in which case the R (red) and G (green)
LED components are on full. Similarly a signal that is equally positive real
and negative imaginary renders itself as purple
or violet, i.e. with the R (red) and B (blue) LED components both on together.
This produces a dim violet or bright violet,
in accordance with the magnitude of the signal.
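A minimal sketch of such a complex color-mapper follows, interpolating hue between the anchor colors described above (+real=red, +imaginary=yellow, -real=green, -imaginary=blue). The anchor table and function name are expository assumptions, not a definitive implementation:

```python
import colorsys
import math

def xy_to_rgb(x, y, full_scale=10.0):
    """Map a complex lock-in output (X=real, Y=imaginary) to RGB:
    brightness follows the magnitude, hue follows the phase."""
    mag = math.hypot(x, y)
    value = min(mag / full_scale, 1.0)              # brightness
    phase = math.degrees(math.atan2(y, x)) % 360.0  # 0..360 degrees
    # Phase-to-hue anchors: 0->red(0), 90->yellow(60), 180->green(120),
    # 270->blue(240), wrapping back to red at 360.
    anchors = [(0, 0), (90, 60), (180, 120), (270, 240), (360, 360)]
    for (p0, h0), (p1, h1) in zip(anchors, anchors[1:]):
        if p0 <= phase <= p1:
            hue = h0 + (h1 - h0) * (phase - p0) / (p1 - p0)
            break
    return colorsys.hsv_to_rgb(hue / 360.0, 1.0, value)
```

The returned RGB triple (each component in 0..1) can drive the three channels of an RGB LED directly, with zero input (X=0, Y=0) rendering as black.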
More generally, in some embodiments of the invention, we sense and compute a
visuacoustic transfer function of
the subject matter, over a certain range in which the subject matter has a
linear time-invariant transfer function.
In this situation, with sufficiently fast imaging, an equivalent impulse or
step response is measured in response
to a transient acoustic disturbance.
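One way to estimate such an equivalent impulse response from a known stimulus is regularized frequency-domain deconvolution (an expository sketch under the linear time-invariant assumption; the method and names here are illustrative, not the claimed apparatus):

```python
import numpy as np

def estimate_impulse_response(stimulus, response, eps=1e-6):
    """Estimate the impulse response h of a linear time-invariant
    system from a known stimulus and its measured response, via
    regularized deconvolution: H = conj(S) * R / (|S|^2 + eps).

    The small eps keeps the division well-behaved at frequencies
    where the stimulus carries little energy.
    """
    n = len(stimulus)
    S = np.fft.fft(stimulus, n)
    R = np.fft.fft(response, n)
    H = np.conj(S) * R / (np.abs(S) ** 2 + eps)
    return np.real(np.fft.ifft(H))
```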
[Figure: Visualizing environmental impact of industrial noise and mechanical disturbance. TOOL 1 and TOOL 2, with REF. TRANSDUCER 1 and REF. TRANSDUCER 2, feed a MUX into the REF. IN of a LOCK-IN AMPLIFIER (inputs A and B/I, outputs X and Y) in an EYETAP viewing the SUBJECT MATTER and an AFFECTED PROPERTY.]
Figure 76. Various tools, such as "jackhammer" pneumatic impact drills TOOL 1
and TOOL 2 affect SUBJECT MATTER
including surrounding buildings, such as an AFFECTED PROPERTY. Using EyeTap
Digital Eye Glass, the effects on the
AFFECTED PROPERTY due to each tool can be visualized. A multiplexer, MUX
sequentially selects to be responsive
to each of the tools. While switching to TOOL 1 the AFFECTED PROPERTY is
visualized through a lock-in camera in
the EyeTap. Upon the EyeTap's display screen is displayed a video magnified
view of the vibrations in the AFFECTED
PROPERTY that are caused by TOOL 1. The AFFECTED PROPERTY is vibrated by many
things including some
of the tools, but what we want to see is the vibrations due specifically to
TOOL 1, as made visible by selecting TOOL
l's pickup REF. TRANSDUCER 1 for the reference input REF IN to the LOCK-IN
AMPLIFIER that is the basis of
the lock-in camera of the EYETAP. Subsequently we select through the
multiplexer MUX, TOOL 2, specifically, through
connecting REF. TRANSDUCER 2, to the lock-in amplifier REF. IN. This forms the
basis of the lock-in camera in the
EyeTap to display the vibrations in the AFFECTED PROPERTY that are due to TOOL
2. Thus we can see selectively or
superimposed the effects of individual tools on the surrounding landscape and
environment.
In other situations, where there is significant nonlinearity, a different
response is measured and computed for
a variety of different amplitude and frequency reference waveforms,
waveshapes, and wave spectra. Additionally,
the capabilities of the Mannlab/SYSU amplifier can be fully put to use here,
in characterizing harmonic responses
arising from nonlinearities and distortions of waveforms, such as when stimulating subject matter at higher amplitudes, where objects that "buzz" can be distinguished from those that don't.
In this way, we can see loose objects
as distinct from firm but still easily vibrated objects. Harmonic imaging
using the Mannlab/SYSU amplifier makes
visible, for example, loose fence boards, when looking out across the fence in
our back yard.
Moreover, we can selectively image responses due to different inputs, as
illustrated in Fig. 76.
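The harmonic characterization described above, distinguishing objects that "buzz" from those that respond linearly, can be sketched numerically by demodulating the response at the fundamental and at its second harmonic. This is a simplified illustration; the function names and the ratio metric are our own, not the Mannlab/SYSU amplifier's interface.

```python
import math

def demodulate(samples, fs, freq):
    """Phase-coherent demodulation: correlate the samples with cosine and
    sine references at freq and return the magnitude of that component."""
    i_sum = q_sum = 0.0
    for k, s in enumerate(samples):
        t = k / fs
        i_sum += s * math.cos(2 * math.pi * freq * t)
        q_sum += s * math.sin(2 * math.pi * freq * t)
    return 2 * math.hypot(i_sum, q_sum) / len(samples)

def buzz_ratio(samples, fs, drive_freq):
    """Ratio of 2nd-harmonic to fundamental response: near zero for a firm
    (linear) object, large for a loose, rattling one."""
    fundamental = demodulate(samples, fs, drive_freq)
    second = demodulate(samples, fs, 2 * drive_freq)
    return second / fundamental if fundamental else float("inf")
```

A loose fence board responding with a clipped, asymmetric waveform yields a large `buzz_ratio`; a firmly attached board driven at the same amplitude yields a ratio near zero.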
12.2. Metaveillance
Users can also observe metawaves, i.e. T-SWIM can sense sensors and sense
their capacity to sense (Figure
77). By recording the response of a surveillance device to a known signal, the
metawavefunction of the device
is rendered. In this way, TSWIM becomes a form of augmented reality overlay,
allowing users to see and feel
sur/sous/veillance fields/flux around them.
12.3. Technical Implementation
Embodiments of the early T-SWIM prototypes used various X-Y recorders, X-Y pen
plotters, or the like. In one
embodiment, an HP (Hewlett Packard) X-Y plotter has the X-axis disconnected so
that the pen or another device
in its place (a haptic/tactile handle or grip) can move freely and
effortlessly back and forth in the X direction, while the Y-axis is driven. In this way, a user can grab the pen or grip of
the X-Y plotter and move it back and
forth left-to-right to freely explore a tactile or haptic augmented reality.
To the left of the X-Y plotter is mounted
the SIGNAL ACTOR, SIG. GEN., or the like of Fig 39, or the BUG of Fig 44. The
output of the LIA, as marked
"X" on the LIA of Fig. 39 or Fig. 44 is connected to the "Y" input of the X-Y
plotter, causing the plotter to move
the grip up-and-down or to apply a force to the grip if the user stalls it or grasps it firmly enough to slow or prevent
its movement. In this way the user can grasp and hold and feel and touch and
explore radio waves, sound waves,
etc., and especially sitting waves.
T-SWIM uses a "feedback linear actuator" such as the DC servo mechanism
salvaged from or adapted from a
Hewlett Packard XY plotter, XY recorder, stripchart recorder, or the like.
A more modern satisfactory linear actuator is the Progressive Automations PA-
14P that comes in stroke sizes
from 2 to 40 inches, with forces ranging from 35 to 150 pounds, and speeds
ranging from about 0.6 ips to 2 ips
(inches per second).
A faster-responding, and gentler (e.g. in terms of user comfort) feedback linear actuator is the "motorized potentiometer", such as the Sparkfun COM-10976
(https://www.sparkfun.com/products/10976), or the original
"ALPS Fader motorized RSAON11M9 touch sensitive 10K linear slide potentiometer
" which many others have
imitated.
In an alternative embodiment, a hobby servo is used as the linear actuator (or
substituted as a rotary actuator
with a long arm). The hobby servo is driven by a converter that converts
analog voltage to pulses of pulse width
varying in proportion to the analog voltage. Alternatively, an Arduino with
analog input can be used with a
PWM/Servo Shield, thus converting analog input to linear or rotary actuation
that can be touched, felt, grasped,
etc., and can also move an LED attached to it for visuals. Alternatively, the linear actuator of Sean Follmer's inForm
system [38] (based on the Soundwell clone of the ALPS fader) can be used.
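The analog-voltage-to-pulse-width conversion driving the hobby servo can be sketched as a linear mapping. The 0–5 V input range and the common 1000–2000 microsecond hobby-servo pulse width are illustrative assumptions, not values specified in the text.

```python
# Illustrative ranges: a 0-5 V analog input mapped onto the common
# 1000-2000 microsecond hobby-servo pulse width (50 Hz frame rate assumed).
V_MIN, V_MAX = 0.0, 5.0
PULSE_MIN_US, PULSE_MAX_US = 1000.0, 2000.0

def voltage_to_pulse_us(volts):
    """Linearly map an analog voltage to a servo pulse width in
    microseconds, clamping out-of-range inputs."""
    v = max(V_MIN, min(V_MAX, volts))
    frac = (v - V_MIN) / (V_MAX - V_MIN)
    return PULSE_MIN_US + frac * (PULSE_MAX_US - PULSE_MIN_US)

voltage_to_pulse_us(2.5)   # -> 1500.0 (servo centered at mid-scale)
```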
A receive antenna is attached to the linear actuator and picks up the target
signal and feeds the signal to the
lock-in amplifier (see Figure ??). An ATMEGA2560 microcontroller provides PID [11] control using the actuator's built-in linear slide potentiometer. The actuator is fitted with adjustable
loops for the user's thumb and index
finger, as shown in Figure 68.
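The PID position control can be sketched as follows; the gains and the normalized setpoint/measurement values are illustrative assumptions, not the ATMEGA2560 firmware itself.

```python
class PID:
    """Minimal PID controller: drive the actuator so that the position
    reported by its built-in slide potentiometer tracks the setpoint."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One control step: the lock-in output calls for position 0.8, the
# potentiometer reads 0.5, so the drive signal pushes the actuator up.
pid = PID(kp=2.0, ki=0.1, kd=0.05)
drive = pid.update(setpoint=0.8, measured=0.5, dt=0.01)   # positive
```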
There are two fundamental operational modes of T-SWIM:
• touching and grasping electromagnetic radio waves, i.e. a haptic/tactile version of the embodiment shown in Figs 39 and 44;
• feeling veillance flux, e.g. feeling the effects of a surveillance camera, i.e. a haptic/tactile version of the embodiment shown in Fig 57.
The lock-in amplifier is the same design as described by Mann in the original
SWIM device (see [81] for technical
details). It amplifies a target signal with a moving time scale equal to the
wave's speed, resulting in a stationary
frame of reference with respect to the wave. In the case of rendering waves
from a transmitting source, the
amplifier is connected to the actuator's receive antenna, while performing
phase-coherent detection of the signal.
The amplifier typically has a gain that is adjustable from one to 10^9, with a dynamic reserve on the order of 10^5 (e.g. capable of picking up signals buried in noise that is about 100,000 times stronger than the signal of interest).
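The dynamic-reserve figure can be illustrated numerically: phase-coherent correlation over whole periods recovers a reference-frequency component even in the presence of an interfering tone 100,000 times stronger. This is a simplified model of the lock-in principle, not the amplifier of [81]; the frequencies chosen are arbitrary.

```python
import math

def lock_in(samples, fs, ref_freq):
    """Correlate with in-phase and quadrature references at ref_freq,
    returning the (X, Y) component amplitudes."""
    x_sum = y_sum = 0.0
    for k, s in enumerate(samples):
        t = k / fs
        x_sum += s * math.cos(2 * math.pi * ref_freq * t)
        y_sum += s * math.sin(2 * math.pi * ref_freq * t)
    n = len(samples)
    return 2 * x_sum / n, 2 * y_sum / n

fs = 10_000
samples = [1e-5 * math.sin(2 * math.pi * 100 * k / fs)   # wanted: 10 uV at 100 Hz
           + 1.0 * math.sin(2 * math.pi * 370 * k / fs)  # interferer, 100,000x stronger
           for k in range(fs)]                           # one second of samples
x, y = lock_in(samples, fs, 100)   # y recovers ~1e-5; the 370 Hz tone is rejected
```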
In the case of metaveillance, the amplifier receives input from a sensor while
stimulating the surveillance device
with a known probing signal (the "reference" signal of a specialized lock-in
amplifier as described in [81]).
12.4. T-SWIM Discussion
Various users were recruited to try TSWIM and learn differences between purely
visual and visuo-haptic feed-
back. Users explored radio waves from a 10.525 GHz radio transmitter and the
veillance flux of a CCTV camera.
In the metaveillance case (seeing the capacity of a CCTV camera to see), users
remarked that in contrast to
simply observing visual changes in the LED brightness, being physically acted
on by the surveillance of the device
caused an innately emotional response in them. Interestingly, while some
interpreted the effect as being intrusive,
one user remarked that he felt comforted and protected by the tug while in the
camera's field of view. Another
found that the haptic sensation allowed her to detect the surveillance in a
discreet way, such that others were not
notified of the awareness. She liked the idea of knowing that she was being
watched, without others knowing that
she knew that she was being watched.
T-SWIM allows the user to move their finger through electromagnetic radio waves
in their environment. It is
possible with the T-SWIM invention to feel and grasp wave representations, signals, etc., experiencing:
signals, etc., experiencing:
1. motion of the finger along a wave;
2. forces applied to the finger as a servomotor acts even if it is stalled
(when a user deliberately resists movement);
3. inertial feel of the actuator and its LED or other load, as it flutters up
and down quickly or more slowly.
The invention allows a user's finger to be moved or influenced (in a tactile or
haptic sense) by a wave. This
makes otherwise invisible waves graspable (tangible). For observation of
transmitted waves, the user is physically
pushed and pulled by various wireless devices around them (i.e. devices that
emit radio waves), and in the case
of metaveillance, users can literally feel surveillance cameras pressing
against their bodies. This effect grows with
proximity, so that users can locate the source of a transmission or a
reception (surveillance) by feeling the point
or regions of greatest tactility.
Figure 77. Measuring veillance flux of a CCTV surveillance camera, and
rendering that veillance flux as tactile interaction
in a realtime feedback loop. A black cloth was placed behind the camera at the
left side of the frame, to make the green
LED on the tactile actuator more visible, and show its degree of exertion.
Zooming into this picture you can see the faint
trails the LED makes on either side of the central sightlines, as faint dotted lines formed by the pulsation (oscillation) of the electrically modulated LED.
12.5. Going further with T-SWIM
While the TSWIM device was designed to displace the finger along a wave, other tactile/haptic representations
tactile/haptic representations
of phenomenological augmented reality are possible. In general, vibrotactile
stimuli can be modulated by the
amplitude (real or imaginary), frequency, or phase of the observed wave, as
applied as forces to a static finger, or
to provide movement to the finger, proportional to the wave amplitude, frequency, phase, or the like. Alternatively,
the force or movement can be proportional to the amplitude of a wave (e.g. to
the square root of the sum of the
squares of an in-phase and a quadrature component).
Figure 78. Study of lens aberrations using Veillance Waves. Left: Cartesian
Veillance Waves; Right: Polar Veillance Waves.
Lens aberration visible near lower left of Veillance Wave. Mann's apparatus
(invention) attached to robot built by Marc de
Niverville.
Multiple TSWIM tactuators can be used together, so that a user can
simultaneously experience various locations
along a wave by running all of their fingers over it.
In some embodiments, the SWIM or T-SWIM are wireless devices that allow a user
to walk around untethered,
while grasping and holding electromagnetic waves or other signals.
TSWIM is a system which allows for unique visuo-haptic exploration of waves
and metawaves in the real world.
As a natural augmented tactile reality system, it bridges the gap between the
virtual and the phenomenological,
and allows us to explore signals with continuous (i.e. non-discrete or
"undigital") feedback.
13. Sparsity in the Spacetime Continuum
In conclusion, Metaveillance and Veillance Wavefunctions show great promise as
new tools for understanding the
complex world where surveillance meets moving cameras (wearables, drones,
etc.). Further research is required in
the area of Compressed Sensing [16, 33] to fully utilize this work, e.g. to
build completely filled-in high-dimensional
spacetime maps of Veillance ("Compressed Metasensing").
Moreover, just as oscillography was the predecessor to modern television, the
alethioscope and robotic SWIM
("ARbotics") could be the future that replaces television, VR, and AR.
Finally, surveillance, AI, and security are half-truths without sousveillance,
HI, and suicurity (self care). To write
the veillance/cyborg code-of-ethics we need to fully understand all veillances
and how they interplay. Metaveillance
gives us the tools to accomplish this understanding, in a
multi,cross,inter/intra,trans,meta-disciplinary/passionary
mix of design, art, science, technology, engineering, and mathematics.
14. Corepoint™
One embodiment of the invention involves wrestling with a robot to achieve a
high degree of simultaneous
dexterity and strength. With this "Wrobot" (wrestling robot), the body can
become a joystick or pointing device.
Other examples of such Corepoint™ technology include a planking board in which
the user's core muscles become
a joystick, pointing device, or the like. The board's shape is a circle, or
alternatively, a point shape, such as a
cursor or directional element, such as the front of a surfboard. A
satisfactory shape is designed using SWIM as a
haptic augmented reality computer aided design tool, as shown in Fig. 79. This
shape is suggestive of a cursor or
pointer.
The plankpoint board is mounted on a pivot so it can tilt and turn. There are
up to 4 degrees of freedom which
operate a gaming console in the following manner:
• tilt left-right (port-starboard), which moves the cursor left-right;
• tilt fore-aft, i.e. tilting the board forward so the front goes down and the back goes up, results in the cursor going down, and tilting back (stern down and bow up) results in the cursor going up;
Figure 79. The planking pointer shape is made using SWIM to create two overlapping waveforms that define the shape.
This process of computer-aided design gives a shape that resembles a cursor,
but with nice smooth edges.
[Figure: a time-varying TARGET curve and a POINTER curve, with the ERROR AREA between them shaded.]
Figure 80. Corepoint uses a typically time-varying target function, and the
core muscles control a cursor. The accumulated
area between the two curves forms the time integral that the user tries to
minimize. When the target is not moving, this
area is the absement.
• rotate, while keeping the board level or at the same overall tilt angle results in an advanced game function;
• push-pull, i.e. putting more or less weight on the board results in another advanced game function.
A smartphone is placed on the plankpoint and its accelerometer senses tilt
angle, for the cursor left-right and
up-down functions. A game is written that works in this way, so a player goes
into a planking position to control
the game.
In this embodiment, the invention comprises a planking board with a tilt
sensor that moves a cursor. A score is
provided that is proportional to the absement or absangle (integrated absolute
angle) of the board, as computed
by the square root of the sum of the squares of the two cursor dimensions away
from a target point. The target
point moves throughout the game to create a gaming environment for core
fitness training. The objective of the
game is to get the lowest absement or absangle. A score can be presented as a
reciprocal of this, so that the goal becomes the highest score. Alternatively, the score can be based on presement, i.e. the integral of reciprocal displacement or
reciprocal tilt angle. Typically a target moves and the user tries to follow
it. See Fig. 80. When the target sits
still, the invention is also still useful, as the integral is the absement,
and we can score by minimum absement or
maximum presement.
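The absement and presement scoring just described can be sketched numerically, accumulating the cursor-to-target distance (or its reciprocal) over time by a simple rectangle rule; the function names and sampling interval are ours, for illustration.

```python
import math

def absement(cursor_path, target_path, dt):
    """Time integral of cursor-to-target distance (the error area of
    Fig. 80), by a simple rectangle rule."""
    return sum(math.dist(c, t) for c, t in zip(cursor_path, target_path)) * dt

def presement(cursor_path, target_path, dt, eps=1e-6):
    """Time integral of reciprocal distance (higher is better); eps
    guards against division by zero on a perfect hit."""
    return sum(1.0 / max(math.dist(c, t), eps)
               for c, t in zip(cursor_path, target_path)) * dt

# A constant 0.5-unit tracking error held for 2 seconds (dt = 0.1 s)
# accumulates an absement of 0.5 * 2 = 1.0; the score is its reciprocal.
target = [(0.0, 0.0)] * 20
cursor = [(0.5, 0.0)] * 20
score = 1.0 / absement(cursor, target, dt=0.1)   # -> 1.0
```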
There are various forms of the corepoint technology which include:
• plankpoint™;
• pullpoint™;
• JUMPoint™.
Pullpoint is a pullup bar similarly equipped, but in typical embodiments with
only one degree of freedom counting
in the score. See Fig. 81. The bar may also be robotically actuated, as shown
rightmost in Fig. 82.
Figure 81. Corepoint™ technologies include the Plankpoint™ planking board that
allows a user's core muscles to function as a
pointing device. The concept of integral kinesiology is also applied to ankle
exercises (center picture). Another embodiment
is Pullpoint™ (rightmost). In this embodiment, a pullup bar is suspended from
a pivot or swivel, or the like, and the angle
of the swivel is sensed by a tilt sensor that forms the input device (pointing
device) for a video game. Here in this example
shown, the bar is the steering mechanism. In some embodiments, rings are
attached to opposite ends of the pullup bar to
engage the steering mechanism with the user's core muscle groups in a combined
requirement of strength and dexterity.
15. More on Integral Kinesiology for physical fitness
Further thoughts and inventions on Integral Kinesiology:
Existing physical fitness systems are often based on kinesiology (physical
kinematics) which considers distance
and its derivatives which form an ordered list: distance, velocity,
acceleration, jerk, jounce, etc.. In this embodi-
ment of the invention, integral kinematics is used to evaluate performance of
exercises that combine strength and
dexterity. Integral kinematics is based on distance and its time integrals
(absement, absity, abseleration, etc.).
A new framework is presented for the development of fitness as well as for the
assessment (evaluation, mea-
surement, etc.) of fitness using new conceptual frameworks based on the time-
integrals of displacement. The new
framework is called "integral kinesiology". The word "integral" has multiple
meanings, e.g. in addition to meaning
the reciprocal of derivative, it also derives from the same Latin language
root as the word "integrity", and thus
integral means "of or pertaining to integrity".
This connects to our broader aim to bring a new form of integrity to three important areas of human endeavour:
• Integral Veillance: Surveillance tends toward the veillance of hypocrisy (sensing people while forbidding people from sensing), and the opposite of hypocrisy is integrity, thus we must evolve from a surveillance society to a veillance (sur/sous/meta/veillance) society;
• Integral Intelligence: AI (Artificial Intelligence) involves machine sensing which often happens without humans understanding what's happening around them. In this sense AI is a form of surveillance. Thus we must evolve from surveillance intelligence to veillance (integral) intelligence;
• Integral Kinesiology, the topic of this paper.
15.1. Background
Physical fitness has traditionally been measured and improved through the use
of kinesiology. Kinesiology
derives from the Greek words "κίνημα" ("kinema") which means "motion" and "λόγος" ("logos") which means "reason" or "study". Thus kinesiology is the study of motion, and in particular, the application of kinematics to physical fitness. Kinematics itself also derives from the Greek word "κίνημα" ("kinema"), and traditionally is the study of distance and its derivatives:
Figure 82. Corepoint™ bar with rings, sensor-only (at left) as compared with actuator+sensor (right). Here S. Mann modified an automobile alternator to function as both a sensor and an actuator. This provides simultaneous sensing
and haptic actuation, so a person playing the game can feel virtual bumps in a
road, or the like.
• distance (displacement);
• velocity (speed);
• acceleration;
• jerk;
• jounce; ...
If we look at differentiation as an operator, (d/dt)^n (i.e. as action of the differential operator), we can place distance and its nth derivative on a number line, with n = 0 for distance, n = 1 for velocity (i.e. 1st derivative), n = 2 for acceleration (i.e. 2nd derivative), etc.
Looking at this number line, we see that we only have half the picture, i.e. only the right side of the number line. If we consider the entire number line, we will also want to consider distance and its time integrals, such as ∫ dt = (d/dt)^(-1), which acts on distance to give a quantity known as absement [102, 56]. Likewise, for n = -2, we have ∫∫ dt = (d/dt)^(-2) acting on distance to give a quantity known as absity. For n = -3 we have abseleration, for n = -4, abserk, for n = -5, absounce, etc.
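Numerically, each step down the number line is one cumulative integration of a sampled displacement record. A minimal sketch, using a rectangle rule and an illustrative held-displacement example:

```python
def integrate(samples, dt):
    """Cumulative time integral of a sampled signal (rectangle rule)."""
    out, acc = [], 0.0
    for s in samples:
        acc += s * dt
        out.append(acc)
    return out

# Applying (d/dt)^(-1) repeatedly: displacement -> absement -> absity
# -> abseleration.  Holding a displacement of 1 m for 1 s:
dt = 0.01
displacement = [1.0] * 100
absement_sig = integrate(displacement, dt)     # ends near 1.0 (m*s)
absity_sig = integrate(absement_sig, dt)       # ends near 0.5 (m*s^2)
abseleration_sig = integrate(absity_sig, dt)   # ~0.17 (continuum value 1/6)
```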
Absement first arose in the context of the hydraulophone (underwater pipe
organ), and flow-based sensing
(hydralics) [102].
Kinematics has traditionally been the study of distance and its derivatives,
but we proffer the concept of "Integral
Kinesiology" which is the study of distance and its derivatives AND integrals.
In this way integral kinesiology
gives us a more complete picture.
It is interesting to also note that the word "integrity" comes from the same Latin root as the word "integral": integritas ("wholeness", "soundness", "completeness", "correctness"), from "integer" ("whole") [www.etymonline.com/word/integri].
In this way, Integral Kinematics brings completeness and correctness to the otherwise "half truth" of considering only the positive half of the spectrum of derivatives.
Likewise Integral Kinesiology brings integrity to physical fitness in new and
significant ways.
15.2. Buzzwire
Let us begin with a well-known game of the prior art, "buzzwire", which
comprises a serpentine wire, along
which participants attempt to move a conductive loop without touching the loop
to the wire. See for example,
Kovakos, U.S. Patent 3208747, 1965.
This game requires a steady hand, i.e. a certain degree of dexterity.
The game is digital in the sense that the electric circuit is binary, i.e.
either open (zero) or closed (one). In this
sense, the penalty for almost, but not quite touching the wire, is zero.
Mann et al. undigitalized this game, by making a virtual undigital buzzwire
game in which the serpentine
path was drawn abakographically in a virtual space by moving a light bulb
through the space to generate a long-
exposure light trail, and then attempting to move along the light trail with a
virtual ring, while not touching the
light trail [96]. Being undigital is the concept of using digital computers to
achieve continuous (analog) results.
Examples of undigitalization include PWM (Pulse Width Modulation) which uses a
binary (digital) output to
achieve a continuous voltage, and HDR (High Dynamic Range) imaging, which uses
digital cameras to produce
undigital images [104].
The Mann et al. version of buzzwire provides an undigital game in which the
score is in proportion to the
reciprocal of the absement along the virtual wire. A first participant draws a
serpentine path with a light bulb in
a long-exposure photograph, and then challenges subsequent participants to
"ring" the wire, following along the
same path. The nearest distance from the path is calculated for each time
period, and the integral of this distance (i.e. the area under the time-distance curve) is calculated. This integral is the
absement, so the goal is to minimize
the absement (integrated error in position).
In a physical embodiment of this invention, a proximity sensor senses the
distance between the wire and the
ring, by way of a capacitance meter, so that it measures how close the ring is
to the wire. In this way, rather than
a binary continuity tester, the feedback is continuous (analog) rather than
binary digital.
Fig. 83 shows an early prototype invented and built by Mann, using
refrigeration tubing (easy to bend into
nearly any desired shape) for the wire, and an open-ended wrench for the ring.
A capacitance meter is used to
sense the proximity of the wrench ring to the wire, to obtain a distance
estimate which is then integrated to obtain
absement.
15.3. Deadheading
Another activity that involves integrized fitness is deadheading. Deadheading
is the complete obstruction of
hydraulic flow to the point of zero flow, at which point the resulting
hydraulic head is referred to as the "dead
head".
The proper technique for deadheading an upwards-facing hydraulic jet is to approach it from a sufficient height
approach it from a sufficient height
above the jet so as to easily cover the water, and then gradually lower down
upon it. For example, a downward-
facing palm is placed in the jet, and the hand is lowered until the jet is
completely obstructed, but without touching
the jet itself until it is completely obstructed (e.g. not bracing the hand
against the solid matter from which the
jet is made). Proper deadheading technique is illustrated in Fig. 84.
Hydraulic systems form the basis of multimedia games for a variety of
experiences, in some embodiments
facilitated by VR games in sensory deprivation tanks.
People tend to perceive any information as reality (even hallucinations) when
put in a sensory deprivation tank,
for example [88]. Accordingly, we propose VR games in which each player is in
a sensory deprivation tank.
Figure 83. Left: Undigital buzzwire game. A serpentine length of wire (copper
refrigeration tubing) is connected to one
terminal of a capacitance meter (proximity sensor), and the other terminal is
connected to an open-ended wrench. Distance
between the wrench and the wire is sensed, and the distance from the center of
the wrench is calculated. This calculated
center-distance is then integrated to obtain absement. Score is based on the
reciprocal of the absement. This game tests
dexterity. Right: By adding weights, we turn this game into a physical fitness
activity that measures and develops "dextrength", i.e. a combined simultaneous exertion of dexterity and strength. The MannFit™ system [98] is a commercialization
of this technology in which absement is integral to building combined
simultaneous strength and fine-motor control. Bottom:
Extreme strength combined with fine-motor control, by adding massive
quantities of weight.
[Figure: "Deadheading as a form of Integral Kinesiology": eight frames, from initial approach through complete, sustained deadhead.]
Figure 84. Demonstration of proper deadheading technique by deadheading the
tallest jet of Stanford's Tanner fountain
with just one hand. Novice deadheaders begin by deadheading the smallest jet
with both hands, and eventually work up to
single-handed deadheads on larger jets as a higher degree of physical fitness
is attained. Stanford University has a tradition
called "fountain hopping" in which students and professors frolic in the
fountains. In this sense Stanford is perhaps the
world's epicenter of fountain hopping culture. From July 2016 to January 2018,
Tanner fountain formed the venue for a
series of lectures, experiments, and teachings on principles related to
hydraulic head.
15.4. Games, Entertainment, and Media
We propose the use of this new medium of artistic expression as the basis for
a number of games we call "veillance
games" and "metaveillance games". In one example game, we have a microphone
displaying its metaveillance wave
function, and invite people to sing into the microphone and see the effect on
its wavefunction. In particular,
this game gets interesting when we use multiple microphones nearby or
remotely, so that when one person sings,
others are invited to match the exact phase of that person's voice. Each new
player that joins is invited to exactly
match the phase of all the other players already on the network. Using the
power of phase-coherent detection
(i.e. the lock-in amplifier), we created a shared virtual reality environment
in which a number of participants can
sing together in one or more locations, and try to match the phase of a steady
tone (e.g. match each other, or
match a recording of past participants if there is only one player), and see
who can produce the most pure tone.
Additionally, we can set the tone quite high, and, using the SYSUxMannLab Lock-
in-Amplifier (the world's only
lock-in amplifier that can aggregate multiple harmonics [68]) we created a
competition game for throat singers to
hit multiple harmonics (while deliberately missing the fundamental)
simultaneously.
Singing a little too high, the phase advances forward, and the waves radiate
inwards toward the sound source,
in the virtual world.
Singing a little too low, participants see the waves retreat outwards from the
source.
The goal of the game is to stabilize and lock-on to the wave and make it "sit"
still. The visuals for the game build on what has been referred to as a "sitting wave" [81], which is distinct from the concept of a standing wave [108].
So in summary, the object of the game is to generate "sitting waves" by
singing.
We found this process to be very meditative, and as a form of meditation, to
be quite engaging. To take it a
step further, we created a series of sensory deprivation chambers (soundproof
anechoic chambers in darkrooms).
Each chamber was fitted with a sensory deprivation tank, thus allowing for a
fully immersive VR experience.
Each player is isolated from external sensation, other than the game, which
itself is collaborative. Thus we have
communal sensory deprivation as a new form of immersive VR.
To visualize the soundwaves, we have suspended above each sensory deprivation tank a robotic mechanism to trace out the sound waves. The sound waves are traced in a sound-evolving Argand plane, i.e. in the 2 dimensions of the Argand plane (one dimension for real, and another for imaginary), with time as the third dimension. Thus a pure tone appears as a helix in the 3D space.
See Fig. 85 for the experimental setup. Various nodes were set up around the world, over the Internet, so that participants in Toronto, Silicon Valley, and Shenzhen, China, could link together and meditate using biofeedback from within sensory reprivation tanks. See Fig. 87, Fig. 86, and Fig. 88.
15.4.1 Mersion™ and Mersive™ User-Interfaces
The concept of underwater or in-water interactive multimedia experiences and
virtual reality was first explored by
Mann et al. as "Immersive Multimedia", "(Im/Sub)mersive Reality" and
"Immersive (direct) Experience" [71, 73,
75, 76, 97, 101, 95, 88], and further developed as a form of underwater
meditation using other forms of underwater
multimedia including 3D graphics, vision, and underwater brain-computer
interfaces [75, 88], as illustrated in
Fig. 89.
15.4.2 Indirectly Immersive Reality
The Mannfloat™ fitness game is an example of an indirectly immersive reality
game. In an indirectly immersive
reality, the user is not directly immersed, but is inside something that is
immersed. Examples include an underwater
submarine experience in which the user is inside a dry submarine which itself
is immersed in water. Another
embodiment is a hydraulophone played in a glovebox to keep the user dry. In
one embodiment, for example, a
piano keyboard layout is used in which the keys are tubes or hoses of water.
The instrument is played by pressing
down on the water tubes to restrict the water flow or change properties of its
flow. This form of hydraulophone
can be played completely dry with no water getting on the user's fingers. This
also allows other hydraulic fluids
such as oil to be used to produce the sound and the nerve stimulation.
Hydraulophones typically operate in the
110 CPS (Cycles Per Second) to 330 CPS range. This is from the musical note
"A" to the musical note "e" (one
octave above "E"). Nerve stimulators and tactile actuators often operate
around 200 CPS, so the hydraulophone
is directly in the range in which the water vibrations can be felt most
readily.
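As a hedged illustration of this range (assuming standard equal temperament; the helper function is mine, not from the original text):

```python
A2 = 110.0  # the note "A" at the bottom of the hydraulophone's typical range

def note_freq(semitones_above_a2):
    """Equal-temperament frequency a given number of semitones above A2 = 110 Hz."""
    return A2 * 2 ** (semitones_above_a2 / 12)

# "e" lies 19 semitones above A2: ~329.6 Hz in equal temperament,
# or exactly 330 Hz (3 x 110) in just intonation.
e4 = note_freq(19)
```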
15.5. Sub/Im/Mersive VR Singing + Meditation Game
Singing a steady tone ("OM") is a popular meditation technique, and often
groups of people do this together.
Frequency differences between various participants are captured by the brain,
and this can occur in a spatial pattern
(e.g. such that the left and right ears hear nearby but slightly different
frequencies). This is called a binaural beat
frequency [138]. When this frequency falls around 8 Hz, it is reported to have
an entrainment effect on the brain,
which can be a meditation aid [7]. Accordingly, we created a game based on
meditative vocal collaboration across
isolation tanks.
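The beat frequency itself is simply the difference between the two tones heard; a minimal sketch (the 220/228 Hz example values are assumptions, not from the text):

```python
def binaural_beat_hz(left_hz, right_hz):
    """Perceived beat frequency when each ear hears a slightly different tone."""
    return abs(left_hz - right_hz)

# Two "OM" voices 8 Hz apart land in the ~8 Hz band reported to aid entrainment:
beat = binaural_beat_hz(220.0, 228.0)
```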
The game is played by one or more players, each wearing a VR headset and
floating in their own tank. One
player (or a computer) initiates a note or slow chirp and players try to sing
the same note or follow (track) the chirp.
Scorekeeping is by the ratio of rabsement (strong voice) to phabsement (integrated phase error): ∫R dt / ∫|φ| dt [81].
The sound wave interference between the speaker and each player is phase-
coherently captured by a special lock-in
CA 3028749 2018-12-31

[Figure 85 diagram: Transmitter, Receiver, Signal Generator, lock-in amplifier (Ref Signal, Re, Im), Control & Data Logging, X,Y,Z Coordinates, Motion Control; labeled "3D Veillography and Metaveillography".]
Figure 85. Setup for sensory deprivation tank singing+meditation game/performance. Sensory Reprivation™ Tanks: players
are distributed across the world but play together in real-time. The game
involves singing and holding a steady pure note.
Each tank serves to isolate the players from their sensations of any other
reality, while allowing for a surreal communal
bathing experience linked by their sound waves (seen and heard), allowing the
game to exercise deeper control over the
players' experiences. Microphones and speakers allow players to interact with
(hear and watch, and contribute to) a game's
audio soundfield generated by other players. Players engage with an audio-
visual feedback system, while meditating in the
sensory reprivation tanks. Above each tank there is a robotic sound visualizer
similar to that shown in Fig. ?? (right). A
number of sensory deprivation tanks are networked so that multiple players can
compete worldwide. Each player interacts
with a SWIM based on the Delta configuration of 3D positioning device.
Figure 86. Sensory Reprivation™ tank setup for collaborative throat-singing
meditation game across networked sensory-
deprivation tanks. Note the 3D SWIM and microphone.
amplifier, as shown in Fig. ?? (right) [132]. Each player sees the
interference pattern in their VR headset and can
control their pitch according to this feedback. Phenomenological Augmented
Reality technology [132] is used along
with a robotic arm to plot the wave pattern (a pure tone is a perfect helix,
extruded in the Argand plane) in the
room for them to see prior to donning the VR headset. The 3D plotting device
is suspended above each tub, so the user can become accustomed to it prior to donning the VR glasses, and thus
strengthen the reality of being inside the
soundfield.
We also have a single-player mode in which the player competes against their
own voice as previously recorded,
or against other recorded voices. The score in the game is the reciprocal of the time-integral of absolute phase.
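A minimal sketch of such scoring (rectangle-rule integration; the function name and sample values are illustrative assumptions):

```python
def phase_score(phase_err_rad, dt_s):
    """Score = reciprocal of the time-integral of |phase error|.
    A steadier singer accumulates less integrated phase error and scores higher."""
    integral = sum(abs(p) for p in phase_err_rad) * dt_s
    return 1.0 / integral if integral > 0 else float("inf")

steady = phase_score([0.05] * 1000, dt_s=0.01)  # small, steady phase error
shaky = phase_score([0.10] * 1000, dt_s=0.01)   # twice the error: half the score
```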
15.6. Mersivity™
We proffer the new field of Submersive Reality in which we fully immerse
ourselves. By immersing ourselves
physically and fluidically, we attain a much stronger coupling between our bodies and the immersive experience
around us. With one's feet on the ground, one can still believe the things we
see in our eyeglass are not real. But
Figure 87. Collaborative throat-singing meditation game across networked
sensory-deprivation tanks. Apparatus traces out
the helix pattern of a pure tone note, while the participant wears a VR
headset to stabilize the visual exposure.
when floating, we attain a new height of connection to the synthetic reality. One example simulation we created is an app to swim with sharks while avoiding being eaten. We were able, in a small wading pool, to create an illusion of vastness and total immersion. See Fig. 90.
We also created a virtualization of the MannFit (Integral Kinesiology) system.
The concept of IK (Int. Kin.)
was originally based on stopping water leaks, either directly, or with a ball
valve or other valve that requires being
held at a certain angle to stop water flow. By simulating this in a virtual
environment, we were able to create
similar physical fitness benefits without the need to use water or wear
bathing suits or provide a place for people
to change and dry off, etc. However, in the presence of the original (wet) IK, we can also add the virtual elements. For example, we used a wobbleboard in a pool, where participants stand on the board in the pool while wearing a VR headset with their head above water. The virtual world is one of a room filling up with water due to a leak that must be stopped by keeping the board level.
Other experiences include interacting with water jets, as shown in Fig. 91
(see also our co-pending paper
"Effectiveness of Integral Kinesiology...").
We also created another concept, "Liveheading™," akin to deadheading, but with a tee fitting so that when the jet is blocked, water is diverted through a side discharge across a Karman vortex shedder bar and Nessonator™ [100],
Figure 88. Mersivity™ tank in MannLab Shenzhen. Here a linear 2D SWIM (visible
in red, in the background, with
SWIMwave in green on the TV screen at left) is used instead of the 3D SWIM at
MannLab Toronto (compare with Toronto
setup in Fig. 87).
as shown in Fig. 92.
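The shedder bar's tone can be estimated from the Strouhal relation f = St·U/d; the flow speed and bar diameter below are assumed illustrative values, not measurements from the text:

```python
def vortex_shedding_hz(flow_m_s, bar_diameter_m, strouhal=0.2):
    """Karman vortex shedding frequency behind a bluff bar: f = St * U / d.
    St ~ 0.2 is a typical value over a wide range of Reynolds numbers."""
    return strouhal * flow_m_s / bar_diameter_m

# e.g. a 5 m/s side discharge past a 5 mm bar sheds at 200 Hz,
# right in the tactile-sensitivity range noted earlier.
f_shed = vortex_shedding_hz(5.0, 0.005)
```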
15.7. Submersive Reality
Some embodiments use a fully submersive reality headset developed for
audiovisual mediation and interaction,
as shown in Fig. 93. It consists of a headset similar to standard VR headsets but with the lenses changed to a shorter focal length to focus underwater at the right distance, where a submersible waterproof smartphone is used as the
display. The lenses in the headset are replaced with lenses having a shorter
focal length, typically about 3/4 of
their original focal length.
15.8. "Head Games" for teaching concepts of Hydraulic Head
A series of outdoor research, teaching, and lab events were created at
Stanford University (See Fig. 94) to
leverage Stanford's "fountain hopping" culture (Fig. 95) toward our idea of a
"teach beach" [93]. Nearly every
day, Stanford students and professors gather around (and some jump into)
Stanford's glorious fountains, such as
Tanner fountain. Some of the professors even hold their classes in the
fountain. For example, during one of our
"Head Games" lectures/labs, another professor was using the fountain for
teaching a drama class.
Thus I envision, in the future, a more fully equipped "teach beach" that embodies elements of a waterpark, spa, research lab, (outdoor) classroom, and "beach culture".
One exhibit/teaching feature we envision in this environment is a waterfall
that teaches concepts of hydraulic
head, by way of a circular staircase surrounding it, with every step a known
height such as 20cm, and a landing
every 5 steps, so that there are landings every metre of elevation.
Conversely, one embodiment of the invention
exhibits head from ground level, as well as head from the jet mouth exit, and thus marks the height increasing and
decreasing from top-to-bottom as well as from bottom-to-top.
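The staircase makes head tangible numerically as well: each metre of head corresponds to roughly 9.81 kPa of static pressure (p = ρgh). A sketch using the step dimensions given above (the helper function is mine, for illustration):

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def head_to_pressure_kpa(head_m):
    """Static pressure equivalent of a hydraulic head: p = rho * g * h."""
    return RHO_WATER * G * head_m / 1000.0

# Steps of 20 cm with a landing every 5 steps: each landing adds 1 m of head.
per_landing_kpa = head_to_pressure_kpa(5 * 0.20)
```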
Another embodiment is a "teach beach" climbing wall, with water jets, for
deadheading while climbing. See
Fig. 99.
The apparatus of the app-based version, in a very simple form, is shown in
Fig. 100, and further advanced
embodiments, like the absement-based climbing wall, build on this concept.
16. Fitness Game Design and Data Collection
A goal of a game-based embodiment of the invention is to allow a user to
strengthen their core muscles and
improve their body balance [136] and other physical fitness skills. An
integral kinesiology board prototype (a smart
wobble board manufactured by Mannlab Shenzhen) was built to allow a subject to
use their body as a joystick to
tilt towards a destination goal while maintaining high stability. The goal
starts at the center of the board, then
moves around like a driving game. Accelerometer readings are captured by a smartphone app, from which the Cartesian coordinates of each axis are computed.
An "application" (computer program running on a smartphone) was developed to provide feedback and motivation to a subject using real-time audiovisual displacement and absement feedback. As the subject moves away
feedback. As the subject moves away
from the goal, the background music is distorted by changing its pitch, and a
ball in the center of the screen moves
away from the center proportionally. It functions like a level or bubble
level, or simulation of a real ball on a board
that rolls away from the center when you tilt it.
Absement training improves stability. When a user of the apparatus of the
invention is more stable (i.e.
exhibiting a lower relative displacement from the desired position), the
absement will be less. When the user
shakes more, the absement will be greater.
Absement provides a good way to quantify and measure dextrength (combined simultaneous dexterity and strength).
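As a sketch, absement can be computed from sampled displacement (rectangle-rule integration; the sample values are illustrative assumptions):

```python
def absement_m_s(displacement_m, dt_s):
    """Absement: time-integral of |displacement from the goal|, in metre-seconds.
    Lower absement means a steadier user of the balance board."""
    return sum(abs(d) for d in displacement_m) * dt_s

steady = absement_m_s([0.01] * 100, dt_s=0.1)  # 1 cm wobble held for 10 s
shaky = absement_m_s([0.05] * 100, dt_s=0.1)   # 5 cm wobble held for 10 s
```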
17. Conclusion regarding Integral Kinesiology
Integral Kinesiology arises from the concept of absement, a new quantity which
itself arises from hydraulics
(hydraulophones, water flow, etc.). Integral Kinesiology is based on
activities that test and develop a combination
of strength and dexterity. Activities such as deadheading are ideally suited
to developing this skill. We created an
interactive virtual deadheading studio environment for immersive/submersive
aquatic play experiences. Additionally, we developed a game with absement scoring that has proven its
effectiveness in motivating people to improve
their exercise results. For exercises that do not rely only on speed and strength, integral kinematics provides an alternative way to evaluate performance.
18. Big Data is a Big Lie without little data: Humanistic Intelligence as a
Human Right
I now introduce an important concept: Transparency by way of Humanistic
Intelligence (HI) as a human right,
and in particular, Big/Little Data and Sur/Sous Veillance, where "Little Data"
is to sousveillance (undersight) as
"Big Data" is to surveillance (oversight).
Veillance (Sur- and Sous-veillance) is a core concept not just in human-human
interaction (e.g. people watching
other people) but also in terms of HCI (Human-Computer Interaction). In this
sense, veillance is the core of HI,
leading us to the concept of "Sousveillant Systems" which are forms of HCI in
which internal computational states
are made visible to end users, when and if they wish.
An important special case of Sousveillant Systems is that of scientific
exploration: not only is (big/little) data
considered, but also due consideration must be given to how data is captured,
understood, explored, and discovered,
and in particular, to the use of scientific instruments to collect data and to
make important new discoveries, and
learn about the world. Science is a domain where bottom-up transparency is of
the utmost importance, and
scientists have the right and responsibility to be able to understand the
instruments that they use to make their
discoveries. Such instruments must be sousveillant systems!
A simple example is a ShowGlow™ or FlowGlow™ electrical outlet which has built-in LED (Light Emitting
Diode) indicators showing how much voltage is present, as well as how much
amperage is flowing. The outlet glows
to indicate what's happening, namely the current flowing through devices
plugged into it, not just the open-circuit
voltage. Color indicates phase angle, so one can see at-a-glance not just the
load, but also the power factor (e.g.
how inductive or capacitive the load is). The simple concept of phase-coherent
colormapping is useful for electrical
power, as well as for the output of a lock-in amplifier in augmented reality
wave visualization, such as back-to-back
red and green LEDs on the in-phase output of a lock-in amplifier and back-to-
back yellow and blue LEDs on the
quadrature output of the lock-in amplifier, thus indicating phase as color.
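A minimal sketch of this phase-coherent colormapping (the function and its interface are illustrative assumptions, not the claimed circuit):

```python
def lockin_to_leds(i, q):
    """Map a lock-in amplifier's in-phase (i) and quadrature (q) outputs to four
    LED intensities: back-to-back red/green on i, back-to-back yellow/blue on q,
    so that phase angle appears directly as color."""
    return {
        "red": max(i, 0.0), "green": max(-i, 0.0),
        "yellow": max(q, 0.0), "blue": max(-q, 0.0),
    }

leds = lockin_to_leds(0.0, -1.0)  # purely negative quadrature: only blue lights
```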
19. Surveillance, Sousveillance, and just plain Veillance
Surveillance² (oversight, i.e. being watched) and sousveillance³ (undersight, i.e. doing the watching) can both be thought of in the context of control theory and feedback loops⁴.
In particular, McStay considers surveillance in this way, i.e. in regards to
the form of privacy that is inherently
violated by profiling, and related closed-loop feedback systems that
manipulate us while monitoring us [106].
Ruppert considers the interplay between surveillance and public space, through
a case study of Toronto's Dundas
Square [131], where security guards prohibit the use of cameras while keeping
the space under heavy camera
surveillance. This surveillance without sousveillance [?] creates a lack of
integrity, i.e. surveillance is a half-truth
without sousveillance [99].
The intersection of Sousveillance and Media was pioneered by Bakir, i.e.
sousveillance as not merely a capture
or memory right, but also sousveillance as a disseminational (free-speech)
right. This gave rise to two important
concepts: sousveillance cultures and sousveillant assemblage [9], analogous to
the "surveillant assemblage" of [49].
Surveillance has strong connections to big data, where states and other large
organizations, especially in law
enforcement, collect data secretly, or at least maintain some degree of
exclusivity in their access to the data [117].
Two important concepts have been proposed to help mitigate this one-sided
nature of big data: (1) "giving
Big Data a social intelligence"[130] and (2) the concept of "Personal Big
Data"[48] which might more properly
be called "little data". Both of these concepts embody Big Data's sensory
counterpart that corresponds more to
sousveillance than surveillance [81].
19.1. The Veillance Divide is Justice Denied
A good number of recent neologisms, like "Big Data", "IoT" ("Internet of Things"), "AI" ("Artificial Intelligence"), etc., describe technologies that aim to grant the gift of sight, or
other sensory intelligence, to inanimate
objects. But at the same time these inanimate objects are being bestowed with
sight, that very same sight (the ability to see, understand, remember, and share what we see) is being taken away
from humans. People are being
forbidden from having the same sensory intelligence bestowed upon the things
around them.
Indeed, we're surrounded by a rapidly increasing number of sensors [22]
feeding often closed and secretive "Big
Data" repositories [121]. Entire cities are built with cameras in every
streetlight [133, 109]. Automatic doors,
handwash faucets, and flush toilets that once used "single-pixel" sensors now
use higher-resolution cameras and
computer vision [52].
Surveillance is also widely used without regard to genuine privacy, i.e. with
only regard to Panoptic-privacy. In
Alberta, for example, the Privacy Commissioner condones the use of surveillance
cameras in the men's locker rooms
of Calgary's Talisman Centre where people are naked [29] as long as only the
police (or other "good people") can
see the video. The Westside Recreation Centre, also in Calgary, Alberta, uses
surveillance cameras in their men's
(but not women's) locker rooms [41].
While surveillance (oversight) is increasing at an alarming rate, we're also
seeing a prohibition on sousveillance
(undersight).
Martha Payne, a 9-year-old student at a school in Scotland, was served
disgusting school lunches that lacked
nutritional value. So in 2012 she began photographing the food she was served
[122]. When she began sharing
these photographs with others, she generated a tremendous amount of online
discussion on the importance of good
school nutrition. And she brought about massive improvements in the
nutritional value of school lunches around
the world. She also raised considerable money for charity, as a result of her
documentary photography. But, in
part due to the notoriety of her photo essays, she was suddenly banned from
bringing a camera to school, and
barred from photographing the lunches she was served by her school.
² [118, 64]
³ [72, 103, 37, 127, 9, 8, 35, 126, 21, 6, 4, 142, 115, 124, 65, 137, 43]
⁴ [86, 81]
CA 3028749 2018-12-31

While schools begin the process of installing surveillance cameras, students
are increasingly being forbidden from
having their own cameras. And for many people on special diets, or with
special health needs, using photography
to monitor their dietary intake is a medical necessity. I proposed the use of
wearable sensors (including wearable
cameras) for automated dietary intake measurement in 2002 (US Pat. App.
20020198685 [105]). This concept is
now gaining widespread use for self-sensing and health monitoring [32, 31].
So when people suffer from acute effects like food poisoning or allergic
reactions, or from longer-term chronic
effects of poor nutrition, like obesity, being forbidden from keeping an
accurate diary of what they have eaten is
not just an affront to their free speech. It is also a direct attack on their
health and safety.
Neil Harbisson, a colorblind artist and musician, has a wearable computer
vision system that allows him to hear
colors as musical tones. And he wears his camera and computer constantly.
Wearable computing and Personal
Imaging (wearable cameras) are established fields of research [Mann 1997],
dating back to my augmented reality
vision systems of the 1970s. I also wear a computer vision system and visual
memory aid. Harbisson and Mann
both have cameras attached in such a way as to be regarded as part of their
bodies, and thus their passports both
include the apparatus, as it is a part of their true selves and likenesses.
And we are not alone: many people now are
beginning to use technology to assist them in their daily lives, and in some
ways, the transition from a surveillance
society to a veillance society (i.e. one that includes both surveillance and
sousveillance) is inevitable [5].
Referring back to Martha Payne, discussed earlier in this chapter, there was a
hypocrisy of the school officials
wanting to collect more and more data about us, while forbidding us from
collecting data about them or about
ourselves (like monitoring our own dietary intake, monitoring our exercise, or
helping us see and remember what
we see). We need to be critical of this hypocrisy because (1) it is a direct
threat to our health, wellness, and
personal safety, and (2) data obtained in this manner lacks integrity
(integrity is the opposite of hypocrisy). Thus
it is with great relief that recently Martha Payne fought back and won the
right to continue using photography
to monitor her dietary intake, not only for the journalistic freedom (in the
Bakir sense), but also for the personal
safety that such self-monitoring systems can provide.
19.2. Surveillance is a half-truth, without sousveillance
Surveillance is the veillance of hypocrisy, in the sense that security guards, while closely monitoring surveillance cameras, tend to observe and object to individuals taking pictures in the surveilled spaces. The opposite of hypocrisy is integrity. [?]
Surveillance typically tells a story from the side of the security forces.
When those with other points-of-view are prohibited from capturing evidence to support those points-of-view, what we have is something less than the full truth [?]. In this sense, surveillance often gives rise to a half-truth.
19.3. Justeveillance (fair sight) in AI and Machine Learning
Much has been written about equiveillance, i.e. the right to record while
being recorded [140, 141, 65, 87], and
Martha's case is like so many others.
In the context of human-human interaction, the transition from surveillance to
veillance represents a "fair"
(French "Juste") sight and, more generally, fair and balanced sensing.
But our society is embracing a new kind of entity, brought on by AI (Artificial Intelligence) and machine learning.
(Artificial Intelligence) and machine learning.
Whether we consider an "AI" as a social entity, e.g. through Actor Network
Theory [116, 60, 14, 145], or simply
as a device to interact with, there arises the question "Are smart things
making us stupid?"[114].
Past technologies were transparent, e.g. electronic valves ("vacuum tubes")
were typically housed in transparent
glass envelopes, into which we could look to see all of their internals
revealed. And early devices included schematic
diagrams ¨ an effort by the manufacturer to help people undertand how things
worked.
In the present day of computer chips and closed-source software, manufacturers
take extra effort not to help
people understand how things work, but to conceal functionality: (1) for
secrecy; and (2) because they (sometimes
incorrectly) assume that their users do not want to be bothered by detail,
i.e. that their users are looking for an
abstraction and actually want "bothersome" details hidden [80, 13].
At the same time that these technologies are becoming more concealing and secretive,
they are also being equipped
with sensory capacity, so that (in the ANT sense) these devices are evolving
toward knowing more about us while
revealing less about themselves (i.e. toward surveillance).
Our inability to understand our technological world, in part through secrecy
actions taken by manufacturers,
and in part through a general apathy, leads to the use of modern devices
through magic, witchcraft-like rituals
rather than science [63]. This technopaganism [134] leads people to strange
rituals rather than trying to understand
how things work. General wisdom from our experts tells us to "reboot" and try
again, rather than understand what
went wrong when something failed [128]. But this very act of doing the same
thing (e.g. rebooting) over and over
again, expecting a different result is the very definition of insanity:
"Insanity is doing the same thing, over and over again, but expecting different results." (Narcotics Anonymous, 1981)
In this sense, not only do modern technologies drive us insane, they actually
require us to be insane in order
to function properly in the technopagan world that is being forced upon us by
manufacturers who conceal their products' workings.
I propose⁵ as a solution a prosthetic apparatus that embodies the insanity for us, so that we don't have to. I call this app "LUNATIC". LUNATIC is a virtual personal assistant. The user
places a request to LUNATIC
and it then "tries the same thing over and over again..." on behalf of the
user so that the user does not need to
himself or herself become insane. For example, when downloading files, LUNATIC
starts multiple downloads of
the same file, repeatedly, and notifies the user when the result is obtained.
LUNATIC determines the optimum
number of simultaneous downloads. Typically this number works out to 2 or 3. A
single download often stalls, and
the second one often completes before the first. If too many downloads of the
same file are initiated, the system
slows down. So LUNATIC uses machine learning to detect slowed connections and
makes a best guess as to the
optimum number of times to repeat the same tasks over and over again. This
number is called the "optimum
insanity", and is the level of insanity (number of repetitions) that leads to
the most likely successful outcome.
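A hedged sketch of the redundant-request strategy (the function name, fixed attempt count, and callable interface are illustrative assumptions, not the actual LUNATIC implementation):

```python
import concurrent.futures

def lunatic_fetch(fetch, n_attempts=3):
    """Run the same request n_attempts times in parallel and return whichever
    finishes first, so the 'try again' repetition happens on the user's behalf.
    `fetch` is any zero-argument callable (e.g. a download function)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_attempts) as pool:
        futures = [pool.submit(fetch) for _ in range(n_attempts)]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # abandon the slower redundant attempts
        return next(iter(done)).result()

result = lunatic_fetch(lambda: "downloaded-bytes")
```

In practice the attempt count would be chosen adaptively, as described above, rather than fixed.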
At times the optimum insanity increases without bound, typically when websites
or servers are unreliable or
erratic. LUNATIC is not performing a denial of service attack, but, rather, a
"demand for service". A side effect is
that when large numbers of people use LUNATIC, erratic websites will
experience massive download traffic, such
that LUNATIC disincentivises insanity.
In this sense, LUNATIC is a temporary solution to technopagan insanity, and
ultimately will hopefully become
unnecessary, as we transition to the age of Sousveillant Systems.
19.4. Combatting the "Design-for-stupidity" trend
In the same way that more people are being driven insane by technology, and
also that insanity is becoming
a requirement for using technology, we also have the trend that technology is
being "dumbed down" so that it
appeals to people of lesser intellect. In this way, the technology begins to
require a lesser intellect to operate it. In
some cases, technology is incomprehensible to all but those of lesser
intellect, and so a reduced intellect becomes
a mandatory requirement in order to relate to the modern "smart" technologies.
Thus we have "smart technology for stupid people", i.e. a kind of feedback
loop that favours and rewards
stupidity. Those trying to ask questions or those trying to more deeply
understand technology are punished by
merely getting less out of the technology.
Additionally, science is becoming marginalized and even criminalized, i.e. by
stupid laws that make it illegal
to try and understand how things work. [See for example, The Law and Economics
of Reverse Engineering, by
Pamela Samuelson and Suzanne Scotchmer, 111 Yale L.J. 1575 (2002)]
19.5. Sousveillant Systems
Humanistic Intelligence is defined as intelligence that arises by having the
human being in the feedback loop of
a computational process [112].
Sousveillant Systems are systems that are designed to facilitate Humanistic
Intelligence by making their state
variables observable. In this way machines are "watching" us while allowing us
to also "watch" them. See Fig 14
(a detailed description is provided in [81]).
19.6. Undigital Cyborg Craft
Sousveillant Systems give rise to a new form of Human-Computer interaction in
which a machine can function
as a true extension of the human mind and body. Manfred Clynes coined the term
"cyborg" to denote such an
interaction [24], his favorite example being a person riding a bicycle [23]. A
bicycle is a machine that responds
⁵ This began as an interventionist/awareness-raising effort, but could evolve into a useful field of research.
to us, and we respond to it in an immediate way. Unlike modern computers where
feedback is delayed, a bicycle
provides immediate feedback of its state variables, all of which can be sensed by the user.
Consider the CNC (Computer Numerical Control) milling machine that extends our
capacity to make things. We
begin with CAD (Computer Aided Design) and draw something, then send it to the
machine, and let the machine
work at it. There's not much real feedback happening here between the human
and machine. The feedback is
delayed by several minutes or several hours.
David Pye defines "craft" ("workmanship") as that which constantly puts the
work at risk [123]. As such,
modern CNC machine work is not craft in the Pye sense. Nor is anything
designed on a computer in which there
is an "undo" or "edit revision history" functionality.
Imagine a CNC machine that gave the kind of feedback a bicycle does. Could we
take an intimate experience in
craft, like a potter's wheel, and make a CNC machine that works at that kind
of continuous ("undigital") feedback
timescale?
HARCAD (Haptic Augmented Reality Computer Aided Design) is a system to explore
"Cyborg Craft", a new
kind of undigital craft: being undigital with digital computers. One such
embodiment is the T-SWIM (Fig 105) as
a haptic augmented reality interaction device and system. In this system, one
or more users can grasp and touch
and feel virtual objects in cyborgspace, and share the resulting haptic
augmented reality experience, as a way of
designing and making things like cars, buildings, furniture, etc., through a
collaborative design and realization
process.
Let's consider a simple example. We wish to make a table with a nice curvy
shape to it. A good place to begin
is with waves. As a basis for synthesis, we have a shape synthesizer. In some
embodiments the shape synthesizer
is made in software, from simple shapes like rectangles, circles, ellipses,
etc., as one might find in a computer
program like Interviews idraw, Inkscape, or Autodesk Fusion 360. In other
embodiments the shape synthesizer
exists as a piece of hardware, as illustrated in Fig 102. Here is a multiple
exposure, showing three exposures while
moving back-and-forth, which align, approximately, with each other, while
admitting a small variation. The shape
is sculpted by press-pull operations. Additionally, harmonic variations are
made to the shapes, to design furniture.
Here a wavetable is designed in which the table is the fundamental of the
waveform from a musical note, and
the shelf above it is the fundamental and first overtone (i.e. the first two
harmonics). Here the amplitude of the
waveforms decreases from left-to-right as we move further from the sound
source.
Another embodiment comprises a robot that a user can wrestle with. One
embodiment of the wrestling robot
involves a core-strength development system, based on integral kinesiology. The user exercises their core by playing
The user exercises their core by playing
a video game or a real game or an interaction in which there is an error
signal that must be overcome. One version
of this uses an interactive spray jet that takes evasive action while a user
tries to deadhead the jet from some
distance. Deadheading a water jet is a good form of fitness exercise that is
inherently absement based, or stability
based, because it requires covering the jet from some distance, and then
coming close to the base of the jet to block
it completely. The jet is preferably too strong to be blocked at the base,
except by gradual approach. Coming
straight toward the jet, a user can carefully cover it, but a little wandering
off-course causes the user's fingers or
hand to be thrown wildly off course. Thus the exercise requires combined
simultaneous dexterity and strength.
To make this more challenging and more fun, a game is made of it. The robot or
a mobile phone app, or the
process in general, has a moving cursor or pointer, and the user's objective
is to exert a combination of strength
and dexterity to track or follow the moving pointer. An example with a water
jet on a robot is shown in Fig. 103.
Moreover, by attaching grip handles to the robot, a new form of interaction
arises. In one embodiment, the grip
handles are made from laser-cut wood (see Fig. 104). See Fig. 106 for the finished
wrestling robot.
This gives rise to:
= Wrescillatory™ User Interfaces;
= The Wrescillograph™;
= The Wrescilloscope™; and
= MAPath™ (e.g. like a GPS that says "recalculating", but instead of a map this
is for a toolpath. Tools like CNC machines
have grab handles so you can wrestle with them and then have them go ...
"recalculating" the tool path).
Here we have a synergy between human and machine in which the feedback loop is
essentially instantaneous.
CA 3028749 2018-12-31

This brings us full circle back to the topic of Sousveillant Systems. If we're
going to be able to do things like
wrestle with machines, we need machines to be more vulnerable, more
expressive, and in many ways, more human.
To make machines more expressive, let us equip them with premonitional user
interfaces, an example of which is
the concept of a PhotoolpathTM, i.e. a photographic representation of a
toolpath, as shown in Fig. ??.
A set of special Photoolbits are provided for use with a CNC machine like for
example, the DMS (Diversified
Machine Systems) 5-Axis Enclosed Router. It has an existing tool library in
which 12 items can be loaded. In
an embodiment of the invention, some of the 12 tool items loaded are
Photoolbits. One or more Photoolbits
are included in the library. In one embodiment there is simply an LED light
source in a housing with the same
dimensions as typical tools used in the DMS tool library. Part of the tooling
sequence can include selection of a
Photoolbit to trace out with during a long exposure photograph which can then
be re-rendered from any viewpoint
and fed to DEG (Digital Eye Glass) such as the EyeTap AR (Augmented Reality)
system.
Photoolbits include small batteries that charge wirelessly when in proximity
to their tool crib storage location,
and turn on automatically when grasped by the toolbit holder.
In another embodiment, the light source is modulated phase-coherently with a
phase-coherent detector array so
that it can ignore light sources other than the Photoolbit.
Alternatively a photographic toolbit (Photoolbit) substitute is integrated
right into the print head. In this way,
the overlay can happen during printing.
In another embodiment, a laser cutter is fitted with a photographic print head
that prints on photographic
material at the same time as it (cuts or etches) a workpiece. In this way we
can continuously view the toolpath as
it is evolving. A photodetector is aimed at the laser so as to pick up its
light and amplify its light and then pass
that along to one or more LEDs borne by the head. Preferably a simple
attachment is provided. Preferably the
simple attachment takes a form similar to the focus piece of an Epilog Legend
EXT36, so it can simply be inserted
and stuck on with the magnets already present. The insert has a small
lightweight battery and just snaps in place,
being of lightweight construction to minimize payload. A small ferrous piece
is for being held by the Epilog's
magnets, and the rest of it is made of aluminium or lightweight plastic, and
houses a battery holder, photodiode,
amplifier, and LED. A small microprocessor or microcontroller is used in some
embodiments and it distinguishes
between etching and cutting modes of the laser cutter. It thus drives one or
more LEDs or at least one multicolor
LED with a unique color for etching, different than the color for cutting. A
third color indicates traverses or
movement of the head in which no cutting or etching takes place. This is all
done with no requirement that there
be any communication or sensing between the Epilog and the sousveillant print
head attachment. This allows
sousveillance to be added to a laser cutter or other computer-controlled tool,
without requiring collaboration with
the manufacturer. See Fig. 109.
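The mode-indication logic described above can be sketched as follows. The thresholds and the particular colors are illustrative assumptions (the text specifies only that each mode gets a distinct color):

```python
# Hypothetical sketch of the sousveillant print-head attachment's logic:
# classify the laser state from a normalized photodiode reading and pick
# the indicator color. Threshold values and color choices are assumptions.

ETCH_THRESHOLD = 0.1   # level above which the laser is firing at all
CUT_THRESHOLD = 0.6    # level above which we assume full cutting power

def led_color(photodiode_level: float) -> str:
    """Map a normalized photodiode reading (0..1) to an indicator color.

    Below ETCH_THRESHOLD the laser is off, so the head is merely traversing;
    between the thresholds it is etching; above CUT_THRESHOLD it is cutting.
    """
    if photodiode_level < ETCH_THRESHOLD:
        return "blue"    # traverse: no cutting or etching
    if photodiode_level < CUT_THRESHOLD:
        return "green"   # etching (lower laser power)
    return "red"         # cutting (full laser power)
```

Because the classification needs only the photodiode signal, no communication with the Epilog is required, as the text emphasizes.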
19.7. Feedback delayed is feedback denied
Let us conclude with a hypothetical anecdote set in the future of musical
instruments, which parallels the author's
personal experience with scientific instruments. This points to a possible
dystopia not so much of government
surveillance, but of machine inequiveillance.
The year is 2067. Ken is the world's foremost concert pianist, bringing his
own Steinway grand piano to each
concert he performs in, now that concert halls no longer have real pianos.
Software synthesis has advanced to the
point where none of the audience members can hear the difference. Steinway
stopped making real pianos in 2020.
Yamaha and others also switched to digital-only production the following year.
Even Ken has trouble telling the difference, when someone else is playing, but
when he plays a real piano himself,
he can feel the difference in the vibrations. In essence, the performance of
the Steinway Digital is as good as the
original, but the vibrotactile user-interface is delayed. The tactuators
installed in each key simulate the player's
feeling of a real piano, but there is a slight but noticeable delay that only
the musician can feel. And user-interface
is everything to a great musical performance.
Ken no longer has access to a real piano now that his Steinway grand piano was
water-damaged by a roof leak
while he was away last March. He tried to buy a new piano but could not find
one. Tucker Music had one in their
catalog, for $6,000,000, but when Ken called Jim Tucker, Jim said there were
no more left. Jim sold about 50 of
them at that price, over the past few years, as he collects and restores the
world's last remaining real pianos, but
no more are coming up for sale.
Ken has felt that his musical performances have declined now that he no longer
has access to a real piano.
Software, AI, and machine learning make better music anyway, so there's no
longer a need for human musicians.
But there has been no great advancement in music in recent years, now that
there are no longer any great
musicians still passionate about music for music's sake. Today's musician
spends most of the time writing grant
proposals and configuring software license servers rather than playing music.
19.8. The need for antiques to obtain truth in science
The above story depicts a true event, except for a few small changes. My (not
Ken's) instrument that was
damaged was not a musical instrument, but, rather, a scientific instrument
called a Lock-In Amplifier[107, 111,
135, 27] made by Princeton Applied Research in the early 1960s. It was easy to
understand and modify. I
actually did some modifications and parts-swapping among several amplifiers to
get some special capabilities for
AR (augmented reality) veillance visualizations, such as bug-sweeping and
being able to see sound waves, radio
waves, and sense sensors and their capacity to sense [84].
The roof leak occurred in March 2016, while the amplifier was running (it
takes a few hours to warm up, and
since it uses very little electricity it is best to leave it running
continuously).
The PAR124A is no longer available, and large organizations like research
universities and government labs are
hanging on to the few that remain in operation.
It should be a simple matter of purchasing a new amplifier, but none of the
manufacturers are willing to make the
modifications I require, nor are they willing to disclose their principles of
operation to allow me to do so. Neither
Princeton Applied Research, nor Stanford Research Systems (nor any other
modern maker of lock-in amplifiers)
is able to supply me with an instrument I can understand.
One company claims to have equalled the performance of the PAR124A, at a
selling price of $17,000:
Since the invention of the lock-in amplifier, none has been more revered and
trusted than the PAR124A
by Princeton Applied Research. With over 50 years of experience, Signal
Recovery (formerly Princeton
Applied Research) introduces the 7124. The only lock-in amplifier that has an
all analog front end
separated, via fiber, from the DSP main unit.
Recent research findings, however, show that the PAR124A from the early 1960s
still outperforms any scientific
instrument currently manufactured [139].
And performance alone is not the only criterion. With a scientific instrument
we must know the truth about
the world around us. The instrument must function as a true extension of our
mind and body, and hide nothing
from us. Modern instruments conceal certain aspects of their functionality,
thus requiring a certain degree of
technopaganism [134] to operate.
Thus we currently live in a world where we can't do good science without
access to antiques.
Imagine a world in which there are no Steinway grand pianos anymore, a world
bereft of quality, except old
ones in need of restoration. A musician would have to be or hire a restorer
and repair technician, and hope for
access to one of the few working specimens that remain.
Is this the world we want to live in?
Science demands integrity.
Only through Sousveillant Systems and "little data" can we preserve the
tradition of science, in the face of
technopaganism, surveillance, and "Big-only Data". A goal of our research is
to produce devices that embody
sousveillance-by-design, starting with scientific test instruments like lock-
in amplifiers, and progressing toward
concepts like "little data". To that end, let us hope that we can build
sousveillance into our networked world,
starting with instruments.
20. Detailed description of the drawings
Fig. 1 illustrates an embodiment of a sousveillant system in the form of a
haptic/tactile augmented reality
computer-aided design (CAD) and computer-aided manufacture (CAM) system, i.e.
a system that allows a person
to design and build something, by using a haptic wand 120 to sculpt and shape
something. A shape generator
110 generates initial shape information. This initial shape information comes
from another user or collaborator,
or it is pulled from a shape library or it is drawn by other means, such as by
traditional CAD, for being entered
as data, into the shape generator 110. Alternatively, input comes from nature,
such as from a phenomenological
process, or from some visual or other element found in nature itself, and
forms a basis upon which shape generator
110 derives its input.
Shape generator 110 exists as software within a processing system, or as
hardware, either separate from, or as
part of a general processing system. A multisensory processor 111 receives
input from the shape generator 110 by
way of a shape information reference signal 112. Reference signal 112
functions as an initial starting point for a
user to work with.
A user of the system grasps a device such as wand 120 and moves it through 3D
(3-dimensional) space. The
wand 120 includes a position sensor 122. A satisfactory position sensor 122 is
a transducer connected through a
signal amplifier 116, producing spatial sensory signal 113 to a lock-in
amplifier also fed by a stationary transducer
as reference. A location sensor 140 is either disposed in the environment
around the user (e.g. a stationary
transducer) or is computed from the environment. In some embodiments sensor
140 is merely virtual, comprising
the natural unprepared environment, as first encountered by a user. Thus no
advance preparation is required,
and the user can enter any space. In this case location sensor 140 comprises
simply objects already present in the
environment. For example, radio waves or sound waves (or computer vision)
emitted from a transducer of position
sensor 122 bounce off things in the environment, are reflected, and then
received by sensor 122, thus enabling the
determination of the 3D position of wand 120. A location sensing field 150
comprises a sound field, electromagnetic
field, lightfield, or the like. For example, if sensor 122 is a 40,000cps
(cycles per second) sonar, the sensing field
is the sound waves bouncing around in a pool or room or other space. Location
sensing is relative to other fixed
or moving objects such as desks, floor, ceiling, the walls or bottom of a
pool, the bottom of the sea or a lake, fish
swimming in the sea or lake, or the like.
The wand optionally includes one or more visual elements, real or virtual,
such as LEDs 121 or virtual LEDs
synthesized in an eyeglass display as if emanating from the wand 120. The LEDs
(real or virtual) are sequenced
by a SWIM (Sequential Wave Imprinting Machine) computer 115, by an output
signal, X, from the multisensory
processor.
Signal X may be real or complex, or multidimensional in nature. Computer 115
drives LEDs 121 to make visible
a CAD (Computer Aided Design) shape 130. Wand 120 includes a bidirectional
tactor 123 which both senses and
affects tactility. Tactor 123 creates a vibrotactile sensation in wand 120 so
that when grasped by the hand of a
user, the user can feel shape 130.
Moreover, tactor 123 is also a sensor, e.g. a transducer that can operate both
ways (transmit and receive).
During a refractory period (i.e. during which it is unresponsive to stimulus)
it emits tactile signals. In between
emissions it senses how it is being squeezed. A pressure sensor function in
processor 111 records this squeeze force,
and modifies a portion of shape 130 at which wand 120 is located. So a user
reshapes shape 130 by waving the
wand 120 back and forth while squeezing the wand, to alter shape 130. A shape
table in processor 111 is indexed
by a position of wand 120, while the value at the index corresponding to the
position of wand 120 is adjusted in
proportion to the squeeze force on tactor 123.
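The squeeze-to-reshape update described above can be sketched as follows. The table size, gain, neighbourhood radius, and falloff shape are illustrative assumptions, not specified in the text:

```python
import math

def update_shape(shape, wand_index, squeeze_force, gain=0.05, radius=10):
    """Raise the shape table around the wand's position in proportion to
    squeeze force, with a smooth falloff so edits blend into the curve.

    shape: list of floats (the "undigital" shape table in processor 111)
    wand_index: table index corresponding to the position of wand 120
    squeeze_force: squeeze reading from tactor 123
    """
    for i in range(max(0, wand_index - radius),
                   min(len(shape), wand_index + radius + 1)):
        # Raised-cosine falloff: full effect at the wand, zero at the edges.
        w = 0.5 * (1 + math.cos(math.pi * (i - wand_index) / radius))
        shape[i] += gain * squeeze_force * w
    return shape
```

The updated table can then be played back out through the LEDs and the tactor's output phase, as described above.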
The updated shape table in processor 111 is continuously played out on the
LEDs and output (during refractory
period) phase of tactor 123, so the user can touch and feel and see the
updated shape 130, in a continuously fluid
manner.
In some embodiments tactor 123 is a pressure sensor and hydrophone and wand
120 is waved back and forth
underwater while a small jet of water conveys shape 130 information to a hand
of a user, and also senses, hydraulo-
phonically, force given by the hand of the user. In air, this can also work,
with simply a change in fluid from water
to air, thus providing a fluid user interface for use in air, where the tactor
123 is a fluid (air or water) sensor and
transmit transducer in one, or contains separate send and receive transducers.
Fig. 2 illustrates an embodiment of a collaborative cloud-based Haptic
Augmented Reality Computer-Aided
Design system based on the embodiment of Fig. 1.
Here wand 120 is held by user U201 running a cloud-based CAD software instance
201. A wide variety of
cloud-based CAD software packages will work with this system. A good choice of
cloud-based CAD software is
Fusion 360 by Autodesk. Another user U202, or the same user at a different
point-in-time (e.g. with U202 being a
user-present, and U201 being the same user at a past point in time) is running
the same cloud-based CAD software,
such as, but not limited to, Autodesk Fusion 360, e.g. different users (or the
same users at different points-in-
time) can use different software as long as all the software follows the same
HARCAD (Haptic Augmented Reality
Computer Aided Design) protocol.
User U202 is running instance 202 of CAD software such as Autodesk Fusion 360.
In situations where user U202
is the same user at a different (future) point-in-time, user U202 is running
the same or a different instance 202 as
instance 201.
Within the instances 201 and 202 of CAD software, a shared object S200 exists
within the cloud software
instance 200, and resides on cloud storage 290 by way of network 280. Storage
290 may be a disk, solid state
drive, magnetic core, punched paper tape, or any past, present, or future,
e.g. newly yet-to-be discovered form of
computer memory storage device.
Users U201 and U202 interact by manipulating various shared objects in a
virtual or augmented reality space,
such as shared object S200.
Shared object S200 exists as a 3D spatial object in 3D space, but objects in
higher dimensional spaces are also
edited and manipulated, since virtual objects are not limited by dimensions.
Some of the shared objects are four
dimensional spimes (spacetimes), visible in the axes of x,y,z,t (time),
whereas others are of higher dimensions. Some
objects are real whereas others are complex, e.g. the users share and
manipulate complex objects like functions in
complex space, such as waveforms made of a spatiotemporal complex-valued
Fourier series.
Thus shared objects S200 range from simple things like circles, rectangles,
cubes, spheres, cylinders, cones, and
the like, to more complex things like multidimensional complex-valued
manifolds in the spacetime continuum of
quantum gravity waves, as users U201 and U202 collaborate on preparing online
course materials.
Shared objects S200 are not limited to CAD/CAM but also include the making of
virtual content that is never
physically realized, such as course material for teaching quantum field theory
or other abstract concepts.
User U201 grasps wand 120 and moves it along object S200 to select a
neighbourhood of points around point
P221, and feels that part of S200, while seeing all of S200.
User U202 grasps wand 220 and moves it along object S200 to select a
neighbourhood of points around point
P222, and feels that part of S200, while seeing all of S200.
Within instance 200, P201 and P202 are updated so that both users U201 and
U202 see and touch and feel and
grasp the same shared object S200 and observe (see and feel) each others'
edits.
In this simple example, the shared object is an essentially one-dimensional
manifold in multidimensional (3D or
4D or higher-dimensional) space, so that it exists simply as a list of
numbers, preferably as a list of double-precision
floating point numbers, and thus an "undigital" (essentially continuous)
representation of the object S200.
User U201 is trying to pull up on a portion of the object a bit toward the right of center, whereas user U202 is
right of center, whereas user U202 is
trying to pull down on a portion of the object near the left side of the
object S200.
The object representation in this case comprises a table of numbers, with
enough sampling to losslessly describe
it. There are several cycles (about seven cycles) of waveform of which the
first several harmonics are relevant, so
by Nyquist's theorem we need maybe 7*7 = 49 cycles, times two, i.e. about 98
samples; about 100 or
so samples would suffice for the specific object shown in Fig. 2, but to allow
for further edits, let's say there are 1000
samples in object S200, i.e. that S200 is represented by 1000 floating point
numbers, thus having 1000 undigital
degrees of freedom.
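The sample-budget arithmetic above can be written out explicitly (variable names are illustrative):

```python
# Sample budget for object S200: about 7 cycles of waveform whose first
# ~7 harmonics matter, doubled per the Nyquist criterion.
cycles = 7
harmonics = 7
highest_frequency = cycles * harmonics   # 49 cycles across the object
nyquist_samples = 2 * highest_frequency  # 98 samples minimum

# The text then rounds up generously to leave headroom for further edits:
table_size = 1000                        # S200's actual table of floats
```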
More generally, more complicated objects like an automobile or jet engine,
will include even more degrees of
freedom or data points. For example, the curve of an automobile's body is
represented by a Fourier expansion of
the surface in 3D space, and is manipulated by grasping the surface, using the
"press pull" function of Fusion 360.
Therefore we require a simple way to manipulate the shared objects in
multidimensional spaces. In our simple
example, user U201 grabs point P221 by hovering wand 120 over point P221 and
squeezing the tactor of the wand,
so as to select this point. A software interface reads this squeeze and
requests a block of object S200 from the cloud
server instance 200 and updates the numerical values, in this case by
increasing the numbers in the neighbourhood
of P221 region, i.e. as indicated on scale S1000, with finer scale divisions
just before index i200 and i600. Index
markers i0, i200, i400, i600, and i1000 are shown on scale S1000, and indicate
1001 indices running from 0 to 1000.
Under this programming, point P201 falls at sample 587 of the sample array
indicated by scale S1000, thus the
587th sample of object S200 is increased, as well as the sample points in its
neighbourhood.
This is done by adjusting an object representation, such as a spline or a
series representation such as a Fourier
series representation, resulting in a general upward shift of the portion of
object S200 in the high five hundreds
area of indices, e.g. samples at indices 580 all the way up to 600 also
increase substantially.
Likewise user U202 is tugging down on the left side of object S200 in a
similar fashion, around sample 117 of
the list of numbers that represent object S200. Our algorithm thus needs to
decrease the 117th number in the list
of numbers that represent object S200. For smoothness and continuity, we
filter this movement similarly, e.g. by
modifying the representation of the data, e.g. by taking a Fourier transform
of the desired change, selecting only
the first several principal components (typically the lowest-frequency
coefficients), zeroing out the higher (less
principal) coefficients, and then performing the inverse Fourier transform to
get the new modified object S200.
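A minimal sketch of this low-pass edit filter follows. Because the desired change is an impulse at one sample, its low-passed inverse transform has a closed form (a Dirichlet kernel), used here to keep the sketch dependency-free; the number of retained coefficients is an illustrative assumption:

```python
import math

def smooth_edit(shape, index, amount, keep=8):
    """Apply a tug of size `amount` at sample `index`, low-pass filtered.

    The desired change is an impulse at `index`; keeping only the `keep`
    lowest-frequency Fourier coefficients of that change (zeroing the rest)
    spreads the edit smoothly into its neighbourhood.
    """
    n = len(shape)
    out = list(shape)
    for t in range(n):
        acc = amount / n  # DC (k = 0) coefficient of the impulse
        for k in range(1, keep):
            # each retained +/- frequency pair contributes a cosine term
            acc += (2 * amount / n) * math.cos(2 * math.pi * k * (t - index) / n)
        out[t] += acc
    return out

# e.g. user U202 tugging down around sample 117 of a 1000-sample object:
s200 = smooth_edit([0.0] * 1000, 117, amount=-1.0)
```

The total change equals the requested amount (the DC coefficient is preserved), while the edit itself is spread smoothly around sample 117.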
In this way user U202 can tug down on the left side of object S200 while user
U201 tugs up on another part of
the object just right of center, and the two changes are made and experienced
by both users who can each see the
whole object and feel the part of the object they are exploring.
When the two users run on top of each other (e.g. if both try to edit the same
part of the object) they are
essentially "wrestling" with each other, and the action and general feel of
the apparatus is much like the Wrobot
(wrestling robot) in which they feel each others' tugging and know that they
are both trying to edit the same piece
of object S200. In this way it feels a bit like a ouija board, when multiple
people try to move a shared planchette,
but these multiple people here are in separate geographic locations.
Wands 120 and 220 are ideally lightweight and easy to carry around or put in a
pocket, or wear as a necklace,
or even exist as virtual objects of zero mass or exist as small things like
rings worn on a finger.
But wands 120 and 220, in some embodiments are also large objects and even
have deliberately added mass. A
nice weighted wand feels good in the hand, and especially in pursuit of
physical fitness, the wands may be weighted
and used also in a competitive kind of sport.
This sets forth a way to revolutionize the workplace, and rather than being
fat and lazy sitting on a chair all
day, we can be fit while designing things.
Thus there is a new method of fitness, using the MannFit™ System in which
players or competitors, or workers,
collaboratively design objects like furniture, or other objects for sale, by
playing fitness games that develop a
combination of strength and dexterity. I call this "dexstrength", i.e. the
capacity to act with precision+accuracy
while at the same time exerting one's self.
Abakography (computational lightpainting) itself can be made into a game or
sport, as outlined in "Intelligent
Image Processing", author Steve Mann, publisher John Wiley and Sons, year
2001.
Fig. 3 illustrates an embodiment of a collaborative cloud-based fitness game
designed to create strength and
stability in a user's core muscles. Such a game is aimed at providing
strength, dexterity, endurance, stability, and
control over the following muscle groups generally known as the "core":
= rectus abdominis;
= transverse abdominals;
= oblique muscles (internal and external "obliques");
= pelvic floor muscles;
= multifidus;
= lumbar spine stabilizers;
= sacrospinalis;
= diaphragm;
= latissimus dorsi;
= gluteus maximus; and
= trapezius.
Fitness rings 310 hang in pairs each at the end of a destabilizer bar 311
which hangs from its center by cable
312 from ceiling 313. Cable 312 connects to a swivel 316 which forms a pivot
point about which the bar can rotate.
Each player hangs from a pair of rings hanging from a destabilizer bar hanging
by a single cord. The cord may
be a rope, chain, cable, wire, or swivel bar, or pivot point affixed without a
cord.
Each pivot point is connected to a rotary potentiometer. The counterclockwise
most side of the potentiometer
is connected to +15 volts from a power supply and the clockwise most side is
connected to -15 volts from the
power supply. This causes a voltage change proportional to the angle of the
bar. When the bar is horizontal
straight across, the voltage is zero. A standard 270 degree rotary
potentiometer is used with a +/- 15 volt (i.e. 30
volt center-tapped) DC (Direct Current) power supply. Thus there is about 1
volt increase or decrease for each 9
degrees of tilt. When the bar tilts to the left the voltage goes up, and vice
versa. So if the bar tilts 9 degrees left
of horizontal, the voltage is +1 volts. When the bar tilts 9 degrees to the
right, the voltage is -1 volts. In Fig. 3
the bar is shown at about a +15 degree angle (positive angles are counter
clockwise), so the voltage present at the
wiper terminal of the potentiometer is about plus one and two thirds of a
volt.
This system forms the basis for the CorePoint™ system in which the user's core
muscles form a pointing device.
The pointing device is, in this example, one-dimensional in the sense that
there is just one degree of freedom which
is the voltage at the center arm of the potentiometer.
The Corepoint system functions as a simple game starting with level 0 of the
game to warm up, and then
advancing.
In level 0, the objective is simply to hold the bar horizontal. For each
player, there is a separate output of the
central arm of the potentiometer which is a voltage. For user U301, the
voltage is v1 and for user U302 the voltage
is v2. For each player, this voltage goes through an absolute value circuit or
absolute value taker, absval 350, the
output of which is fed to integrator 351. A satisfactory integrator is formed
by an op amp such as a Fairchild 741
or Signetics 5532, with capacitor 353, in its feedback loop, and resistor 352,
in series with its negative input, the
positive input being grounded. Note that signal 355 is the negative of the
integral, so it requires further inversion,
this being part of the absement tracker 360's job. The integration is
implemented in hardware or software or
firmware. Preferably the integration is performed computationally rather than
requiring a separate op amp, since
there are other computations to also be performed.
The integral of the absolute value of the voltage from potentiometer 322 is
referred to as the absement, and
exists as signal 355 which is fed to the absement tracker 360. The absement
tracker displays continuously the
absement on a shared cloud SCOREboard / Coreboard™ so both players can see a
time-varying graph of their
absement overlaid upon each other, and this provides motivation to hold out
for longer.
For simplicity on the drawing, only the circuit for player 2 is shown, but in
reality both players have such circuits
or software running, and, more generally, more than 2 players are involved in
many embodiments of the invention.
Typically any number of players can log on and join the game over the
cloud-based server on network 280.
Players achieve "Corepoints" by getting minimum absement. A Corepoint is
defined as a thousand times the
reciprocal of the absement, i.e. 1000 divided by the time-integral of the
absolute value of the deviation from
horizontal of the bar. Corepoints are displayed on the Coreboard, for both
players to see.
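The absement and Corepoint computations described above can be sketched as follows, as a software stand-in for the op-amp integrator; the sampling period is an illustrative assumption:

```python
def absement(voltage_samples, dt=0.1):
    """Time-integral of |deviation from horizontal|, from the sampled wiper voltage.

    voltage_samples: wiper voltages (proportional to bar angle, 1 V per 9 degrees)
    dt: sampling period in seconds (illustrative)
    """
    return sum(abs(v) for v in voltage_samples) * dt

def corepoints(voltage_samples, dt=0.1):
    """A Corepoint is defined as 1000 times the reciprocal of the absement."""
    a = absement(voltage_samples, dt)
    return 1000.0 / a if a > 0 else float("inf")
```

Holding the bar dead horizontal keeps the absement near zero, so the score rewards sustained stability rather than momentary correction.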
Players next advance to level 1 of the game. In level 1 of the game the
additional feat of raising the feet is
provided, i.e. leg-raises, into L-seat position while also minimizing
absement. Additional corepoints are computed
based on height of legs, and good form, as determined by a 3D vision system.
Players then advance to level 2 of the game. Level 2 of the game involves
generalized absement in which a
target function is defined and displayed as object S200 to both players. This
object is a surface, typically a
one-dimensional manifold in two- or three-dimensional space.
The goal of the game is to move a cursor along the curve of the object S200
from left to right. The target curve
of object S200 will have two instances, instance 301 traced out by user U201
("player 1"), and instance 302 traced
out by user U202 ("player 2"). Instances 301 and 302 show level 2 of the
game, where we see rough jagged traces
corresponding to the actual angles as a function of time, and the smooth trace
of instance S200 as the desired
angle as a function of time.
The curves resemble traces on a CRO (Cathode Ray Oscillograph), and are
virtualized as such in the game.
Players both see an accurate rendition of a 1935 RCA Cathode Ray Oscillograph,
Type TMV-122, displayed on
the screen. With its round screen, it is reminiscent of a radar scope and
provides a nice retro aesthetic for the
game. The characteristic green glow of object S200 has a cursor that sweeps
from left-to-right as a function of
time. Instead of keeping the bar horizontal, players must tilt the bar left to
increase the voltage on the TMV-122
plot (i.e. move its cursor up), or right to drop the cursor down. For this
action, the waveform of object S200 is
added to the operational amplifier of integrator 351, or, in the software
embodiment, the 1000 or 1001 samples of
object S200 are sequentially subtracted from a sampled voltage of v1 for
player 1 and v2 for player 2. This is done
as follows:
= An initial input voltage, v(t) (either v1 or v2 depending on which
player) is received, and used to update
instance 301 for v1 and instance 302 for v2 as traces evolving along with time
(e.g. getting traced from left to
right). Thus initially all that will exist of instances 301 and 302 is the
leftmost point. A difference is computed
between the input voltage and the first element of the list of 1000 or 1001
numbers corresponding to object
S200. This difference is called e0 and corresponds to the error signal between
the desired and actual position
of the bar 311 or 321;
= At the next point in time, a new voltage sample is taken, and this sample
of voltage v(t) at this later time,
for each player, is used to update instances 301 and 302, and is used to
compute a new difference voltage
against the next sample of object S200, for each player, and this difference
is called e1 (each player has a
growing list of error terms, i.e. there's an e0 for each player and an e1 for
each player, and so on);
= Continuing, at each next point in time, a new voltage sample is taken,
and this sample of voltage v(t) at
each later point in time is used to compute new difference voltages against
each next sample of object S200,
and these differences are called e2, e3, e4, and so on, all the way up to e1000;
= At each point in time, error voltages and their corresponding error
angles (i.e. as calculated at 9 degrees per
volt) are displayed on the Coreboard for the two or more players, along with
the running total error, thus
far, as well as the Corepoints calculated as 1000 divided by the total error;
= At each iteration, the winning player thus far, is identified, based on
the player with minimum error (maximum
Corepoints).
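The per-sample error tracking in the steps above can be sketched as follows. Scoring the running total as a sum of absolute errors is an assumption (the text says only that a difference is computed at each sample):

```python
def run_level_2(target, player_voltages):
    """Score one player's run against the target curve of object S200.

    target: the desired voltages (the 1000 or 1001 samples of object S200)
    player_voltages: the player's sampled wiper voltages v(t), one per target sample
    Returns (errors, corepoints): the growing list e0, e1, e2, ..., and the
    score computed as 1000 divided by the total error.
    """
    errors = []
    for desired, actual in zip(target, player_voltages):
        errors.append(abs(actual - desired))   # e0, e1, e2, ...
    total_error = sum(errors)
    corepoints = 1000.0 / total_error if total_error > 0 else float("inf")
    return errors, corepoints
```

Running this for each player at every iteration yields the leaderboard behavior described above: the player with the minimum total error has the maximum Corepoints.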
At the conclusion of the game, the player with the lowest error (maximum
number of Corepoints) is identified as
the winner, and the result is entered into a permanent record in cloud storage
290. This record is made for every
sample so that a player in the future can play against a player from the past.
Players can compete against each other in real time, or they can compete
against other players from the past,
or against themselves from the past, e.g. each shape S200 defines a "course"
that can be played, and any player
can compete against someone who did, in the past, that same course of shape
S200.
The TMV-122 Cathode Ray Oscillograph emulation is just one example of a game
that can be played with this
invention. It is a good game for the novice, because there is very little
distraction, and the game is very simple
and easy to understand.
Level 3 of the game advances to a driving game.
Level 3 uses a video game, for steering a car, for example. In level 3, the
bar functions as the steering mechanism
for racing the car.
The next level, level 4, is the same driving game but using the leg-raise
function for control of the speed of the
vehicle. To go faster, players raise their legs higher, in an L-seat position.
In addition to the angle sensor formed by potentiometers 321 and 322, another
sensor senses leg level, and the
object of the game is to do an L-seat position on the rings, raising the legs,
as indicated in Fig. 82 (left), keeping
the legs as high as possible. A suitable sensor is a 3D camera such as a
Kinect camera, aimed at the body, namely
the legs, of each user U201 and U202. Users may be together facing each other,
in which case one vision system
can monitor both users, or they can be in different places, such as in
different countries, connected by network
280, over a cloud server, where each user is monitored by a separate 3D camera
based vision system.
In the video game, steering is by the angle of potentiometer 321 and 322,
which function as steering wheels for
the players such as users U201 and U202 respectively, and the accelerator of
the virtual car (one for each player)
is controlled by the leg raise height.
The result is to develop solid core muscles by using fitness and fun, pointing
with the core muscles, as core
function and stability and control maps directly to performance in the game.
Fig. 4 illustrates an embodiment of the invention using the tilt sensor 421
(gyroscopic sensor) built into a
smartphone 471 rather than the separate potentiometer 321 as a tilt or
angle sensor.
The display of smartphone 471 shows an initial "splash screen" display
that has wavy water-like lines
on it, along with some branding and a corporate product slogan or
aphorism such as "Abs of cement with
absement" or the like. Upon the screen is also the HORIZONE™ indication, i.e.
the zone of the horizon, as
indicated by waves 481 symbolizing water waves as seen on an ocean view that
indicate horizon. The waves are
synthetic in some embodiments whereas in others there is provided a view of a
nice beach, rotated to indicate
the horizon as either target position or reference position.
Coordinates are selectable as "Normal" (Forward) mode or "Corrective"
(Backward) mode, i.e. as either showing
the horizon as it should be, or as it currently is.
The waves 481 are modulated as per their degree of correspondence with the
correct angle of tilt.
In level 0 of the game, the waves simply show where the horizon should be, and
the player keeps level, and the
waves indicate tilt, e.g. as going "stormy" when upset, or "calm" when closer
to target.
CA 3028749 2018-12-31

In level 2 of the game, the target angle is built into a pre-tilt or pre-
rotation of the waves 481 as displayed,
and this is the means for showing the target angle. Instance 401 shows level 2
of the game, where we see a rough
jagged trace corresponding to the actual angle as a function of time, and the
smooth trace of instance S200 as the
desired angle as a function of time.
In one embodiment, the waves 481 swing around to various angles, and the
player must counteract this tendency.
In one embodiment, the absement is presented as seasickness, and the goal is
to minimize seasickness.
This is done with a seasickness metaphor, which is how the whole system
presents absement as something
comprehensible to the user, since teaching of the principles of absement is
usually based on water accumulated in a
bucket, for example (as a metaphor for the process of integration).
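Numerically, absement is just the time integral of displacement, which can be sketched with a rectangular-rule sum. This is a generic illustration of the bucket metaphor, not code from the specification:

```python
# Absement: time integral of |displacement|, accumulated like water in a
# bucket. 'displacements' are samples taken every dt seconds.
def absement(displacements, dt):
    return sum(abs(d) for d in displacements) * dt
```

This makes the equivalence stated later explicit: tilting 1 degree for 10 seconds accumulates the same absement as tilting 10 degrees for 1 second.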
Fig. 5 illustrates an embodiment of the invention for planking or pushups,
upon a surface that is shaped like
a foreshortened surfboard or "boogie board". The board is about 18 inches
wide, and about 28 inches long, and
there is a ball joint or swivel on the bottom of it, about 13 inches back from
the front. This point forms a pivot
562, around which the board can rock side-to-side, fore and aft, rotate
clockwise or counterclockwise with respect
to the ground, or push a little bit closer to the ground or a little further
away (i.e. to sense total weight upon the
apparatus). Thus there are 4 degrees of freedom, of which three are motion
based and the fourth is pressure or
force based.
Pivot 562 defines centerline 565 which is about where the center-of-gravity
for the upper body contact region of
user U201 falls along. Thus for doing pushups or doing planks in pushup
position, user U201 places the palms along
centerline 565, whereas for doing planks in forearm position, user U201 places
his or her elbows approximately
along centerline 565, with forearms 570 resting on the board surface 567.
Pivot 562 is a ball or partial ball, such as a rounded end of a pipe end cap,
or other rounded shape, in the range
of 1.5 to 6 inches in diameter, or so (can be smaller or larger depending on
desired degree-of-difficulty when it sits
directly on a flat floor or flat base surface 560).
It either rocks around on the floor directly, or rocks on a surface 560, or
mates with a socket attached to surface
560 to keep it from slipping side-to-side or fore-to-aft.
Near the bow of surface 567 is a display region 571 comprised of an array of
addressable picture elements, or
simply a place for putting an external display such as an external smartphone
or other computing display device,
i.e. region 571 may simply be a rubber mat or a recess or indentation or
framing.
For a better neck position, the display region 571 is, in some embodiments of
the invention, on the ground in
front of the apparatus, or the surface 567 is extended outward more, just for
display 571, or in some embodiments
there is a small wireframe extension attached to the front of surface 567 that
slides in and out, to extend forward
(adjustable) and hold display 571.
Alternatively an eyeglass 509 is used to display the material as instance 501
of a game scenario. The material
of instance 501 is displayed on display 571 or eyeglass 509 or both or a
mixture of these (e.g. some content as an
augmented reality overlay).
Alternative embodiments use a full-body version where a user does pushups on a
board that requires balancing
of the whole body, as shown in Fig. 110.
A typical game is a game of buzzwire or buzzwave in which the user U201 needs
to follow along a complex-valued
waveform, such as wave 540, with pointer 539. Wave 540 is a function of spime
(space or time) on a spime (space
or time) axis 541, and in particular, its real part as plotted on a real axis
542, and its imaginary part as plotted
on an imaginary axis 543. Wave 540 may be generated by any of a wide range of
methods. A satisfactory method
is to slide a microphone along a rail, at one end of which there is a
loudspeaker connected to an oscillator output
of a lock-in amplifier, wherein the microphone is connected to the signal
input of the lock-in amplifier, and the
"X" and "Y" outputs of the lock-in amplifier are recorded while doing this.
The result is an object that can be
visualized in 3D space, resembling a corkscrew kind of shape, with some
irregularities due to sound reflections off
walls, etc., and this is shown here for level 2 of the game.
The game is a 3D maze of sorts which must be navigated by keeping the pointer
539 as close to wave 540 as
possible. The distance between pointer 539 and wave 540 is captured as a
function of space or time, along the
wave 540. The time integral of this distance is the absement, and is displayed
dynamically on a scoreboard 551,
e.g. here shown as 3.25 degree seconds (degrees times seconds). (For small
angles, absement and absangle are
approximately equivalent.) The score of 1000 / absement is also displayed. The
absement increases or stays the
same during the following of the wave, while the user U201 tilts surface 567
left-to-right to move along the real
axis and fore-to-aft to move along the imaginary axis. Holding surface 567
level keeps the pointer 539 on the spime
axis 541, whereas the more tilted surface 567 is, the further from the spime
axis the pointer 539 goes. A tilt sensor
senses the left-right tilt of surface 567 and converts that tilt to a position
along the real axis 542, and senses the
fore-aft tilt of surface 567 and converts that to a position along the
imaginary axis 543.
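The tilt-to-position mapping and the absement accumulation just described can be sketched as follows. The degrees-to-axis-units gain and the function names are illustrative assumptions:

```python
# Left-right tilt of surface 567 maps to the real axis 542; fore-aft tilt
# maps to the imaginary axis 543. A level board puts pointer 539 at the
# origin, i.e. on the spime axis 541.
def pointer_position(tilt_lr_deg, tilt_fa_deg, gain=1.0):
    return complex(gain * tilt_lr_deg, gain * tilt_fa_deg)

# Accumulate the absement: the time integral of the distance between
# pointer 539 and the complex-valued wave 540.
def track_absement(tilts, wave, dt):
    """tilts: list of (lr, fa) tilt pairs; wave: list of complex wave-540
    samples; dt: sample period in seconds."""
    total = 0.0
    for (lr, fa), w in zip(tilts, wave):
        total += abs(pointer_position(lr, fa) - w) * dt
    return total
```

The score shown on scoreboard 551 would then be 1000 divided by this accumulated absement.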
In level 0 of the game, the wave 540 is just the spime axis itself, and the
goal is simply to keep surface 567 as
level as possible at all times, to stay on this axis. The absement is a record
of how unlevel surface 567 is over
time. Tilting 1 degree for 10 seconds costs the same against one's performance
record as tilting 10 degrees for 1
second. The goal here is simply to get the smallest area under that curve. The
absement is computed in Euclidean
geometry or in other embodiments it is computed in any other metric space or
even in spaces that are not metric
spaces. Thus more generally an accumulated deviation from straight is
computed, as desired.
In level 2 of the game (what's shown in the figure), the goal is to follow
the path as closely as possible. Again
the absement is computed in Euclidean geometry or in other embodiments it is
computed in any other metric space
or even in spaces that are not metric spaces. Thus more generally an
accumulated deviation from the course is
computed, as desired.
To up the ante, the user U201 places the feet on a fitness ball or other
unstable surface such as surface 575,
which is part of the game. Surface 575 includes a gas pedal 576 and brake
pedal 577 allowing the user U201 some
control over navigating the course of wave 540.
In level 2 of the game, surface 575 is grounded and simply serves as a pedal
interface.
In level 3 of the game, surface 575 becomes unstable and rocks or swivels to
make it harder for the user U201
to keep level and stable. This more quickly develops core muscles.
Fig. 6 illustrates an embodiment of the invention that does not require the
use of a separate smartphone or
other external device. Surface 667 rests upon a joystick type controller 600,
that supports surface 667 and allows
it to swivel as a whole. Controller 600 is a game controller. A satisfactory
game controller is the Logitech Extreme
3D Pro, cut off shorter so that the handle part is essentially replaced by the
board of surface 667. However, in
a preferred embodiment, a game controller 600 is built directly from first
principles, as part of the device of the
invention. The controller 600 sits on base 660.
Surface 667 is fashioned after a surfboard in style of design, with a water
themed pattern 640. The water themed
pattern 640 is made of LEDs (light emitting diodes) that are addressed or
sequenced with waves that either create
the game experience, or augment it. Preferably surface 667 is translucent and
carries light throughout it, so that
a relatively small number of picture elements ("pixels") can be used to create
an interesting set of wave patterns
for gaming experience design.
Surface 667 has a port side that faces to the user's left when the user is
facing forward toward the bow, and a
starboard side that faces to the right when the user is facing forward toward
the bow. The bow is a pointed end
intended to indicate a user's forward-facing direction in the game. There is a
stern side opposite the bow side.
Base 660 is radially symmetric and the controller 600 is preferably designed
so that the whole board surface 667
can pivot and point in any heading. The heading in which the board 667 is
facing, within the game, is indicated
by a compass 620 displayed as a virtual compass display on a screen display
671, in some embodiments that have
the display 671. In embodiments without display 671, the heading is indicated
in a simpler (lower cost) fashion by
way of a small number of pixels of wave patterns in pattern 640. Display 671,
when present, exists in a housing
that also houses a processor 602 that runs the score keeping or gaming
functions of the apparatus.
As such there are three degrees of freedom in the orientation of the board of
surface 667. A player or user:
1. tilts the board along the port-starboard axis;
2. tilts the board along the bow-stern axis; and
3. changes its compass heading, i.e. the way that it is pointing.
The controller 600 outputs three signals for each of these three axes, and
these three signals are read by
processor 602. Processor 602 generates a course and displays it on screen
display 671. The course is, in one
embodiment, a complex-valued waveform displayed as looking down the spime
axis, with the real axis aimed
toward STARBOARD side, and the imaginary axis aimed toward BOW end of surface
667. The real axis 642 runs
from PORT to STARBOARD side and passes through the center of rotation of
surface 667 afforded by controller
600. The imaginary axis 643 runs from STERN to BOW and also passes through the
center of rotation of surface
667 afforded by controller 600.
A player must navigate in a somewhat circular motion to follow the waveform.
In a simple embodiment the
waveform is of the form e^(i2πf_c t), where i = √(−1), f_c is a carrier
frequency of oscillation, and t is time.
This circular motion develops abdominal muscles while resulting in a fun game.
As the player progresses to
higher levels of the game, higher harmonics are introduced into the waveform,
resulting in greater challenges.
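The course waveform above can be sketched directly from the formula: a unit-magnitude carrier e^(i2πf_c t), with optional harmonics added at higher levels. The harmonic amplitudes here are illustrative assumptions:

```python
import cmath

# Complex course position at time t: the base carrier at frequency fc,
# plus optional (multiple, amplitude) harmonic terms for higher levels.
def course_sample(t, fc, harmonics=()):
    z = cmath.exp(2j * cmath.pi * fc * t)  # e^(i 2 pi fc t), the carrier
    for n, a in harmonics:
        z += a * cmath.exp(2j * cmath.pi * n * fc * t)
    return z
```

With no harmonics the course is a pure circle of unit radius, matching the circular motion the player follows; each added harmonic distorts the circle and raises the challenge.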
Additionally, a fourth degree of freedom is provided in some embodiments, and
this fourth degree of freedom
is the total weight pressing down on surface 667, allowing a user to "bump",
press, thrust, etc., with floorward
and ceilingward (or groundward and skyward) forces. For example, when there is
no gravityward (floorward or
groundward) force, a process in processor 602 goes into a standby mode, until
there is some such force that "wakes
up" processor 602 and begins the game or training session.
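The standby/wake behaviour of the fourth degree of freedom can be sketched as a small state machine. The class name and the wake threshold are hypothetical; only the standby-until-force behaviour comes from the text:

```python
# With no gravityward (floorward) force on surface 667, the processor idles
# in standby; a downward force above a threshold "wakes up" the session.
class SessionController:
    def __init__(self, wake_threshold_n=50.0):
        self.wake_threshold_n = wake_threshold_n  # assumed threshold, newtons
        self.state = "standby"

    def update(self, downward_force_n):
        if self.state == "standby" and downward_force_n >= self.wake_threshold_n:
            self.state = "active"   # force wakes up the processor
        elif self.state == "active" and downward_force_n <= 0.0:
            self.state = "standby"  # no floorward force: back to standby
        return self.state
```

The same force channel could feed the weight tracking described next, since total downward force at rest is the player's weight.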
A heavy player is also sensed and distinguished from a player weighing less,
and also the apparatus tracks and
monitors weight gain and loss over time.
Thumb control 601 is for being pressed by a thumb of a user, and it faces
upward on surface 667. Index finger
control 602 faces downward and is thus shown as a dotted (hidden) line in the
drawing of Fig. 6. Finger controls
602, 603, 604, and 605 face downwards so a user can grasp them together and
controls 601 to 605 form a left-hand
port-side control. A right-handed starboard side control is also provided as a
mirror image, mirrored along the
imaginary axis 643. Additionally elbow controllers such as control 606 for the
left elbow and another controller for
the right elbow, allow the user to control some functions with the elbows
while resting the elbows near the real
axis 642.
In another embodiment, surface 667 is a semitransparent mirror, and another
surface mirror is below it, and
together the two mirrors form an infinity mirror.
In level 0 of the game, base 660 can be the bottom mirror, and LEDs around the
space between the infinity
mirrors provide an image of an infinity tunnel. The player simply attempts to
steer straight down the infinity
tunnel.
For level 2 of the game, an actuated bottom mirror moves in a certain pattern
and the user attempts to follow
that pattern.
Fig. 7 shows an alternate embodiment of the invention in which surface 767 is
a wheel that comprises, includes,
or is a disk 768 or a ring 769 with spokes 797. The ring 769 with spokes 797
is constructed like a steering wheel,
i.e. with the spokes below (behind) the ring, angled down toward the floor, so
as not to obstruct its topside. The
spokes meet at hub 700. A player grasps the wheel like one might grasp the
steering wheel of a car or boat or
airplane or other vehicle or conveyance or craft, but with outstretched arms
in pushup position, or the user planks
upon the wheel in forearm position. A hub 700 for the wheel sits on the bottom
of the wheel facing down (hence
leftmost, shown as a dotted or hidden line because it is under the solid disk
and we can't normally see it from
above). The hub 700 is a round end or ball or half ball or fraction of a ball
that faces a floor or ground or tarmac
or the like, or in other embodiments, faces a socketplate that sits on the
ground or earth or tarmac or floor, or the
like. In other embodiments the socketplate is just a piece of material to
protect the floor or other surface. The
ball of hub 700 and the socketplate together form a ball and socket joint in
some embodiments whereas in other
embodiments the ball of hub 700 sits directly on the ground, making the unit
more compact and easier to carry
around. In some embodiments the hub 700 detaches and stows at the edge of the
disk or ring, so that it is easier
to carry the disk or ring under the arm while walking or jogging.
In another embodiment, a portion of the hub detaches while the rest of it
remains. The portion that detaches
is hemispherical in shape, and there are provided a variety of different
hemispheres (i.e. different ball diameters)
that a user can use in order to adjust the degree of difficulty of keeping the
apparatus level or at the desired or
required angle during a fitness training session such as planking or doing
pushups on the board or wheel 767. The
detachable and/or split hub works also with surface 667 of Fig. 6 (surfboard
shape) or other embodiments of the
invention.
Preferably the split hub embodiment has an upper part attached to the wheel
that has a transparent window
facing up, and a lower part that has a cavity or recess into which a
smartphone can be placed. In this embodiment
the user can look through the window and see the screen of a smartphone inside
the hollow or partially hollow
lower portion of hub 700. The two halves are held together by magnets, and the
magnetic clip opens and closes
to accept the smartphone. The smartphone typically displays a heading
indicator that shows North, and runs a
driving game that is played with an additional degree of freedom, namely the
position of the hands upon the wheel
of surface 767. Now the user can:
1. tilt along a port-starboard axis;
2. tilt along a bow-stern axis;
3. turn the wheel clockwise or counterclockwise with respect to his or her
body and the ground;
4. and also change his or her body heading with respect to the ground, e.g.
starting with the user's head facing North,
then turning slightly so the head (heading) faces a different compass
direction.
The apparatus, embodied within a smartphone app, senses these four degrees of
freedom through the smartphone's
compass, tilt sensors, and the like, or, alternatively, in other embodiments,
by an electronic compass, tilt sensors,
and the like. In addition to the smartphone sensors, the wheel 767 also has
sensors to sense contact with the player's
hands, to obtain the points-of-contact, which form a fourth degree-of-freedom
of the input device, i.e. position of
hands on the wheel, from which can be derived body heading. So there is
provided separate body-heading and
wheel-heading inputs.
In another embodiment, a separate heading indicator compass 720 shows the user
where a real or virtual North
is, as this is part of a driving or boating or flight simulator game. For
example, in some embodiments the apparatus
is a flight simulator that simulates planking as a thrilling flying experience
to "Take fitness to new heights"™.
In other embodiments heading is indicated by a compass 720L that comprises a
ring of LEDs in which the LED
that's Northmost will glow or change colour or otherwise indicate a difference
in heading. Preferably rather than
just one light change, a group of lights change to (1) make the heading more
visible, and (2) get subpixel accuracy.
So for example, if we change course 15 degrees from North, we're heading
between two LEDs and instead of picking
just one, they both come on at full brightness and even the lights further left
and right come on a little bit, so what
we essentially have is a fuzzy blob of light generally North, and there is a
blob generator which is an antialiaser
to antialias the heading information, i.e. to endow it with subpixel precision
and possibly also subpixel accuracy
(the indicated North need not be true and accurate to make an exciting game,
but precision is still required to
make the game seem real).
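The fuzzy-blob antialiasing can be sketched as a brightness falloff around the heading. The LED count and falloff width are illustrative assumptions, not values from the specification:

```python
# Antialiased heading blob for compass 720L: each LED's brightness falls off
# with its angular distance from the heading, so a heading between two LEDs
# lights both (plus the neighbours a little), giving subpixel precision.
def led_brightness(heading_deg, n_leds=12, width_deg=75.0):
    levels = []
    for k in range(n_leds):
        led_angle = 360.0 * k / n_leds
        # Smallest angular distance between this LED and the heading.
        d = abs((heading_deg - led_angle + 180.0) % 360.0 - 180.0)
        levels.append(max(0.0, 1.0 - d / width_deg))  # linear falloff
    return levels
```

A heading of 15 degrees (midway between two LEDs spaced 30 degrees apart) lights those two LEDs equally, with the next pair outward glowing dimly, exactly the blob described above.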
The entire apparatus, in some embodiments, is made compact and portable, e.g.
spokes 797 can detach from
ring 769 and hub 700. Base 760 also has radial arms 761 that attach to it, so
that instead of being a big object,
the apparatus folds up small.
In another embodiment the components are used separately for a variety of
fitness exercises. Ring 769 is used
as a hula hoop exerciser, or separate hand-held ring, used in free-space,
where the objective of the game is to keep
the ring at a specific tilt angle as prescribed by the pattern of LEDs on the
ring. In some embodiments the ring
769 is weighted to make it challenging to move it around in space. Weighting
is adjustable, in some embodiments
by filling the ring (hollow) with more or less water. A waterproof version of
the ring is also useable underwater or
in a pool for further forms of fitness training.
An abakography game is created in a virtual environment with the lights on the
ring modulated while the ring
is moved through 3D space by players, with a game engine displaying its output
by persistence-of-exposure to the
retina of human vision, or to photographic film or video or other sensors and
displays.
The invention of Fig. 7, in some embodiments, is also combined with the
infinity mirror effect, where disk 768
is a partially silvered mirror, and LEDs 720L are behind it, but in front of a
lower mirror underneath disk 768.
Fig. 8 shows a grainboard for the surface 767. The grainboard is made of
layers such as layer 810, layer 850,
and layer 860. Layers have circuit traces in them such as trace 820 to form a
circuit board similar to a printed
circuit board. It includes things like vias or conductors 830 that carry
electricity from one layer to another. In
other embodiments, the traces 820 simply connect to one another. In a top
layer such as layer 810 the traces
run predominantly in the bow-stern direction. This is layer number 1. The next
layer 850, i.e. layer number 2,
has traces that run mostly in a port-starboard direction. The third layer,
layer 860 has traces that run mostly
bow-stern. And a fourth layer (not shown) has traces that run mostly port-
starboard, and so on.
Each layer forms a ply of a multilayer board, wherein the traces run in
predominantly alternating directions in
each layer.
Active elements like element 840 form transistors and other components on the
layers.
In one embodiment the layers are made of fiberglass, and form a printed
circuit board.
In another embodiment the layers are made of wood and form plywood. Such
Grainboard™ has layers with
grain running alternately in alternating directions. Since it is easier to
embed circuit traces along the grain of
natural wood fibers, this provides a natural way to make a circuit board from
plies of plywood.
Thus surface 767 can be made in a natural way using natural materials.
Fig. 9 shows a robotic embodiment of the invention. One or more touch surfaces
910 are disposed on a play
surface 920 which has also a device surface 930 for touch pad or smart
telephone tablet computer or the like, or for
a built in device that performs similar function. The device has or contains a
processor for processing and control
of motors like motor 950 that turns roller 970, in a closed feedback loop with
sensor 960. The rollers 972 may
comprise wheels with spokes 971, or they may be partial circles, e.g.
semicircular supports, or tracks, or sliders,
or other actuatable or passive pivots that allow surface 920 to pivot or tilt.
This pivot or tilt is sensed by a sensor
in the device on surface 930 or by sensor 960 or both. The apparatus can
balance itself like a self-balancing robot,
by way of roller 970 supported by support 940. In an initial ("easy") mode of
operation the device balances itself
so the user can just start doing pushups on it.
In one embodiment there's a game scenario. As the game advances to higher
levels, the self-balancing is reduced,
so that the user needs to keep balance while doing pushups or planking or the
like.
Eventually there is no self-balancing and the user does all the balancing.
Thus the robotic function that began
to assist the user now has turned off that assistance. At even more advanced
levels, the robotic function fights
against the user. Thus it has evolved from helping the user, to being no help,
to actually hindering the user, thereby
challenging the user.
In another embodiment there is provided a VR (Virtual Reality) game that
involves navigating through a space
in which there is a road the user sees, and there are bumps on the road. The
processor reads a bump map (texture
map) and applies the texture to both the display in the VR game as well as to
the rollers to make a disturbance
the user can see and feel in synchronism, so the experience is total.
Fig. 10 shows a gaming variation of the robotic embodiment of the invention.
The surface 920 of Fig. 9 is a first
surface, for a first player. A second surface 1020 is provided for a second
player. The second surface 1020 has a
touch surface 1010, device surface 1030. It also has a support 1040 to support
motor 1050, having sensor 1060 to
actuate and control roller 1070. A second roller, in some embodiments, is
provided so that there are a plurality of
rollers 1072. In some embodiments the rollers are wheels with spokes 1071, and
in other embodiments the rollers
are simply pivots of some kind that the second surface 1020 can pivot on.
A first processor 1001 receives input from sensor 1060 and controls motor 950.
A second processor 1002 receives
input from sensor 960 and controls motor 1050. In some embodiments the
processors interact and the motors are
each responsive to both sensors 960 and 1060 associated with both surfaces 920
and 1020.
There are two modes of operational orientation. In a first mode, surfaces 920
and 1020 are facing each other in
the same space, and two players face each other so that one surface mirrors
the other. Thus there is an orientation
reverser in processor 1001 or 1002.
In a second mode of operation, the two players are side-by side, and the
surfaces are not mirrored with respect
to one another. Thus there is not an orientation reverser in processor 1001 or
1002.
Fig. 10a shows a possibly robotic fitness game in which the robotic embodiment
of Fig. 10 has instead of, or in
addition to, device surfaces 930 and 1030, an upwards-facing television
display 1002 sitting upon four legs 1090 so
that it is at approximately the same playing level as play surfaces 920 and
1020. Upon television display 1002 is a
video game such as the game of Pong, having virtual paddles 973 and 1073, and
a virtual ball 1003. Ball 1003 is
animated on television display 1002, while the position of paddle 973 is
controlled by the tilt of surface 920, and
the position of paddle 1073 is controlled by the tilt of surface 1020. In game
play, two persons each do pushups or
planking, while playing Pong. A first player planks on surface 920 while a
second player planks on surface 1020.
Touch surface 910 and device surface 930, as well as touch surface 1010 and
device surface 1030, are optional,
because both can clearly see display 1002.
The system can operate robotically or non-robotically, and it can switch back
and forth between these modes, as
well as transition smoothly therebetween. For example, rollers 972 for surface
920 as well as rollers 1072 for surface
1020 can enable continuously adjustable degree of difficulty. Gameplay is more
challenging when the rollers are free
spinning than when locked. Motors 950 and 1050 have dynamic braking. A degree
of difficulty is controlled by
transitioning toward locked-rotor condition to make the game play easier. This
is done dynamically, by a processor
1091 that accepts game input from tilt sensors 960 and 1060, and runs game
play for the video game such as
Pong, and monitors score. In situations where one player is much more
advanced than the other, the game is kept
interesting through balance of difficulty. For example, an advanced player on
surface 920 against a novice player
on surface 1020 is run as follows: as game play progresses, processor 1091
senses an extreme score difference in
the game, and subsequently locks the rotor of 1050 and partially unlocks the
rotor of motor 950. As gameplay
continues to progress with a continued high disparity, motor 950 is fully
freewheeled. As gameplay continues with
high disparity, despite total freewheeling, motor 950 is set to extreme
difficulty mode and thrashes about, to throw
the surface 920 player off-course.
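The handicapping logic above can be sketched as a function of the score gap. The function name, thresholds, and the brake scale are hypothetical; the braked-is-easier / freewheel-is-harder / thrash progression comes from the text:

```python
# As the score disparity grows, the novice's rotor is braked toward locked
# (easier) while the advanced player's motor is freed toward freewheeling
# (harder); in the extreme, the advanced player's motor "thrashes".
def difficulty_settings(advanced_score, novice_score, free_at=5, thrash_at=10):
    """Return (advanced_mode, novice_brake): novice_brake runs from 0.0
    (freewheeling rotor) to 1.0 (locked rotor)."""
    gap = advanced_score - novice_score
    if gap <= 0:
        return ("normal", 0.0)
    novice_brake = min(gap / free_at, 1.0)   # brake the novice as gap grows
    if gap >= thrash_at:
        return ("thrash", novice_brake)      # motor actively fights the player
    if gap >= free_at:
        return ("freewheel", novice_brake)   # advanced rotor fully freed
    return ("normal", novice_brake)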
Fig. 10b shows the Mannfit Pong system from a top view, simplified. Processor
1091 receives input from two
Mannfit boards as surfaces 920 and 1020, which have sensors 960 and 1060.
Processor 1091 drives display 1002
upon which are drawn ball 1003 and paddles 973 and 1073. Paddle 973 is
rendered at a position responsive to
sensor 960 and paddle 1073 is rendered at a position responsive to sensor
1060. A satisfactory processor 1091
is a video processor that outputs an NTSC RS-170 television signal over a
1-wire connection (plus ground) to
display 1002, in the form of a cathode ray screen at the front of a cathode
ray tube, in which an electron beam
draws patterns on the tube to form spots of light, one for the ball 1003, and
one for each of the game paddles 973
and 1073. Such game principles are well documented, as for example in the
Magnavox Odyssey game, which
connects to a television receiver through its antenna input terminals, and has
game controllers with potentiometers
as input. In one embodiment, such a game is set up but with external
potentiometers each with a weight hanging
down from it, so as to sense the tilt of surfaces 920 and 1020. Sensor 960 is
a potentiometer with a weight hanging
from it so that as surface 920 tilts, a weight hanging from the potentiometer
shaft turns the shaft in proportion
to the tilt. Sensor 1060 is a potentiometer with a weight hanging from it so
that as surface 1020 tilts, a weight
hanging from the potentiometer shaft turns the shaft in proportion to the
tilt. In this embodiment, the processor
1091 is the Magnavox Odyssey game console device, together with its control
boxes, less their potentiometers.
Fig. 10c shows an embodiment of the Mannfit system with a round surface 920 for a
first player, and a round surface
1020 for a second player, in which the players do integral kinesiology
exercises in a multidimensional virtual reality,
each wearing a VR (Virtual Reality) or RR™ (Real Reality™) headset to render
a 3D world or multidimensional
world display, showing objects such as a 3D or multidimensional pipe 1004 and
paddles such as a paddle for the
first player (hidden in this view) and a paddle 1073 for the second player,
and a virtual spherical ball 1005. Tilting
surface 1020 left-to-right moves paddle 1073 left-to-right in the virtual
game. Tilting surface 1020 fore and aft,
moves paddle 1073 up and down in the virtual game.
Ball 1005 bounces off the insides of pipe 1004, to create a game scenario in
which the objective is goalkeeping
much like Pong, but in higher dimensions. Moving the paddle left-to-right and
up-down, the objective is to block
the ball and cause it to bounce back to the opponent.
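The bounce of ball 1005 off the inside of pipe 1004 can be sketched, in a 2D cross-section, as a reflection of the radial velocity component. The function name and values are illustrative; the specification does not give the physics routine:

```python
# When the ball reaches the inner wall of the pipe, reflect the radial
# component of its velocity and keep the tangential component, so it
# bounces back toward the opponent's end.
def bounce_off_pipe(pos, vel, radius):
    """pos, vel: (x, y) tuples in the pipe cross-section; radius: inner
    pipe radius. Returns the (possibly reflected) velocity."""
    x, y = pos
    r = (x * x + y * y) ** 0.5
    if r < radius or r == 0.0:
        return vel                            # ball still inside the pipe
    nx, ny = x / r, y / r                     # outward wall normal
    vdotn = vel[0] * nx + vel[1] * ny
    if vdotn <= 0.0:
        return vel                            # already moving inward
    # Reflect: v' = v - 2 (v . n) n
    return (vel[0] - 2.0 * vdotn * nx, vel[1] - 2.0 * vdotn * ny)
```

The paddle's job in the game is to intercept the ball before it exits the pipe's open end, with paddle position driven by the tilt of surfaces 920 and 1020 as described above.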
Fig. 11 shows a fitness ball 1110 inside of which a user 1120 exercises. There
is a rotation sensor 1130. A
satisfactory rotation sensor is a camera that sees patterns or indicia marked
on the ball. Preferably the indicia are
infrared-visible markers that can be visible to an infrared camera for
tracking purposes without detracting from the
aesthetics of the experience in visible light visible to user 1120. A
processor 1190 computes tracking information
and renders it to display 1140. A satisfactory display is a data projector
that projects patterns onto the ball.
Preferably the ball is white or translucent rather than totally transparent.
Processor 1190 performs a spherical
transformation that renders a view as it would appear from where the user is
positioned. A signaller 1150 allows
the user to signal when he or she wishes to be let out of the ride, should he
or she not wish to remain to the end.
Typical rides in a waterball are on the order of 10 minutes, but if a user is
in need of exiting prior to that time,
the signaller 1150 may be used. A wearable sensor 1160 monitors the health of
the user, and allows an attendant
outside the ball 1110 to make sure the user is OK. The processor 1190 is also
responsive to a water sensor 1180,
installed in pool 1170, in which the ball 1110 floats.
Waterballs like ball 1110 are well known in the art, and are commonly found at
amusement parks and the like.
Safety is ensured by a ride operator familiar with waterball safety
procedures.
Fig. 12 shows a deadheading studio setup, suitable for exercise, fitness,
recreation, play, or competition such
as deadheading championships. A pool 1250 is at least partially filled with
water 1260 to submerge the intake
portions of pump 1240 which has a water jet aimed upwards. A user 1230
performs fitness or recreational or play
activity by obstructing the water jet. Sensors in the jet sense both pressure
and flow. The characteristic curve of
pump 1240 is displayed upon display 1210. Characteristic curves of pumps are
generally plots of head as a function
of flow. Head is usually expressed in feet or inches of water column, whereas
flow is usually presented in GPM
(Gallons Per Minute) or GPH (Gallons Per Hour). Other suitable units, such as
S.I. units, may also be used.
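As a minimal illustration of plotting a sensed operating point on such a head-vs-flow characteristic curve, the conversions below use the standard equivalences (1 psi ≈ 2.31 feet of water column, 1 US gallon ≈ 3.785 litres); the function names are hypothetical:

```python
# Unit conversions for plotting a sensed operating point on a pump's
# head-vs-flow characteristic curve. 2.31 ft/psi is the standard
# water-column equivalence near room temperature.

PSI_TO_FT_WATER = 2.31   # 1 psi is approximately 2.31 feet of water column
LPM_PER_GPM = 3.785      # 1 US gallon is approximately 3.785 litres

def operating_point(pressure_psi, flow_gpm):
    """Return (head in feet of water, flow in GPM), ready to plot."""
    return pressure_psi * PSI_TO_FT_WATER, flow_gpm

def gpm_to_lpm(flow_gpm):
    """Convert flow from US gallons per minute to litres per minute."""
    return flow_gpm * LPM_PER_GPM
```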
The sensed pressure and sensed flow are displayed as coordinates on the graph
on display 1210. As shown in
Fig. 12, the flow is zero and the pressure is almost maximal, indicating a
deadhead condition. This is the game
objective of the Headgames™ competition, to be the first to totally "deadhead"
the pump, i.e. to cause the flow to
go to and remain at exactly zero for a sustained time period of at least 5 or
10 seconds.
A second player is shown unable to deadhead the jet from a second identical
pump.
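The deadhead winning condition described above (flow held at essentially zero for a sustained period) can be sketched as follows; the fixed-rate sampling model, zero-flow threshold, and default hold time are assumptions:

```python
# Minimal sketch of deadhead detection from a stream of flow samples,
# assuming samples arrive at a fixed rate. A player "wins" at the moment
# the flow has stayed at (essentially) zero for the required hold time.

def first_deadhead_time(flow_samples, sample_rate_hz, hold_s=5.0, eps=0.01):
    """Return the time (seconds) at which flow has first been held at
    zero for `hold_s` seconds, or None if that never happens."""
    needed = int(hold_s * sample_rate_hz)  # consecutive zero-flow samples
    run = 0
    for i, flow in enumerate(flow_samples):
        run = run + 1 if abs(flow) <= eps else 0
        if run >= needed:
            return (i + 1) / sample_rate_hz
    return None
```

With multiple pumps, the processor would run this per player and declare the earliest non-None time the winner.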
CA 3028749 2018-12-31

Alternatively, separate jets are used from the same water pump. Preferably
there are additional internal jets
that are not accessible to the players, such that when all visible or
accessible jets are deadheaded, fluid flow can
continue through one or more internal jets or bypass, so the pump does not
overheat.
The water jets emerging from pump 1240 and other pumps in the pool are in a
size range of one to one-and-
a-half inches in diameter. Processor 1280 senses the flow and pressure
associated with multiple users such as user
1230. Processor 1280 determines a winner, based on sensing flow and pressure. Upon
establishing a winner, the result
is displayed on the Headgames™ scoreboard.
In some embodiments, users such as user 1230 wear a VR (Virtual Reality)
headset that senses their head
location and orientation, etc., and renders an image or images. In one
embodiment, users in an indoor pool in a
cold climate, experience visual imagery from a nice warm outdoor climate. For
example, user 1230 sees imagery
from Stanford's Tanner fountain, overlaid on top of the water jet of pump
1240. This is done by recording from
a video-based VR headset worn once in Stanford's Tanner fountain; the
recording is kept and sampled to produce a fully immersive multimedia
experience.
21. Some further embodiments of the invention
Here is a list of some of the various embodiments of the invention.
1. A core fitness training system for planking or pushups, said system
including a user-interface surface for
the hands or arms of a user of said surface, a pivot for said surface, and a
video game for being played with said
system, a position of a cursor of said video game varying in proportion to the
tilt of said surface.
2. The core fitness training system of embodiment 1, where the horizontal
position of said cursor is controlled by
a left-right tilt of said surface, or the vertical position of said cursor is
controlled by a fore-aft tilt of said surface.
3. The core fitness training system of embodiment 2, where a score of said
video game increases in proportion to
a decrease in the time-integral of an error signal between a desired cursor
position and an actual cursor position.
4. The core fitness training system of embodiment 2, where a score of said
video game increases in proportion to
the square root of the sum of the squares of a horizontal error and a vertical
error, said horizontal error being the
difference between an actual left-right tilt of said surface, and a desired
left-right tilt, said vertical error being the
difference between an actual fore-aft tilt of said surface, and a desired fore-
aft tilt.
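A minimal sketch of the two scoring rules of embodiments 3 and 4, assuming uniformly sampled tilt errors; the function names and the scale constant are hypothetical:

```python
import math

# Embodiment 4: the error is the square root of the sum of the squares of
# the horizontal (left-right) and vertical (fore-aft) tilt errors.
def radial_error(lr_actual, lr_desired, fa_actual, fa_desired):
    """Euclidean (root-sum-square) tilt error."""
    return math.hypot(lr_actual - lr_desired, fa_actual - fa_desired)

# Embodiment 3: the score grows as the time-integral of the error (its
# "absement") shrinks; a reciprocal preserves that proportionality.
def integral_error_score(errors, dt, scale=100.0):
    """Score inversely proportional to the time-integral of the error,
    given error samples spaced dt seconds apart."""
    absement = sum(errors) * dt
    return scale / absement if absement > 0 else float("inf")
```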
5. A fitness system for training of a user's core muscles, said fitness system
including a board and a game activity,
said board having a pivot of sufficient strength to bear downward-facing load
of human body weight, and said
game having a game controller, said game controller responsive to tilting of
said board.
6. A fitness system comprising a tilt controller, said tilt controller including a
large flat surface for accepting the
hands or arms of a player, said tilt controller operable by tilting said
surface.
7. A fitness system comprising two flat surfaces, a first surface for
placement on a floor or ground, and a second
surface for accepting the hands or arms of a user of said fitness system, said
surfaces having a pivot or swivel joint,
said pivot or swivel joint having one or more degrees-of-freedom.
8. The system of embodiment 7 where the degrees-of-freedom are left-right tilt
of the second surface with respect
to the first surface, or fore-aft tilt of the second surface with respect to
the first surface.
9. The system of embodiment 8 further including a third degree of freedom, the
third degree of freedom being a
rotation of said second surface with respect to said first surface, along an
axis of rotation that passes through both
of said surfaces.
10. The system of any of embodiments 7 to 9, further including a sensor for
sensing movement along said degrees-
of-freedom.
11. The system of embodiment 10, where said sensor is or is coupled to an
actuator, said actuator actuating at
least one angle between said first surface and said second surface, wherein
relative movements of the two surfaces
are affected by said actuator, while also being sensed by said sensor.
12. The system of any of embodiments 7 to 10, further including a recessed
area in said second surface, said
recessed area for receiving a smartphone.
13. The system of any of embodiments 7 to 9, further including a recessed area
in said second surface, said recessed
area for receiving a smartphone, said smartphone for sensing movement along
said degrees-of-freedom.
14. The system of any of embodiments 1 to 13, said system including an audio
or visual feedback, said feedback
indicating an error between a desired and an actual tilt of said board or
surface.
15. A fitness device for holding a planking position on said fitness device
comprised of two surfaces that are
rotatably disposed, a first surface for being placed on the ground or a floor,
and a second surface for bearing
human weight from the hands or arms or other body part of a user of said
fitness device, said fitness device also
including a sensor for sensing tilt of said second surface with respect to
said first surface.
15. A fitness device for planking on, said fitness device comprised of two
surfaces that are rotatably disposed, one
of said surfaces for bearing human weight from the hands or arms of a user of
said fitness device and having a
receptacle for receiving a tilt-sensing device.
16. The fitness system or device of any of embodiments 1 to 15, further
including an apparatus for accepting the
feet of said user.
17. The fitness system or device of any of embodiments 1 to 16, further
including a pedipulatory input device for
accepting input from one or both feet of a user of said fitness system or
device.
18. A hands-and-feet planking system, said system including a wobbleboard for
the hands, said wobbleboard for
controlling a pointing device of a game console or game device, said system
also including a separate foot-operated
controller to which said game console or device is also responsive.
19. A principally flat game console, said game console including means for
sensing tilt of said game console, as
well as pivotal support means, said pivotal support means of sufficient
strength to bear human body weight.
20. A method of fitness, said method comprising the steps of: providing a
surface upon which a participant can
perform a planking operation or pushups, said surface upon a pivot which keeps
it from falling down under the
load of the user's body weight, but allows the surface to swivel or tilt or
rotate; providing a game for the user to
play; providing a visual or audible cursor for said game; varying a position
or sound of said cursor in response to
a tilt of said surface.
21. The method of embodiment 20 where said cursor is a visual cursor, said
cursor moving left-to-right in proportion to a left-to-right tilting angle of
said surface, and said cursor moving up and down in proportion to a fore-aft
angle of said surface.
22. The method of embodiment 20 where said cursor is an audible cursor
comprised of a musical sound, said
musical sound being played at musically accurate or pleasant note pitches when
said board is at a desired tilt
angle, and said musical sound being played at a musically bent, warped, or
unpleasant note pitch when said board
is at an undesired tilt angle.
23. The method of embodiment 22 where said musical sound is a song played at
normal pitch when said surface
is at a desired tilt angle, and at a wavering or warped pitch when said
surface is at an undesired tilt angle.
24. The method of embodiment 23 where the degree of wavering or warping of
pitch is proportional to the square
root of the sum of the squares of the difference between the actual and
desired left-right tilt of the surface and the
difference between the actual and desired fore-aft tilt of the surface.
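The pitch-warping rule of embodiment 24 might be sketched as follows; the mapping from the error to a playback-rate multiplier is an assumption for illustration:

```python
import math

# Embodiment 24: the degree of pitch warp is proportional to the square
# root of the sum of the squares of the left-right and fore-aft tilt
# errors. The gain k and the semitone-style rate mapping are assumptions.

def warp_amount(lr_err, fa_err, k=0.05):
    """Degree of warp, proportional to sqrt(lr_err^2 + fa_err^2)."""
    return k * math.hypot(lr_err, fa_err)

def playback_rate(lr_err, fa_err, k=0.05):
    """Playback-rate multiplier: zero error plays the song at normal
    pitch (rate 1.0); larger errors bend the pitch further."""
    return 2.0 ** (warp_amount(lr_err, fa_err, k) / 12.0)
```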
25. A means for fitness based on a competitive gaming situation in which one
or more players at the same or
different points in time compete against each other or themselves, each player
putting a substantial portion of his
or her body weight upon a pivotable gaming interface, while being presented
with a course to navigate, and a
navigational cursor that is controlled by tilting said interface.
26. A pointing device for being operated by both hands or both forearms of a
user, the pointing device including
a surface for facing upward, and a ball or partial ball for facing downward.
27. The pointing device of embodiment 26, further including a socket for said
ball.
28. A fitness device or application program for the pointing device of
embodiment 26 or 27, said application
including a course to be followed by a user of said program by operating the
pointing device.
29. A fitness device or application program for the pointing device of
embodiment 26 or 27, said application
including a cursor made visible to a user of said device or program, said
cursor responsive to said pointing device.
30. The fitness device or application of embodiment 29, said device or
application providing a course to be followed
by a said user.
31. The fitness device or application of embodiment 30 where said course
includes one or more virtual pathways
that the user must stay within.
32. The fitness device or application of embodiment 30 where said course
includes one or more virtual pathways
that the user must stay without.
33. The fitness device or application of embodiments 28, 30, 31, or 32, where
said course is a path in a virtual
maze game.
34. The fitness device or application of embodiments 28, 30, 31, or 32, where
said course is a road or path in a
virtual driving game.
35. The fitness device or application of embodiments 28, 30, 31, or 32, where
said course is a path in a virtual
spaceship driving game.
36. The fitness device or application of embodiment 32, where said course is a
virtual wire in a buzzwire game.
37. The fitness device or application of embodiment 32, where said course is a
glowing one dimensional manifold
in a two or more dimensional space.
38. The fitness device or application of embodiment 32, 36, or 37, where said
game includes a virtual ring around
said virtual pathways, and a sound effect when said virtual ring touches said
virtual pathways.
39. The fitness device or application of embodiment 28, or any of embodiments
30 to 38, where said game includes music,
and where said music plays at a pleasant or normal pitch when said user
follows said course, and where said music
plays at a wavering or warped pitch when said user deviates from said course.
40. The fitness device or application of embodiment 39, where the degree of
wavering or warping of pitch is
proportional to the square root of the sum of the squares of the difference in
each dimension between a user's
position in the course, and the nearest part of the course.
41. The fitness device or application of embodiment 28, or any of embodiments
30 to 40, where said fitness device or
application provides a score to said user, said score derived from a
reciprocal of the time integral of a time-varying
error function, said error function equal to the difference between the user's
navigation of the course and the actual
course.
42. A collaborative gaming system, using a plurality of fitness devices or
applications of embodiment 28 to 41, said
collaborative gaming system computing relative scores of multiple players by
integrating a distance of deviation
from a course to be followed by said players.
43. The system of embodiment 42 where said course is a complex course having a
real part derived from the
in-phase component of a lock-in amplifier, and an imaginary part derived from
a quadrature component of the
lock-in amplifier.
44. A system for generating the course of any of embodiments 1 to 43, said
system including a stationary element
and a moving element, and a lock-in amplifier device, wave analyzer device,
phase-coherent detector device, or
homodyne device, said device having a reference input responsive to said
stationary element, and a signal input
responsive to said moving element, or vice-versa.
45. The system of embodiment 44 where said elements are antennae.
46. The system of embodiment 44 where said elements are transducers.
47. The system of embodiment 46 where one of said elements is a loudspeaker,
transmit transducer, transmit
hydrophone, or transmit geophone, and the other of said elements is a
microphone, receive transducer, receive
hydrophone, or receive geophone.
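A minimal digital sketch of the lock-in detection underlying embodiments 43 to 47: the signal from the moving element is mixed with in-phase and quadrature copies of the reference from the stationary element and averaged, giving the real and imaginary parts from which a complex course can be derived. The sampling parameters and pure-tone signal model are assumptions:

```python
import math

# Digital lock-in sketch: multiply the signal by in-phase (cosine) and
# quadrature (sine) references at the reference frequency, then average.
# The factor of 2 recovers the amplitude of an in-phase unit tone.

def lock_in(signal, ref_freq_hz, sample_rate_hz):
    """Return (in_phase, quadrature) components of `signal` with respect
    to a reference tone at ref_freq_hz."""
    n = len(signal)
    i_sum = q_sum = 0.0
    for k, s in enumerate(signal):
        phase = 2.0 * math.pi * ref_freq_hz * k / sample_rate_hz
        i_sum += s * math.cos(phase)
        q_sum += s * math.sin(phase)
    return 2.0 * i_sum / n, 2.0 * q_sum / n
```

For a signal exactly in phase with the reference, the in-phase output approaches the tone's amplitude and the quadrature output approaches zero.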
48. A portable or mobile computer application, said application for being used
in a portable or mobile computing
device, by placing said device on a flat surface that can tilt or pivot at one
or more different angles, said device
including a tilt sensor, said tilt sensor sensing one or more different
angles, said application providing
a course to be navigated by a user of said application, said application for
integrating a tilt of said surface and
generating a score based on a reciprocal of an absement of said tilt.
49. A fitness device comprising a wobbleboard with a holder for a smart phone,
said holder comprising a recessed
region in which to place a smartphone while exercising on said wobbleboard.
50. A steering device for operating a virtual game while bearing weight of a
human user hanging from or resting a
portion of his or her body weight on said steering device, said steering
device including a swivel, said steering device
for including a computing device, said computing device including a tilt
sensor, said computing device displaying
a course to be followed by a user of said steering device, said computing
device also displaying a cursor varying
in proportion to a tilt of said steering device, said computing device
providing a score based on a reciprocal of an
accumulated deviation between said cursor and said course.
51. A core fitness training system for planking or pushups or pullups or L-
seat exercises, or the like, said system
including a user-interface for the hands or arms of a user of said user-
interface, a pivot for said user-interface, and
a video game for being played with said system, a position of a cursor of said
video game varying in proportion
to the tilt of said user-interface.
52. The system of embodiment 51, where said game includes a course to be
followed by a player of said game,
and where a score of said game is derived from a reciprocal of an accumulated
deviation of said cursor from said
course.
53. The system of embodiment 52, where said course is a waveform from a lock-
in amplifier.
54. The system of embodiment 52, where said game includes levels, and where in
a level of said game said course is a straight-line path.
55. The system of embodiment 52, where said course is a road.
56. The system of embodiment 52, where said course is a maze.
57. The system of embodiment 52, where said course is a flight path.
58. The system of embodiment 54, where said cursor is audible, and said cursor
is the pitch stability of a sound
track.
59. The system of any of embodiments 51 to 58, where said system includes a
CAD (Computer-Aided Design) task, said task being to design a virtual object,
said object being constructed by moving said cursor.
60. The system of embodiment 59, where multiple users each have one of the
user-interfaces of embodiment 51,
where a collaborative CAD task is presented, and where each user of said
system is presented with their own cursor
and the cursors of other collaborators.
61. A workplace environment based on the system of any of embodiments 51 to
60, where points, cash, or other
incentives are provided to users creating the best CAD designs using the
system.
101. A modular digital eye glass system for making visible otherwise invisible
phenomenology, said system compris-
ing: a phenomenalizer; a bufferator for buffering an output of the
phenomenalizer; and a phenomenon stabilizer.
102. The system of embodiment 101, where said phenomenalizer includes a lock-
in amplifier.
103. The system of embodiment 102, where said lock-in amplifier is a
multiharmonic lock-in amplifier.
104. The system of embodiment 103 where said amplifier includes a referator,
said referator producing a sum of
cosine functions.
105. The system of embodiment 103 where said amplifier includes a referator,
said referator producing a stream
of short pulses.
106. The system of embodiment 103 where said amplifier includes a complex-
valued referator, a real part of said
referator producing a sum of cosine functions at a fixed phase angle, and an
imaginary part of said referator
producing the same sum of cosine functions but at a phase angle approximately
90 degrees shifted from said fixed
phase angle.
107. The system of embodiment 103 where said amplifier includes a complex-
valued referator, a real part of said
referator producing a sum of cosine functions at a time-varying phase angle,
and an imaginary part of said referator
producing the same sum of cosine functions but at a phase angle staying
approximately 90 degrees phase-shifted from said time-varying phase angle.
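A sketch of the complex-valued multiharmonic referator of embodiment 106, assuming unit amplitudes and a small number of harmonics; the function name and defaults are hypothetical:

```python
import math

# Complex-valued multiharmonic referator: the real part is a sum of
# cosines at harmonics of a fundamental, and the imaginary part is the
# same sum with each term's phase shifted by 90 degrees.

def referator(t, f0_hz, harmonics=3, phase=0.0):
    """Return the complex reference value at time t (seconds)."""
    real = sum(math.cos(2 * math.pi * h * f0_hz * t + phase)
               for h in range(1, harmonics + 1))
    imag = sum(math.cos(2 * math.pi * h * f0_hz * t + phase - math.pi / 2)
               for h in range(1, harmonics + 1))
    return complex(real, imag)
```

At t = 0 with zero phase, every cosine term is at its peak, so the real part equals the number of harmonics while the imaginary part vanishes.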
108. The system of embodiment 101, where said phenomenalizer is a heat-sensing
camera.
109. The system of embodiment 108, where said phenomenon stabilizer includes
means for superimposing electrical
signal information upon a temperature display of said heat-sensing camera.
110. The system of embodiment 109, for visualization of electrical equipment,
where said electrical signal infor-
mation includes visualization of electrical waves superimposed upon a thermal
map of the heat sensed by said heat-sensing camera.
111. A modular digital eye glass system for making visible otherwise invisible
electrical phenomenology, said system
comprising: a phenomenalizer for capturing electrical signal phenomena; a
bufferator for buffering an output of
the phenomenalizer; and a phenomenon stabilizer for making the buffered output
stable in a visual field of view of
said digital eyeglass.
112. The system of embodiment 111, further including a thermal map overlay
upon the stabilized buffered output.
113. A modular digital eye glass system for making visible otherwise invisible
thermal phenomenology, said system
comprising:
• a phenomenalizer for capturing thermal signal phenomena;
• a bufferator for buffering an output of the phenomenalizer; and
• a phenomenon stabilizer for making the buffered output stable in a visual
field of view of said digital eyeglass.
114. A modular digital eye glass system for making visible otherwise invisible
neurophysiological phenomenology,
said system comprising:
• a phenomenalizer for capturing possibly combined neuron action potentials;
• a bufferator for buffering an output of the phenomenalizer; and
• a phenomenon stabilizer for making the buffered output stable in a visual
field of view of said digital eyeglass.
115. The system of embodiment 114, further including an ultrasound imaging
overlay upon the stabilized buffered
output.
116. A modular digital eye glass system for making visible otherwise invisible
thermal phenomenology, said system
comprising: a thermal phenomenalizer for capture of thermal information; a
bufferator for buffering an output
of the phenomenalizer; and a phenomenon stabilizer, said stabilizer for
providing a coordinate-stabilized thermal
map overlay.
201. A modular system for making visible otherwise invisible phenomenology,
said system comprising:
• a vibrator, for vibrating subject matter in view of a human eye or
imaging system;
• a visualizer for visualizing or imaging the vibration.
202. The system of embodiment 201, where said visualizer is an abakographic
visualization system for making
vibrations visible to the human eye by illuminating subject matter in
proportion to its degree of vibration.
203. The system of embodiment 202, where said abakographic visualization
system includes a light source to shine
on objects and light up the objects in accordance with their degree of
vibration.
204. A police baton or police flashlight in accordance with the system of
embodiment 203, where said light source
is built into said baton or flashlight, said baton or flashlight also
including said vibrator for vibrating a door or
wall and shining light on the door or wall in accordance with a degree of
vibratability of said door or wall.
205. A stud finder using the system of embodiment 201, said vibrator connected
to a signal generator forming
the reference signal of a lock-in amplifier or phase-coherent detector, said
visualizer a light source connected to an
output of said lock-in amplifier or phase-coherent detector.
206. An acoustic visualizer, said acoustic visualizer responsive to an
acoustic signal from a sound source, said
acoustic visualizer correlating the sound source with motion of subject
matter subjected to vibrations from said
acoustic signal.
207. The acoustic visualizer of embodiment 206, said sound source from a
signal generator, said acoustic visualizer
including a computer vision device having a phase-coherent detector with a
reference input from said signal
generator.
208. The acoustic visualizer of embodiment 206, said sound source from a
signal generator, said acoustic visualizer
including a computer vision device having a phase-coherent detector feeding
into said signal generator.
209. A sonic visualization feedback system, said system including:
• a sound source;
• a motion sensor;
• a correlator, said correlator for correlating an output of said sound
source with said motion sensor;
• a display means, said display means providing a spatialized display of
the output of said correlator.
210. The sonic visualization feedback system of embodiment 209, said
correlator being a lock-in amplifier.
211. The sonic visualization feedback system of embodiment 209, said motion
sensor being a laser vibrometer.
22. CLAIMS
WHAT I CLAIM AS MY INVENTION IS:
1. A core fitness training system for planking or pushups, said system
including a user-interface surface for the
hands or arms of a user of said surface, a pivot for said surface, a tilt
sensor to sense a tilt or orientation of said
surface, and a video game for being played with said system, a position of a
cursor of said video game varying in
proportion to the tilt sensed by said tilt sensor.
2. The core fitness training system of claim 1, where the horizontal position
of said cursor is controlled by a
left-right tilt of said surface, or the vertical position of said cursor is
controlled by a fore-aft tilt of said surface.
3. The core fitness training system of claim 2, where a score of said video
game increases in proportion to a decrease
in the time-integral of an error signal between a desired cursor position and
an actual cursor position.
4. The core fitness training system of claim 2, where a score of said video
game increases in proportion to the
square root of the sum of the squares of a horizontal error and a vertical
error, said horizontal error being the
difference between an actual left-right tilt of said surface, and a desired
left-right tilt, said vertical error being the
difference between an actual fore-aft tilt of said surface, and a desired fore-
aft tilt.
5. A fitness system for training of a user's core muscles, said fitness system
including a board and a game activity,
said board having a pivot of sufficient strength to bear downward-facing load
of human body weight, and said
game having a game controller, said game controller responsive to tilting of
said board.
6. The fitness system of claim 5 where said pivot is a curved surface located
approximately in the middle of
said board, said pivot concave upward.
7. The fitness system of claim 5 where said pivot is a hemisphere with the
flat side of said hemisphere attached
to said board, and the curved side of said hemisphere facing down for resting
upon a floor or ground.
8. The system of claim 5, including an actuator for tilting said board, as well
as a sensor for sensing a tilt of said
board.
9. The system of claim 5, further including a recessed area in said board,
said recessed area for receiving a smartphone.
10. The system of claim 5, further including a recessed area in said board,
said recessed area for receiving a smartphone, said smartphone for sensing
tilt by way of a sensor within said smartphone.
11. The system of claim 5, said system including an audio or visual feedback, said
feedback indicating an error between
a desired and an actual tilt of said board or surface.
12. A method of fitness, said method comprising the steps of: providing an
object upon which a participant can
perform a planking operation or pushups, said object coupled to a pivot which
keeps it from falling down under
the load of the user's body weight, but allows the object to swivel or tilt or
rotate; providing a game for the user
to play; providing a visual or audible cursor for said game; varying a
position or sound of said cursor in response
to a tilt of said object.
13. The method of claim 12 where said cursor is a visual cursor, said cursor
moving left-to-right in proportion to a left-to-right tilting angle of said
object, and said cursor moving up and down in proportion to a fore-aft angle
of said object.
14. The method of claim 12 where said cursor is an audible cursor comprised of
a musical sound, said musical
sound being played at musically accurate or pleasant note pitches when said
object is at a desired tilt angle, and
said musical sound being played at a musically bent, warped, or unpleasant
note pitch when said object is at an
undesired tilt angle.
15. The method of claim 14 where said musical sound is a song played at normal
pitch when said object is at a
desired tilt angle, and at a wavering or warped pitch when said object is at
an undesired tilt angle.
16. The method of claim 15 where the degree of wavering or warping of pitch is
proportional to the square root of
the sum of the squares of the difference between the actual and desired left-
right tilt of the object and the difference
between the actual and desired fore-aft tilt of the object.
17. The method of fitness of claim 12, further including an actuator for
actuating said object.
18. A means for fitness based on a competitive gaming situation in which one
or more players at the same
or different points in time compete against each other or themselves, each
player using an embodiment of the
invention of claim 12.
19. A fitness device or application program using the method of claim 12, said
application including a course to
be followed by a user of said program by operating the pointing device.
20. A fitness device or application program using the method of claim 12, said
game providing a score in
inverse proportion to a time-integral of a distance between a sensed path of
tilt of said object, and a stored path
in the game, said game providing visual feedback, said visual feedback
synchronized with tactile feedback provided
by an actuator coupled to said pivot for tilting said object with respect to a
floor or ground under said object.
Additional drawings (in addition to the drawings, diagrams, and illustrations
throughout the description) appear at the end, as they normally would in a
typical patent application.
References
[1] Steve mann. Campus Canada, ISSN 0823-4531, p55 Feb-Mar 1985, pp58-59 Apr-
May 1986, p72 Sep-Oct 1986.
[2] B. Abbott, R. Abbott, T. Abbott, M. Abernathy, F. Acernese, K. Ackley, C.
Adams, T. Adams, P. Addesso, R. Ad-
hikari, etal. Observation of gravitational waves from a binary black hole
merger. Physical review letters, 116(6):061102,
2016.
[3] B. Abbott, R. Abbott, R. Adhikari, P. Ajith, B. Allen, G. Allen, R. Amin,
S. Anderson, W. Anderson, M. Arain, et al.
Ligo: the laser interferometer gravitational-wave observatory. Reports on
Progress in Physics, 72(7):076901, 2009.
[4] M. A. Ali, T. Ai, A. Gill, J. Emilio, K. Ovtcharov, and S. Mann.
Comparametric HDR (High Dynamic Range) imaging
for digital eye glass, wearable cameras, and sousveillance. In ISTAS, pages
107-114. IEEE, 2013.
[5] M. A. Ali and S. Mann. The inevitability of the transition from a
surveillance-society to a veillance-society: Moral
and economic grounding for sousveillance. In ISTAS, pages 243-254. IEEE, 2013.
[6] M. A. Ali, J. P. Nachumow, J. A. Srigley, C. D. Furness, S. Mann, and M.
Gardam. Measuring the effect of sousveillance
in increasing socially desirable behaviour. In ISTAS, pages 266-267. IEEE,
2013.
[7] F. H. Atwater. Accessing anomalous states of consciousness with a binaural
beat technology. Journal of scientific
exploration, 11(3):263-274, 1997.
[8] V. Bakir. Tele-technologies, control, and sousveillance: Saddam hussein¨de-
deification and the beast. Popular
Communication, 7(1):7-16, 2009.
[9] V. Bakir. Sousveillance, media and strategic political comm... Continuum,
2010.
[10] D. R. Begault and L. J. Trejo. 3-d sound for virtual reality and
multimedia. 2000.
[11] S. Bennett. Nicholas Minorsky and the automatic steering of ships.
control systems magazine, pages 10-15, november
1984.
[12] L. J. Brackney, A. R. Florita, A. C. Swindler, L. G. Polese, and G. A.
Brunemann. Design and performance of an
image processing occupancy sensor. In Proceedings: The Second International
Conference on Building Energy and
Environment 2012987 Topic 10. Intelligent buildings and advanced control
techniques, 2012.
[13] J. Burrell. How the machine 'thinks': Understanding opacity in machine
learning algorithms. Big Data & Society,
3(1):2053951715622512, 2016.
[14] M. Callon. Actor-network theory: the market test. The Sociological Review,
47(S1):181-195, 1999.
[15] W. W. Campbell. The value of inching techniques in the diagnosis of focal
nerve lesions: Inching is a useful technique.
Muscle & Nerve: Official Journal of the American Association of
Electrodiagnostic Medicine, 21(11):1554-1557, 1998.
[16] E. J. Candes. Mathematics of sparsity (and a few other things). In
Proceedings of the International Congress of
Mathematicians, Seoul, South Korea, 2014.
[17] E. J. Candes, P. R. Charlton, and H. Helgason. Detecting highly
oscillatory signals by chirplet path pursuit. Applied
and Computational Harmonic Analysis, 24(1):14-40, 2008.
[18] E. J. Candes, P. R. Charlton, and H. Helgason. Gravitational wave
detection using multiscale chirplets. Classical and
Quantum Gravity, 25(18):184020, 2008.
[19] E. J. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via wirtinger
flow: Theory and algorithms. Information
Theory, IEEE Transactions on, 61(4):1985-2007, 2015.
[20] E. J. Candes, J. K. Romberg, and T. Tao. Stable signal recovery from
incomplete and inaccurate measurements.
Communications on pure and applied mathematics, 59(8):1207-1223, 2006.
[21] P. Cardullo. Sniffing the city: issues of sousveillance in inner city
london. Visual Studies, 29(3):285-293, 2014.
[22] D. Cardwell. At newark airport, the lights are on, and they're watching
you. New York Times, 2014.
[23] M. Clynes. Personal communication, 1996.
[24] M. Clynes and N. Kline. Cyborgs and space. Astronautics, 14(9):26-27 and 74-75, Sept. 1960.
[25] P. Corcoran. The internet of things: why now, and what's next? IEEE
Consumer Electronics Magazine, 5W:63-68,
2016.
[26] P. M. Corcoran. Third time is the charm - why the world just might be
ready for the internet of things this time
around. CoRR, abs/1704.00384, 2017.
[27] C. Cosens. A balance-detector for alternating-current bridges.
Proceedings of the Physical Society, 46(6):818, 1934.
[28] J. L. R. d'Alembert. Suite des recherches sur la courbe que forme une
corde tendue, mise en vibration... 1749.
[29] D. Fraser. Cameras can stay in Talisman's locker room, says commissioner. CBC News, Mar 22, 2007, 1:32 pm.
[30] K. Dennis. Viewpoint: Keeping a close watch-the rise of self-surveillance
and the threat of digital exposure. The
Sociological Review, 56(3):347-357, 2008.
CA 3028749 2018-12-31

[31] A. Doherty, W. Williamson, M. Hillsdon, S. Hodges, C. Foster, and P.
Kelly. Influencing health-related behaviour with
wearable cameras: strategies & ethical considerations. In Proceedings of the
4th International Sense Cam & Pervasive
Imaging Conference, pages 60-67. ACM, 2013.
[32] A. R. Doherty, S. E. Hodges, A. C. King, A. F. Smeaton, E. Berry, C. J.
Moulin, S. Lindley, P. Kelly, and C. Foster.
Wearable cameras in health. American journal of preventive medicine, 44(3):320-
323, 2013.
[33] D. L. Donoho. Compressed sensing. Information Theory, IEEE Transactions
on, 52(4):1289-1306, 2006.
[34] L. Euler. Principes généraux du mouvement des fluides. Académie Royale des Sciences et des Belles Lettres de Berlin,
Mémoires, 11, pages 274-315. Handwritten copy 1755, printed in 1757.
[35] J. Fernback. Sousveillance: Communities of resistance to the surveillance
environment. Telematics and Informatics,
30(1):11-21, 2013.
[36] P. Flandrin. Time frequency and chirps. In Aerospace/Defense Sensing,
Simulation, and Controls, pages 161-175.
International Society for Optics and Photonics, 2001.
[37] G. Fletcher, M. Griffiths, and M. Kutar. A day in the digital life: a
preliminary sousveillance study. SSRN, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1923629, September 7, 2011.
[38] S. Follmer, D. Leithinger, A. Olwal, A. Hogge, and H. Ishii. inFORM: dynamic physical affordances and constraints
dynamic physical affordances and constraints
through shape and object actuation. In UIST, volume 13, pages 417-426, 2013.
[39] M. A. Foster, R. Salem, D. F. Geraghty, A. C. Turner-Foster, M. Lipson,
and A. L. Gaeta. Silicon-chip-based ultrafast
optical oscilloscope. Nature, 456(7218):81-84, 2008.
[40] W. H. Fox Talbot. The pencil of nature. Project Gutenberg (e-book#
33447), 16, 1844.
[41] N. Frakes. Calgary rec centre reassures members after security breach.
2016.
[42] D. Freshwater, P. Fisher, and E. Walsh. Revisiting the panopticon:
professional regulation, surveillance and sousveil-
lance. Nursing Inquiry, May 2013. PMID: 23718546.
[43] J.-G. Ganascia. The generalized sousveillance society. Social Science
Information, 49(3):489-507, 2010.
[44] J.-G. Ganascia. The generalized sousveillance society. Soc. Sci. Info.,
49(3):489-507, 2010.
[45] H. S. Gasser and J. Erlanger. A study of the action currents of nerve
with the cathode ray oscillograph. American
Journal of Physiology - Legacy Content, 62(3):496-524, 1922.
[46] B. Gates. The power of the natural user interface. Gates Notes,
www.gatesnotes.com/About-Bill-Gates/The-Power-
of-the-Natural-User-Interface, October 28, 2011.
[47] M. Grover et al. Is anyone listening? The Agent, 40(1):18, 2007.
[48] C. Gurrin, A. F. Smeaton, A. R. Doherty, et al. Lifelogging: Personal big
data. Foundations and Trends in
Information Retrieval, 8(1):1-125, 2014.
[49] K. D. Haggerty and R. V. Ericson. The surveillant assemblage. The British
journal of sociology, 51(4):605-622, 2000.
[50] B. Horn and B. Schunk. Determining Optical Flow. Artificial Intelligence,
17:185-203, 1981.
[51] Texas Instruments. Intelligent Occupancy Sensing. http://www.ti.com/solution/intelligent_occupancy_sensing, 2012.
[52] J. Iott and A. Nelson. Ccd camera element used as actuation detector for
electric plumbing products, 2005. Canadian
Patent 2602560; US and International Patent App. 11/105,900.
[53] R. Janzen and S. Mann. Sensory flux from the eye: Biological sensing-of-
sensing (veillametrics) for 3d augmented-
reality environments. In IEEE GEM 2015, pages 1-9.
[54] R. Janzen and S. Mann. Veillance dosimeter, inspired by body-worn
radiation dosimeters, to measure exposure to
inverse light. In IEEE GEM 2014, pages 1-3.
[55] R. Janzen and S. Mann. Vixels, veillons, veillance flux: An extramissive
information-bearing formulation of sensing,
to measure surveillance and sousveillance. IEEE CCECE, pages 1-10, 2014.
[56] D. Jeltsema. Memory elements: A paradigm shift in lagrangian modeling of
electrical circuits. In Proc. 7th Vienna
Conference on Mathematical Modelling, Nr. 448, Vienna, Austria, February 15-
17, 2012.
[57] Juvenal. Satire VI, lines 347-348.
[58] H. G. Kaper, E. Wiebel, and S. Tipei. Data sonification and sound
visualization. Computing in science & engineering,
1(4):48-58, 1999.
[59] P. H. Langner. The value of high fidelity electrocardiography using the
cathode ray oscillograph and an expanded
time scale. Circulation, 5(2):249-256, 1952.
[60] B. Latour. Reassembling the social: An introduction to actor-network-
theory. Oxford university press, 2005.
[61] G. M. Lee. A 3-beam oscillograph for recording at frequencies up to 10000
megacycles. Proc., Institute of Radio
Engineers, 34(3):W121-W127, 1946.
[62] R. Lo, V. Rampersad, J. Huang, and S. Mann. Three dimensional high
dynamic range veillance for 3d range-sensing
cameras. In IEEE ISTAS 2013, pages 255-265.
[63] M. Lynn Kaarst-Brown and D. Robey. More on myth, magic and metaphor:
Cultural insights into the management
of information technology in organizations. Information Technology & People,
12(2):192-218, 1999.

[64] D. Lyon. Surveillance Studies An Overview. Polity Press, 2007.
[65] C. Manders. Moving surveillance techniques to sousveillance: Towards
equiveillance using wearable computing. In
ISTAS, pages 19-19. IEEE, 2013.
[66] C. Manders, C. Aimone, and S. Mann. Camera response function recovery
from different illuminations of identical
subject matter. In ICIP, pages 2965-2968, 2004.
[67] C. Manders and S. Mann. Digital camera sensor noise estimation from
different illuminations of identical subject
matter. In IEEE ICSP 2005, pages 1292-1296.
[68] S. Mann. "rattletale": Phase-coherent telekinetic imaging to detect
tattletale signs of structural defects or potential
failure. SITIS 2017, SIVT.4: Theory and Methods.
[69] S. Mann. Wavelets and chirplets: Time-frequency perspectives, with
applications. In P. Archibald, editor, Advances
in Machine Vision, Strategies and Applications. World Scientific, Singapore / New Jersey / London / Hong Kong, World
Scientific Series in Computer Science, vol. 32, 1992.
[70] S. Mann. Intelligent Image Processing. John Wiley and Sons, Nov. 2 2001.
[71] S. Mann. Sets: A new framework for knowledge representation with
application to the control of plumbing fixtures
using computer vision. In Computer Vision and Pattern Recognition, 2001. CVPR
2001. IEEE, 2001.
[72] S. Mann. Sousveillance, not just surveillance, in response to terrorism.
Metal and Flesh, 6(1):1-8, 2002.
[73] S. Mann. Intelligent bathroom fixtures and systems: Existech
corporation's safebath project. Leonardo, 36(3):207-210,
2003.
[74] S. Mann. Sousveillance: inverse surveillance in multimedia imaging. In
Proceedings of the 12th annual ACM interna-
tional conference on Multimedia, pages 620-627. ACM, 2004.
[75] S. Mann. Telematic tubs against terror: Bathing in the immersive
interactive media of the post-cyborg age. Leonardo,
37(5):372-373, 2004.
[76] S. Mann. flUId streams: fountains that are keyboards with nozzle
spray as keys that give rich tactile feedback and
are more expressive and more fun than plastic keys. In Proceedings of the 13th
annual ACM international conference
on Multimedia, pages 181-190. ACM, 2005.
[77] S. Mann. Veillance and reciprocal transparency: Surveillance versus
sousveillance, ar glass, lifeglogging, and wearable
computing. In ISTAS, pages 1-12. IEEE, 2013.
[78] S. Mann. The sightfield: Visualizing computer vision, and seeing its
capacity to "see". In Computer Vision and Pattern
Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 618-623. IEEE,
2014.
[79] S. Mann. Veillance integrity by design. IEEE Consumer Electronics,
5(1):33-143, 2015 December 16.
[80] S. Mann. Veillance integrity by design: A new mantra for ce devices and
services. 5(1):33-143, 2016.
[81] S. Mann. Surveillance, sousveillance, and metaveillance. In CVPR 2016, pages 1408-1417.
[82] S. Mann. Phenomenological Augmented Reality with SWIM. In IEEE GEM 2018, pages 220-227.
[83] S. Mann. Wearable technologies. Night Gallery, 185 Richmond Street West,
Toronto, Ontario, Canada, July 1985.
Later exhibited at Hamilton Artists Inc, 1998.
[84] S. Mann. Phenomenal augmented reality: Advancing technology for the
future of humanity. IEEE Consumer Elec-
tronics, pages cover 92-97, October 2015.
[85] S. Mann, S. Feiner, S. Harner, A. Ali, R. Janzen, J. Hansen, and S.
Baldassi. Wearable computing, 3d augmented reality,
... In ACM TEl 2015, pages 497-500.
[86] S. Mann and J. Ferenbok. New media and the power politics of
sousveillance in a surveillance-dominated world.
Surveillance & Society, 11(1/2):18, 2013.
[87] S. Mann, J. Fung, and R. Lo. Cyborglogging with camera phones: Steps
toward equiveillance. In Proceedings of the
14th annual ACM international conference on Multimedia, pages 177-180. ACM,
2006.
[88] S. Mann, T. Furness, Y. Yuan, J. Iorio, and Z. Wang. All reality:
Virtual, augmented, mixed (x), mediated (x, y),
and multimediated reality. arXiv preprint arXiv:1804.08386, 2018.
[89] S. Mann and S. Haykin. The chirplet transform: A generalization of
Gabor's logon transform. Vision Interface '91,
pages 205-212, June 3-7 1991. ISSN 0843-803X.
[90] S. Mann and S. Haykin. The Adaptive Chirplet: An Adaptive Wavelet Like
Transform. SPIE, 36th Annual Interna-
tional Symposium on Optical and Optoelectronic Applied Science and
Engineering, 21-26 July 1991.
[91] S. Mann and S. Haykin. Chirplets and Warblets: Novel Time-Frequency
Representations. Electronics Letters, 28(2),
January 1992.
[92] S. Mann and S. Haykin. The chirplet transform: Physical considerations.
IEEE Trans. Signal Processing, 43(11):2745-
2761, November 1995.
[93] S. Mann and M. Hrelja. Praxistemology: Early childhood education,
engineering education in a university, and
universal concepts for people of all ages and abilities. In Technology and
Society (ISTAS), 2013 IEEE International
Symposium on, pages 86-97. IEEE, 2013.

[94] S. Mann, J. Huang, R. Janzen, R. Lo, V. Rampersad, A. Chen, and T. Doha.
Blind navigation with a wearable range
camera and vibrotactile helmet. In ACM MM 2011, pages 1325-1328.
[95] S. Mann and R. Janzen. Polyphonic embouchure on an intricately expressive
musical keyboard formed by an array of
water jets. In Proc. International Computer Music Conference (ICMC), August 16-
21, 2009, Montreal, pages 545-8,
2009.
[96] S. Mann, R. Janzen, T. Ai, S. N. Yasrebi, J. Kawwa, and M. A. Ali.
Toposculpting: Computational lightpainting
and wearable computational photography for abakographic user interfaces. In
27th IEEE CCECE, pages 1-10. IEEE,
2014.
[97] S. Mann, R. Janzen, C. Aimone, A. Garten, and J. Fung. Performance on
physiphones in each of the five states-
of-matter: underwater concert performance at dgi-byen swim centre
(vandkulturhuset). In International Computer
Music Conference, ICMC '07, August 27-31, Copenhagen, Denmark, August 28,
17:00-18:00.
[98] S. Mann, R. Janzen, A. Ali, P. Scourboutakos, and N. Guleria. Integral
kinematics (time-integrals of distance, energy,
etc.) and integral kinesiology. In IEEE GEM 2014.
[99] S. Mann, R. Janzen, M. A. Ali, and K. Nickerson. Declaration of veillance
(surveillance is half-truth). In 2015 IEEE
Games Entertainment Media Conference (GEM), pages 1-2. IEEE, 2015.
[100] S. Mann, R. Janzen, J. Huang, M. Kelly, J. L. Ba, and A. Chen. User-
interfaces based on the water-hammer effect.
In Proc. Tangible and Embedded Interaction (TEI 2011), pages 1-8, 2011.
[101] S. Mann, R. Janzen, and J. Meier. The electric hydraulophone: A
hyperacoustic instrument with acoustic feedback.
In Proc. International Computer Music Conference, ICMC '07, August 27-31,
Copenhagen, Denmark, volume 2, pages
260-7, 2007.
[102] S. Mann, R. Janzen, and M. Post. Hydraulophone design considerations:
Absement, displacement, and velocity-
sensitive music keyboard in which each key is a water jet. In Proc. ACM
International Conference on Multimedia,
October 23-27, Santa Barbara, USA., pages 519-528, 2006.
[103] S. Mann, J. Nolan, and B. Wellman. Sousveillance: Inventing and using
wearable computing devices for data collection
in surveillance environments. Surveillance & Society, 1(3):331-355, 2003.
[104] S. Mann and R. Picard. Being "undigital" with digital cameras: Extending
dynamic range by combining differently
exposed pictures. In Proc. IS&T's 48th annual conference, pages 422-428,
Washington, D.C., May 7-11 1995. Also
appears, M.I.T. M.L. T.R. 323, 1994, http://wearcam.org/ist95.htm.
[105] W. S. Mann. Slip and fall detector, method of evidence collection, and
notice server, for visually impaired persons, or
the like, May 15 2002. US Patent App. 10/145,309.
[106] A. McStay. Profiling phorm: an autopoietic approach to the audience-as-
commodity. Surveillance & Society, 8(3):310,
2011.
[107] M. Meade. Advances in lock-in amplifiers. Journal of Physics E:
Scientific Instruments, 15(4):395, 1982.
[108] F. Melde. Über die Erregung stehender Wellen eines fadenförmigen
Körpers. Annalen der Physik, 187(12):513-537,
1860.
[109] W. Miao. Method and system for transmitting video images using video
cameras embedded in signal/street lights,
2015.
[110] K. Michael and M. Michael. Sousveillance and point of view technologies
in law enforcement: An overview. 2012.
[111] W. C. Michels and N. L. Curtis. A pentode lock-in amplifier of high
frequency selectivity. Review of Scientific
Instruments, 12(9):444-447, 1941.
[112] M. Minsky, R. Kurzweil, and S. Mann. The society of intelligent veillance.
In IEEE ISTAS 2013.
[113] S. Mohapatra, Z. Nemtzow, E. Chassande-Mottin, and L. Cadonati.
Performance of a chirplet-based analysis for
gravitational-waves from binary black-hole mergers. In Journal of Physics:
Conference Series, volume 363, page
012031. IOP Publishing, 2012.
[114] E. Morozov. Is smart making us dumb? The Wall Street Journal, 2013.
[115] M. Mortensen. Who is surveilling whom? negotiations of surveillance and
sousveillance in relation to wikileaks' release
of the gun camera tape collateral murder. Photographies, 7(1):23-37, 2014.
[116] R. Munro. Actor-network theory. The SAGE handbook of power. London: Sage
Publications Ltd, pages 125-39, 2009.
[117] B. C. Newell. Local law enforcement jumps on the big data bandwagon:
Automated license plate recognition systems,
information privacy, and access to government information. Me. L. Rev.,
66:397, 2013.
[118] C. Norris, M. McCahill, and D. Wood. The growth of CCTV: a global
perspective on the international diffusion of
video surveillance in publicly accessible space. Surveillance & Society,
2(2/3), 2002.
[119] E. Öngün and A. Demirağ. Panoptic versus synoptic effect of
surveillance... JMC, 1(3), 2014.
[120] K. Palmas. Coveillance and consumer culture... Surveillance & Society,
13(3/4):487, 2015.
[121] F. Pasquale. The black box society: The secret algorithms that control
money and information. Harvard University
Press, 2015.

[122] D. Payne. NeverSeconds: The Incredible Story of Martha Payne. Cargo
Publishing, 2012.
[123] D. Pye. The nature and art of workmanship. Cambridge UP, 1968.
[124] D. Quessada. De la sousveillance. Multitudes, (1):54-59, 2010.
[125] S. Redfern and J. Hernandez. Auditory display and sonification in
collaborative virtual environments. SFI Science
Summit (Dublin), 2007.
[126] P. Reilly. Every little helps? YouTube, sousveillance and the 'anti-Tesco' riot in Stokes Croft. New Media & Society,
17(5):755-771, 2015.
[127] C. Reynolds. Negative sousveillance. First International Conference of
the International Association for Computing
and Philosophy (IACAP11), pages 306-309, July 4-6, 2011, Aarhus, Denmark.
[128] V. Robertson. Deus ex machina? witchcraft and the techno-world.
Literature & Aesthetics, 19(2), 2011.
[129] M. Roessler. How to find hidden cameras, 2002.
[130] E. Ruppert, P. Harvey, C. Lury, A. Mackenzie, R. McNally, S. A. Baker,
Y. Kallianos, C. Lewis, et al. Socialising big
data: from concept to practice. CRESC Working Paper Series, (138), 2015.
[131] E. S. Ruppert. Rights to public space: Regulatory reconfigurations of
liberty. Urban Geography, 27(3):271-292, 2006.
[132] P. Scourboutakos, M. H. Lu, S. Nerker, and S. Mann. Phenomenologically
augmented reality with new wearable led
sequential wave imprinting machines. In Proceedings of the Tenth International
Conference on Tangible, Embedded,
and Embodied Interaction, pages 751-755. ACM, 2017.
[133] F. Spielman. Infrastructure Trust launches plan to overhaul Chicago's
outdoor lights. Chicago Sun-Times, 2015
September 17.
[134] R. Stivers and P. Stirk. Technology as magic: The triumph of the
irrational. A&C Black, 2001.
[135] C. A. Stutt. Low-frequency spectrum of lock-in amplifiers. 1949.
[136] T. K. Tong, S. Wu, and J. Nie. Sport-specific endurance plank test for
evaluation of global core muscle function.
Physical Therapy in Sport, 15(1):58-63, 2014.
[137] R. Vertegaal and J. S. Shell. Attentive user interfaces: the
surveillance and sousveillance of gaze-aware objects. Social
Science Information, 47(3):275-298, 2008.
[138] H. Wahbeh, C. Calabrese, and H. Zwickey. Binaural beat technology in
humans: a pilot study to assess psychologic
and physiologic effects. The Journal of Alternative and Complementary
Medicine, 13(1):25-32, 2007.
[139] Y. Wang, Y. Zhang, X. He, G. Fang, and H. Gong. The signal detection
technology of photoconductive detector
with lock-in amplifier. In Selected Proceedings of the Photoelectronic
Technology Committee Conferences held August-
October 2014, pages 95220F-95220F. International Society for Optics and
Photonics, 2015.
[140] K. Weber. Mobile devices and a new understanding of presence. In
Workshop paper from SISSI2010 at the 12th annual
ACM international conference on ubiquitous computing, page 101, 2010.
[141] K. Weber. Google glasses: Surveillance, sousveillance, equiveillance. In
5th International Conference on Information
Law and Ethics, Corfu/Greece. downloadable as paper 2095355 from
papers.ssrn.com, 2012.
[142] K. Weber. Surveillance, sousveillance, equiveillance: Google glasses.
Social Science Research Network, Research
Network Working Paper, pp. 1-3 - http://tinyurl.com/6nh74j1, June 30, 2012.
[143] D. Weston and P. Jacques. Embracing the `sousveillance state'. In Proc.
Internat. Conf. on The Future of Ambient
Intelligence and ICT for Security, page 81, Brussels, Nov. 2009. ICTethics,
FP7-230368.
[144] A. Winter. The pencil of nature. People's Journal, 2:288-289, 1846.
[145] D. Wood and S. Graham. Permeable boundaries in the software-sorted
society: Surveillance and the differentiation
of mobility. Mobile technologies of the city, pages 177-191, 2006.
[146] P. Yapp. Who's bugging you? how are you protecting your information?
Information Security Technical Report,
5(2):23-33, 2000.

Figure 89. World's first underwater AR/VR/MR/XR/XYR/ZR and interactive
underwater multimedia experiences [71, 73,
75, 76, 95, 88]. Top left: world's first underwater AR (Augmented Reality)
headset, completed 1998, used for hydraulo-
phone (underwater pipe organ) training as a form of physiotherapy. Top right:
World's first VR/AR/MR/XR/XYR/ZR
float tanks, wirelessly networked in a worldwide public exhibit on 2003 May
22nd. Center: ICMC 2007 immersive
real/augmented/virtual/multimediated reality music concert. Bottom:
Deadheading as described also in the "Effective-
ness of Integral Kinesiology..." paper of this conference proceedings.

Figure 90. Underwater virtual reality with the MannLab Mersivity™ headset turns a cheap $20 wading pool into a massive
and compelling fully immersive and submersive MR/ZR experience.

Figure 91. Interacting with water jets. Top left: VR fitness game in which the objective is to use the hands and fingers to
surround the water jet as closely as possible, and run the hands along the curve of the water jet without ever touching
it. This is a form of Integral Kinesiology™ (see our paper entitled "Effectiveness of Integral Kinesiology..." in this same
conference proceedings), similar to the "buzzwire" game in which a player moves a circular metal ring along a serpentine
wire without touching the wire. Upper right and bottom: Deadheading™, as described in the "Effectiveness of Integral
Kinesiology..." paper.

[Figure 92 schematic (image rendered as text; only labels recoverable): water inlet, reducer, Tee with shedder bar and side-discharge (exhaust) past the water outlet, Nessonator™ (hydraulic resonator), and transmit/receive hydrophones; a complex-valued signal generator supplies real and imaginary references to a pair of LM567 tone decoders, injected where the timing capacitor would normally connect, with 53.4%/46.6% voltage dividers for bias, a VCO whose output we would ideally sever to use the quadrature input, LPFs on the real and imaginary outputs, and pin 3 self-biasing to about 2.039 volts DC.]
Figure 92. Liveheading™ with the "Stanford Tee". System developed by S. Mann, Visiting Full Professor, Stanford University,
for use with Stanford's Tanner fountain (the large fountain at the main entrance to Stanford University), in accordance with
Stanford's "fountain hopping" tradition. A 1.5 inch Tee fitting is connected to a 2 inch to 1.5 inch reducer that fits over
the tallest of the four water jets. Blocking the "Water outlet" diverts water to the "Side-discharge" past a shedder bar that
feeds into a Nessonator™ (hydraulic resonator). A pair of hydrophones, facing each other, is placed in the water stream
of the side-discharge, and a "Receive hydrophone" picks up the sound of the water that hits it. Additionally, a "Transmit
hydrophone" transmits a signal into the water, which travels with the water to the "Receive hydrophone". A low-power,
battery-operated (USB-powered) lock-in amplifier is made from a pair of LM567 tone decoders. Pin 3 of each tone decoder is
connected through separate coupling capacitors to the "Receive hydrophone". The tone decoder, in normal operation, uses
a capacitor connected to pin 6 and a resistor connected to pin 5 to set the timing (RC circuit for the VCO). Instead, a reference
input is supplied where the capacitor would normally be connected. The reference input is capacitively coupled (separate
capacitors for the real and imaginary tone decoders) to the inputs. The sensitivity of the device can be greatly enhanced by
biasing pin 6 to about 46.6 percent of the supply voltage. A voltage divider creates a 46.6% and 53.4% split. This can be
done with two resistors Ru (up) and Rd (down). A resistor Ru in the 1k ohm to 10k ohm range works nicely, whereas for
Rd a combination from 1k ohm and 12k ohms in parallel, up to about 10k ohms and 120k ohms in parallel, works well. If
it were possible to redesign the chip or modify it by severing the output of the VCO to disable its effect on the quadrature
multiplier, the apparatus could be made from a single LM567 chip. Pin 1 is normally used for the output filter capacitor,
but in our use, we take the final output there. Internally there is approximately a 4.7k ohm pullup, so there needs to be a
pulldown to get the output nicely centered. Due to asymmetry, we observed the optimum pulldown resistance to be 9212
ohms, resulting in an output nicely centered at 2.5 volts, so that it can easily be followed by additional amplification. A
capacitor is used as the LPF (Low Pass Filter) element. Since the resistor must be 9212 ohms, the capacitor is the freely
chosen element to set the cutoff frequency, depending on the highest frequency of the water sound vibrations desired
to be sensed. The real and imaginary outputs then drive a microcontroller that feeds into the immersive VR game. The
hydrophones are Sensortech Canada SQ34, which we found to load the signal generator most strongly at 81 kHz and 930 kHz,
but, as a pair, transmit and receive best at 12.5 kHz (1%), 23.8 kHz (0.9%), 1470 kHz (3%), 2120 kHz (3.5%), 3084 kHz (7.8%), and
about 5000 kHz (70% of the input voltage received at output).
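The quadrature detection described in the Figure 92 caption can be sketched numerically: multiply the received hydrophone signal by an in-phase ("real") and a 90-degree-shifted ("imaginary") copy of the reference, and low-pass filter (here, average) each product. This is a minimal illustrative sketch, not the LM567 circuit itself; the tone frequency, amplitude, and phase below are invented test values, and only the 9212 ohm pulldown and 12.5 kHz figure come from the caption.

```python
import math

def lockin(signal, fs, f_ref):
    """Return the (real, imaginary) lock-in outputs of `signal` at f_ref.

    Multiplies by cosine and sine references and averages, which acts as
    the low-pass filtering stage of a lock-in amplifier.
    """
    n = len(signal)
    re = 2.0 / n * sum(s * math.cos(2 * math.pi * f_ref * i / fs)
                       for i, s in enumerate(signal))
    im = 2.0 / n * sum(s * math.sin(2 * math.pi * f_ref * i / fs)
                       for i, s in enumerate(signal))
    return re, im

def lpf_cutoff_hz(C_farads, R_ohms=9212.0):
    """One-pole RC cutoff, with R fixed at the 9212-ohm pulldown."""
    return 1.0 / (2.0 * math.pi * R_ohms * C_farads)

# Test tone: 12.5 kHz (one of the hydrophone pair's best frequencies),
# amplitude 0.8, phase lag 0.3 rad, sampled at 1 MHz over whole periods.
fs, f = 1.0e6, 12.5e3
sig = [0.8 * math.cos(2 * math.pi * f * i / fs - 0.3) for i in range(100000)]
re, im = lockin(sig, fs, f)
amplitude = math.hypot(re, im)  # recovers the 0.8 amplitude
phase = math.atan2(im, re)      # recovers the 0.3 rad phase lag
print(round(amplitude, 3), round(phase, 3))
print(round(lpf_cutoff_hz(1e-6), 1))  # cutoff with a 1 uF LPF capacitor
```

Because the resistor is fixed at 9212 ohms, `lpf_cutoff_hz` shows why the capacitor is the only free parameter for the filter bandwidth.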

Figure 93. MannLab Mersivity™ headset.
[Figure 94 plot (image rendered as text): flow rate, in gallons per minute, versus hydraulic head, measured with an 1100 Gallon Per Hour (GPH) pump.]
Figure 94. Head Games: measuring flow rate as a function of hydraulic head, to
plot the characteristic curve of a water jet. Materials required: water pump,
long hose (preferably transparent so students can see the water in it), ruler
or tape measure, and a measuring cup/jug (i.e. a measured "gallon"). Using a
long hose, for each height of the far end of the hose, the time to fill a
one-gallon jug is measured. From this measurement the flow rate (gallons per
minute) is calculated. This may be done either by inserting a small
low-voltage submersible pump into the fountain to supply the hose (here an
1100 GPH Rule ITT pump), by holding the hose by hand against one of the water
jets, as shown in Fig. 97, or by insertion into a water jet by careful choice
of hose diameter, as shown in Fig. 98. It is interesting to note that for many
systems the deadhead is less than the maximum head (i.e. the curve increases
and then decreases again). The deadhead point is the point at which the hose
is held so high that no water comes out of it (i.e. the pump is pushing up the
maximum water column it can sustain, as shown in Fig. 96). Often there are two
points where this happens, with an even higher amount of head in between them.
The "teach beach" concept makes this teaching fun and playful in the context
of Stanford University's "fountain hopping" tradition. Pictured here are Prof.
Mann's students Cindy and Adnan (who is also the founder of CG Blockchain and
Blockchain Terminal, http://bct.io).
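The measurement procedure of Figure 94 reduces to one unit conversion per hose
height; a minimal sketch (the sample fill times below are hypothetical, not
measured data from the experiment):

```python
def flow_rate_gpm(gallons: float, seconds: float) -> float:
    """Flow rate in gallons per minute, from the time taken to fill a jug."""
    return gallons * 60.0 / seconds

# Hypothetical data: head (metres) -> seconds to fill a one-gallon jug.
fill_times_s = {0.5: 12.0, 1.0: 15.0, 1.5: 20.0, 2.0: 40.0}

# One point of the characteristic curve per hose height; flow falls to zero
# at a deadhead point, where the jug never fills.
curve_gpm = {head: flow_rate_gpm(1.0, t) for head, t in fill_times_s.items()}
```

Plotting `curve_gpm` (head on one axis, flow on the other) gives the
characteristic curve described in the caption.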
Figure 95. Stanford University's fountain culture is such that at any time,
people gather around, AND IN, the various fountains on campus. We thus
leveraged this existing culture to turn it into a research and teaching lab.
Here a hydraulophone (underwater pipe organ) is being tested and characterized
with several low-voltage submersible water pumps. Shown here are some of the
test instruments, including a fully submersible Fluke underwater multimeter,
various pumps, fittings, etc.
Figure 96. Deadheading by water column: here the pump has pumped as high as it
can, and the water rises no further. We
simply measure the height of this water column to determine the deadhead point
and plot this on the graph. Other data
points require the measurement of flow rate, but here the flow rate is zero.
Figure 97. Head Games played by connecting a hose to one of the water jets in
a splash pad aquatic play area. A ruler is used to measure the hydraulic head
(height of the water column), and a measuring cup is used to measure the
amount of water flowing in a given time, to determine flow rate. In our final
teach beach™, we envision that the jets would be designed for easy coupling to
hoses, which would be lent to participants so they don't have to bring their
own as we did here. Rightmost: exploring interference patterns between waves
formed by Karman vortex shedding in bluff bodies (in this case, fingers)
inserted into moving water. Karman vortex shedding is the principle of
operation of some hydraulophones. The fingers are roughly cylindrical, and the
Strouhal number (St) (a dimensionless quantity characterizing oscillating flow
mechanisms) of a cylinder is approximately 0.18.
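The Strouhal relation gives the shedding frequency directly as f = St * U / D;
a minimal sketch, with the finger diameter and flow speed as illustrative
assumptions (not measurements from the figure):

```python
ST_CYLINDER = 0.18  # approximate Strouhal number of a cylinder (per the caption)

def shedding_frequency_hz(flow_speed_m_s: float, diameter_m: float,
                          st: float = ST_CYLINDER) -> float:
    """Karman vortex shedding frequency of a bluff cylinder: f = St * U / D."""
    return st * flow_speed_m_s / diameter_m

# e.g. a roughly 15 mm diameter finger with water moving past it at 1 m/s:
f_hz = shedding_frequency_hz(1.0, 0.015)  # -> about 12 Hz
```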
Figure 98. Head Games played by inserting a hose into an existing water jet in
one of Stanford University's fountains. Here a hose of 3/4 inch outside
diameter was found to fit perfectly. This is a nice calm fountain with
relatively low head. Thus Head Games™ can be played easily with a modest
length of hose. Left: underwater photograph showing insertion coupling. Right:
Virtual Reality with video see-through allows information to be overlaid on
the water jet for teaching and instructional purposes.
Figure 99. Head Games™ climbing wall. Here a participant experiences head by
touching the water jet from various heights. Calculations are rendered in the
virtual world, where the eyeglass actually operates in a "multimediated
reality" mode, using video see-through and overlay of water jet attributes on
the actual water. Left: deadheading the jet results in much higher head, but
coming away from it results in much less head. Middle: the head increases
again as we move down. Right: further down, the head increases further, due to
additional potential energy transferred to kinetic energy.
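The head-to-speed exchange described here (potential energy traded for kinetic
energy) is Torricelli's theorem, v = sqrt(2 g h); a minimal sketch:

```python
import math

G = 9.81  # standard gravity, m/s^2

def jet_speed_m_s(head_m: float) -> float:
    """Speed of water issuing under hydraulic head h: v = sqrt(2 * g * h)."""
    return math.sqrt(2.0 * G * head_m)

def head_m(speed_m_s: float) -> float:
    """Hydraulic head equivalent to a jet speed: h = v^2 / (2 * g)."""
    return speed_m_s ** 2 / (2.0 * G)
```

The two functions invert each other, which is why moving the hand lower on the
jet (more height already fallen) reads as more head.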
[Screenshot for Figure 100: on-screen readout showing Time Elapsed 00:04.66
and Absement 0, with Preset and Pause controls.]
Figure 100. Plank on a MannFit™ Board with visual-audio feedback.
Figure 101. Harbisson and Mann with passports depicting the physical reality
of their bodies as partly computational,
both examples of people who are part technological, through the use of camera-
based computer vision as a seeing aid. The
Veillance Divide (e.g. when surveillance is the only allowable veillance)
renders such people under attack as "existential
contraband" - contraband by their mere existence.
Figure 102. Furniture design with Haptic Augmented Reality Computer Aided
Design. A shape generator or shape synthesizer, such as a PASCO SCIENTIFIC
MODEL WA-9307 FOURIER SYNTHESIZER, generates shape information which is
visualized in alignment with haptic sensation, so that shapes can be seen and
felt in perfect alignment. A 3D (3-dimensional) wireless position sensor
comprises transducers with phase-coherent detection. The synthesizer is
connected to a fixed transducer (loudspeaker), and to the reference input of a
NARLIA (Natural Augmented Reality Lock-In Amplifier). A moving transducer
(microphone) on a Haptic Augmented Reality CAD (HARCAD) wand is connected to
the signal input of the NARLIA. An output from the NARLIA is connected to a
T-SWIM, here, 600 LEDs and a vibrotactile actuator, both part of the HARCAD
wand. As the user moves the wand back-and-forth, a multisensory experience
results in which the user can see, feel, and hear the virtual shapes (e.g.
waves and their harmonics). Arbitrary shapes can be generated by Fourier
synthesis, and these shapes can be touched, felt, and manipulated.
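Fourier synthesis of a shape, as the caption describes, is just a sum of
harmonics of a fundamental; a minimal sketch (the amplitude list is an
illustrative square-wave-like choice, not the WA-9307's actual settings):

```python
import math

def fourier_synth(harmonics, t: float, f0: float = 1.0) -> float:
    """Evaluate a sum of harmonics of fundamental f0 at time t.

    harmonics[n-1] holds the (amplitude, phase) of the n-th harmonic.
    """
    return sum(a * math.sin(2.0 * math.pi * n * f0 * t + phase)
               for n, (a, phase) in enumerate(harmonics, start=1))

# Illustrative shape: fundamental plus a 1/3-amplitude third harmonic
# (the first two nonzero terms of a square wave's Fourier series).
shape = [(1.0, 0.0), (0.0, 0.0), (1.0 / 3.0, 0.0)]
```

Sweeping t while moving the wand reproduces the waveform along the wand's
path, which is what makes the shape touchable as well as visible.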
Figure 103. "Wrobot", the wrestling robot. A nozzle, designed in Fusion 360,
is printed on the Autodesk Ember printer,
and then attached to a robot. The robot then takes evasive action as a user
tries to deadhead the water jet.
Figure 104. Laser cutting of grip handles. Each layer is cut from
1/4-inch-thick plywood, and there are a total of 3 layers for each handle. The
layers are glued together and then attached to the robot.
Figure 105. Haptics, Augmented Reality, and Computer Aided Design. A Tactile
Sequential Wave Imprinting
Machine [Mann 1974, 1979, 1991, 2015] is connected to an antenna moved in
front of a radio transmitter. The actuator is
fed by a signal from a lock-in amplifier connected to the moving antenna plus
another stationary antenna. In this way the
user can grasp, touch, hold, and feel otherwise invisible electromagnetic
radio waves. In an early embodiment (Mann, 1979),
radio waves are picked up (or reflected) by the moving metal bar of a pen
plotter, and the user grasps the pen to feel the
radio waves. A light bulb is also attached to the pen so that the user can
also see the radio waves, through PoE (Persistence-
of-Exposure) of the human eye, or photographic film. In more modern versions
of the apparatus, a linear actuator drives
an LED light attached to the finger. By wrestling with this robotic device,
the user and the device together trace out the
radio wave into the Autodesk Fusion 360 Cloud-Based 3D CAD Platform for
instant collaboration with others (e.g. multiple
people at different geographical locations remotely wrestling with each other
and with nature in order to collaboratively
create new artistic designs). Together with the Metavision Augmented Reality
glasses, multiple people use aluminium foil
brushes to collaboratively sculpt and shape cloud-based waveforms to design
buildings, furniture, automobiles, or other
curvaceous products.
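A lock-in amplifier of the kind described recovers the component of the signal
at the reference frequency, in phase and in quadrature (the "real and
imaginary" outputs). A minimal numeric sketch, with the low-pass filter
reduced to an average over the record (illustrative, not the NARLIA's actual
implementation):

```python
import math

def lock_in(signal, ref_hz: float, fs_hz: float):
    """In-phase and quadrature components of `signal` at the reference
    frequency; averaging over the record plays the role of the LPF."""
    n = len(signal)
    re = sum(x * math.cos(2.0 * math.pi * ref_hz * k / fs_hz)
             for k, x in enumerate(signal))
    im = sum(x * math.sin(2.0 * math.pi * ref_hz * k / fs_hz)
             for k, x in enumerate(signal))
    return 2.0 * re / n, 2.0 * im / n

# One second of a 100 Hz cosine sampled at 10 kHz:
fs = 10_000
sig = [math.cos(2.0 * math.pi * 100.0 * k / fs) for k in range(fs)]
```

Driving an actuator or LED with these outputs as the sensor moves is what
lets the apparatus "imprint" the otherwise invisible wave onto space.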
Figure 106. Wrestling with robots as a means for achieving Cyborg Craft:
Robotic-inspired abakographic lightpainting
(top row). Stephanie, age 10, wrestles with the robot while controlling the
spinning of a Potterycraft wheel with her left
foot.
Figure 107. Scott creating an artwork at Pier 9, pictured with Mann's
photographic toolpath visualizer. Toolpath previsualization using Augmented
Reality. The Photoolbit™ is inserted, or alternatively, the Photoolpath
technology is built into the device. An exposure is made as the toolpath is
run at full speed, allowing the user to see the toolpath overlaid on reality.
Slight changes in perspective are facilitated through simple 3D modeling,
while at the same time retaining the photo-realistic rendering. At any time
during the work, the cutting may be paused and the portion cut compared with
the Photoolpath. After the cut is complete, it can again be checked against
the AR overlay.
Figure 108. fig:scott6
Figure 109. Example of Sousveillant Systems with an Epilog laser cutter. A
special AR head attachment is clipped onto the head of the machine. Here we
see an example AR visualization while making a piece entitled "Crowdfunded
Justice" (2016 October 31st art installation about justice, based on a
coin-operated gallows spool for the back of a bubblegum machine). The shape of
the gallows rope spool is clearly visible as an AR overlay, and we can
previsualize it, postvisualize it, or see it during evolution of the toolpath.
Figure 110. SuperMannFit system, VR (Virtual Reality) flying game. Players
engage with a flight simulator by doing pushups or planking to simulate flight
scenarios, where the tilt of the board is sensed by the smartphone set upon
it, which is wirelessly linked to the VR headset that has another smartphone
in it. The smartphone in the headset tracks head orientation, and the
smartphone on the board tracks board orientation. The game is responsive to
both of these two orientation trackers. The game shown is a game of pushups
that guides the flight simulation while developing strong core muscles. The
game is played on a board that is substantially "T" shaped, with a wide
portion for doing wide-grip pushups if desired. The head is wider than the
tail end. The head end is for the hands. The tail end is for the feet. In
other embodiments it is hourglass shaped with a fatter end for the hands.
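The board-orientation half of this sensing can be sketched from a smartphone
accelerometer reading alone; the axis conventions and control mapping below
are our assumptions for illustration, not the game's actual code:

```python
import math

def tilt_deg(ax: float, ay: float, az: float):
    """Pitch and roll of the board from its phone's accelerometer reading
    (gravity along +z when the board is level)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def flight_input(board_accel, head_yaw_deg: float):
    """Fuse the two trackers: board tilt steers, head yaw aims the view."""
    pitch, roll = tilt_deg(*board_accel)
    return {"pitch": pitch, "roll": roll, "view_yaw": head_yaw_deg}
```

Keeping the steering on the board and only the view on the head is one way to
make the pushup, not the head motion, do the flying.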
[FIG. 1 (patent drawing): legible reference labels include LEDs 121, location
sensing field 150, location sensor 140, wand 96, position sensor 122, tactor
123, movement path 160, shape/signal generator and amplifier blocks 110, 111,
116, and a multisensory SWIM processor with spatial sense computer blocks 112,
113, 115.]
[FIG. 2 (patent drawing): legible reference labels include wands 201 and 220,
users U201 and U202, location sensor 140, three AUTODESK FUSION 360 instances,
221, 222, 280, 290, shape/signal generator and amplifier blocks 110, 111, 116,
and the multisensory SWIM processor blocks 112, 113, 115.]

[FIG. 3 (patent drawing): legible labels include CorePoint™ units 301 and 302,
screen S200, 280, 290, 310 ABSEMENT TRACKER, 311, 312, 313, 321, 322, ABS and
INT. blocks 350, 351, 352, 353, 355, users U201 and U202, 360, and a "Player
2" CoreBoard 370.]

[FIG. 4 (patent drawing): legible labels include screen S200, 401, the readout
"absement = 5.23, sCorePoints™ = 191", CorePoint™, 280, 290, 313, 421, 471,
481.]

[FIG. 5 (patent drawing): legible labels include 501, screen S200, 200, 509,
users U201 and U202, P201, P202, the readout "Absement = 3.25 deg. sec.,
PlankPoints™ = 308", PlankPointer™, PlankPoint™, a time axis, 53, 54, 290,
541, 551, 560, 562, 567, 575, 576, 577.]

[FIG. 6 (patent drawing): a board shown with BOW, STERN, PORT, and STARBOARD
directions; legible labels include 600, 601, 602, 603, 604, 605, 606, 620,
640, 642, 643, 667, 671.]

[FIG. 7 (patent drawing): legible labels include 720, 760, 761, 767, 797.]

[FIG. 8 (patent drawing): legible labels include 810, 820, 830, 840, 850,
860.]

Fig. 9: Robotic Fitness Trainer. Labels: Touch Surface 910, Play Surface 920,
Device Surface 930, Support 940, Motor 950, Sensor 960, Rollers 972, Spokes
971.
Fig. 10: Robotic Fitness Trainer. Two units: one with the labels 910 to 972 of
Fig. 9 plus PROCESSOR 1001; the other with Touch Surface 101, Play Surface
102, Device Surface 103, Support 104, Motor 105, Sensor 106, Rollers 1072,
Spokes 1071, and PROCESSOR 1002.
Fig. 10a: Possibly Robotic Fitness Gamer. Two units: one with the labels 910
to 972 plus 1002, 1003, 1073; the other with Touch Surface 1010, Play Surface
1020, Device Surface 1030, Support 1040, Motor 1050, Sensor 1060, Rollers
1072, Spokes 1071.

Fig. 10b: Mannfit Pong. Legible labels include 920, 960, 973, 1002, 1003,
1020, 1060, 1073, and PROC. 1091.

Fig. 10c: Integral Kinesiology with Pong-like game in 3D. Legible labels
include 920, 1004, 1005, 1020, 1073.

[Receiving stamp: CIPO Regional Office / Bureau régional de Toronto, DEC 31
2018.]
Fig. 11: Integral Kinesiology ball. Labels: 1110 ball, 1120 user, 1130
rotation sensor, 1140 signaller, 1150 analyzer, 1160 wearable sensor, 1170
pool, 1180 sensor, 1100, PROCESSOR.
Fig. 12: Deadheading studio. Labels: 1210 display, 1220 cursor, 1230 user,
1240 pump, 1250 pool, 1260 water, 1280 processor; plot axes include Flow/GPM.
