Patent 2316451 Summary

(12) Patent Application: (11) CA 2316451
(54) English Title: MEANS, APPARATUS, AND METHOD FOR ACQUIRING, PROCESSING, AND COMBINING DIFFERENTLY ILLUMINATED EXPOSURES OF THE SAME SUBJECT MATTER
(54) French Title: MOYENS, APPAREIL ET METHODE DE PRISE, DE TRAITEMENT ET DE COMBINAISON D'EXPOSITIONS DU MEME SUJET AVEC DIFFERENTES SOURCES LUMINEUSES
Status: Deemed Abandoned and Beyond the Period of Reinstatement - Pending Response to Notice of Disregarded Communication
Bibliographic Data
Abstracts

English Abstract


A novel means and apparatus for a new kind of photography is described. In particular, multiple exposures of the same scene or object are acquired by a camera at a fixed location, while at the same time, one or more photoborgs (photographic cyborgs, photographers, lighting technicians, artists, or engineers) taking the picture may freely roam about the scene and differently illuminate various objects in the space visible to the fixed camera. A photoborg typically carries one or more portable light sources, typically interfaced to a wearable computer system, which is connected (wirelessly or otherwise) to a base station computer within or connected to the fixed camera. In this way, a photoborg can control the remote camera which gathers multiple exposures, in a manner analogous to how an artist applies layers of paint to a canvas. Typically a photoborg's wearable computer contains a display (viewfinder) which shows the state of the image, updating the display with each new exposure. An interface (typically taking the form of a chording keyboard built into the handle of a flashlamp, or the like) allows a photoborg to erase any desired exposure, or to change the intensity or color of any of the exposures and interactively see the effect, as it appears from the perspective of the fixed camera. Because of a photoborg's ability to constantly see the small incremental effects of the light sources, the apparatus behaves as a true extension of the photoborg's mind and body.


Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
The embodiments of my invention in which I claim an exclusive property or privilege are defined as follows:
1. A cybernetic photography system, said cybernetic photography system including:
  • a camera controller for a camera, to be placed at an essentially fixed location;
  • an inbound channel, said inbound channel for remotely operating said camera controller;
  • a user interface for remotely activating said camera by way of said inbound channel;
  • a synchronizer, said synchronizer for controlling a portable source of illumination, such that said source of illumination is illuminated during a time when said camera is responsive to light,
  said user interface operable while wearing or holding said portable source of illumination.
2. The cybernetic photography system of claim 1, said inbound channel including an antenna having a pattern of reception approximately matching a field of view of said camera.
3. The cybernetic photography system of claim 1, said inbound channel including an antenna having a direction of best reception approximately matching a center of a field of view of said camera.
4. The cybernetic photography system of claim 1, said synchronizer including an antenna having a direction of best reception approximately matching a center of a field of view of said camera.
5. The cybernetic photography system of claim 1, said synchronizer including a repeater.
6. The cybernetic photography system of claim 1, said synchronizer including a repeater, said repeater comprising a packet flash synch receiver with an output connected to a packet flash synch transmitter.
7. The cybernetic photography system of claim 1, said synchronizer including a repeater, said repeater comprising a packet flash synch receiver with an output connected to a packet flash synch transmitter both operating at the same frequency and same packet code.
8. The cybernetic photography system of claim 1 further including said portable source of illumination, wherein said synchronizer is built into said source of illumination.
9. The cybernetic photography system as described in claim 1 where said user interface is a switch affixed to said source of illumination, where said switch provides means for repeated activation of said camera.
10. The cybernetic photography system as described in claim 1 where said source of light is a flash and where said user interface is a switch affixed to said flash, where said switch provides means for repeated activation of said camera.
11. The cybernetic photography system as described in claim 1 where said source of light is a plurality of flashes ganged together to fire simultaneously and where said user interface is a keyer affixed to a grip for holding said plurality of flashes and simultaneously aiming said plurality of flashes.
12. The cybernetic photography system as described in claim 11, where said plurality of flashes are mounted in a ringlike configuration with a central opening for providing a user an unobstructed view of subject matter to be illuminated with said source of light.
13. The cybernetic photography system as described in claim 1, where said source of light is a flash supplied with series connected energy modules.
14. The cybernetic photography system as described in claim 1, where said source of light is a flash supplied with series connected condenser banks, each condenser bank comprised of at least one capacitor, and each condenser bank having at least one battery electrically isolated and insulated from the other batteries associated with the other condenser banks.
15. The cybernetic photography system as described in claim 1, where said source of light is a flash equipped with an angle-cut shroud.
16. The cybernetic photography system as described in claim 1, where said synchronizer includes at least two wearable antennas.
17. The cybernetic photography system as described in claim 1, where said synchronizer includes at least two wearable antennas having different reception patterns.
18. The cybernetic photography system as described in claim 1, where said synchronizer includes at least two wearable antennas each associated with a radio flash synch receiver providing a signal that is either on or off, and a combiner that performs a logical OR operation on the outputs of the radio flash synch receivers.
19. The cybernetic photography system of claim 1 wherein said illumination controller is a sequencer for sequencing through a plurality of portable light sources.
20. A method of photography including the steps of:
  • placing a camera at a fixed location;
  • directing a source of illumination at various portions of subject matter in view of said camera;
  • repeatedly remotely operating said camera in order to take a plurality of pictures differing primarily in illumination of subject matter in view of said camera;
  • combining said differently illuminated pictures into a single picture.
21. The method of photography of claim 20, where said combining is by way of a CEMENTer.
22. A cybernetic photography system, said cybernetic photography system including:
  • a camera controller for a camera, to be placed at an essentially fixed location;
  • an inbound channel, said inbound channel for remotely operating said camera controller;
  • an illumination controller;
  • an outbound channel, said outbound channel for carrying a signal from said camera controller to said illumination controller,
  said illumination controller for controlling a portable source of illumination, said cybernetic photography system for remote activation of said camera by a user while wearing or holding said source of illumination.
23. The cybernetic photography system of claim 22 wherein said illumination controller is a sequencer for sequencing through a plurality of portable light sources.
24. The cybernetic photography system of claim 22 further including said portable source of illumination, wherein said illumination controller is built into said source of illumination.
25. The cybernetic photography system of claim 22 where at least one of:
  • said illumination controller;
  • said camera controller,
  includes synchronization means, said synchronization means comprising means of causing said source of illumination to produce light during the time interval in which said camera becomes responsive to light.
26. The cybernetic photography system as described in claim 22 further including a switch affixed to said source of illumination, where said switch provides means for repeated activation of said camera.
27. The cybernetic photography system as described in claim 26, where said source of illumination is an electronic flash.
28. The cybernetic photography system as described in claim 22 where said outbound channel includes a machine-readable signal sent to at least one WearComp wearable by said at least one user of said cybernetic photography system.
29. The cybernetic photography system as described in Claim 22, further including a display, said display for viewing by a user of said cybernetic photography system, and said display being responsive to an output of said camera.
30. The cybernetic photography system as described in claim 29 where said display means includes means of displaying a result of a photoquantigraphic summation.
31. The cybernetic photography system as described in claim 26 further including a display, said display for viewing by a user of said cybernetic photography system, and said display being responsive to an output of said camera, where said display means includes means of displaying a result of a photoquantigraphic summation of a plurality of exposures due to said source of illumination, at least some of said exposures being in response to said repeated activation of said camera.
32. The cybernetic photography system as described in claim 29, in which said display means is affixed to said source of illumination.
33. The cybernetic photography system as described in claim 29, in which said display means is wearable by a user of said cybernetic photography system.
34. The cybernetic photography system as described in Claim 29 including means of updating an image displayed on said display means each time said camera is activated.
35. The cybernetic photography system as described in claim 34 where said means of updating said image includes the computation of a photoquantigraphic quantity q(x, y) determined by applying the inverse response function of said camera to a picture output from said camera.
36. The cybernetic photography system as described in claim 34 where said means of updating said image includes the computation of a photoquantigraphic sum from a picture taken when said camera is activated and at least one other picture taken during previous times said camera was activated.
37. The cybernetic photography system as described in claim 34 where said means of updating said image includes the computation of a photoquantigraphic vectorspace from a picture taken when said camera is activated and at least one other picture taken during previous times said camera was activated.
38. A photorendering system for computing an output picture from a plurality of input pictures, said plurality of input pictures having been derived from the same subject matter under differing illumination, said photorendering system including the steps of:
  • computation of photoquantigraphic quantities q1, q2, ..., for each of said input pictures;
  • computation of a weighted sum, q = w1q1 + w2q2 + ... .
39. A cybernetic photography system as described in Claim 34 including a method of updating said image comprising steps of:
  • determining from said camera a spatially varying quantity linearly proportional to the photoquantigraphic quantity, q(x, y), over spatial coordinates (x, y) of light falling on the image plane or image sensor of said camera for each of a plurality of exposures;
  • computing a weighted sum q(x, y) over said plurality of exposures, said weighted sum being given by q(x, y) = w1q1(x, y) + w2q2(x, y) + ...;
  • applying an essentially semi-monotonic transfer function, f(q), to said sum, q(x, y), to obtain a picture f(q(x, y)), where said essentially semi-monotonic transfer function, f(q), has essentially semi-monotonic slope;
  • displaying said picture f(q(x, y)) on said display.
40. A cybernetic photography system including:
  • a light source;
  • an activator for signalling a remote camera to take an exposure each time said activator is activated;
  • a synchronizer for flashing said light source in synchronism with exposures of said camera.
41. The system of Claim 40 including means to temporarily disable said light source, while leaving said activator enabled.
42. A cybernetic photography system including:
  • a controller for a camera, said camera for being placed at a fixed location;
  • a portable light source;
  • a portable user actuator which, when actuated by a user, sends a signal to said controller causing said camera to take an exposure;
  • means to synchronize said light source with said camera such that said light source flashes when said camera takes an exposure.
43. The system of Claim 42 wherein said portable user actuator and said portable light source comprise an integral unit.
44. The system of Claim 42 wherein said portable user actuator is voice actuated.
45. A cybernetic photography system as described in Claim 42, where said camera takes at least one picture of said subject matter with no use of said source of illumination, and then where said cybernetic photography system uses said at least one picture of said subject matter to compare with further pictures of said subject matter to determine whether or not said subject matter is being illuminated with said source of illumination.
46. A cybernetic photography system as described in Claim 45 where said cybernetic photography system includes means of recording pictures that are determined to have been pictures of said subject matter illuminated with said source of illumination, and not recording pictures that are determined to have been pictures of said subject matter not illuminated with said source of illumination.
47. A cybernetic photography system as described in Claim 42 where said camera is a video camera, and where said source of illumination flashes repeatedly at the frame rate of said video camera.
48. A cybernetic photography system as described in Claim 47, including means of turning said source of illumination on and off, where said source of illumination produces repeated rapid bursts of light when it is turned on, and no light when it is turned off, and where said video camera records while said source of illumination is turned on, and stops recording during at least some of the time for which said source of illumination is turned off.
49. A cybernetic photography system, said cybernetic photography system including:
  • a lock-in camera to be placed at a fixed location;
  • at least one source of illumination, said source of illumination being one of:
    - a hand-held light source carried by said photoborg; and
    - a wearable light source worn by said photoborg,
  where said source of illumination produces a periodically varying level of intensity, and where said cybernetic photography system includes means of taking at least one picture with said lock-in camera.
50. A photoquantigraphic flashlamp, where said photoquantigraphic flashlamp includes means of producing at least three flashes of different strengths in rapid succession while remotely activating a camera controller for remote control of a camera to be synchronized with each of said flashes.
51. A phlashlamp photography system, including a phlashlamp as described in Claim 50, where said phlashlamp includes remote control means for a camera, said remote control means including means for taking at least three pictures in rapid succession, where said at least three pictures are pictures of the same subject matter exposed to different quantities of light.
52. A cybernetic photography system including a photorendering system as described in Claim 38 where said cybernetic photography system further includes a virtual control panel presented upon a video display means, and where said virtual control panel comprises lightmodule weight selection means.

53. A cybernetic photography system including color coordinate transformation means, together with brightgrey warning means, said brightgrey warning means including means of indicating image areas that correspond to regions of colorspace at the gamut boundary of the domain of said color coordinate transformation, but not at the gamut boundary of the range of said color coordinate transformation.
54. A cybernetic photography system including color coordinate transformation means, together with brightgrey reduction means, said brightgrey reduction means including means of identifying regions of colorspace at the gamut boundary of the domain of said color coordinate transformation, but not at the gamut boundary of the range of said color coordinate transformation, said cybernetic photography system including means of adjusting said color coordinate transformation means to reduce the amount of brightgrey image content.
55. A cybernetic photography system as described in Claim 54, where said adjustment of said color coordinate transformation includes at least one of:
  • deliberate distortion of color hue; and
  • deliberate destruction of highlight detail by clipping.
56. A method for facilitating combining pictures of a given scene or object, comprising:
  • capturing photoquantigraphic quantities, q1, q2, ..., one from each of a plurality of pictures of said given scene or object, at least some of said pictures taken under different illuminations.
57. The method of Claim 56, further comprising:
  • computing a weighted sum from said photoquantigraphic quantities, said weighted sum given by q = w1q1 + w2q2 + ... .
58. A cybernetic photography system as described in Claim 42, including optics to effectively locate said source of light near the center of a lens of the eye of said photoborg.
59. A cybernetic photography system as described in Claim 42, where said source of light is a phlashlamp, and where said cybernetic photography system includes optics to effectively locate said source of light near the center of a lens of the eye of said photoborg.
60. A wearable photography apparatus, comprising:
  • headgear;
  • a camera borne by said headgear;
  • optics borne by said headgear and arranged to locate the effective center of projection of said camera near the center of a lens of the eye of the wearer of said wearable photography apparatus.
61. A wearable photography apparatus as described in Claim 60, where said headgear is a pair of eyeglasses.
62. A cybernetic photography system as described in Claim 42, where said portable light source comprises a pushbroom light, said pushbroom light including a plurality of light emitting elements of separately controllable intensity mounted to a frame such that a photoborg may grasp said frame and move about with it, said cybernetic photography system including means of dynamically varying the output level of each of said plurality of light emitting elements.
63. A cybernetic photography system as described in Claim 42, further including worklights, said worklights allowing said photoborg to see, said cybernetic photography system further including means of turning off said worklights during said time interval in which said camera becomes sensitive to light.
64. A cybernetic photography system as described in Claim 22, further including room light controlling means in a working environment such as a photographic studio, where the room lighting itself may be controlled by an electric circuit, said room light controlling means including means of automatically turning said room lighting off during at least one time interval in which said camera becomes sensitive to light.
65. A cybernetic photography system as described in Claim 22, further including at least one indicator lamp fixed in the vicinity of said camera, said indicator lamp viewable by said photoborg when said photoborg is within the field of view of said camera.
66. A cybernetic photography system as described in Claim 22, further including at least one indicator light source fixed in the vicinity of said camera, said indicator light source having an attribute viewable by said photoborg when said photoborg is within the field of view of said camera, and said attribute of said light source not viewable by said photoborg when said photoborg is not within the field of view of said camera.
67. A cybernetic photography system as described in Claim 66, in which said attribute is a color of said light.
68. A cybernetic photography system as described in Claim 42, where said cybernetic photography system further includes a hiding test light, and remote activation means of said hiding test light operable by said photoborg.
69. Apparatus for processing a plurality of exposures of the same scene or object, comprising:
  • image buffers each for storing one of said plurality of exposures;
  • means for obtaining photoquantigraphic quantities, one for each of said plurality of exposures; and
  • means for producing a weighted photoquantigraphic summation of said photoquantigraphic quantities.
70. A cybernetic photography system for acquiring a plurality of pictures of the same subject matter under differently illuminated conditions, said cybernetic photography system including a fixed camera and a plurality of flash lamps, together with means for sequentially activating each of said flashlamps each time one of said plurality of pictures is taken, said each of said flashlamps activated during the time interval in which said camera is sensitive to light.
71. A cybernetic photography system, as described in Claim 70, including means of sequentially firing a plurality of flashlamps, sequencing from one of said flashlamps to the next at a video rate, and where said camera is a video camera, and where said cybernetic photography system further includes means of recording video output from said video camera.
72. Means and apparatus as described in Claim 69 where, prior to computing said weighted photoquantigraphic summation, at least some of said exposures may be photoquantigraphically blurred.
73. The apparatus of Claim 29, where said display is a first display for viewing by a photoborg, and further including at least a second display for viewing by a second photoborg, said second display also being responsive to an output of said camera.
74. A controller for a camera and at least one light source, such that said camera acquires a pair of pictures in rapid succession with at least one picture acquired with illumination from said light source, and at least one other picture acquired without illumination from said light source.
75. An apparatus which includes a camera and light source, where said apparatus includes means for acquiring a pair of images in rapid succession where one image is acquired with greater influence from said light source than the other image, and where said influence is judged in comparison to a somewhat constant degree of illumination which is external and not controllable by the apparatus.
76. An apparatus which includes a camera and a plurality of light sources, where said apparatus includes means of acquiring a plurality of images where said images differ primarily in the relative amount of influence that each of said plurality of light sources has had on each of said images.
77. A flashlamp for use in production of lightvectorspaces, where said flashlamp includes a synchronization input, where said flashlamp is responsive only to every nth signal received by said synchronization input, and where the first m < n synchronization signals are ignored by said flashlamp, and where m and n are user selectable.
78. A flashlamp as described in Claim 77, where n may be set to 2, and where m may be set to 0 or 1, so that when m = 0 said flashlamp fires on even numbered pulses and when m = 1 said flashlamp fires on odd numbered pulses.
79. A cybernetic photography system using two flashlamps as described in Claim 78, together with a video camera, where one of said flashlamps is activated when even fields of said video camera are acquired, and the other of said flashlamps is activated when odd fields of said video camera are acquired.
80. A controller for a camera and a plurality of light sources, where said camera acquires a plurality of pictures in rapid succession while said controller activates different combinations of one or more of said light sources during the exposure of each of said plurality of separate pictures.
81. A means of combining a plurality of images of the same subject matter where said images differ primarily due to changes in illumination of said subject matter, and where said means comprises the following steps:
  • application of a pointwise nonlinear function to each of said images, where said function is approximately monotonically increasing and has an approximately monotonically increasing slope;
  • pointwise addition of the results obtained from the above step;
  • applying a different pointwise nonlinearity to said sum, where said different pointwise nonlinearity is approximately monotonically increasing and has an approximately monotonically decreasing slope.
82. Means and apparatus including a motion picture camera and a light source fixed to said camera, together with means of controlling said light source in such a manner that it flashes periodically with a period of one half the field rate or frame rate of said motion picture camera, such that said light source affects even frames or fields to a different degree than it affects odd frames or fields, and where said means and apparatus also includes means of producing a new image sequence from the image sequence acquired by said camera, where said new image sequence is made at half the field or frame rate of the original image sequence by pairwise processing of adjacent pairs of pictures, said pairwise processing including at least one of the following:
  • a photoquantigraphic summation;
  • an implementation of split diffusion in lightvectorspace;
  • calculation of a photoquantigraphic vectorspace.
83. A cybernetic hand-held flashlamp including means of synchronizing the flash from said flashlamp with a remote camera, and further including means of repeatedly activating said remote camera.
84. An apparatus which comprises a means of activating a fixed camera by a remote control attached to a hand-held flash unit, and where each time said remote control is activated, said camera briefly admits light to an image recording medium and said flash unit is activated by said apparatus with the correct timing such that said flash unit illuminates at least a portion of the subject matter of said camera during the brief time that said camera admits light to said image recording medium.
85. A cybernetic flashlamp where said cybernetic flashlamp includes a viewfinder means through which a user may look to determine the extent of illumination of said flashlamp.
86. A lightsweep where said lightsweep includes a frame upon which a plurality of light sources is mounted, and means to vary the quantity of illumination produced by each of said light sources through a data entry device affixed to said lightsweep.
87. A camera for use at a fixed location, including a visual indication means by which a person may discern whether or not he or she is within the field of view of said camera, where said means includes sources of light visible from a distance of at least 1000 meters from said camera.
88. A cybernetic photography control system, comprising:
  • a first interface, said first interface for activating said light source,
  • a second interface, said second interface for said remote camera, said remote camera being remote from said light source,
  • a third interface, said third interface for accepting input from said person, said person being near said light source and said person being remote from said remote camera,
  said control system operating said remote camera in response to input from said third interface, said control system also activating said light source during a time at which said remote camera is responsive to light.
89. The control system of Claim 88, where said second interface comprises two parts:
  • an input part of said second interface,
  • an output part of said second interface,
  said output part of said second interface for causing said remote camera to make an exposure, said input part of said second interface for accepting a flash synchronization signal from said remote camera.
90. The control system of Claim 88, where said second interface is an output of said control system, said output to activate said remote camera, said control system performing the following steps in the following order:
  • activating said output,
  • waiting for a brief period of time, such period of time being sufficient for said remote camera to begin being responsive to light,
  • activating said light source.
91. The control system of Claim 88, where said light source is an electronic flashlamp and said activating causes the flashing of said electronic flashlamp.

Description

Note: Descriptions are shown in the official language in which they were submitted.


FIELD OF THE INVENTION
Generally this invention pertains to photographic methods, apparatus, and systems involving multiple exposures of the same subject matter to differing illumination.
BACKGROUND OF THE INVENTION
In photography (and in movie and video production), it is desirable to capture a broad dynamic range from the scene. Often the dynamic range of the scene exceeds that which can be captured by the recording medium. Therefore, it is not possible to rely on a light meter or automatic setting on the camera. Even if the photographer takes a reading from various areas of the scene that he/she considers important, it is seldom that the estimate of exposure will lead to an optimum picture. Results from cameras that attempt to do this automatically (e.g. by assuming the central area is important, and maybe measuring a few other image areas) are usually even worse.
Still-photographers attempt to address this problem by a process called bracketing the exposures. This process involves measuring (or guessing) the correct exposure, and then taking a variety of exposures around this value (e.g. overexposing one by a factor of two, one by a factor of four, and underexposing one by a factor of two, etc). From this set of pictures, they select the single picture that has the best overall appearance. A photographer might typically take half a dozen or so pictures of each pose or each scene. These pictures are usually taken in rapid succession, and the aperture is opened one stop (or 1/3 of a stop) between each exposure and the next, or the shutter speed is equivalently adjusted between one exposure and the next. When the pictures are developed they are usually arranged in a row (say left to right) ordered from lightest to darkest, and one of them is chosen by visual comparison to the others. The remaining pictures are usually disposed of or not used at all.
In situations where there is high contrast, extended-response film may be used. Many modern films exhibit an extended response and are capable of capturing a broad dynamic range. Extended response film was invented by Charles Wyckoff, as described in U.S. Pat. No. 3,663,228. In variations of the Wyckoff film in which the different exposures are separately addressable, it is also possible to apply a new imaging processing means and apparatus, as described in U.S. Pat. No. 5,828,793.
Often in an indoor setting, there is a window in the background, and we wish to capture both the indoor foreground (lit by low-power household lamps) and the outdoor scene (which might be lit by bright sunlight). This situation is usually dealt with by adjusting the lighting. Often a fill-flash is used, sometimes leading to unnatural pictures. It is difficult to tell exactly how much fill-flash to use, and excessive fill-flash leads to visually unpleasant results, while insufficient fill-flash fails to reduce the dynamic range of the scene sufficiently. Still-photographers address this problem, again, by bracketing, but now they must bracket over two variables: (1) the exposure for the background lighting; and (2) the exposure for the flash. This is generally done by noting that the shutter speed does not affect the flash exposure but only affects the exposure to background light, while the aperture affects both. Thus the photographer will expose for a variety of both shutter speeds and apertures. Alternatively, a flash with adjustable output may be used, and the photographer will make a variety of exposures attempting to bracket through all possible combinations of flash output and exposure to natural light. While there are many automatic systems that combine "intelligent" flash and light meter functionality, the results are often unacceptable, or at best, still fall short of the results that can be obtained by bracketing over the two variables - flash and natural light.
Alternatively, especially in commercial photography, movie production, or video production, great effort is expended to reduce the dynamic range of the scene. Large sheets of light-reducing dark grey transparency material are used to cover windows. Powerful lamps are temporarily set up in office or home interiors. Again, it is very difficult to adjust the balance between the various lamps. Most professional photographers bracket the exposures over a variety of combinations of lamp output levels. As one can imagine, the number of possible permutations grows astronomically with the number of separate lights. Furthermore, the dimension of color is often involved. It is common, for example, in pictures used in annual reports or magazine advertisements, for different colored filters to be placed over each lamp. For example, it is common to cover the lamps in the background with strongly colored (e.g. dark blue) filters. The exact effect is not predictable. Most professional photographers test with Polaroid film first, but Polaroid is not good enough for the final use. A high-quality film is inserted in place of the Polaroid, and the same exposure is made there. Because of differences between the response of the two films, it is still necessary to bracket over the balance of the various lights. Furthermore, it is impossible to predict the exact wishes of the client, or other possible end uses of the image, and it is usually necessary to take both "normal" pictures with no filters on the lights, as well as "dramatic" pictures with colored lights. Therefore, it is also common to bracket over colors (e.g. take one picture with a bright yellow background, another with plain white, and another with deep blue, etc). Thus the resulting shot can appear in both a more traditional publication and a more artistic/advertising-related publication. This dual-use can save the photographer from having to do a re-shoot when another use of the image arises, or should the client have a slight change of heart, as is often the case.
As an alternative to using many different lights, it is common in commercial photography to leave the shutter open and move around through the scene with a hand-held flash unit. The photographer aims the flash unit at various parts of the scene and triggers the flash manually. Apart from its economy (only one flash lamp is needed, rather than tens or hundreds of flashlamps that would be needed to create the same effect with a single short exposure), the method is often preferred for certain applications even when more flash lamps are available to the photographer. The method, which is called painting with Light, or, more succinctly, Lightpainting, has a certain expressive artistic quality that makes it popular among commercial photographers, particularly in the advertising industry. The reason for this appeal is that the light sources can be placed right in view of the camera. At one instant the photographer may stand right in full view of the camera and point the light to the side, flashing on some object in the scene. The photographer does not show up in the picture because the light is aimed away from his/her body, and the camera only "sees" the object that the flash is aimed at. If that were the only flash of light, the picture would be entirely black except for that one object in the scene. However, the photographer moves through the scene and illuminates many different parts of the scene in a similar way. Thus, using a single flash lamp, the scene can be illuminated in ways that are simply not possible in a single short exposure, even with access to an unlimited number of flash lamps. This is because a plurality of lamps placed in the scene at the same time would illuminate each other, or, for example, light from one flashlamp may illuminate the light stand upon which another flashlamp is attached.
Often, in lightpainting, various colored filters are held over the lamp each time it is flashed.
Soft-focus and diffusion filters are frequently used in commercial photography. These filters create pleasing halos around specular highlights in the scene. They may or may not reduce resolution (e.g. the ability to read a newspaper positioned in the scene), since the image detail can remain yet be seen through a "soft and dreamy" world. It is often desirable to either blur or diffuse some areas of the scene but not others. Sometimes a soft-focus filter with a hole in the middle is used to blur the edges of the image (usually corresponding to background material) while leaving the center (usually the main subject matter) unaffected.
Another creative effect that has become quite popular in commercial photography is called split diffusion. Split-diffusion is created by separating the control of the foreground lighting from the control of the background lighting. The foreground lights are turned on and one exposure is made. The foreground lights are turned off, and the background lights are turned on. A diffusion filter is placed over the lens and a second exposure is made on the same piece of film. The split-diffusion effect may also be created with flash. The foreground flashlamps are activated, the diffusion filter is moved over the lens, and the background flashlamps are then activated.
Split-diffusion is also routinely applied within the context of lightpainting. The diffusion filter is often moved by an assistant, or electrically, back and forth in front of the lens or away from the lens, while the photographer flashes at different parts of the scene, some flashes with and some without the diffusion.
SUMMARY OF THE INVENTION
The invention facilitates a new form of visual art, in which a fixed point of view is chosen for the base station camera, and then, once the camera is secured on a tripod, a photoborg can walk around and use various sources of illumination to sequentially build up an image layer-upon-layer in a manner analogous to paint brushes upon canvas, and the cumulative effect embodied therein. To the extent that the artist's light sources can be made far more powerful than the natural ambient light levels, the artist may have a tremendous degree of control over the illumination in the scene. The resulting image is therefore a result of what is actually present in the scene, together with a potentially very visually rich illumination sculpture surrounding it. Typically the illumination sources that the artist carries are powered by batteries, and therefore, owing to limitations on the output capabilities of these light sources, the art is practiced in spaces that may be darkened sufficiently, or, in the case of outdoor scenes, at times when the natural light levels are least.
By "photoborg", what is meant is one who is either a photographic cyborg
(cyber-
netic organism), a lighting technician, a photographer, or an artist using the
apparatus
of the invention. By virtue of the communications link between the photoborg
and
the base station, the photoborg may move through the space, including the
space in
view of the camera, and the photoborg may selectively illuminate objects that
are at
least partially within the field of view of the camera. Typically the
photoborg will
produce multiple exposures of the same scene or object. These multiple
exposures
7

CA 02316451 2000-08-02
are typically each stored as separate files, and are typically combined at the
base
station, either by remote control of the photoborg (e.g. by way of wearable
computer
remotely logged into the base station computer), or by a director or manager
at the
base station.
In a typical application, the artist may, for example, position the camera upon a hillside, or on the roof of a building, overlooking a portion of a city. The artist may then roam about the city, walking down various streets, and use the light sources to illuminate various buildings one-at-a-time. Typically, in order that the wearable or portable light sources be of sufficient strength compared to the natural light in the scene (e.g. so that it is not necessary to shut off the electricity to the entire city to darken it sufficiently that the artist's light source be of greater relative brightness) some form of electronic flash is used as the light source. In some embodiments of the invention, an FT-623 lamp is used, housed in a lightweight 30 inch highly polished reflector, with a handle which allows it to be easily held in one hand. The communications infrastructure is established such that the camera is only sensitive to light for a short time period (e.g. typically approximately 1/500 of a second), during the instant that the flash lamp produces light. In this manner a comparatively small lamp (e.g. a lamp and housing which can be held in one hand) may illuminate a large skyscraper or office tower in such a manner that, in the final image, the flashlamp is the dominant light source, compared to fluorescent lights and the like that might have been left turned on upon the various floors of the building, or to moonlight, or light from streetlamps which cannot be easily turned off.
Typically, the photoborg's wearable computer system comprises a visual display which is capable of displaying the image from the camera (typically sent wirelessly over a data communications link from the computer that controls the camera). Typically, also, this display is updated with each new exposure. The display update is typically switchable between a mode that shows only the new exposure, and a cumulative mode that shows a photoquantigraphic summation over time to show the new exposure photoquantigraphically added to previous exposures. This temporally cumulative display makes the device useful to the photoborg because it helps in the envisioning of a completed lightmodule painting. The temporally cumulative display is also useful in certain applications of the apparatus to gaming. For example, a game can be devised in which two players compete against each other. One player may try to paint the subject matter before the camera red, and the other will try to paint the subject matter blue. When the subject matter is an entire cityscape as seen from a camera located on the roof of a tall building, the game can be quite competitive and interesting. Additionally, photoborgs can either work cooperatively on the same team, or competitively, as when two teams each try to paint the city a different color, and "claim" territory with their color. In some embodiments of the game the photoborgs can also shoot at each other with the flashguns. For example, if a photoborg from the "red" team "paints" a blue-team photoborg red, he may disable or "kill" the blue-team photoborg, shutting down his flashgun. In other embodiments, the "kill" and "shoot" aspects can be removed, in which case the game is similar to a game like squash, where the opponents work in a collegial fashion, getting out of each other's way while each side takes turns shooting. The red team flashgun(s) and blue team flashgun(s) can be fired alternately by a free-running base-station camera, or they can all fire together. When they fire alternately there is no problem disambiguating them. When they fire together, there is preferably a blue filter over each of the flashguns of the blue team, and a red filter over each of the flashguns of the red team, so that flashes of light from each team can be disambiguated.
The wearable computer is generally controllable by the photoborg through a chording keyboard mounted into the handle of each light source, so that it is not necessary to carry a separate keyboard. In this manner, whichever light source the photoborg plugs into the body-worn system becomes the device for controlling the process. Typically, also, exposures are maintained as separate image files in addition to a combined cumulative exposure that appears on the photoborg's screen. The exposures being in separate image files allows the photoborg to selectively delete the most recent exposure, or any of the other exposures previously combined into the running sum on the screen. This capability is quite useful, compared to the process of painting on canvas, where one must paint over mistakes rather than simply being able to turn off brushstrokes. Furthermore, exposures to light can be adjusted either during the shooting or afterwards, and then re-combined. The capability of doing this during the shooting is an important aspect of the invention, because it allows the photoborg to capture additional exposures if necessary, and thus to remain at the site until a satisfactory final picture is produced. The final picture, as well as the underlying dataset of separately adjustable exposures and the weighting that was selected to generate the final picture, is typically sent wirelessly to other sites (e.g. on the World Wide Web) so that others (e.g. art directors or other collaborators) can manipulate the various exposures and combine them in different ways, and send comments back to the photoborg by email. This additional communication facilitates the collection of additional exposures if it turns out that certain areas of the scene or object could be better served if they were more accurately or more expressively described in the dataset.
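This bookkeeping lends itself to a simple data structure. The following is a minimal sketch of one way it might be organized (not taken from the patent; the class and method names are ours, and the cube-law linearization is the one this description later suggests for typical cameras): each exposure is kept as a separately weighted layer, so that any layer can be deleted or re-weighted and the running photoquantigraphic sum recomputed.

```python
import numpy as np

class ExposureStack:
    """Hypothetical bookkeeping for separately stored exposures."""

    def __init__(self):
        self.layers = []  # list of [linearized_image, weight] pairs

    def add(self, img, weight=1.0):
        # Linearize to an estimate of quantity of light (cube law assumed).
        q = (img.astype(np.float64) / 255.0) ** 3
        self.layers.append([q, weight])

    def delete(self, index):
        # "Turn off a brushstroke": drop one exposure from the running sum.
        self.layers.pop(index)

    def set_weight(self, index, weight):
        # Re-balance an exposure after the fact, then re-render.
        self.layers[index][1] = weight

    def render(self):
        # Weighted photoquantigraphic sum, mapped back to the display domain.
        q_sum = sum(w * q for q, w in self.layers)
        return (np.clip(q_sum, 0.0, 1.0) ** (1.0 / 3.0) * 255.0).astype(np.uint8)
```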
Each of these exposures is called a lightstroke. A lightstroke is analogous to an artist's brushstroke, and it is the plurality of lightstrokes that are combined together that give the invention described here its unique ability to capture the way that a scene or object responds to various forms of light.
Furthermore, a particular lightstroke may be repeated (e.g. the same exposure may be repeated in almost exactly the same way, holding the light in the same position, more than once). These seemingly identical lightstrokes may be averaged together to obtain a single lightstroke of improved signal to noise ratio. This signal averaging technique of repeating a given lightstroke may also be generalized to the extent that the lamp output may be varied for each repetition, but otherwise held in the same position and pointed in the same direction at the scene. The resulting collection of differently exposed pictures may be combined to produce a lightstroke that captures a broad dynamic range.
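As a concrete illustration, the averaging step might look like the following sketch (our own, assuming the cube-law linearization suggested later in this description): nominally identical lightstrokes are linearized, averaged in the photoquantigraphic domain, and mapped back for display.

```python
import numpy as np

def average_lightstrokes(repeats):
    """repeats: uint8 frames of nominally identical lightstrokes."""
    # Average in the linear (quantity-of-light) domain rather than in pixel
    # values, so noise falls roughly as 1/sqrt(n) without biasing exposure.
    qs = [(img.astype(np.float64) / 255.0) ** 3 for img in repeats]
    q_mean = np.mean(qs, axis=0)
    return (np.clip(q_mean, 0.0, 1.0) ** (1.0 / 3.0) * 255.0).astype(np.uint8)
```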
A typical attribute of the images produced using the apparatus of the invention is that of extreme exposure. Some portions of the image are often deliberately overexposed by as much as 10 f-stops or more, while other areas of the image are deliberately underexposed. In this way, selected features of the scene or object are emphasized. Typically, pictures produced using the apparatus of the invention span a very wide range of colorspace. Typically the deliberate overexposure is combined with very strongly saturated colors, so that the portions of the image extend to the boundaries of the color gamut. Accordingly, what is observed in some areas of the images is extreme shadow detail that would not show up in a normally exposed picture. In other areas of the picture, one might see extreme highlight details that would not show up in a normally exposed picture. Thus in order to capture information pertaining to the extreme dynamic range necessary to be able to render images of such extreme exposure range, lightstrokes of extended dynamic range are extremely useful. Moreover, lightstrokes of extended dynamic range may be useful for other reasons such as the synthesis of split-diffusion effects, which become more numerically stable and immune to quantization noise or the like when the input lightstrokes have extended dynamic range.
Finally, it may, at times, be desirable to have a real or virtual assistant at the camera, to direct/advise the photoborg. In this case, the photoborg's viewfinder, which presents an image from the perspective of the fixed camera, also affords the photoborg with a view of what the assistant sees. Similarly, it is advantageous at times that the assistant have a view from the perspective of the photoborg. To accomplish this, the photoborg may have a second camera of a wearable form. Through this second camera, the photoborg allows the assistant to observe the scene from the photoborg's perspective. Thus the photoborg and assistant may collaborate by exchange of viewpoints, as if each had the eyes of the other.
The photoborg's camera may alternatively be attached to and integrated with the light source (e.g. flashlamp), in such a way that it provides a preview of the coverage of the flashlamp. Thus when this camera output is sent to the photoborg's own wearable computer screen, a flashlamp viewfinder results. The flashlamp viewfinder allows the photoborg to aim the flashlamp, and allows the photoborg to see what is included within the cone of light that the flashlamp will produce. Furthermore, when viewpoints are exchanged, the assistant at the main camera can see what the flashlamp is pointed at prior to activation of the flash.
Typically there is a command that may be entered to switch between local mode (where the photoborg sees the flash viewfinder) and exchanged mode (where the photoborg sees out through the main camera and the assistant at the main camera sees out through the photoborg's typically wearable camera).
In many embodiments of the invention the flashlamp is wearable. The flashlamp may also be an EyeTap (TM) flashlamp. An EyeTap flashlamp is one in which the effective source of light is co-incident with an eye of the wearer of the flashlamp.
One aspect of the invention allows a photographer to use a flashlamp and always end up with the ability to produce a picture where there is just the right proportion of flash in relation to the total exposure, and where the photographer may even change the apparent amount of flash after a set of basis pictures has been taken. Using the apparatus of the invention, the photographer simply pushes a button and the apparatus takes, for example, a picture at a shutter speed of 1/250 sec with the flash, then automatically turns off the flash and quickly takes another picture at 1/30 sec. The look and feel of the system is no different than an ordinary camera, and the fact that two or more pictures are taken need not be evident to those being photographed, or to the photographer, since the flash will only fire once, and the second click of the camera shutter, if it is of a mechanical variety, is seldom perceptible if it happens quickly after the first. Preferably a non-mechanical camera is used so that a possibly distracting double or multiple clicking is not perceptible.
After acquiring this pair of "basis pictures", various combinations of the flash and non-flash exposures may be synthesized and displayed on a computer screen, either after the camera is brought to a base station for processing, or directly upon the screen of a wearable computer that the photographer is using, or perhaps directly inside the viewfinder of the camera itself, if it has an electronic viewfinder. The picture that best matches personal preference may be selected and printed. Thus the desired ratio of flash to ambient light can be selected AFTER the basis pictures have been taken. Furthermore, color correction can be done on the flash and ambient components of the picture separately (automatically or manually). If the picture was taken in an office, the greenish cast of the fluorescent lights can be removed without altering the face of someone lit mostly by the flash.
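One way to picture this after-the-fact selection, as a hedged sketch rather than the patent's actual processing chain: treat the two basis pictures as two lightstrokes (the lightspace view introduced below), linearize each, and form any desired weighted mixture. The function names and cube-law linearization are assumptions of ours.

```python
import numpy as np

def to_q(img):
    # Stand-in inverse camera response (cube law, as suggested later).
    return (img.astype(np.float64) / 255.0) ** 3

def from_q(q):
    # Map an estimated quantity of light back to the display domain.
    return (np.clip(q, 0.0, 1.0) ** (1.0 / 3.0) * 255.0).astype(np.uint8)

def rebalance(flash_img, noflash_img, w_flash, w_ambient):
    """Synthesize one of the 'various combinations' of the basis pictures,
    choosing the flash-to-ambient ratio after the exposures were made."""
    return from_q(w_flash * to_q(flash_img) + w_ambient * to_q(noflash_img))
```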
Furthermore, the background may be colored for interesting effects. For example, suppose the background is mostly sky. The flash image may be left unaltered, resulting in a normal color balance for flesh tones, and the sky may be made a nice blue color, even though it might have been grey in reality. This effect works really nicely for night time portraits where the sky in the background would otherwise tend to appear green or dark brown, and changing it to a deep blue by traditional global color balance adjustment of the prior art would lend an unpleasant blue cast to the faces of the people in the picture.
Each of the two basis pictures may be generated in accordance with a Wyckoff principle ("definition enhancement") as follows: the flash may be activated multiple times. Without loss of generality, consider an example where the flash is activated 3 times with low, medium and high output levels, and where 3 non-flash pictures are also taken in rapid succession with three different exposures as well. Two basis images of extended dynamic range are then synthesized from each set of three pictures using the Wyckoff principle.
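A minimal sketch of such a Wyckoff-style synthesis follows (ours, not the patent's algorithm; the cube-law linearization and the certainty weighting are assumptions): each of the three pictures is linearized, normalized by its relative output level, and the estimates are blended with weights that favour mid-tones over clipped or noisy pixels.

```python
import numpy as np

def wyckoff_merge(imgs, outputs):
    """imgs: uint8 frames at low/medium/high output; outputs: relative levels."""
    num = np.zeros(imgs[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, k in zip(imgs, outputs):
        v = img.astype(np.float64) / 255.0
        q = (v ** 3) / k           # light estimate, normalized to unit output
        c = v * (1.0 - v)          # certainty: trust mid-tones most
        num += c * q
        den += c
    return num / np.maximum(den, 1e-9)   # extended-dynamic-range quantity q
```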
More generally, any number of pictures with any particular ratio of flash and ambient exposure may be collectively used to estimate a two-dimensional manifold in the MN-dimensional picture space defined by a picture of dimensions M by N.
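In notation of our own choosing (the patent states this only in prose), a picture of dimensions M by N is a point in the MN-dimensional picture space, and the family of pictures generated by varying the ambient and flash exposures traces out a two-dimensional manifold:

```latex
\[
  p(k_a, k_f) \;=\; f\!\bigl(k_a\,q_a(x,y) + k_f\,q_f(x,y)\bigr)
  \;\in\; \mathbb{R}^{MN},
  \qquad (k_a, k_f) \in \mathbb{R}_{\ge 0}^{2},
\]
% where q_a and q_f are the photoquantigraphic quantities of the ambient and
% flash lightstrokes and f is the camera's response function.
```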
The major aspect of this invention involves the lightpainting method described earlier. The invention permits the photographer to capture the result of exposure to each flash of light (called a "lightstroke", analogous to an artist's brush stroke) separately. The lightstrokes can be electronically combined in various ways before or after the photographer has packed up the camera and left the scene. In lightpainting, photographers often place colored filters over the flash, to simulate a scene lit by multiple sources of different colored lights. Using the apparatus of the invention, no filters are needed, because the color of each lightstroke may be assigned electronically after the photographer has left the scene, although optional filters may still be used in addition to electronic colour selection. Therefore, the photographer is not necessarily committed to decisions about the choice of color, or the relative intensity of the various lightstrokes, and is also free to make decisions regarding whether or not to apply, and in what extent to apply, split-diffusion, after leaving the scene.
These collections of lightstrokes are referred to as a "lightspace". The
image pairs
in the above flash/no-flash example are a special case of a lightspace where
the flash
picture is one lightstroke and the non-flash picture is another. In the case
of black
and white (greyscale) images, the lightspace is homomorphically equivalent to
a vector
space, where the coefficients in the vector sum are a scalar field. This
process is a
generalization of homomorphic filtering, where a pointwise transfer function
is applied
to each entire image, a weighted sum is taken, and then the inverse transfer
function
is applied to this sum. In practice, with typical cameras, a sufficiently
pleasing image
results if each image is cubed, the results added together with the desired
weighting,
and the cube root of the sum is computed. In the case of color images, the
vector
space is generalized to a module space, for colour coordinate transformations
and
various filtering, blurring, and diffusion operations. Alternatively the
process may be
regarded as three separate vector spaces, one for each colour channel.
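In symbols, and merely restating the cube-root recipe above, with v_i the input images and w_i the desired weights:

v_out = ( sum_i w_i * v_i^3 )^(1/3)

so that f(v) = v^3 plays the role of the inverse transfer function, the weighted sum is taken in the approximately light-linear domain, and f^{-1}(q) = q^{1/3} maps the sum back to a picture.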
Another aspect of the invention is that the photographer need not work in
total
darkness as is typically the case with ordinary lightpainting. With a typical
elec-
tronic flash, and even with a mechanical shutter (as is used with photographic
film)
the shutter is open for only 1/500 sec or so for each "lightstroke". Thus the
lightpaint-
ing can be done under normal lighting conditions (e.g. the room lights may
often
be left on). This aspect of the invention pertains both to traditional lightpainting
(where the invention allows multiple flash-synched exposures to be made on the same
piece of film) and to the use of separate recording media (e.g. separate film
frames or electronic image captures) for each lightstroke. The invention makes use of
use of
innovative communications protocols and a user-interface that maintain the
illusion
that the system is immune to ambient light, while requiring no new skills
beyond
that of traditional lightpainting. The communications protocols typically
include a
full-duplex radio communications link so that a button on the flash sends a
signal to
the camera to make the shutter open, and at the same time, a radio wired to
the flash
sync contacts of the camera is already "listening" for when the shutter opens.
The
fact that the button is right on the flash gives the user the illusion that he
or she is
just pushing the lamp test button of a flash as in normal lightpainting, and
the fact
that there is really any communications link at all is hidden by this
ergonomic user
interface.
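A minimal single-machine sketch of this handshake is given below, using TCP sockets on localhost in Python; the message names, port, and transport are illustrative assumptions standing in for the radio link, not the actual protocol of the invention:

import socket, threading, time

PORT = 50007  # assumed port, for illustration only

def base_station():
    srv = socket.socket()
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    if conn.recv(16).strip() == b"OPEN":   # inbound channel: button pressed
        print("base: shutter opens")       # stands in for the shutter opening
        conn.sendall(b"SYNC\n")            # outbound channel: sync contacts close
    conn.close()
    srv.close()

threading.Thread(target=base_station, daemon=True).start()
time.sleep(0.1)                            # let the base station start listening

borg = socket.create_connection(("127.0.0.1", PORT))
borg.sendall(b"OPEN\n")                    # the button on the flash
if borg.recv(16).strip() == b"SYNC":
    print("borg: flash fires")             # fires only once the shutter is open
borg.close()

The user presses one button and sees one flash; the round trip that guarantees the shutter is open before the flash fires is entirely hidden, which is the ergonomic illusion described above.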
The invention also includes a variety of options for making the lightpainting
task
easier and more controlled. These include such innovations as a means for a photoborg
to determine if he or she can be "seen" by the camera (e.g. means to indicate
extent
of camera's coverage), various compositional aids, means of providing
workspace-
illumination that has no effect on the picture, and some innovative light
sources.
Other innovations such as EyeTap cameras, EyeTap light sources, etc., and
further
means of collaboration among a community of photoborgs are also included in
the
invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail, by way of examples which
in
no way are meant to limit the scope of the invention, but, rather, these
examples will
serve to illustrate with reference to the accompanying drawings, in which:
FIG. 1a is a diagram of a typical illumination source used in conjunction with
the
invention, comprising a data entry or command entry device such as a
pushbutton
switch, which, when pressed, causes the lamp to flash, but not directly;
instead the
lamp flashes as a result of a bi-directional communications protocol.
FIG. 1b is a diagram showing multiple (e.g. six) lamps mounted to a single
hand
grip system.
FIG. 1c is a diagram showing multiple series connected capacitor banks each
with
separate oscillator and power source.
FIG. 1d is a diagram showing a flashlamp with keyer.
FIG. 1e is a diagram showing a diversity receiver and eyeglass display.
FIG. 2a is a diagram of the camera and base station that receives the signal
from
the light source shown in FIG. 1, wherein the signal causes the camera shutter
to
open, and it is the opening of the camera shutter which sends back a
confirmation
signal to the illumination source of FIG. 1, causing the flash to be
triggered.
FIG. 2b is a diagram of the camera and base station that uses a flash detector
instead of an explicit inbound radio channel for synchronization.
FIG. 2c is a diagram of typical base station outbound transmitter setup.
FIG. 3 shows a typical usage pattern of the invention in which the fixed
(static)
nature of the camera and base station is emphasized by way of concentric
circles
denoting radio waves sent to (inbound) it and radio waves sent from (outbound)
it,
and where a hand-held version of the light source depicted in FIG. 1 is
flashed three
times at three different locations.
FIG. 4 shows a detailed view of the user-interface typical of an instrument
depicted
in FIG. 1, where the data entry device comprises a series of pushbutton
switches and
where there is also a data display screen affixed to the source of
illumination.
FIG. 5 shows an example where pictures of dimension 640x480 are captured by
repeated flashing of the apparatus of FIG. 1 at different locations, where the
picture
from each exposure is represented as a point in a 307200 (640x480) dimensional
photoquantigraphic imagespace.
FIG. 5b shows how a CEMENTer works.
FIG. 5c shows how a CEMENTer works with files that are already photoquanti-
graphic.
FIG. 6 shows how some of these pictures, which are called lightvectors when
represented in photoquantigraphic imagespace, may fall in a subspace of the
307200
dimensional photoquantigraphic imagespace.
FIG. 7a shows an example of a two dimensional subspace of the 307200 dimen-
sional lightvectorspace, where the corresponding pictures associated with the
lightvec-
tors in this space are positioned on the page to correspond with their
coordinates in
the plane of the page.
FIG. 7b shows how the three pictures along the first row of the
photoquantigraphic
subspace of Fig. 7a are generated.
FIG. 8 shows a photoquantigraphic coordinate transformation, which appears as
a coordinate transformation of the two dimensional space depicted in FIG. 7a.
FIG. 9a shows the method by which pictures are converted to lightvectors by ap-
plying a linearizing inverse transfer function, after which the lightvectors
are added
together (possibly with different weighting) and the resulting lightvector sum
is con-
verted back to a picture by way of the forward transfer function (inverse of
that used
to convert the incoming images to lightvectors).
FIG. 9b shows the calculation of a photoquantigraphic sum in pseudocolor mod-
ulespace.
FIG. 9c shows photorendering (painting with lightmodules), e.g. calculation of
a
photoquantigraphic sum in pseudocolor modulespace.
FIG. 9d shows a phlashlamp made from 8 flashlamps, used to generate some of
the lightvectors of Fig. 9c.
FIG. 9e shows lightspace rendering in CMYK colorspace.
FIG. 9f shows the inverse gamut warning aspect of the invention.
FIG. 10a shows a general philter operation, implemented by applying a photo-
quantigraphic filter (e.g. by converting to lightvectorspace, filtering, and converting
back).
FIG. 10b shows the implementation of split diffusion using a philter on one
lightvectorspace quantity and no filter on the other quantity.
FIG. 10c shows an image edit operation, such as a pheathering operation, imple-
mented by applying a photoquantigraphic edit operation (e.g.
photoquantigraphic
feathering).
FIG. 10d shows a philter operation applied over an ensemble of input images.
FIG. 11 shows how the estimate of a single lightvector (such as v4 of Fig. 5)
may
be improved by analyzing three different but collinear lightvectors.
FIG. 12a shows the converse of Fig. 11, namely to illustrate the fact that to
generate a Wyckoff set (as strongly colored lightvectors approximately do over
their
color channels), one desires to begin with a great deal of dynamic range, as
might be
captured by a Wyckoff set.
FIG. 12b attempts to make this point of Fig. 12a all the more clear by showing
that a strongly colored filter exhibits an approximation to the Wyckoff effect
by virtue
of the different degrees of attenuation in different spectral bands of a color
camera.
FIG. 12c shows a true Wyckoff effect implemented for a scene that is monochro-
matic and a color camera with strongly colored filter.
FIG. 13a shows the EyeTap (TM) flashlamp or phlashlamp aspect of the inven-
tion.
FIG. 13b shows a wide angle embodiment of the EyeTap (TM) flashlamp or
phlashlamp.
FIG. 14a shows an EyeTap (TM) camera with planar diverter.
FIG. 14b shows an EyeTap (TM) camera with curved diverter which is also part
of the optical system for the camera.
FIG. 15 shows an embodiment of the finder light or hiding light, which helps a
photoborg determine where the camera is, or whether or not he or she is hidden
from
view of the camera.
FIG. 16 shows an embodiment of the lightsweep (pushbroom light).
FIG. 17a shows an embodiment of the flash sequencer aspect of the invention.
FIG. 17b shows an embodiment of the invention for acquiring lightvector
spaces,
using special flashlamps that do not require a sequencer controller.
FIG. 18 shows the user interface to a typical session of the Computer Enhanced
Multiple Exposure Numerical Technique (CEMENT) program.
While the invention shall now be described with reference to the preferred em-
bodiments shown in the drawings, it should be understood that the intent is
not to
limit the invention only to the particular embodiments shown, but rather to
cover all
alterations, modifications and equivalent arrangements possible within the
scope of
appended claims.
In all aspects of the present invention, references to "camera" mean any
device or
collection of devices capable of simultaneously determining a quantity
proportional
to the amount of light arriving from a plurality of directions and/or at a
plurality of
locations.
References to "photography", "photographic", and the like, may also be taken
to include "videography", "videographic", and the like. Thus the final
result may
be a video or other sequence of images, and need not be limited to a single
picture.
Indeed, the term "picture" may mean a motion picture, in addition to just
simply a
still picture.
Similarly references to "data entry device" shall not be limited to
traditional
keyboards and pointing devices such as mice, but shall also include input
devices
more suitable to the "wearable computers" of the invention, as well as to
portable
devices. Such input devices may include both analog and digital devices as
simple
as a single pushbutton switch or as sophisticated as a voice controlled or
brainwave,
respiration, or heart rate controlled device, or devices controlled by a
combination
of these or other biosignals. The input devices may also include possible
inferences
made as to when to capture a picture or trigger an event, in a manner that
does not
necessarily require or involve conscious thought or effort.
Moreover, references to "inbound channel" shall not be limited to radio com-
munications devices as depicted in the drawings through the use of the
standard
international symbol for antenna, but shall also include communications over
wire
(twisted pair, coax, or otherwise), infrared communications, or any other
communi-
cations medium from a user to the camera base station. References to base
station
also do not limit it to a station that is permanent or semi-permanent; base
stations
may include mobile units mounted on wheels or vehicles, and units mounted or
carried
on other persons.
Similarly, references to "outbound channel" shall not be limited to radio
commu-
nication, as depicted in the drawings, but may also include other means of
commu-
nication from the camera to the user, such as the ability of a user to hear
the click of
a camera shutter, perhaps in conjunction with steps taken to make the sound of
the
shutter louder, or to add other audible, visual, or the like, events to the
opening of
the shutter. The "outbound channel" may also include means by which a
photoborg
can confirm that a camera shutter is open for an extended period of time, or
means
of making a photoborg aware of the progression of time for which a shutter is
open.
The use of "shutter" is not meant to limit the scope of the invention. While
the
drawings depict a mechanical shutter with solenoid, the invention may be (and
is
more often) practiced with electronic cameras that do not have explicit
shutters, but,
rather, the shuttering operation may comprise electronic control of a sensor
array, or
simply the selection of the appropriate frame(s) from a video sequence. Thus
when
reference is made to the time during which the camera is "sensitive to light", what is
meant is that there is an intent or action that collects information from the
camera
during that time period, so that this intent or action itself serves to take
the place of
an actual shutter.
Likewise, while the drawings and explanation involve two separate communica-
tions channels for the inbound and outbound channels, operating at different
radio
frequencies, it will be understood that the invention is typically practiced
using a
single bidirectional communications link implemented via TCP/IP communications
protocols between a wearable computer system and a stationary computer at the
base
station, but even this method of communication is not meant to limit the scope
of the
invention. The communications channel could comprise, for example, a single
piece
of string or rope, where the user tugs on the rope to cause a picture to be
taken, and
the rope is tugged back by the camera to activate the user's light source.
Moreover,
this communication need not be bidirectional, and may, for example, be
implemented
simply by having suitable timing between the flash and the camera, so that a
signal
need only be sent from the flash to the camera, and then the flash may be
fired at
the appropriate interval, by a timing circuit contained therein, so that there
is no
explicit outbound communications channel or need for one. In this case, it
will be
understood that the outbound communications channel may comprise synchronized
timing devices, at least one of which is associated with a photoborg's
apparatus and
at least one of which is associated with a camera at the base station.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS,
WITH REFERENCE TO DRAWINGS
In using the invention to selectively and sequentially illuminate portions of
a scene
or object, a photoborg (e.g. a photographer, artist, lighting technician,
hobbyist,
professional or other user of the invention) will typically point a light
source at some
object or portion of the scene in view of the camera, and issue a command
through
a wearable computer system, or through a portable system of some sort, to
acquire a
picture which is affected, at least in part, by the light source.
In the embodiment presented in Fig 1 this command is issued by pressing button
110. Button 110 turns on radio transmitter 120, causing it to send a signal
out
through its antenna 130. Transmitter 120 is designated as "iTx" where "i"
denotes
inbound. The inbound pathway is the communications pathway from a photoborg to
the base station.
The signal from iTx 120 is received at a base station and remote camera,
depicted
in Fig 2a, by antenna 210, where it is demodulated by inbound receiver iRx
220.
Receiver 220 may be something as simple as an envelope detector, or an LM567
tone decoder that activates a relay. In some embodiments it is a
communications
protocol running over amateur packet radio, using a terminal node controller
in KISS
mode (TCP/IP). In a commercially manufactured embodiment, however, it would
preferably be a communications channel that does not require a special radio
license,
or the like. In the simple example illustrated here, inbound receiver 220
activates
shutter solenoid 230.
What is depicted in this drawing is, for illustrative purposes, approximately
typical
of a 1940s press camera fitted with the standard 6 volt solenoid shutter
release.
Preferably, however, the synchronization does not involve a mechanical
shutter, and
thus no shutter contacts are actually involved. Instead a sensor array is
preferably
used. A satisfactory camera having a sensor array is the Kodak (TM) DCS-260.
Continuing with the illustrative embodiment, shutter flash synchronization con-
tacts 240 (denoted "X" in Fig. 2a) activate outbound transmitter oTx 250 causing
causing
a signal to be sent out antenna 260. This outbound signal from the base
station
is received by a photoborg by way of antenna 140. This received signal causes
the
outbound receiver oRx 150 of the outbound channel to activate an electronic
flash,
typically via an optocoupler such as a Motorola MOC3020 or the like, which is
typi-
cally connected to the synchronization terminals 160 (denoted "X" in Fig. 1)
of the
flash unit 170. An opening 180 on the light source allows light to emerge to
illuminate
the scene or objects that the source is pointed at.
It should be noted that many older cameras have a so-called "M" sync contact
which was meant for firing magnesium flashbulbs. This sync contact fires
before the
shutter is open. It is often useful to use such a sync contact with the
invention as it
may account for delay in the outbound channel, or equivalently allow for a
form of
pulse compression such as a chirp to be sent over the outbound channel; so
that the
invention will enjoy a greater robustness to noise and improved signal range.
Similarly
where the camera is an electronic imaging device, it may be desirable to
design it so
that it provides a sync signal in advance of becoming sensitive to light. The
advance
sync may then be used together with pulse compression.
Fig. 1b illustrates the ganging together of multiple lamps 186 held by a
single
hand grip 191. Satisfactory lamps are small battery powered lamps as
manufactured
by Lumedyne. Lamps are preferably fitted with a good directional reflector 2H.
A
satisfactory reflector is the Sports Reflector manufactured by Norman.
Pushbuttons
110 form a keyer. Preferably there are a plurality of pushbuttons 110 so that
a user
holding hand grip 191 can key in any alphanumeric command sequence, to a
wearable
computer system, including a sequence such as CONTROL-ALT-DEL to reboot the
wearable computer if necessary, without having to put down the apparatus
depicted
in Fig. lb. The keyer preferably comprises at least 5 pushbutton switches 110,
which
provide possibly thousands of different symbols and commands, although only a
few
such commands will commonly be used. The buttons 110 are preferably located in a
in a
nice handle, one for each finger, and one or more for the thumb.
A hand strap 111 helps the user hold and operate the pushbuttons 110 to key in
various commands while maintaining a good grip on the situation.
The lamps are plugged into a backpack containing three or six energy packs. A
satisfactory energy pack is the Lumedyne 0672 pack fitted with booster
capacitors,
so that each lamp operates at around 1600J (well under the 2400J limit,
allowing for
a margin of safety). Preferably there are six packs arranged in a row in the
backpack
with the connectors facing away from the body. A wide backpack (of the type
used
for canoe trips, and the like), will nicely fit six Lumedyne packs to run the
six lamps.
After walking around with the pack for some time, the user may feel tired and
wish to rest. Accordingly, posts 113 extend from the hand grip 191, far enough
in
the direction the lamps face, so that the rig can be set down on the ground
without
the lamps hitting the ground. Preferably each reflector 2H has a shroud around
it to
control stray light. Preferably each such shroud extends out not quite as far
as each
post 113, but far enough to prevent stray light from excessively illuminating
posts
113. Posts 113 and the shrouds are preferably dark black. The grip 191 and the
backs
of reflectors 2H are also preferably black.
Grip 191 preferably has a region 112 for being grasped, so that two hands can
be
used to securely hold grip 191. Region 112 is preferably a soft textured
absorbent
surface with a clothlike feel and rubberlike grip.
Lamps 186 are preferably hex packed for maximum density. Seven lamps may be
densely packed, but preferably there are only six present, with a central lamp
missing
to allow the user's head to fit where the seventh lamp would have gone, in
order for
the user to be able to see through the middle of the rig. The central optional lamp
has an optional reflector 2H0, and is mounted to an optional bracket 1910, when a
when a
little more light is needed. Otherwise, it is removed so the user can grasp
grip 191
and look straight through the center of the rig to see the target subject
matter being
dusted.
The hex packed lamp set of Fig. lb is suitable for dusting large skyscrapers
in a
large city, or other scene where it is desired to overpower ambient
illumination and
light up large subject matter. However, even when the subject matter is at a
smaller
scale, the use of multiple lamps ganged together is still of merit, because it
results in
a shorter flash duration, for a given total output level, than would have been
obtained
with a single lamp.
An alternative means of reducing flash duration, for the same output level, is
to use
higher voltages. Early electronic flash systems of the 1930s and 1940s tended
to use
high voltages. For example, the 1930s Kodatron systems used two 28 microfarad
3000
volt condensers (capacitors) in parallel. The system used to illuminate the
beaches
of Normandy for reconnaissance prior to the D-Day invasion operated at 4000
volts
(with a 24 to 30 kilovolt trigger derived therefrom). This system used five
boxes each
containing ten 100 microfarad capacitors, each of the five boxes being
relatively heavy
(having four handles, and requiring four strong people to lift each one, thus
requiring
twenty strong people to lift the entire set of capacitors). Thus to use such a
system at
the full 40kJ capacity (800J per capacitor; 8kJ per box) generally requires a
team of
twenty persons to carry the capacitors, and a 21st person to actually do the
dusting
(e.g. operate the lamp and wear the computer and controller, etc.). The 4000
volt
system is ordinarily used with an FT623 or FT17-30 flashlamp, which was and
still
is the most powerful flashlamp in the world.
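The stored-energy figures quoted here follow directly from the capacitor energy formula, as a worked check:

E = (1/2) C V^2 = (1/2)(100 x 10^-6 F)(4000 V)^2 = 800 J

per capacitor, so ten capacitors per box give 8kJ, and five boxes give the full 40kJ capacity.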
It is often desired to use the FT623 or FT17-30 flashlamp for dusting large
sub-
ject matter such as the Brooklyn Bridge, or the streets of Times Square in
order to
overpower the high light levels in Times Square and still obtain the painting
with
lightvectors effect. Although one person can easily aim an FT623 or FT17-30
flash-
lamp housed in a 30 inch parabolic mirror, it would take an additional twenty
persons
to carry the capacitors. This is owing to the fact that the oil filled
capacitors are
heavy. Electrolytic capacitors allow much higher energy density, but are only
avail-
able in lower voltages. Accordingly, modern electronic flash systems operate
at much
lower voltages, and therefore the flash durations are much longer even for the
low
output (e.g. maximum of 3200J) typical of modern flash systems like Speedotron
and
the like.
It is desired that the apparatus of the invention can allow a person to dust
large
subject matter like Times Square, the Brooklyn Bridge, the Grand Canyon, or
the like,
with an apparatus that can be worn and carried by a single individual. There
is
something very powerfully gratifying about being able to single-handedly
illuminate
an entire city or other large subject matter.
Connecting electrolytic capacitor banks in series has associated with it many
prob-
lems, such as balancing the voltages across independent banks. Equalizing
resistors
would eat up precious battery power, while zener diodes also create problems.
More-
over, providing a 4000 volt supply from a battery has various problems
associated
with it.
Accordingly, Fig. lc illustrates a solution to some of these problems by way
of
separate isolated energy modules 186M. Each energy module 186M comprises a con-
denser bank (capacitor bank) having at least one capacitor 186C1, and possibly
N
capacitors, the Nth being denoted 186CN. These are electrolytic capacitors.
Satis-
factory capacitors are commercially produced capacitors having a capacitance
of 450
microfarads and a voltage of 500 volts, manufactured by Mallory of Canada,
part
number MA 98-007. These each provide approximately 50J such that eight
provide
400J. Plastic boxes of eight such capacitors are available from Lumedyne with
ap-
propriate connectors for the Lumedyne 0672 pack. Therefore, stacking up
sufficient
numbers of these boxes, a simple rig with eight banks in series can be loaded
into a
backpack for a total of 3840 volts (480*8) which is enough to run the 4000
volt tube
and be worn comfortably by one person.
If switchable energy modules 186M are used, diodes 186D prevent reverse bias
in
the event one or more energy modules 186M are accidentally switched to a much
lower
setting than the others. Switching to different settings is preferably
accomplished
by switchers 186S that are operable by isolators 186I. Preferably also
isolators 186I
operate switches 186S for turning the power on and off to all batteries. Each
energy
module 186M has its own battery 186B and its own DC-DC converter or
oscillator
186E for charging its own condenser bank. These components are all isolated
from
each other, and switching is preferably also isolated by isolators 186I.
Preferably capacitor switchers 186CS are used to switch live capacitors in and
out of each bank. Satisfactory capacitor switchers 186CS can be made from
silicon
controlled rectifiers (SCRs) with appropriate opto isolators for added safety.
For
simplicity the eight energy modules 186M are shown directly connected in
series, but
in actual practice, they are preferably connected through capacitor switchers
186CS
so that the number of capacitors in each bank can be live-switched. Live
switching
means that the capacitors can be switched in and out when the circuit is live
and
operating at high voltage.
A flashlamp 186L is connected to the series of energy modules 186M, for the
approximately 4000 volts across its anode and cathode. A 24,000 volt trigger
is
derived either from the approximately 4000 volts of the pack or from a lesser
number
of the energy modules 186M, such as from just one energy module 186M, since
480
volts is sufficient for trigger. Preferably the trigger is reduced to around
350 volts in
the primary of the 24000 volt trigger transformer, so that an SCR of type
C106B1 or
the like (400 volts max) can trigger it. Preferably an opto isolator is used
to protect
the radio receiver 150 that will trigger the flash, and to protect the user
who might be
touching the antenna. Since the antenna is preferably mounted to a hat or
eyeglasses,
it is desirable to have the isolation for safety of the user as well. These
matters are
handled by trigger circuit 186T.
With reference to Fig. 1d, the keyer comprised of buttons 110 is preferably ef-
fectively hands free. The effective hands free attribute comes from having the
keyer
borne by the handle of an object that would need to be carried by the user
anyway. In
this example, the instrument is a hand-held flashlamp that is being used by
the user
for the painting with lightvectors ("dusting") genre of photographic or
experiential
imaging. Since the flashlamp handle 125 must be held to direct the reflector
100 of
lamp housing 115 at subject matter of interest, no additional hand is needed
for the
keyer. Alternatively the keyer may be built into the grip of another light
source such
as a pushbroom array of lights or the like.
A preferred embodiment of the keyer has five keys, one for each of the four
fingers,
and a fifth one for the thumb, so that characters are formed by pressing the
keys in
different combinations. A computer or processor reads when each key is
pressed, and
when each key is released, as well as how fast the key is depressed. A
satisfactory
processor is the 6502 processor of Rockwell Corporation. There are four large
switches:
• a thumb switch, SWt;
• an index finger switch, SWi;
• a middle finger switch, SWm;
• a ring finger switch, SWr.
A conditional modifier switch 190 has a long lever 199 that makes it easy for
the
smallest finger to press it. In this embodiment, it is used less often than
the other
four switches.
A five-switch keyer such as disclosed here is called a pentakeyer. The pentakeyer
pentakeyer
is an example of a multiambic keyer, wherein a multiambic keyer is a keyer
having
more than two switches. In the pentakeyer, all five switches of this keyer are
affixed
to the handle 125 of the flashlamp, which may be detached from the lamp
housing
115 by way of a long screw through the entire handle from the bottom to the
top,
into a 1/4-20 screw thread in the housing 115. Housing 115 is made of metal
and
grounded, or is made of plastic, to separate the high voltage in housing 115
from the
handle and its associated low voltage switches.
The handle unscrews and is attached to any of a large number of different
instru-
ments, such as may be tapped with a 1/4-20 thread to accept the pentakeyer. The
pentakeyer will screw onto the bottom of almost any camera, whether it be a
35mm
still camera, a video camera, or the like. In this way the camera can be used
while
keying. Therefore, the result is an effective hands-free keyer. The fact that
the keyer
doubles as a handle for something makes it operationally hands free, and thus
it is
said to be "operationally hands free" or "effectively hands free".
Lamp housing 115 has a cursor pointing device comprised of housing 135 with
four
potentiometers 131, two of which are connected through a resistor-capacitor timing circuit
to a wearable computer for cursor control of the wearable computer. The
control
arm 145 is operable by the thumb of the user, so that the user can type,
control the
cursor, and aim the flashlamp with one hand.
Velocity sensing capability may arise from using both the normally closed (NC)
and normally open (NO) contacts, and measuring the time between when the
common
contact (C) leaves the NC contact and meets the NO contact. The velocity
sensing
timing circuit is similar to the potentiometer timing circuit, so that seven
timing
circuits in total are needed (five for the five switches, and two for the
cursor, one for
each of its x and y axes).
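A minimal sketch of this velocity measurement follows, in Python; the contact travel distance and the timestamp source are illustrative assumptions:

import time

KEY_TRAVEL_M = 0.003  # assumed contact gap of 3 mm, for illustration

class KeyVelocity:
    # The common contact (C) leaves the normally closed (NC) contact,
    # travels, and arrives at the normally open (NO) contact; the travel
    # time gives a crude press velocity (shorter time = harder press).
    def __init__(self):
        self.t_left_nc = None

    def on_leave_nc(self):
        self.t_left_nc = time.monotonic()       # C breaks away from NC

    def on_reach_no(self):
        dt = time.monotonic() - self.t_left_nc  # C meets NO
        return KEY_TRAVEL_M / dt                # metres per second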
A shroud 101 keeps stray light from getting back into the camera when dusting
to the side or when dusting back towards the camera slightly. Preferably the
shroud
101 is of cylindrical shape cut on the diagonal, so that the opening is
angled, and the
light can be angled back toward the camera without getting light into the
camera or
showing the shroud 101 in view of the camera even when dusting backwards
toward
the camera. The off axis cut is more readily visible in the top view of Fig.
ld, where
it can be seen that it is not cut straight across, but, rather, is shorter on
one side
than the other, so that the opening is elliptical not circular. The shroud 100S swivels
at joint 100J so that the direction of aim can be decided by the end user for each dust.
Fig. 1e shows a diversity receiver in which there are a plurality of receive
antennas
(in this case 2). A right antenna 140R and left antenna 140L receive the
signal from
the flash sync slightly differently, and therefore right receiver 150R and
left receiver
150L have different points of reception, such that if one antenna happens to be
in a node, the other will likely be in an antinode of reception. Additionally,
since
the two are not too far apart, owing to the small size of eyeglass frames
140F, they
are made to respond differently, either by operating in different frequency
bands, or
by differently shaped antennas. In a preferred embodiment, one of the
receivers such
as 150R has a ground plane comprising a boom 140B for a microphone 140M, which
is part of a separate transmitter. Thus receive antenna 140R will partly include the
boom 140B of the microphone 140M as its ground plane and therefore it will
have a
different reception pattern than antenna 140L. Antenna 140L may additionally
use
the eyeglass frames 140F if metal, or a separate metal portion 140G thereupon,
as a
ground plane to get a different reception pattern.
Satisfactory receivers 150R and 150L are those manufactured by LPA design
under
the trade name "Flash Wizard" or "Pocket Wizard". These may be combined by
connecting the flash synch outputs in parallel, such that either one receiving
a valid
packet of flash synch data will trigger an output event. The combiner is
denoted
150C.
The eyeglass frames 140F preferably comprise a video display to show the
wearer
the current lightvector and the Computer Enhanced Multiple Exposure Numerical
Technique (CEMENT) output, as well as various forms of status, such as lamp
voltage,
charge, battery levels, and the like, by way of an instrument panel located in
the
eyeglasses. Thus antennas 1408 and 140L may carry both inbound and outbound
information, and additional antennas elsewhere on the body may be also used.
Typically, the components of Fig 1 are spread out upon the body of the
photoborg
and incorporated into a wearable computer system or the like, but
alternatively, if
desired, the entire source may be contained inside a single box, together with
all
the communications hardware, power source, and perhaps a display means. In
this
alternative hand-holdable embodiment, a photoborg will then not need to wear
any
special clothing or apparatus.
A photoborg will typically wear black clothing and hold the light source
depicted
in Fig 1 using a black glove, although this is not absolutely necessary.
Accordingly,
the housing of the apparatus 190 will typically be flat black in colour, and
might have
only two openings, one for the button 110, and one for the light opening 180,
thereby
hiding the complexity of the contents from the photoborg so as to make the
device
more intuitive for use by those not technically skilled or inclined.
The purpose of this aspect of the invention, illustrated in Fig. 1 and Fig. 2a, is to
obtain a plurality of pictures of the same subject matter, where the subject
matter is
differently illuminated in each of the pictures. There are various other
embodiments
of this aspect of the invention, which also allow this process to be
performed. For
example, the camera set up at the base station may be a video camera, in which
case the photoborg can walk around with a flashlamp and flash the lamp at
various
portions of the subject matter in the scene.
Afterwards, the photoborg or another person can play back the video, and
extract
the frames of video during which a flashlamp was fired. Alternatively, a
computer can
be used to analyze the video being played back, and can automatically detect
which
frames include subject matter illuminated by flash, and can then mark the
locations of
these frames or separate them from the entire video sequence. If a computer is
going
to be used to analyze the video afterwards, it may also analyze the video
during
capture. This analysis would greatly reduce the storage space needed because
the
system could just wait until a flashlamp was fired, and then automatically
store the
pictures in which subject matter was illuminated (in whole or in part) by a
flashlamp.
Fig. 2b depicts such a system. Video camera 265 is used to take the pictures.
Video
camera 265, denoted CAM, is connected to video capture device 270, denoted
CAP.
The capture device 270 captures video continuously, and sends digitized video
to the
processor, PROC, 275. A satisfactory connection between CAP 270 and PROC 275
is an IEEE 1394 connection. In particular, a satisfactory unit that can be
used as
a substitute for both CAP 270 and PROC 275 is a digital video camera such as a
SONY PC7, which outputs a digital video signal.
Processor 275 originally captures one or more frames of video from the scene
under ambient illumination. If more than one frame is captured, the frames
may be
photoquantigraphically averaged together. By this means, or by other similar
means,
a background frame is stored in memory 280, denoted MEM. The stored image can
then be compared against further incoming images to see if there are regions
that
differ. If there is sufficient difference over a region of a new incoming
frame of video
to overcome a certain noise threshold setting, then the new frame is
captured. The
comparison frame may also be updated between flashes, in case the ambient
light is
slowly changing. In this case, PROC 275 processes with an assumption that
flashes
are occasional, e.g. that there will likely be many frames of video before and
after
each flash, so that changes in ambient light can be tracked. This tracking can
also
accommodate sudden changes, as when lights turn on inside a building by timer
control, since the changes will be more like a step function, while the
flashlamp is
more like a Dirac delta measure (e.g. the ambient lights may quickly change state;
they don't usually go on and then off again in a very short time).
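A minimal sketch of this free-running detector follows, in Python; the threshold, tracking rate, and frame source are illustrative assumptions:

import numpy as np

NOISE_THRESHOLD = 12.0  # mean absolute difference, in pixel levels (assumed)
TRACK_RATE = 0.02       # how quickly the background follows ambient changes

def detect_lightstrokes(frames):
    # frames: iterable of greyscale images as float arrays.
    # Yields only the frames judged to contain flash illumination.
    background = None
    for frame in frames:
        if background is None:
            background = frame.astype(np.float64)
            continue
        diff = np.mean(np.abs(frame - background))
        if diff > NOISE_THRESHOLD:
            yield frame        # flash detected: store this lightvector
        else:                  # ambient-only frame: track slow changes
            background = (1 - TRACK_RATE) * background + TRACK_RATE * frame

This sketch shows only the core difference test; a fuller implementation would, as described above, also fold sustained step changes in ambient light back into the background.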
In this way, there is no need for any explicit radio communications system or
the
like. The photoborg simply sets up the camera and leaves it running, and then
takes
an ordinary flashlamp and walks around lighting up different parts of the
scene.
Optionally the photoborg may have a one-way communications system from the
camera so that he can see what effect his flashlamp is having on the scene.
This
communications system may be as simple as having a television screen at the
base
station, where the television screen is big enough for him to see from some
distance
away. Alternatively, an actual transmitter, such as a radio transmitter, may
be used
to send signals from the base station to a small television or EyeTap (TM)
display
built into his eyeglasses.
Each image, for which a flashlamp illumination was detected, may optionally be
sent back to the photoborg by way of communications system 285, denoted COMM,
and connected to antenna 290. Images due to each flash of light, as well as
one or
more images due only to ambient light, are saved on disk 295, denoted DISK.
The components of Fig. 2b may be spread out over the Internet, if desired. The
base station is free-running and does not require any human operator, although
a
manager may remotely log into the base station if it is connected to the
Internet. In
this case a manager can cement the images together into a photorendering, and
she
can select certain lightvectors of particular interest to send to a photoborg.
There
may also be more than one photoborg working together on a project.
A satisfactory camera for camera 265 is an ordinary video camera. Preferably,
however, camera 265 would be a specially built camera in which each pixel
functions as
a lock-in amplifier, to lock onto a known waveform emitted from a specially
designed
electronic flashlamp. The camera of this embodiment of the invention is then
referred
to as a lock-in camera.
Fig 2c depicts a typical base station outbound transmitter for a large scale
project
such as dusting the Brooklyn Bridge and the New York skyline behind it. Camera
265 has a field of view denoted by frame 265F. Preferably an outbound transmit
antenna 260Y having a coverage approximately equal to this field of view is
used.
A satisfactory antenna 260Y is a Yagi antenna, being directional in the
direction
needed to cover subject matter in frame 265F. Processor 275 connects to this
antenna.
Additionally or alternatively, further antennas 260L and 260R may be placed on masts
masts
just outside either side of the field of view of frame 265F. These are
preferably driven
by separate transmitters 250L and 250R. For flash sync a satisfactory
transmitter is
the omnidirectional "Pocket Wizard" manufactured by LPA design. Some distance
away, repeaters may be set up. A satisfactory repeater can be prepared from a
receiver
connected to a transmitter. A receiver 250B or 250D can receive on the same
channel
as 250L and 2508 and connect to another transmitter 250A or 250B at a
different
frequency, and provide a frequency diversity system. Alternatively, receiver
250B or
250D can receive on the same channel as 250L and 250R and connect to another
transmitter 250A or 250B also at that same frequency, if, for example, a short
packet
size is used, like that of the "Pocket Wizard" manufactured by LPA design. In
this
way, the packet is sent by transmitter 250L or 250R and rebroadcast again
slightly
later by 250A and 250B. Since the flashlamp cannot respond and flash twice in
such a
short time interval, the double packet will not be a problem, and the presence
of the
extra delayed data packet from the other pair of transmitters will provide a
greater
chance that at least one packet is received to activate the flashlamp.
Fig 3 depicts the typical usage pattern of the source depicted in Fig 1. A
fixed
camera 300 (depicted here with a single antenna as may be typical of a system
running
with a terminal node controller over a TCP/IP communications link), is used
together
with a hand-held illumination source which is flashed at one location 310,
then moved
to a new location 320, flashed again, and so on, ending up at its final
location 330
where it is flashed for the last time. Alternatively, a number of photoborgs,
each
carrying a flashlamp, may selectively illuminate the scene.
Fig 4 depicts a view of a self contained illumination source. While the art is
most
frequently practiced using a wearable computer with head-up display or the
like, it is
illustrative to consider a self contained unit with a screen right on it
(although there
is then the problem that the screen is lit up and may spoil a picture if it
becomes
visible to the camera, whereas a head-mounted display painted black and
properly
fitted to the eye with suitable polarizers will produce much less
environmental light
pollution).
This source has pushbuttons 410 denoted by color (e.g. "R" for red, where the
button may be colored or illuminated in red), "G" for green, etc. These
pushbuttons
may be wired so that they take exposures directly in the indicated color (e.g.
so that
pushing 410 R will cause the apparatus to request a red exposure of the
camera), or
they may be wired so that pushing R marks the system for red, but does nothing
until FILM WRITE (W) 420 is pressed. Pressing W will then send a signal to the
camera requesting a red exposure, which typically happens via a spinning
filter wheel
in front of the camera, wherein the camera shutter opens but the flash sync
pulse is
not sent back right away. Instead the base station waits until the instant
that the
red filter is in front of the lens and then at that exact instant, sends back
a flash
sync pulse, activating flash 430 so that it sends a burst of illumination out
opening
440. Alternatively, these color selections may be made electronically, wherein
the
only difference between pressing, for example, R, and pressing G is in the
form of
information appended to the header of the image file in the form of a comment.
For
example, if the captured image is a Portable PixMap (PPM), the image content
is
exactly the same except that the comment
#dustcolo 1 0 0
is at the beginning of the image file, as opposed to
#dustcolo 0 1 0
if the green button were pressed.
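As a minimal sketch of this purely electronic colour selection, the following Python helper writes the captured PPM with the colour comment of the example above prepended to the header; the helper name is illustrative:

def write_ppm_with_dustcolo(path, width, height, pixels, rgb=(1, 0, 0)):
    # pixels: raw binary RGB data, 3 bytes per pixel; only the header
    # comment differs between a "red" capture and a "green" capture.
    with open(path, "wb") as f:
        f.write(b"P6\n")
        f.write(("#dustcolo %g %g %g\n" % rgb).encode("ascii"))
        f.write(("%d %d\n255\n" % (width, height)).encode("ascii"))
        f.write(pixels)

# write_ppm_with_dustcolo("v852.ppm", 640, 480, data, rgb=(0, 1, 0))  # green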
The effect of pressing the red button might show up on screen 450 in the form
of an
additional red splotch of light upon the scene, if, for example, there is a
single object
illuminated with the white flash of light to which the color red has been
assigned.
In this way, it is possible to watch the lightpainting build up slowly. In the
case of
an electronic imaging system, any of the effects of these flashes (called
lightstrokes)
may be deleted or changed in colour after they have been made (e.g. since the
colour
choice is just header information in the computer file corresponding to each
picture).
However, in the case of a film-based camera, it is not possible to change the
color,
so a great deal of film could be wasted were it not for the preview button on
control
panel 420. In particular, the preview button (P) performs the operation on a
lower
resolution electronic camera, to allow the photoborg to see its effect, prior
to writing
to film (W).
Also, in practice, not every flash exposure need be written to a separate
frame
of film, and in fact, if desired, all of the flash exposures can be written to
a single
frame of film, to avoid the need to scan the film and combine the exposures
later. In
this way, a photoborg can preview (P) over and over again, and then practice
several
times each lightstroke of a long lightpainting sequence, view the final
result, and then
execute the exact same lightpainting sequence onto a single frame of film.
In practice, when using film, the art of this invention is practiced somewhere
between the two extremes of having a lightpainting on one single frame and
having
each lightstroke on its own frame. Typically portions of the lightpainting are
practiced
using P 420 and then written onto a frame of film using several presses of W
420.
Then the film is advanced one frame (by pressing all three buttons on panel
420
together, or by some other special signal sent to the camera) and another
portion of
the lightpainting is completed. A typical lightpainting comprises then less
than 36
frames of film and can thus be captured on the same negative strip, which
simplifies
the mathematical analysis of the film once it is scanned, since each
lightvector will
have undergone the exact same film aging prior to development, as well as the
exact
same development.
In the situation where film is not being used (e.g. in an embodiment of the
invention using a completely electronic camera), lightvectors may still be
grouped
together if it is decided that they never need to be independently accessed.
In this
case, a new single image file is created from a plurality of lightvectors.
Typically a
new file format is needed in order to preserve the full dynamic range,
especially if
a Wyckoff effect (combining differently exposed lightvectors) is involved.
Typically
these combined files will take the form of a so-called "Portable Double Map
(PDM)", or a Portable Lightspace Map (PLM). A PDM file for example might have a file
header of the form
P8
#photoborg 8.5 has image address space v850 to v899
#photoborg 8.5 selected:
#v852.ppm 0 0 1 f 7
#v853.ppm 0 0 1 f /usr/local/filters/myfilt.txt
#v854.ppm 0 0 1 g
#v855.ppm 0 0 1 y
#cement to cementout.pdm
1536
1024
255
where the P8 indicates image type PDM, lines beginning in the # symbol are com-
ments, and in particular, v852.ppm is the file name of the file containing
both the ascii
header and the raw binary data, and the numbers following the filename
indicate how
the image is to be cemented into the viewfinder with the other images. In
particular,
the first three numbers after the filename indicate the color blue (in RGB),
the next
symbol, "f" indicates that a filter is used, and since no filter name is
specified, the
default filter (gaussian blur) is used with a blur radius of 7, as indicated.
The next image file v853.ppm is cemented in with the specified filter
filename,
filters being either pdm or plm files if inseparable, or a simple ascii text
file with one
number per line of text, if separable (e.g. applied as a tensor outer product
of the
filter with itself).
The next image file v854.ppm was applied with green channel only, but being
mapped into the blue channel of the output contribution. Only the green
channel of
the input image is to be considered (e.g. instead of mapping the whole image
to blue
which would only consider the blue channel, the green channel is mapped to
blue).
The next image file v855.ppm was converted to greyscale by YIQ weights of the Y
channel, and contributed to the blue channel of the output image.
This next line indicates that the input images were combined to make the new
file cementout.pdm.
After that, the next two lines indicate the file dimensions, and the last line
of the
header indicates the default range of the image data. Since PDM files are of
type
double (e.g. the numbers themselves range up to more than 10^308) this number
does
not indicate the limit of the data type, but, rather, the limit of the data
stored in the
datatype (e.g. the maximum entry in the image array).
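A minimal sketch of emitting such a header follows, in Python; the byte order and the placement of the raw doubles immediately after the ascii header are illustrative assumptions (the example above specifies only the header fields):

import numpy as np

def write_pdm(path, image, comments=()):
    # image: float64 array of photoquantities, shape (height, width, ...).
    h, w = image.shape[:2]
    with open(path, "wb") as f:
        f.write(b"P8\n")                                  # PDM magic
        for c in comments:
            f.write(("#%s\n" % c).encode("ascii"))        # comment lines
        f.write(("%d\n%d\n" % (w, h)).encode("ascii"))    # dimensions
        f.write(("%g\n" % image.max()).encode("ascii"))   # max entry stored
        f.write(image.astype("<f8").tobytes())            # raw binary doubles

Note that a 1536 by 1024 file of three-channel doubles occupies 1536 x 1024 x 3 x 8 = 37,748,736 bytes, which is the 36 megabytes cited below.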
Each combined (PDM) lightvector (e.g. v855.pdm) of the above file size
occupies
36 megabytes of disk space. Therefore, preferably, when a large number of
lightvectors
are being acquired, the JPEG file format is used to store the individual
lightvectors.
A new file format is invented for storing combined lightvectors. This new
format is
a compressed version of the PDM file format, and normally has the file
extension
".jdm". Thus the above file would become "cementout.jdm" in a combined form.
Alternatively, a cement.txt file is created, but there is still an image
modification
history inserted into the output file as comments.
An example of a cement.txt file follows:
v286.jpg 0 1 0 # 1/1000 too dark (not discernible on xyberdisp); great on sony tv
#v287.jpg 0 0 3 # 1/500 too dark (barely discernible); great on sony tv
#v288.jpg # 1/250 nice cloud texture; too bright on sony tv
v289.jpg 0 0 1 b 21 # 1/125
#v290.jpg # 1/60
#v291.jpg # 1/30
#v292.jpg # 1/15
#v293.jpg # 1/8
#v294.jpg # 1/4
#v295.jpg # 1/2
#v296.jpg 1 0 0 # nice dark sky image, but house too bright on sony tv
#v297.jpg 0 0 9 # nice dark sky image, but house too bright on sony tv
#v298.jpg # bright sky image during sunset behind camera
#v299.jpg # bright sky image during sunset behind camera
#v300.jpg # didnt sync?
#v301.jpg # icicle lights 1/8sec, very nice
#v302.jpg # icicle lights 1/2sec, very nice
v303.jpg 1 1 1 # icicle lights 2sec, very nice: they light house nicely at top
#v304.jpg # icicle lights short exposure
#v305.jpg # candles
#v306.jpg # candles
v307.jpg 1 0 1 # candles
#v308.jpg # candles dark
#v309.jpg # interiour lights too bright
v310.jpg 1 1 0 # interiour lights just right
#v311.jpg # interiour lights a little dim
v312.jpg 1 1 0 # coachlights bright
#v313.jpg # coachlights dim
v314.jpg 1 1 0 # foreground driveway retain wall good exposure
#v315.jpg # foreground driveway retaining wall too bright
#v317.jpg # didnt sync?
#v318.jpg # driveway and driveway retaining wall too bright and blurry
#v319.jpg # all pixels uniform grey; not responsive to subject matter
#v320.jpg # didnt sync
#v321.jpg # light on driveway overexposed foreground tree and lights whole house
v323.jpg 0 1 0 # nice grass at top of driveway retaining wall
v324.jpg 1 0 0 # middle part of house
v325.jpg 0 0 1 # front of r.o.g.
#v326.jpg # left part of house
#v327.jpg 0 0 1 # lower part of house including trees + ugly tree shadows confuse
v328.jpg 0 1 0 # ugly shadows of retaining wall on driveway
v330.jpg 1 1 0 # inside front porch
v331.jpg 0 3 0 # tree behind house
v332.jpg 0 3 0 # tree behind house
#v333.jpg # foreground tree too bright
v334.jpg 0 1 0 # foreground tree quite bright
v335.jpg 1 0 0 # front of house
#v337.jpg # front of house me shown
v338.jpg 0 0 1 # stepped back for top of house to left of r.o.g.
v339.jpg 0 0 1 # alley to right of garage
v340.jpg 0 0 1 # alley and spill to garage
v341.jpg 0 0 1 # alley and small spill to garage
#v342.jpg # top of driveway and bottom of house
#v343.jpg # blurry bright driveway foreground
#v344.jpg # driveway and retain wall
v346.jpg 1 0 0 # somewhat high from right of garage
v347.jpg 0 0 9 # nice driveway
v348.jpg 1 0 5 # right side widow's walk bright
v349.jpg # right side widow's walk dark
Comments can be included in the cement.txt file as indicated. Also, note for
example that the sky image v289.jpg is blurred with a blur radius of 21
pixels, using
the default (gaussian blur) filter. The blurring is always done in lightspace
(e.g. the
image is converted to a photoquantity, and then filtered).
Note that the last image has no coefficients. This is allowed, and the cement
program just ignores images with no coefficients specified. Therefore the user
can
simply create the cement file using the command:
ls *.jpg > cement.txt
and then proceed to add in the details.
In this example, all of the inputs are jpeg images, although there can be, in
general,
a mixture of jpegs, ppm, pdm, plm, and gzipped files that the cement program
autodetects and cements together.
The cement program can also take a portable lightspace map (PLM) as input, or
a picture as input. It generally produces a PLM as output. A PLM is like a PDM
except that it begins with P9 for greyscale and PA for colour. A PLM
incorporates
the transfer function of the imaging system if any, such that a PLM contains
pho-
toquantigraphic information and therefore each entry is proportional linearly
to the
photoquantity. The cement program autodetects and allows a mixture of JPEG, PPM,
PGM, PDM, and PLM.
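A minimal sketch of parsing one active line of such a cement.txt follows, in Python; the field layout (filename, RGB weights, optional filter spec, trailing comment) is inferred from the example above:

def parse_cement_line(line):
    line = line.strip()
    if not line or line.startswith("#"):
        return None                               # commented-out lightvector
    body = line.split("#", 1)[0].split()          # drop the trailing comment
    name, args = body[0], body[1:]
    if len(args) < 3:
        return None                               # no coefficients: ignored
    weights = tuple(float(a) for a in args[:3])   # RGB contribution
    filt = args[3:] or None                       # e.g. ['b', '21'] for blur
    return name, weights, filt

# parse_cement_line("v289.jpg 0 0 1 b 21 # 1/125")
# -> ('v289.jpg', (0.0, 0.0, 1.0), ['b', '21'])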

Fig 5 depicts lightvectors in the multidimensional photoquantigraphic space
formed
by either converting a picture to lightspace coordinates (by applying the
inverse non-
linear transfer function of the camera to the picture) or by shooting with a
camera
that provides lightspace (linearized) output directly, and then unrolling a
picture so
taken into a long vector. For example, a picture of pixel dimension 640x480
may be
regarded as a single point in a 307200 dimensional lightvectorspace. Likewise
contin-
uous pictures on film may be transformed (by application of the inverse
response of
film and scanner) to points in infinite dimensional lightvectorspace, but for
purposes
of illustration, consider the case when the pixel count is finite while the
pixel values
remain unquantized. The first lightvector, v1 (short for vector of illumination number
1) 510 is depicted together with v2 520, v3 530, etc. It so happens that v2,
v3, and
v4 are collinear in this example, owing to the fact that they were captured
under
exactly the same lighting condition (e.g. the light did not move) but where
only the
exposure was varied (e.g. the light output was varied, or equivalently, the
camera
sensitivity was varied). Similarly lightvectors v6 560 and v7 570 are
collinear and
correspond to two pictures that differ only in exposure. As many other
lightvectors
as there are separate exposures, are present, and there is no particular
reason why the
number of lightvectors need be limited by the dimension of the space, e.g. the
ellipsis
580 ("...") denotes a continuation up to v307200, denoted 590, v307201, denoted
591, and beyond to, for example, v999999, denoted 599 in the figure. However,
in
practice, due to limitations of film there are typically far fewer
lightvectors than the
dimension of the space, e.g. 36 lightvectors if using a typical 35mm still
film camera,
or in the case of a motion picture film camera or most electronic cameras, the
number
of lightvectors is typically not more than 999 in many applications of the
invention.
Accordingly, in many previous embodiments of the invention where lightvectors
were
each stored as a separate file on a hard disk, the filenames of these
lightvectors were
numbered with three digit alphanumeric filenames of the form, v000, v001, . .
. v123,
if, for example, there were 124 lightvectors, so that they would list in
numerical order
on a typical UNIX-based computer using the "ls" command. For each scene or
object
being photographed, a new directory was created to hold the corresponding set
of
lightvectors.
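As a minimal sketch of this representation, in Python (the gamma value standing in for the camera's actual inverse transfer function is an illustrative assumption):

import numpy as np

def to_lightvector(picture, gamma=2.2):
    # Linearize, then unroll a 640x480 picture into a single point of the
    # 307200 dimensional lightvectorspace.
    q = np.power(picture.astype(np.float64) / 255.0, gamma)
    return q.reshape(-1)

Two pictures that differ only in exposure then give collinear lightvectors: v3/||v3|| is approximately equal to v2/||v2||.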
Fig 5b depicts this process, wherein a plurality of input pictures 510PPM, 520PPM, . . . 590PPM pass through expanders 1020. Expanders are devices or processes that expand the dynamic range of the pictures by darkening the midtones relative to the shadows and highlights. Expanders have transfer functions that are generally concave upwards. An adder 525 adds the expanded images to get qTOT 531. The result is then compressed by dynamic range compressor 555. A compressor is a device or process that compresses the dynamic range of the picture by lightening the midtones relative to the shadows and highlights. A compressor has a transfer function that is concave down. Output image 565 is said to comprise input images 510PPM, 520PPM, . . . 590PPM after having been CEMENTed together. Therefore Fig. 5b depicts the process of CEMENTing. Images that are expanded, added, and then compressed again, are said to have been CEMENTed together.
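By way of illustration only, the expand/add/compress sequence may be sketched as follows. This is a minimal sketch, not the actual cement program: it assumes a simple power-law expander (concave upwards) whose inverse serves as the compressor, and the exponent and all names are hypothetical:

    import numpy as np

    def cement(images, gamma=3.0):
        # images: list of float arrays scaled to [0, 1]
        # gamma:  hypothetical expander exponent; x**gamma darkens the
        #         midtones, its inverse x**(1/gamma) lightens them again
        total = np.zeros_like(images[0])
        for im in images:
            total += im ** gamma              # expander 1020
        total /= len(images)                  # normalize so the sum stays in [0, 1]
        return total ** (1.0 / gamma)         # compressor 555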
If the images started out as PLM (Portable Lightspace Map), in the sense that they were already indicative of a photoquantity, or already showing a quantity roughly linearly proportional to the quantity of light received, then the process of CEMENT comprises addition, followed by compression, without the prior expansion. Additionally, compressors 522 display the PLM images on display medium 589 so that a user can see these images along with the CEMENTed version, and compare each of them with the CEMENTed version. Thus PLMs 510PLM, 520PLM, . . . 590PLM are visible as images and in cemented form for comparison. Such an apparatus, as shown in Fig. 5c, is called a lightvector gallery comparator.
A CEMENTer is a device, process, apparatus, or the like that CEMENTs images together by first expanding them if they are not already showing a quantity roughly linearly proportional to the quantity of light received, then adding them, and then compressing the sum. Preferably the CEMENTer autodetects PLM versus PPM so that it can accept a mixture of images already showing a quantity roughly linearly proportional to the quantity of light received, and those that require expansion.
Fig 6 depicts some pictures that represent linear combinations of two light sources and are therefore in a two dimensional lightvector subspace of the 307200 dimensional space. Lightvector 610 denotes the vector spanned by 520, 530, and 540 of Fig 5. Lightvector 630 denotes the lightvector spanned by 560 and 570 of Fig 5. Lightvectors 620 and 640, denoted in bold, together with 610 and 630 span a two dimensional lightvector subspace.
Fig. 7a depicts two pictures: 710, taken with a slow shutter (long exposure) and no flash to record the natural light, and 720, taken with a fast shutter and flash to record the response of the scene to the flash. In 710, the entire image is properly exposed, while in 720 the foreground objects are properly exposed while the background objects are underexposed. These two images represent two points in the 307200 dimensional space of Fig. 5 and Fig. 6. Any two such noncollinear points (e.g. corresponding to differently lit pictures) span a two dimensional space, depicted by the plane of the paper upon which Fig. 7a is printed. The two axes of this space are the ambient light axis 730 (labeled 740 with numerals 750) and the flash axis 760 (labeled 770 with numerals 780).
The manner in which the other images are calculated will now be described with reference to Fig. 7b, which depicts the top row of images in Fig. 7a, namely images 790, 792, and 794.
The basis image of Fig. 7a denoted 710 is depicted as function f1 in Fig. 7b. The basis image 720 is denoted as function f2. These functions are functions of the quantity of light falling on the image sensor due to each of the sources of light. The quantity of light falling on the image sensor due to the natural illumination is q1(x, y). That due to the flashlamp is q2(x, y). Thus the picture 710 is given by f1(x, y) = f(q1(x, y)). Similarly, picture 720 is given by f2(x, y) = f(q2(x, y)). Thus, referring to Fig. 7b, passing f1 and f2 through the inverse camera response function, f^{-1}, results in q1 and q2, which are then distributed through vectorspace weights w11 through w32. These vectorspace weights are denoted by circles along the signal flow paths in Fig. 7b.
The vectorspace weights wij map the bases qj to lightvectors q1i to form the first (top) row of pictures depicted in Fig. 7a, according to the following equation:

\begin{bmatrix} f_{11} \\ f_{12} \\ f_{13} \end{bmatrix} = f\!\left( \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \end{bmatrix} \begin{bmatrix} f^{-1}(f_1) \\ f^{-1}(f_2) \end{bmatrix} \right)

which corresponds to the operation performed in Fig. 7b. The linear vectorspace spanned by the q1i is called the lightvector space. The nonlinear space spanned by the f1i is called the lightstroke space.
The other two rows of pictures in Fig. 7a are formed similarly. Clearly we
need
not limit ourselves to a 3 by 3 grid of pictures in Fig. 7a. In general we
will have
a continuous two-dimensional lightstroke space, from which infinitely many
pictures
can be generated within a continuous plane like that of Fig. 7a.
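As an illustrative sketch (assuming the generic cube-law response used as an example later in this disclosure; the true inverse response should be used where known, and all names are hypothetical), a picture at any coordinates (w1, w2) of this plane could be rendered from the two basis pictures:

    import numpy as np

    def render_mixture(f1, f2, w1, w2, gamma=3.0):
        # f1: ambient-light basis picture (e.g. 710), floats in [0, 1]
        # f2: flash-only basis picture (e.g. 720), floats in [0, 1]
        q1 = f1 ** gamma                      # f^-1: back to quantities of light
        q2 = f2 ** gamma
        q = w1 * q1 + w2 * q2                 # linear combination in lightspace
        return np.clip(q, 0.0, 1.0) ** (1.0 / gamma)   # forward response f

    # e.g. the image at (0, 2): zero ambient exposure, two units of flash
    # image_790 = render_mixture(f1, f2, w1=0.0, w2=2.0)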
Any image that would have resulted from any amount of flash or natural light mixture falls somewhere on the page (plane) depicted in Fig. 7a. For example, at coordinates (0,2), which correspond to zero ambient exposure (fast shutter) and two units of flash, we have an image 790 where the foreground objects are overexposed while the background objects are still underexposed. At (1,2) we have an image 792 in which the foreground is grossly overexposed and the background is normally exposed, while at (2,2) we have an image 794 in which the foreground is so heavily overexposed that these objects are completely white, while the background subject matter is somewhat overexposed. Therefore, lightvectors 710 and 720 are all that are needed to render any of the other lightvectors (and thus to render any of the other images). Thus a photographer who is uncertain exactly how much flash to use may simply capture a small number (at least two) of noncollinear lightvectors (e.g. take two pictures that have different ratios of flash and natural light) and can later render
any desired fill-flash ratio. Thus we see that 710 and 720 form a basis for rendering any of the other nine images presented on this page, and in fact any of the infinitely many images at other coordinates on this two dimensional page. In practice, however, due to noise (quantization noise owing to the fact that the camera may be digital, as well as other forms of noise) and the like, a better image will be rendered with a more accurate way of determining the two lightvectors than just taking one picture for each (two pictures total). By taking more than one picture each, a better estimate is possible. For example, taking ten identical pictures for 710 and averaging them together, as well as taking another ten identical pictures for 720 and averaging them together, will result in much better lightvectors. Moreover, instead of merely taking multiple identically exposed images for each lightvector, the process will be better served by taking multiple differently exposed images for each lightvector. This is in fact the scenario depicted in Fig 5 where, for example, lightvector v2,3,4 is determined from three lightvectors 520, 530, 540, using the Wyckoff principle. The Wyckoff principle is a generalization of signal averaging known to those skilled in the art of image processing.
Finally, a further generalization of signal averaging, which is also a generalization of the Wyckoff principle, is the improved estimation of the lightstroke subspace through capture of lightvectors off the axes. For example, suppose that image 794 was not rendered from 710 and 720, but instead was captured directly from the scene with two units of flash and two units of ambient light. From these three pictures, 710, 720, and 794, any other picture in the two dimensional lightvector subspace can be rendered. These three pictures form "basis" images. Since the "bases" are overdetermined (e.g. three vectors that define a plane), there is redundant information, and redundant information assists in combating noise, just as the redundant information of signal averaging with identical exposures (identical points in the 307200 dimensional space) or of implementing the Wyckoff principle with collinear lightvectors (collinear points in the 307200 dimensional space) did. Thus the photographer may capture a variety of images of different combinations of flash and natural illumination, and use these to render still others, having combinations of flash and natural illumination different from any that were actually taken.
Often it is not possible to have the shutter be fast enough to completely
exclude
background illumination. In particular, lightpainting is practiced normally in
dark
places, and it would be desirable that this art could be practiced in places
that cannot
be darkened completely as might arise when streetlamps cannot be shut off, or
when a
full moon is present, or when one might wish to have the comfort and utility
of working
in an environment that is not totally dark. Accordingly, a coordinate
transformation
may be applied to all lightvectors with respect to the ambient lightvector.
The
ambient lightvector may be obtained by simply taking one picture with no
activation
of flash (to capture the natural light in the scene). In practice, many
identical pictures
may be taken with no flash, so that photoquantigraphic signal averaging can be
used
to determine the ambient lightvector. Preferably various differently exposed
ambient
light pictures are taken to calculate an extended response picture for the
ambient
light.
The photoquantigraphic subtraction of the ambient light image f0 from another image fi is given by the expression f(f^{-1}(fi) - f^{-1}(f0)), where f is the response function of the camera. More generally, an entire lightvector space may be photoquantigraphically coordinate transformed.
An example of a photoquantigraphic coordinate transformation is depicted in Fig 8, where the ambient light axis 810 remains fixed, but the space is sheared along the flash axis owing to the change to a new axis 820, now called "Total illumination" 830. The numerals now extend further 840, owing to the fact that the new coordinates capture the essence of images such as 850 that are now greatest along axis 820. Mathematically, the example coordinate transformation given in Fig. 8 may be written:

\begin{bmatrix} \mathrm{ambient} \\ \mathrm{total} \end{bmatrix} = f\!\left( \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} f^{-1}(\mathrm{ambient}) \\ f^{-1}(\mathrm{flash}) \end{bmatrix} \right) \qquad (2)

which, through example, illustrates what is meant by a "photoquantigraphic coordinate transformation".
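Both the subtraction above and the shear of equation (2) may be sketched as follows (illustrative only, with the generic cube law standing in for the true camera response; all names are hypothetical):

    import numpy as np

    GAMMA = 3.0                               # hypothetical generic response exponent

    def finv(img):                            # f^-1: picture -> quantity of light
        return img ** GAMMA

    def fwd(q):                               # f: quantity of light -> picture
        return np.clip(q, 0.0, 1.0) ** (1.0 / GAMMA)

    def subtract_ambient(fi, f0):
        # photoquantigraphic subtraction f(f^-1(fi) - f^-1(f0))
        return fwd(np.clip(finv(fi) - finv(f0), 0.0, None))

    def ambient_total(ambient_pic, flash_pic):
        # equation (2): the ambient axis is kept; a "total illumination"
        # axis is formed by adding the two quantities of light
        qa, qf = finv(ambient_pic), finv(flash_pic)
        return fwd(qa), fwd(qa + qf)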
Fig 9a illustrates what is meant by a photoquantigraphic summation, and illustrates through example one of the many useful mathematical operations that can be performed in lightspace. Images 910 are processed through inverse (f^{-1}) transfer functions 920 to obtain photoquantigraphic measurements (lightvectors) 930. These photoquantigraphic measurements are denoted q1 through qN.
Typically the inverse transfer functions 920 expand the dynamic range of the images, since most cameras have been built to compress the dynamic range of typical scenes onto some recording medium of limited dynamic range. In particular, inverse transfer functions 920 are preferably such that they undo the inherent dynamic range compression performed by the specific camera in use. In the absence of knowledge about the specific camera being used, the inverse transfer function 920 may be estimated by comparing two or more pictures that differ only in exposure, as described in U.S. Pat. No. 5,828,793. Alternatively, a generic inverse transfer function may be used. A satisfactory generic inverse transfer function is f^{-1}(fi) = fi^3. Thus a satisfactory operation is to cube each of the incoming images, although it would be preferable to try to actually estimate or determine f^{-1}.
The transfer functions drawn in boxes 920 are actually a plot of the function f^{-1}(fi) = fi^2, simply because the parabolic shape is one of the easiest concave-upwards plots to draw by hand, and is typical of the shape of many inverse transfer functions, e.g. it is visually similar to the actual shapes of curves typically used. In this illustration, then, every pixel of each incoming image 910 is squared to make a new image 930, which is the lightvector. These squared images are summed 940 and the square root of the sum is computed by transfer function 950 to produce output image 960. Optionally, the photoquantigraphic summation may be a weighted summation, in which case weights 935 may be adjusted as desired.
Suppose that each of the input images in Fig. 9 corresponded to a set of
pictures
that differed only in illumination, and that each of these pictures
corresponded to a
picture taken by the apparatus of Fig. 3 with the flashlamp in each of the
positions
depicted in Fig. 3. Then image 960 would have the visual appearance of an
image
that would have been taken if three flashes depicted in Fig. 3 were
simultaneously
activated at the three locations of Fig 3, rather than in sequence.
It should be noted that merely adding the images together will not produce the
desired result because the images do not record the quantity of light, but,
rather,
some compressed version of it.
Similarly, the example depicts a squaring, when in fact the actual inverse
function
needed for most cameras is closer to raising to an exponent between about
three
(cubing) and five (raising to the fifth power).
As stated above, inverse function 920 might cube the images, and forward
function
950 might extract the cube root of the sum, or in the case of a typical film
scanned
by PhotoCD, it has been found by experiment that the exponent of 4.22 for 920
and
(1/4.22) for 950 is satisfactory. Moreover, a more sophisticated transfer
function other
than simply raising images to exponents is often used when practicing the
invention
presented here. Typically the curves 920 are monotonic and also of monotonic
slope.
Lastly if and when cameras are made to directly support the art of this
invention,
these cameras would provide measurements linearly proportional to the quantity
of
light received, and therefore the images would themselves embody a lightvector
space
directly.
In general, the input images will typically be color pictures, and the notion of photoquantigraphic vectorspace implicit in Fig. 9a is replaced with that of photoquantigraphic modulespace. Typically a color camera involves the use of three separate color channels (red, green, and blue). Thus the inverse transfer functions 920 will apply a separate inverse transfer function to each of the three channels. In
practice, the three separate inverse transfer functions for a particular camera are quite similar, so it may be possible to apply a single inverse transfer function to each of the three color channels. Once these color inverse transfer functions are applied, then quantities of light 930 are color quantities, and weights 935 are colour weights. In general, weights 935 will be three by three matrices (e.g. matrices with nine elements). Thus instead of a single scalar constant as with greyscale images, there are nine scalar constants for color images. These constants 935 amount to a generalized color coordinate transformation, with scaling of each of the colour components. The resulting color quantities are then added together, where adder 940 is now a three channel adder. Forward transfer function 950 is also a color transfer function (e.g. comprises three scalar transfer functions, one for each channel). Output image 960 is thus a color image.
In some cases, it is preferable to completely ignore the color information in the original scene, while still producing a color output image. For example, a photoborg may wish to ignore color information present in some of the lightstrokes, while still imparting a strong color effect in the output. Such lightstrokes will be referred to as pseudocolor lightstrokes. An example of when such a lightstroke is useful is when shooting late at night, when one wishes to have a blue sky background in a picture, and the sky is not blue. For example, suppose that the sky is green, or greenish/reddish brown, as is typically the case for a night time sky. A color image of the sky is captured, and converted to greyscale. The greyscale image is converted back to color by repeating the same greyscale entry three times. In this way the file and data type is compatible with color images but contains no color information. Accordingly, it may be colorized as desired; in particular, a weighting causing it to appear in or affect only the blue channel of the output image may be made, notwithstanding the fact that there was little if any blue content in the original color image before it was converted to greyscale. An example in which two greyscale images are combined to produce a pseudocolor image is depicted in Fig. 9b.
Specifically, Fig. 9b depicts this variation of the photoquantigraphic modulespace in which the color coordinate transformation matrices 935 (of Fig. 9a) are:

\begin{bmatrix} w_R & 0 & 0 \\ 0 & w_G & 0 \\ 0 & 0 & w_B \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 0.299 & 0.587 & 0.114 \end{bmatrix} \qquad (3)

where the square matrix is formed by repeating the luminance row of the standard YIQ transformation three times. Thus it is clear that this matrix will destroy any color information present in the input image, yet still allow the output image to be colorful (by way of the ability to adjust weights wR, wG, and wB).
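A minimal sketch of this pseudocolor weighting (hypothetical names; the luminance row is the standard 0.299/0.587/0.114 row cited above):

    import numpy as np

    Y_ROW = np.array([0.299, 0.587, 0.114])   # luminance row of the YIQ transformation

    def pseudocolor_weight(wR, wG, wB):
        # Build the 3x3 matrix of equation (3): every row is the luminance
        # row, scaled by the desired output tint (wR, wG, wB).
        return np.diag([wR, wG, wB]) @ np.ones((3, 1)) @ Y_ROW[None, :]

    def apply_weight(q_rgb, W):
        # Apply a 3x3 color weight to an H x W x 3 lightvector.
        return q_rgb @ W.T

    # e.g. discard scene color and tint the result pure blue:
    # q_out = apply_weight(q_in, pseudocolor_weight(0.0, 0.0, 1.0))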
In Fig. 9b there is depicted a situation involving two input images, so the corresponding mathematical operation is that of a photoquantigraphic pseudocolor modulespace given by:

f\!\left( \begin{bmatrix} w_{R1} & 0 & 0 \\ 0 & w_{G1} & 0 \\ 0 & 0 & w_{B1} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 0.299 & 0.587 & 0.114 \end{bmatrix} f^{-1}(f_1) + \begin{bmatrix} w_{R2} & 0 & 0 \\ 0 & w_{G2} & 0 \\ 0 & 0 & w_{B2} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 0.299 & 0.587 & 0.114 \end{bmatrix} f^{-1}(f_2) \right) \qquad (4)
This mathematical operation can be simplified by just using a greyscale
camera.
In fact it is often desirable to use only a greyscale camera, and simply paint
the scene
with pseudocolor lightvectors. This strategy is particularly useful when the
scene
includes apartment buildings or dwellings, so that an infrared camera and
infrared
flashlamp may be used. In this way a colorful lightvector painting can be made
without awakening or disturbing residents. For example, it may be desired to
create
a colorful lightvector painting of an entire city, by walking down each street
with a
flashlamp and flashing light at the houses and buildings along the street,
without
disturbing the residents. In this situation, a satisfactory camera is the
Kodak (TM)
DCS-460 in which the sensor array is specially manufactured with no color
filters
over any of the pixel cells, and with no infrared rejection filter. Such a
specially
manufactured camera will have a tremendously increased sensitivity compared to
the standard DCS-460, allowing a small handheld flashlamp to illuminate a
large
building.
A satisfactory flashlamp is a specially modified Lumedyne 468 system in which
a
quartz infrared flashlamp is fitted, and in which an infrared filter is placed
over the
lamp head reflector. A satisfactory reflector is the Norman-2H sports
reflector which
will also fit on the Lumedyne lamp head. Preferably a cooling fan is installed in the
in the
reflector to dissipate excess heat buildup on account of the infrared filter
that makes
the lamp flashes invisible to the human eye.
In Fig. 9b, what is shown is two input images that have either already been converted to greyscale, or were greyscale already, on account of their being taken with a greyscale system, such as the infrared camera and flashlamp described above. These two greyscale input images are denoted fy1 and fy2 in Fig. 9b. The images then pass through the inverse transfer function denoted fy^{-1}, producing qy1 and qy2. These quantities contain no color information from the original scene. However, it is desired to colorize them into a color output image. Accordingly, quantity qy1 is spread out into three identical copies, each passing through weights wR1, wG1, and wB1. Similarly, quantity qy2 is spread out into three identical copies, each passing through weights wR2, wG2, and wB2. A total quantity of red is obtained at qR, a total quantity of green at qG, and a total quantity of blue at qB. These total quantities are then converted into a picture by passing them through three separate forward transfer functions f. In practice each of these three transfer functions is similar enough that they may be regarded as identical, but if desired, they may also be calculated independently if there is reason to believe that the camera compresses the dynamic range of its three color channels differently.
In a typical scenario, an image of a building interior may be taken with the
infrared
camera and infrared flashlamp described above. This interior may, for example,
be
the stairwell of an apartment building in which there are glass windows
showing the
stairs to the outside. It is desired to capture an expressive architectural
image of the
building.
A photoborg climbing the stairs flashes a burst of infrared light at each floor, to light up the inside stairs. The images arising from these bursts are captured by an infrared camera fixed outside. The camera is preferably fixed by a heavy cast iron surveyor's tripod registered on three stakes driven into the ground, or the like. After the photoborg has done each floor, the resulting images are photoquantigraphically averaged together as was shown in Fig. 9a. The photoquantigraphic average is the image fy1 depicted in Fig. 9b.
Then the photoborg leaves the building and illuminates the exterior. Again, an infrared flashlamp is used so as not to awaken or disturb residents of the building. A large number of exterior pictures are taken, while the photoborg walks around and illuminates the outside concrete structure of the building. These images of the exterior are photoquantigraphically averaged to obtain fy2 depicted in Fig. 9b.
In the case of a small building, a single shot may provide sufficient coverage and Signal to Noise Ratio (SNR), but often multiple shots are photoquantigraphically averaged as described.
Then the photoborg selects the weights. A common selection for the weights in the scenario described above is wR1 = 1, wG1 = 1, wB1 = 0, to give the building interior a welcoming yellow appearance, and wR2 = 0, wG2 = 0, wB2 = 1, to give the exterior a "midnight blue" appearance. Thus, although the camera captured no color information from the scene, a colorful expressive image, as might be printed on the cover of an architectural magazine using high quality color reproduction, may be produced.
The above scenario is not entirely ideal because it may be desired to mix color lightstrokes with pseudocolor lightstrokes in the same image. Accordingly, a more preferable scenario is depicted in Fig. 9c.
Fig. 9c depicts a simplified diagram showing only some of the steps involved
in
making a typical lightmodule painting.
The process begins by calculating one or more ambient lightmodules. This estimate is useful either for photoquantigraphically subtracting from each image
that will
later be taken, or simply to fill in a background level of detail. In the
latter case, the
ambient lightmodule typically comprises a daytime estimate multiplied by the
color
blue, added to a night time estimate multiplied by the color yellow, and added
to the
overall image in addition to the lightstrokes made with the photoborg's
flashlamp.
There may be more than one ambient lightvector, as indicated here (e.g. one
for
daytime lighting to create a blue sky in the final picture, and one for
nighttime lighting
to create yellow lights inside all the buildings in the picture). Sometimes
there are
hundreds of different ambient lightvectors computed as the sun passes through
the
sky, so that each time of day provides different shadow conditions from which
other
desired lightmodule spaces are computed.
In this simple example, it is assumed that only one ambient lightmodule is to
be
computed. This ambient lightmodule is typically computed as follows: A
photoborg
first issues a command from his WearComp (wearable computer) to the base
station
to instruct it to construct an estimate of the background ambient
illumination. The
computer at the base station directs the camera at the base station to acquire
a
variety of differently exposed pictures. In this simple example, sixteen
pictures are
captured at 1/2000th of a second shutter speed.
These pictures are stored in files with filenames ranging from v000.jpg to v015.jpg. Note that v000.jpg, etc., are not usually lightvectors until they pass through the camera's inverse transfer function, unless the camera already shoots in lightvectorspace (e.g. is a linearized camera). The signal v000 in Fig. 9c denotes the image stored in file v000.jpg, the signal v001 in Fig. 9c denotes the image stored in file v001.jpg, and so on. These sixteen images are photoquantigraphically averaged. By photoquantigraphic averaging, what is meant is that each is passed through an inverse transfer function, f^{-1}, to arrive at the quantities of light falling on the image sensor, and then these quantities are averaged. These values are denoted q000 through q015 in Fig. 9c. Each of these values may be stored in a double precision image array, although preferably the process is done pixelwise or in smaller blocks so that the amount of memory required in the base station computer is reduced. The average of these sixteen photoquantigraphic quantities is denoted v0-15 in Fig. 9c. It should be noted that average and sum are conceptually identical, and that the extra factor of division by 16 may be incorporated into the weight w0-15, to be described later.
Then the base station computer continues to instruct the camera to acquire sixteen pictures at a shutter speed of 1/250 sec. The picture signals associated with these sixteen pictures are denoted v016 through v031 in Fig. 9c. These signals are used to estimate the photoquantigraphic quantities q016 through q031. These photoquantigraphic signals are averaged together to arrive at lightmodule v16-31.
Then the base station computer continues to instruct the camera to acquire sixteen pictures at a shutter speed of 1/8 sec. The picture signals associated with these sixteen pictures are denoted v032 through v047 in Fig. 9c. These signals are used to estimate the photoquantigraphic quantities q032 through q047. These photoquantigraphic signals are averaged together to arrive at lightmodule v32-47.
The three lightmodules v0-15, v16-31, and v32-47 are further processed by weighting each of them in accordance with the shutter speeds. Thus v0-15 is multiplied by 2^3 = 8 (since 1/2000 sec admits one eighth the light of the 1/250 sec reference), while v32-47 is multiplied by 2^{-5} = 1/32. Lightmodule v16-31 is multiplied by 1 (e.g. it is left as it is, since it has been selected as the reference image).
In this way, all lightmodules are scaled according to the shutter speeds, so that each will be an equivalent estimate of the quantity of light arriving at the image sensor, except for the fact that quantization noise, and other forms of noise, and the like, will tend to cause the highlight detail of v0-15 to be best, while the shadow detail will be best captured by lightmodule v32-47.
This preference for highlight detail from v0-15, midtone detail from v16-31, and shadow detail from v32-47 is captured by certainty functions c0-15, c16-31, and c32-47, shown in Fig. 9c. After applying these certainty functions, a weighted summation is
made, to arrive at lightmodule signal v0, which is the estimate of the ambient light.
Lightmodule v0 is typically a double-precision 3 channel (color) array of the same dimensions as the input images. However, v0 is in photoquantigraphic units (which are neither irradiance nor illuminance, but, rather, are characterized by the spectral response in each of the three color bands over which they are taken).
Typically, lightmodule v0 is actually computed over many more exposure steps,
e.g. 256 pictures at every possible shutter speed the camera is capable of.
The gain
or sensitivity of the camera may also be varied under program control to
obtain far
greater range and far finer range than just three different exposure levels as
illustrated
in Fig. 9c. It should also be noted that generally the exposures begin with
the higher
shutter speeds and progress downwards, as this ordering has been found to
result in
far less saturation of the CCD sensor arrays or the like. Otherwise, it is
common
for entire rows or columns to white out on account of bright lights shining
into the
camera.
Each exposure, or at least some of the longer exposures, is tested for
whiteout,
to make sure that the exposure is intact. This test is denoted by, for
example, T032
in Fig. 9c, where T032 tests image v032 to make sure that the exposure was not
so
long that a complete row or column was white or washed out beyond the location
of
a bright source of light.
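Such a whiteout test might be sketched as follows (illustrative only; the saturation threshold and names are hypothetical):

    import numpy as np

    def whiteout_ok(img, sat=0.99, frac=1.0):
        # img:  2-D array (one channel) scaled to [0, 1]
        # sat:  hypothetical level above which a pixel counts as saturated
        # frac: fraction of a row/column that must be saturated to fail
        saturated = img >= sat
        bad_rows = saturated.mean(axis=1) >= frac
        bad_cols = saturated.mean(axis=0) >= frac
        return not (bad_rows.any() or bad_cols.any())

    # e.g. T032: retake or discard v032 if whiteout_ok(v032) is False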
After the ambient light quantity v0 is determined, control of the camera is returned
returned
to one or more photoborgs who can then select a portion of the scene or
objects in
view of the camera to illuminate. A photoborg generally illuminates the scene
with
a flashlamp or phlashlamp. A phlashlamp is a photoquantigraphic flashlamp, as
illustrated in Fig. 9d.
Fig. 9d shows the Medusa8 (TM) phlashlamp, which is made from eight ordinary flashlamps. A satisfactory configuration is made from eight of the most powerful Metz (TM) flashlamps mounted to a frame with grip handles M8G. Grips M8G are preferably smooth and easy to grab onto. A satisfactory material for handles M8G is cherry, or other hardwood. Grips M8G allow the photoborg operator to hold the bank of eight flashlamps and aim the entire bank of flashlamps at the subject matter of interest.
One of the grips M8G preferably contains a chording keyboard built into the grip, which allows the photoborg to type commands into a wearable computer system used together with the apparatus of Fig. 9d. When the photoborg has selected subject matter of interest, and aimed the phlashlamp at this subject matter, the photoborg issues an acquire lightmodule command. This command is transmitted to the base station computer, causing four pictures to be taken in rapid succession. Each of these pictures generates a sync pulse transmitted from the base station to the photoborg. There is contained in the phlashlamp a sequencer computer, M8C, which fires the four flashlamps designated M8F when the first synchronization pulse is received. Alternatively, the sequencing may be performed on the body-worn computer (WearComp) often worn by a photoborg. The sequencing computer M8C fires the two flashlamps M8T when the next sync pulse is received. It then fires the single flashlamp designated M8O when the third sync pulse is received. Finally it fires the flashlamp M8H at half power when the fourth sync pulse is received. In order for this sequencing to take place, the eight flashlamps are connected to the sequencing computer M8C by way of wires M8W leading from each of the "hot shoe" connectors M8HS ordinarily found on many flashlamps. Typically hot shoe connectors M8HS are located on the bottom of the flashlamp bodies M8B. The flashlamp bodies M8B can usually be folded to one side, so that the flashlamp heads M8F, M8T, M8O, and M8H can all be clustered together in approximately the same space.
In this way, the Medusa8 (TM) phlashlamp causes there to have been taken a plurality of pictures of different exposure levels. This plurality of pictures (in this case, four differently exposed pictures) is designated v049 through v052 in Fig 9c.
Alternatively (and often preferably), a phlashlamp comprises a single lamp
head,
with a single flashtube, which is flashed at a plurality of different levels
in rapid
succession, by, for example, rapidly switching differently sized capacitor
banks into
the system. In this way, all flashes of light come from exactly the same
direction, as
opposed to the Medusa8 approach in which flashes of light come from slightly
different
directions, owing to the fact that different flashtubes are being used.
A phlashlamp may be fired repeatedly as a photoborg walks around and illuminates different objects in the scene. Alternatively, several photoborgs carrying phlashlamps may illuminate the scene either at the same time (synchronized to each other), or may take turns firing their respective phlashlamps. For example, another object is selected by another photoborg, and this photoborg aims the phlashlamp at this other object, and another acquire lightmodule command is issued. Another four pictures are taken in rapid succession, and these are designated as v812 through v815 in Fig 9c.
Typically each photoborg is given a range of images, so, for example, photoborg 1 may have image space from v100 to v199, and photoborg 8 will have image filenames v800 to v899. Alternatively, the photoborg's UID and GID may be inserted automatically in each filename header, together with his or her heart rate, physical coordinates, etc., and any other information which may help to automate the process. In this way, the base station may also, through Intelligent Signal Processing, as described in Proc. IEEE, Vol. 86, No. 11, make an inference as to which lightmodules are most important or most interesting to each of the photoborgs.
In the situation depicted in Fig 9c, photoborg 1 has decided to cement his lightvector into the sum with weight w1, while photoborg 8 has selected weight w2. Additionally, photoborg 8 has decided to cement his contribution into the sum as a greyscale image but with color weight w2. As will be seen, although w2 affects the color of the lightmodule as it appears in the final Output Image, no color information from v2 gets to the Output Image.
The lightmodule from photoborg 1 is computed automatically by setting wf = 1/4, wt = 1/2, wo = 1, and wh = 2. In this way, the four-flash image is scaled down four times, the two-flash image down two, and the half-power-flash image up two, so that all four photoquantigraphic estimates q100 to q103 are brought into tonal register. Then certainty functions are applied. The four-flash certainty function, cf, weights the darker pixels in q100 more heavily. The two-flash certainty function ct weights the darker midtones most heavily, while the one-flash certainty function co weights the brighter midtones most heavily. The half-power-flash certainty function ch weights the highlights (brightest areas of the scene) most heavily. The result is that the weighted sum of these four inputs gives lightmodule v1. Lightmodule v1 then continues on toward the total photoquantigraphic sum, with a weighting w1 selected by photoborg 1.
In the situation in which a pseudocolor lightmodule is desired, as is illustrated in Fig. 9c with q812 through q815, the color lightmodule v2 is computed just as in the above case, but instead, this lightmodule is converted to greyscale and typecast back to color again as follows: Lightmodule v2 passes through color separator CS and is broken down into separate Red (R), Green (G), and Blue (B) channels. Each of these has a weight associated with it. The weights are designated wR, wG, and wB in Fig. 9c. The default weights are those of the standard YIQ transformation if none are specified by the photoborg.
Color depth is often expressed in bits per pixel, e.g. 8 bit precision is
often
referred to as "24 bit color" (meaning 24 bits total over all three channels).
Likewise,
a double precision variable (e.g. a REAL*8 variable, in standard IEEE floating
point
arithmetic), occupies 64 bits for each of red, green, and blue channels, and
is thus
designated as 192 bit color. Hence the designations in Fig 9c showing where
the
signals have a color depth of 192 bits. After the color separator CS, the
signals in
each channel have a depth of 64 bits (greyscale), passing through the weights.
After
passing through the weights, certainty functions are computed based on
exposure in
each color band. Thus, for example, if the red channel is overexposed as is
often the
case where tungsten lights are concerned, then the highlight details can come
from
the blue channel. Photoborg 8 may also deliberately use a colored gel over a flashlamp in addition to, or instead of, using a plurality of flashlamps as in a phlashlamp. For example, a red gel over an ordinary flashlamp with a deliberate overexposure will conveniently overexpose the red channel. Typically the blue channel will be underexposed. The green channel will typically fall somewhere in between.
Accordingly, certainty functions cR, cG, and cB will often help extend the dynamic range of the greyscale image through the process of deriving a greyscale image from a color image. A weighted sum, including weighting by these certainty functions, is produced at v2y, which is still a 64 bits per pixel greyscale image. This image is replicated three times and supplied to color combiner CC. Ordinarily a color combiner will take separate R, G, and B inputs and combine them into a single color image. However, when fed with three identical inputs, color combiner CC simply converts the greyscale image into a datatype that is compatible with color images. The result, v2yc, is a 192 bits per pixel greyscale image in a format in which the subject matter is simply repeated three times. This signal may now be passed through weight w2, where it may be assigned to the final output image. Weight w2 might, for example, be (5, 5, 0), in which case lightmodule v2yc will appear as yellow in the final image, with a strength five times the default strength.
Ordinarily images are produced in 24 bit Red Green Blue (RGB), and converted to 32 bit CMYK (Cyan, Magenta, Yellow, blacK) for printing. However, if printing is desired, it will be advantageous to do the conversion to CMYK in lightspace prior to converting back to a 32 bit CMYK picture. Accordingly, the three lightmodules v0, v1, and v2yc are weighted as desired (these final weights are selected for the desired visual effect), a weighted sum is taken in 192 bit color, and converted to 256 bit CMYK colorspace by the block denoted RGBtoCMYK in Fig. 9c.
Ordinarily there is some color shift in conversion from RGB to CMYK, and most conversion programs are optimized for mid-key, e.g. fleshtones, or the like. However, a feature of the images produced by the apparatus of the invention is that much of the image content exists at extremes of the color gamut, so it is desirable, when converting to CMYK colorspace, that the resulting image stretch out toward the different boundaries of the CMYK space. The CMYK space is quite different from RGB space in the sense that there are colors that can be obtained in RGB that cannot be obtained in CMYK, and vice-versa. However, what is desired is an image that hits the edges of whatever colorspace it is to exist in. Most notably, color hue fidelity is typically less important than simply the fact that the image should touch the boundaries of the colorspace. Thus it will typically be desired to convert from RGB to LAB, HSV, or HSL space, then increase the saturation, and then convert to CMYK, where colors will be clipped off for being out of gamut.
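As an illustrative sketch only (the HSV route is one of the three options named above, the naive black-generation CMYK conversion stands in for a real profile-based one, and all names are hypothetical):

    from colorsys import rgb_to_hsv, hsv_to_rgb

    def boost_then_cmyk(r, g, b, boost=1.5):
        # Boost saturation in HSV, then convert one RGB pixel to CMYK,
        # letting out-of-gamut values clip at the gamut boundary.
        h, s, v = rgb_to_hsv(r, g, b)
        r, g, b = hsv_to_rgb(h, min(1.0, s * boost), v)
        k = 1.0 - max(r, g, b)                # naive, profile-free conversion
        if k >= 1.0:
            return 0.0, 0.0, 0.0, 1.0
        c = (1.0 - r - k) / (1.0 - k)
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
        return c, m, y, k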
Ironically, it is preferable that colors be clipped off as being out of gamut, rather than having them fall within the color gamut boundaries after having been previously clipped in the old colorspace. Thus the block denoted SAT in Fig. 9c will in fact use the output of block GBM (Gamut Boundary Manager), which detects where the colors were originally at the boundaries of the RGB colorspace. In this way, block SAT will adjust the CMYK input in accordance with where the RGB signals were at their extrema and ensure that none of these colors get mapped to the interior of the CMYK space. The block denoted SAT performs an optimization in 256 bit lightspace and attempts to map any images that were clipped to a clipped part of the new gamut.
Fig 9f depicts a color coordinate transformation from domain DOM to range RAN, where the domain DOM may, for example, be an RGB colorspace, or a higher dimensional colorspace as might be captured using a filter wheel over a camera. The range RAN is typically a CMYK colorspace suitable for normal printing, or another colorspace, such as the Hexachrome (TM) colorspace described in U.S. Pat. No. 5,734,800, "Six-color process system", filed Nov. 29, 1994, invented by Richard Herbert and Al DiBernardo, and assigned to Pantone, Inc.
Ordinarily the two colorspaces have different color gamuts, so that there will be some colors in the domain DOM that will get clipped (distorted) when converted to the range RAN. Colors that are clipped are denoted CL in Fig. 9f.
Conversely, there will also be colors BC that are not necessarily distorted by the colorspace conversion, but were at the gamut boundaries in the domain DOM and exist inside the boundaries in the range RAN. Consider two colors in lightspace, BC1 and BC2, where BC1 is just at the boundary of the domain DOM, while BC2 is beyond the domain boundary DOM. The camera will map both of these colors to BC1, since BC2 is beyond its gamut. For example, both may be bright blue, and both may get mapped to RGB = (0, 0, 1). However, in the conversion to the range RAN, both will appear within the boundary of what the new colorspace could achieve.
Colors BC1 and BC2 represent single pixels, in isolation. In practice, however, it is evident from a picture, by the context of surrounding pixels, when a region of the picture goes beyond the gamut of the colorspace. For example, an extremely bright red light in a picture will often register as white, then have a yellow halo around it, and then bloom out to red further out. When such an image is rendered anywhere other than at the boundary of the new colorspace RAN, the appearance is not visually appealing, and will be referred to as "brightgrey". The term brightgrey denotes colors that should be bright but register as greyish, for example, colors that were bright and vibrant in DOM, but may appear greyish in RAN, especially when RAN is a CMYK colorspace or the like. For example, a bright magenta in RGB may register as a dull greyish magenta in CMYK, even though the color is not distorted. In fact it is the very fact that the color is not distorted that is the problem; e.g., since CMYK is capable of producing a very strong magenta, there is a perception of the magenta being weak when it is faithfully reproduced in CMYK. Instead of faithfully reproducing it in CMYK, it is preferable, within the context of the invention, to distort the magenta from its original RGB value to something that is much stronger than it was in RGB. Typically this may be done by intensifying the magenta and
reducing the amount of cyan, or the like, that might be causing the magenta to appear brightgrey. (Cyan and black tend to darken certain colors.)
When the camera is a lightspace camera, e.g. one that implements a Wyckoff effect, or is otherwise based on a plurality of differently exposed images, it is possible to determine the actual quantity of light arriving in each of the three color spectral bands, and therefore it is possible to identify colors that are outside the RGB colorspace one would ordinarily have for taking a picture.
When these colors would be further distorted by clipping, in conversion to the new colorspace, the appearance is not so bad as when they would fall in the interior of the new colorspace, so the emphasis of this invention is to address the brightgrey colors (colors denoted BC, or BC2 having been clipped to BC1 and then existing in the interior of RAN).
Most notably, there are two ways, within the context of the present invention, to obtain a vibrantly colored lightmodule painting:
• use a plurality of input images, preferably differing only in exposure, to calculate each lightvector, and then do all calculations and colorspace conversions in lightspace, prior to converting back to an image by applying a pointwise nonlinearity, f;
• accept the fact that incoming lightstrokes will have been limited by domain DOM, and attempt to stretch them out in colorspace so that regions such as BC1 will be stretched out further toward the boundaries of the range RAN (e.g. BC1 would move out toward BC2 or beyond).
It is understood that this second method will involve some distortion of the colors, and it is understood that this distortion is acceptable because often the apparatus of the invention is used to create expressive lightmodule paintings in which colors are not natural to begin with.
Fig. 9c includes the saturation booster SAT and the Gamut Boundary Manager GBM. The effect of the SAT block with the GBM input is to ensure that, for example, a portion of the image that a photoborg deliberately overexposed by 12 f-stops and then mapped through a dark blue filter weight, e.g. w1 = (0, 0, 2^{-12}), will not come out with a greying effect in the CMYK space. It is not uncommon to deliberately overexpose by a dozen or so f-stops when using a dark blue (e.g. pure blue as in RGB = (0, 0, 1)) filter. Ordinarily such an image is shot overexposed in order to deliberately blow away any appreciable detail. Thus a textured door, or rough wall, will have an appearance as if a blob of deep blue paint were splashed on the image to obliterate any detail. Such an image creates the visual percept of something that is extremely bright. Thus should it land anywhere but at the outer edge of the CMYK gamut, it will create a very unsightly appearance. This appearance is hard to describe, other than by saying it looks "bright bluish grey". Obviously such a bright splotch of lightmodule paint should not be printed in any way that contains grey (e.g. contains black ink in CMYK). Thus SAT together with GBM must ensure, at all costs, that such a color maps to something at the outer boundary of CMYK space, even if it means that the hue must be shifted. Indeed, it is preferable that the hue does shift. For example, it would be preferable that the blue be shifted to pure cyan, rather than risk having it fall anywhere but at the extreme outer boundary of CMYK space.
It is understood and expected that additional information will be lost when converting to CMYK. In fact, it is the very fact that prior art methods of converting from RGB to CMYK try to preserve information that leads to this problem. Thus an important aspect of the present invention is a means of converting from RGB to CMYK where hue fidelity is of relatively little importance, and where maintaining detail in the image is of relatively little importance compared to the importance of maintaining extremely bright vibrant colors.
Once the image has been adjusted in 256 bit CMYK lightspace, so that all colors that were bright and vibrant in the input image are also bright and vibrant in the CMYK lightspace (even if it was necessary to distort their hues, or destroy large amounts of highlight detail to do so), the lightspace is passed through a nonlinearity f which compresses its dynamic range. The nonlinearity f may be the forward transfer function of the camera itself, or some other desired transfer function that compresses the dynamic range of the image in a slightly different way. After passing through f, the result is quantized to 32 bit color, so that it can be saved in a standard CMYK file format, such as TIFF. In this way, it can be sent either to a digital press, such as a Heidelberg digital press, or it can be used to make four color separation films which in turn can be used to prepare four metal plates for a traditional printing press. The resulting image will thus have very rich vibrant colors and exhibit no noticeable quantization contouring (e.g. have no solarized appearance or contour line appearance). Typically the resulting images, when printed on a high quality press, such as is used for a magazine cover, will have a much richer tonal range, and much better color, than is possible with photographic film, because of the capabilities of the lightspace processing of the invention.
In practice, only some of the lightstrokes are offenders, contributing to or containing brightgrey portions. Accordingly, it is preferable to alter only the offending lightvectors, or to alter the worst offenders more severely. Accordingly, Fig. 9e depicts a plurality of quantities of light arriving from a cybernetic photography system. These quantities are denoted qRGB0, qRGB1, . . . qRGBN and are linearly proportional, in each of a plurality of spectral bands (typically at least three spectral bands, such as red, green, and blue), to the scene radiance integrated with the spectral response in each of these spectral bands. Such quantities are referred to as photoquantigraphic.
These quantities, qRGB0, qRGB1, . . . qRGBN, may be arrived at by applying the inverse camera response function, f^{-1}, to each of a plurality of pictures, or alternatively, a photoquantigraphic camera may be constructed, in which the output of the camera is in these photoquantigraphic units.
Each of these photoquantigraphs is typically due to a different source of light; e.g. qRGB0 might be a photoquantigraph taken in the daytime, qRGB1 a long exposure photoquantigraph taken at night, and qRGBN taken with a flashlamp over a short (e.g. 1/500 sec) exposure.
Each photoquantigraph is first converted immediately to CMYK. In Fig. 9e, the conversion from qRGB0 to CMYK is denoted by CMYK0, and the result of the conversion is denoted by qCMYK0; the conversion from qRGB1 to CMYK is denoted by CMYK1, and the result of the conversion is denoted by qCMYK1; . . . the conversion from qRGBN to CMYK is denoted by CMYKN, and the result of the conversion is denoted by qCMYKN. Typically CMYK0, CMYK1, . . . CMYKN are identical conversion processes even though the input photoquantigraphs (and hence the outputs) are typically different. Each of these conversion processes is done independently in such a way as to minimize brightgrey, and thus contribute to a vibrant lightmodule painting. Thus qCMYK0 is generated from qRGB0 by also looking at the gamut boundaries. Gamut boundary manager 0, denoted GBM0, looks at the gamut boundaries of qRGB0, with particular emphasis on where the gamut boundaries are reached in the domain colorspace but not the range. Thus GBM0 controls SAT0 to resaturate and expand the gamut of qCMYK0, as well as deliberately distort the hue, and deliberately truncate highlight detail as needed to boost the brightgrey regions out to the edges of the new CMYK gamut. Similarly, GBM1 controls SAT1 to resaturate and expand the gamut of qCMYK1, . . . and GBMN controls SATN to resaturate and expand the gamut of qCMYKN. The gamut boundary managers and saturation boosters are also responsive to the overall photoquantigraphic sum, e.g. to the sum in lightspace after it has passed through a forward transfer function, fCMYK. The forward transfer function fCMYK is semimonotonically increasing, but is level or concave down (has nonpositive second derivative) in each of the four C, M, Y, and K channels.
Fig. 10a depicts a photoquantigraphic filter, which will be referred to as a "philter". A philter is obtained by first computing the photoquantigraphic quantity 1030, denoted q, from the input image 1010, by way of applying the inverse response function of the camera 1020. Then an ordinary filter, such as a linear time invariant filter, 1040 is applied. The result, 1045, is passed through camera response function 1050, to produce output image 1060.
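An illustrative sketch of a philter (hypothetical names; a Gaussian blur stands in for the ordinary linear time invariant filter 1040, and the generic cube law stands in for the camera response):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def philter(image, sigma=2.0, gamma=3.0):
        # Filter in lightspace: inverse response, ordinary filter, response.
        q = image ** gamma                      # 1020: inverse response f^-1
        q_filtered = gaussian_filter(q, sigma)  # 1040: ordinary LTI filter
        return np.clip(q_filtered, 0.0, 1.0) ** (1.0 / gamma)  # 1050: f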
Fig. 10b depicts an implementation of split diffusion in lightvectorspace. Split diffusion is useful because it is often desired that some lightvectors will contribute in a blurry fashion while others will contribute in a sharper fashion. More generally, it is often desirable that there will be different filters associated with different lightvectors or sets of lightvectors.
Referring to Fig. 10b, one or more lightvectors to be blurred, 1031, are passed through blurring filter 1040. It is assumed that lightvector(s) 1031 is (are) already in lightvectorspace (e.g. already passed through an inverse transfer function, or taken with a camera that outputs in lightspace).
The output of blurring filter 1040 is added to one or more other lightvectors by adder 1042, and the result is passed through a semimonotonic function of nonpositive second derivative 1050, giving an output image 1060.
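Split diffusion might then be sketched as follows (illustrative only, reusing the hypothetical conventions of the philter sketch above; both input lists are assumed non-empty and already in lightvectorspace):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_diffusion(q_blurry, q_sharp, sigma=4.0, gamma=3.0):
        # q_blurry: lightvectors to contribute in a blurry fashion (1031)
        # q_sharp:  lightvectors to contribute sharply
        total = sum(gaussian_filter(q, sigma) for q in q_blurry)  # filter 1040
        total = total + sum(q_sharp)                              # adder 1042
        return np.clip(total, 0.0, 1.0) ** (1.0 / gamma)          # 1050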
A drawback of traditional image editing, where a feather radius is used to move pieces of an image from one place to another, is that there is a "brightgrey" effect when a bright area overlaps with a dark area. Typically, the bright area gets added to the dark area in regions of overlap, and what results is a grey area with clipped highlight details. Clipping is visually acceptable at the upper boundary of greyvalues, but when values are clipped and then reduced from white to grey, the appearance of bright lights or the like (such as in a region of the picture where bright lights were shining into the camera) is unacceptable when mixed with dark areas. For example, a "brightgrey" rendering of a portion of the scene where there was a bare lightbulb blasting light into the camera at very high intensity is not acceptable.
Accordingly, the notion of philters may be used to build up a complete image processing toolkit, which might, for example, include image editing tools, etc. These tools can all be adapted to lightspace, so that, for example, during image editing, the selection of a region may be done in lightvectorspace, so that if a feather radius is used, the feathering happens in lightvectorspace. Such a feathering operation will be referred to as a pheathering operation.
The pheathering operation is depicted in Fig. 10c. Here the philter operation is denoted 1041, and comprises the editing of the image in lightvectorspace.
When filtering operations, editing operations, or split diffusion operations are tonally drastic, e.g. when one wishes to perform strong sharpening or blurring operations on images, often the effects of limited dynamic range become evident. In this case, it is preferable that the inputs have extended dynamic range. Accordingly, Fig. 10d shows an example of a set of collinear lightvectors 1010, 1011, and 1012 which are processed by Wyckoff Principle block 1025, which implements the Wyckoff Principle as described in U.S. Pat. No. 5,828,793.
The result, qTOT, denoted 1031 in the figure, is passed through the filter 1040. Since filter 1040 is operating in lightvectorspace, it is a philter. The result is then converted to an image with semimonotonic function of nonpositive second derivative 1050, giving an output image 1060.
Fig. 11 gives an illustration of the Wyckoff principle, and in particular, the fact that taking a plurality of differently exposed pictures gives rise to a decomposition of the light falling on the image sensor into a plurality of collinear lightvectors, denoted by W1, W2, and W3 in this figure.
Typically when practicing the invention, very strong deliberate overexposure is used for at least some of the lightvectors (in greyscale images) or lightmodules (in color images). For example, a photoborg may deliberately overexpose a section of the image and then apply a very strong color, such as pure red or pure blue, to this lightvector. Portions spilling over into the adjacent color channels will thus be moderately exposed, or underexposed. Thus there is an inverse Wyckoff effect in the rendering of the lightvector into the sum. Accordingly, Fig. 12a shows this inverse Wyckoff effect in which a Wyckoff set is captured, passed through a combiner (synthesis), to generate a
Composite image. The Composite image is then split up. This split up is
inherent in
the use of a strongly colored lightmodule coefficient. For example, using a
bright red
lightmodule coefficient of RGB = (0.01, 0.1, 1.0~ will result; in an
approximation to the
Wyckoff effect in the output, in which the blue channel will contain an
underexposed
version of the image that will show much of the highlight detail in the image,
as
denoted WBLUE in Fig. 12a. Similarly, the green channel of the output image
may
be moderately exposed, as denoted WGREEN in Fig. 12a. Finally, the red channel
of the output image will be overexposed, since the lightmodule coefficient was
bright
red. This overexposed channel is denoted WRED in Fig. 12a.
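By way of a hypothetical sketch, this inverse Wyckoff splitting can be reproduced by scaling one extended-range lightvector q by the lightmodule coefficient channel by channel and clipping to the display range; the coefficient components are ordered here so that red receives the largest weight, consistent with the red channel being the overexposed one:

    import numpy as np

    def apply_lightmodule(q, coeff=(1.0, 0.1, 0.01)):
        """Scale one extended-range lightvector q by a strongly colored
        lightmodule coefficient, channel by channel, then clip to the
        display range; the clipped channels form an inverse Wyckoff set
        (overexposed WRED, moderate WGREEN, underexposed WBLUE)."""
        q = np.asarray(q, dtype=float)
        out = np.empty(q.shape + (3,))
        for c, k in enumerate(coeff):
            out[..., c] = np.clip((k * q) ** (1.0 / 2.2), 0.0, 1.0)
        return out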
Fig. 12b illustrates how this effect happens in a real filter. A red filter
allows red
light, R, to pass through with very little attenuation. Green light, G,
experiences
greater attenuation. In the case of a pure red filter, typically blue light,
B, will
experience even more attenuation.
Fig. 12c illustrates how the Wyckoff effect and inverse Wyckoff effect operate
in
the context of lightspace rendering. Suppose that the portion of the subject
matter
being imaged is white, and we are imaging it through a red filter.
A ray of weak white light, W1, will pass through the filter and emerge as red,
since the filter is a red filter.
A ray of strong white light, W2, will pass through the filter and emerge as
yellow,
Yo, since enough green light will get through to produce an appreciable amount
of
green exposure, and the green and red together form yellow. The yellow output
Yo will
likely have a red halo around it, as blooming into adjacent pixels or sensor
elements
will be weaker than the central beam, and will thus only expose the red
channel.
A ray of really strong white light, W3, will pass through the filter and
emerge as
white, Wo, since it will be strong enough to saturate all three spectral bands
of the
sensor (assuming a three band RGB sensor). Although the red component is
stronger
than the others, all components are strong enough to saturate the respective
sensors
to their maximum value. The white output Wo will likely have a yellow halo
around
it, and, further out, a red halo, as light spilling over to other adjacent
pixels or sensor
elements will be weaker than the central beam, and will create behaviour
similar to
that of Yo further out, and Ro still further out.
It will be understood that to render this kind of effect, it will not be
sufficient
to just have a normal picture and computationally apply a red virtual filter
to it in
lightmodulespace, but, rather, it will be preferable to capture a picture of
extremely
broad dynamic range so that this inverse Wyckoff effect can be synthesized,
resulting
in a natural looking image in which the red channel is extremely overexposed,
the
green channel is moderately exposed, and the blue channel is possibly
underexposed.
Such a picture will appear white in areas of overexposure, yellow in areas of
moderate
exposure, and red in areas of weaker exposure (and of course dark red or black
in
areas of still weaker exposure).
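The behaviour of rays W1, W2, and W3 can also be worked numerically; the transmission values below are hypothetical, chosen only to exhibit the red, yellow, and white outputs Ro, Yo, and Wo described above:

    import numpy as np

    # hypothetical red-filter transmissions for the (R, G, B) bands
    T = np.array([0.9, 0.05, 0.005])

    def sensor(strength):
        """Per-channel exposure of a three-band RGB sensor, saturating at 1.0."""
        return np.minimum(strength * T, 1.0)

    print("W1", sensor(1.0))     # (0.90, 0.05, 0.005): red output, Ro
    print("W2", sensor(20.0))    # (1.0, 1.0, 0.10):    red and green saturate: yellow, Yo
    print("W3", sensor(400.0))   # (1.0, 1.0, 1.0):     all bands saturate: white, Wo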
Accordingly, an important aspect of the invention is photorendering in
lightmod-
ulespace, with lightmodules being derived from a phlashlamp or the like.
Another
important aspect of the invention is the application of various philters to
lightvectors
of extended response.
Fig. 13a depicts an EyeTap (TM) flashlamp. The EyeTap flashlamp produces
rays of light that effectively emanate from an eye of a photoborg. Light
source 1310
is collimated with optics 1320, and aimed at diverter 1340. Diverter 1340 is
typically
a mirror or beamsplitter. Diverter 1340 creates an image of light source 1310
as if it
originated from an eye 1331 of a photoborg. Preferably the light effectively
originates
from the center of the lens 1330 of the eye 1331.
Optionally, an aiming aid 1350 reflects off the back of beamsplitter or mirror
1340.
If 1340 is a mirror, it should be a two-sided mirror if there is an aiming aid
1350.
Aiming aid 1350 may be an aremac, projector, television, or the like, which
serves as
a viewfinder for aiming the EyeTap flashlamp apparatus of the invention.
Fig. 13b depicts a wide-angle embodiment in which eye 1331 is a right eye, so
optional aiming aid 1350 can extend behind the eye, to the right side of the
face of a
photoborg using the apparatus of the invention.
Fig. 14a depicts an EyeTap (TM) camera system. An EyeTap camera system
provides a camera with effective center of projection coincident with the center of
center of
the lens 1330 of an eye 1331 of the user of the EyeTap camera system.
Preferably the
EyeTap camera system is wearable.
Rays of light from subject matter 1300 are diverted by diverter 1340, and pass
through optics 1313, to form an image on sensor array 1311, which is connected
to a
camera control unit (CCU) 1312. Preferably diverter 1340 is a beamsplitter so
that
it does not appreciably obstruct the vision of the user of the apparatus.
Optionally,
optics 1313 may be controlled by focus control unit (FCU) 1314.
The EyeTap camera system, in some embodiments, may include a second similar
apparatus for a second eye of the user. In this way, a binocular video signal
may be
captured, depicting exactly what the user sees.
The image from the EyeTap camera system may be transmitted as live video to a
remote manager so that she can experience what the user experiences. Typically
the
user is a photoborg, who may also communicate with a remote manager.
Optionally the EyeTap camera system may also include a display means which
may show the output of a remote fixed camera at a remote base station.
Fig. 14b depicts an alternate embodiment of the EyeTap camera system. A curved
diverter 1341 serves also as at least part of the image forming optics. A
satisfactory
curved diverter is a toroidal mirror, which forms an image on sensor array
1311
without the need for a separate lens, or with only a small correction lens
needed.
Typically, diverter 1341 forms an image with considerable distortion.
Distortion is acceptable, so long as the image is sharp. Distortion is
acceptable
because CCU 1312 is connected to a coordinate transformation means 1315 which
corrects for the distortion. Thus output 1316 is free of distortion
notwithstanding
distortion that may have been introduced by the use of a curved diverter.
Preferably
the diverter and sensor array are aligned in such a way as to meet the EyeTap
criterion
in which the effective location of the camera is the eye 1331 of the user, as
closely as
possible. The effective center of projection of the camera should match
closely with
the location of the center of the lens 1330 of the eye 1331. This embodiment
of the
EyeTap camera can be made with reduced size and weight, and reduced cost.
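As a sketch of one possible coordinate transformation means 1315 (the true mapping for a toroidal diverter would be obtained by calibration; the radial model and coefficient k1 here are merely illustrative), the correction can be performed by inverse-mapping each output coordinate into the distorted sensor image and resampling:

    import numpy as np

    def undistort(img, k1=-0.3):
        """Correct a radially distorted image by inverse coordinate mapping.
        k1 is a hypothetical distortion coefficient."""
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # normalized coordinates about the image centre
        x = (xx - w / 2) / (w / 2)
        y = (yy - h / 2) / (h / 2)
        r2 = x * x + y * y
        # for each undistorted output pixel, find its source in the distorted image
        xs = x * (1 + k1 * r2)
        ys = y * (1 + k1 * r2)
        src_x = np.clip((xs * w / 2 + w / 2).round().astype(int), 0, w - 1)
        src_y = np.clip((ys * h / 2 + h / 2).round().astype(int), 0, h - 1)
        return img[src_y, src_x]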
Eyewear in which the apparatus of the invention may be built is preferably protective of the eyes against excessive exposure to light, as might happen if a
flashlamp of
the invention is fired into the eyes of a photoborg, especially when large
flashlamps
are used to light up tall skyscrapers in a large cityscape. Accordingly,
eyewear should
incorporate an automatic darkening feature such as is typical of the
quickshade (TM)
welding glasses, or Crystal Eyes (TM) 3-D glasses. An automatic darkening
feature
can either be synched to the flashlamps, or be triggered by photocells or a
wearable
camera that is part of certain embodiments of the invention.
Fig. 15 shows an embodiment of a finder light or hiding light. The finder
light is
used to find where the camera is located, particularly when shooting a large
cityscape,
where the camera might, for example, be located on the roof of a building a
few
hundred meters away. In this case, a light source 1510 may be remotely
activated.
Together with optics 1520 and field of view limiter 1521, a very bright light
is produced
by rays 1511, which partially pass through a 45 deg. beamsplitter and are
wasted
as rays 1512, and partially reflected as rays 1513. Alternatively, the light
source
may be placed next to the camera and facing in the same direction, if the
losses of
a beamsplitter are unacceptable. A satisfactory light source is a 1000 watt
halogen
lamp, or arc lamp, which can be detected from among other lights in a large
city by
way of the fact that a photoborg has remote control of it. Alternatively, lamp
1510
may be a flashlamp that the photoborg can remotely flash, in which case it is
also
quite visible from a great distance, notwithstanding other bright lights in
an urban
setting.
In addition to helping to find the camera, the finder light can also be used
to
determine if one is within its field of coverage. For this purpose, camera
1500 has
the same field of view as the light source, so that one can make this
determination.
In some embodiments, barrier 1521 is a colored filter, so that the light
appears a
different color when one is within the field of view of the camera, but can
still be seen
when one is outside the camera's field of view.
At close range, the light is strong enough to light up the scene, and thus also functions as a worklight so that a photoborg can see where he or she is going.
Preferably,
in this use, another worklight off axis is used so that the camera finder
light is not
on continuously enough to attract insects toward the camera, causing
degradation of
the image in the time immediately following the shutting off of light 1510.
As a hiding light, light 1510 can be illuminated and a photoborg can also see
if
he or she is casting a visible shadow. A visible shadow indicates that he or
she does
not blend into the background, assuming black clothing which would otherwise blend with a long-range open space behind the photoborg; such a state is readily made visible by the finder light at close range.
Fig. 16 shows the lightsweep apparatus of the invention. A row of lamps (as few as 5 or 7 lamps, but preferably more, such as 16 or 32 lamps) is sequenced as it is moved through space, during which time the camera's shutter is either held open in a long exposure, or the camera rapidly acquires multiple exposures which are later photoquantigraphically summed. During this time, the lamps on frame 1600 are
turned on and off. In the figure, the letter "A" has just been drawn in mid-
air by the
device, and lamp 1601 is still on, while lamp 1607 has turned off. The path of
frame
1600 through space leaves behind a ribbon of light in the photograph. For
example,
element 1610 persists even though the frame is no longer there.
Typically the device is used with graphics rather than text. For example, a circle
circle
may be drawn using sin and cos lookup tables. A solid filled-in circle is
often drawn
in mid-air, often not directly into the camera, but, instead, pointing away
from the
camera so that it is only seen indirectly through its effect of illumination. In
this way,
frame 1600 can be used to synthesize any arbitrary shape of light, such as a
softbox
in mid air (if a rectangle is chosen), or a light more like an umbrella
diffuser if a circle
is chosen.
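One hypothetical way to drive frame 1600 is to derive the per-step lamp patterns from a binary bitmap of the desired figure, each column of the bitmap being emitted in turn as the frame sweeps; the names below do not appear in the disclosure:

    import numpy as np

    def sweep_sequence(bitmap):
        """Convert a binary bitmap (rows = lamps on frame 1600, columns = time
        steps of the sweep) into the successive lamp on/off patterns."""
        return [bitmap[:, t].astype(bool) for t in range(bitmap.shape[1])]

    # e.g. a solid filled-in circle, drawn in mid-air as the frame is swept
    n = 16                                     # 16 lamps on the frame
    y, t = np.mgrid[0:n, 0:n]
    circle = ((y - n / 2) ** 2 + (t - n / 2) ** 2) <= (n / 3) ** 2
    for pattern in sweep_sequence(circle):
        pass                                   # drive the 16 lamps with `pattern`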
Rather than program the shape of light a priori, it is sometimes preferable to
simply sequence the lamps while recording the scene at video frame rates, and
then
use photoquantigraphic weighted summation, setting weights to zero to achieve
the
equivalent effect of turning off certain lights.
Fig. 17a shows a lamp sequencer of the invention, in which processor 1700 cap-
tures images from camera 1500 while it controls a sequence of lamps 1701,
1702, 1703,
. . . . Subject matter 1720 may be a person or include people, in which case
lamps
1701, 1702, 1703, . . . are preferably flashlamps and camera 1500 is
preferably a high-
speed video camera, or subject matter 1720 may be a still life scene in which
case
lamps 1701, 1702, 1703, . . . may be ordinary tungsten lamps or the like, and
camera
1500 an ordinary digital still camera or the like.
In the former case, wires 1730 are flash sync cables, while in the latter case, wires
1730 may be the actual power cords for the lamps. In either case, no
preparation
of lamps is needed and ordinary lamps may be used. Thus the innovation of the
invention is in processor 1700 which may include a computer controlled
multichannel
light dimmer, or a flash sequencer.
In the situation illustrated here, five pictures of the same subject matter
are
captured, and in each of the five pictures, the subject matter is differently
illuminated.
These five pictures are then passed to a lightspace rendering program which
allows
for the generation of a lightmodule painting. Typically in a studio setting,
there are
preferred default settings for the lightvectors. For example, the lightmodule
weight
for the picture corresponding to lamp 1703 is typically set to blue, and
split diffusion
is used to run it through a photoquantigraphic blurring filter prior to the
computation
of a photoquantigraphic sum.
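A sketch of such a photoquantigraphic weighted summation, with per-lightvector colour weights and photoquantigraphic blurring, might read as follows (assuming linear photoquantity inputs; setting a weight to zero turns a lamp off, as described above; the names are illustrative only):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def render(q_list, colors, blur_sigmas):
        """Photoquantigraphic lightmodule painting: a weighted sum, taken in
        lightvectorspace, of differently illuminated pictures of one scene.

        q_list:      linear photoquantity arrays, one per lamp
        colors:      (R, G, B) lightmodule weight per lightvector; a weight of
                     (0, 0, 0) is equivalent to turning that lamp off
        blur_sigmas: per-lightvector photoquantigraphic blur radii (0 = none)
        """
        acc = np.zeros(q_list[0].shape + (3,))
        for q, rgb, sigma in zip(q_list, colors, blur_sigmas):
            if sigma > 0:
                q = gaussian_filter(q, sigma)      # blur the photoquantity itself
            for c in range(3):
                acc[..., c] += rgb[c] * q
        return np.clip(acc, 0.0, 1.0) ** (1.0 / 2.2)  # compress for display

    # e.g. lamp 1703's picture weighted blue and photoquantigraphically blurred:
    # out = render(qs, colors=[(1, 1, 1)] * 4 + [(0.1, 0.1, 1.0)],
    #              blur_sigmas=[0] * 4 + [5])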
The apparatus of Fig. 17a may be used for the production of still pictures or
for
the production of motion pictures. When still pictures are being produced,
ordinarily
the lamps are sequenced through only once. When a motion picture is being
produced,
camera 1500 is a high speed motion picture camera, and the lamps are sequenced
through periodically. In this example, since there are five lamps, the motion
picture
camera must shoot at a frame rate or field rate at least five times the
desired output
frame rate or field rate. For example, if we desire a motion picture shot at
24 frames
per second, then the motion picture camera must shoot at least 120 frames per
second.
Each set of five pictures, corresponding to one cycle around the lamps, is
used to
photoquantigraphically render a single frame of output picture.
In the case of motion pictures, camera 1500 may be mobile, if desired, and
lamps
1701, 1702, 1703 may also be mobile, if desired. In this case, preferably
motion picture
camera 1500 will be an even higher speed motion picture camera than necessary.
For
example, if it is a 240 frames per second camera, it can cycle through all
five lights,
and then wait a brief interval, before cycling through once again. In this
way, there are fewer misregistration artifacts. Additionally, or alternatively, a registration
algorithm
can be applied to the images from camera 1500 to compensate for the fact that
the
subject matter may have changed slightly from the time the first lamp 1701 was
fired,
to the time the fifth lamp was fired.
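The disclosure does not specify a particular registration algorithm; phase correlation is one simple choice for estimating a translational offset between successive frames, sketched here:

    import numpy as np

    def phase_correlate(a, b):
        """Estimate the (dy, dx) shift of image b relative to image a by
        phase correlation; one possible registration algorithm."""
        A = np.fft.fft2(a)
        B = np.fft.fft2(b)
        cross = A * np.conj(B)
        cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # unwrap shifts larger than half the image
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return dy, dx

    # each of the five frames can then be rolled into alignment with the first:
    # shifted = np.roll(frame, phase_correlate(first, frame), axis=(0, 1))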
Fig. 17b shows a system in which a lightvectorspace is generated without
explicit
use of any controller. Instead, special flashlamps, 1801, 1802, 1803, . . . ,
are used.
These are all connected directly to the camera. If camera 1500 is a video
camera,
all the flashlamps may be supplied with the video signal to lock onto. If the
camera
1500 is a still picture camera, then all the flashlamps may simply receive a
flash sync
signal from the camera.
Wires 1730 from each of the flashlamps are connected to an output 1840 of camera
camera
1500. Alternatively, some of the flashlamps may be daisy chained to others,
e.g. by
connections 1831, since all the flashlamps only need to be connected in
parallel,
and no longer need separate connections to any central controller.
Alternatively the
connection may be wireless, and each flashlamp may act as a slave unit.
Ordinarily, in the prior art, all flashlamps would fire simultaneously when
camera
1500 took a picture. However, in the context of the present invention,
flashlamps
1801, 1802, 1803, . . . are special flashlamps that can be set to respond only
to every
Nth pulse, starting at pulse M, where N and M are user-selectable. In this
case, all
flashlamps may be set to N=5 when we are using 5 flashlamps. In general, N is
set
to the desired number of lightvectors. Then the values for M are selected,
e.g. lamp
1801 is set to M=1, lamp 1802 to M=2, and so on. Thus lamp 1801 fires once every 5 pulses, starting on the first pulse, lamp 1802 fires once every 5 pulses
starting on
the second pulse, and so on.
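The firing rule that each such special flashlamp implements reduces to a single comparison; for instance (pulse numbering is taken as 1-based, matching the example above):

    def should_fire(pulse, N, M):
        """Respond only to every Nth sync pulse, starting at pulse M."""
        return pulse >= M and (pulse - M) % N == 0

    # with N=5: lamp 1801 (M=1) fires on pulses 1, 6, 11, ...;
    # lamp 1802 (M=2) fires on pulses 2, 7, 12, ...; and so on.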
The number of lamps can be greater than the number of lightvectors. For
example,
we may set N=2 on each lamp, so that, for example, three of them will fire on
even
numbered pulses and the other two will fire on odd numbered pulses. This
setting
would give us a two-dimensional lightvectorspace.
In any case, the novelty is in the design of the flashlamps when using the
system
depicted in Fig. 17b. Thus the camera can be an ordinary camera, and the user
simply
purchases the desired number of special flashlamps of the invention. Since
many
flashlamps already have menus and means for adjusting and programming various
settings, it is not hard to manufacture flashlamps with the capabilities of the
invention.
Ideally the flashlamps may each contain a slave sensor, infrared sensor, radio
receiver, or the like, so that they can operate within the context of the
invention
without the need for wires connecting them. If wires are to be used to power
the
separate lamps, the necessary synchronization signals may be sent over these
power
lines.
Fig. 18 shows a typical session using the lightspace rendering
(photoquantigraphic
rendering) system. Preferably this system is on the Internet so that it can be
accessed
by any of the photoborgs by way of a WearComp (wearable computer) system. Addi-
tionally, one or more remote managers can also visit this site. Accordingly, a
preferred
embodiment is a WWW page implementation.
Here ten pictures are shown on a WWW browser 1800. These ten have been
selected from a set of 1000 pictures, by visiting another WWW page upon which
a
selection process is done. All of the photoborgs have agreed that these ten
images are
the ones they wish to use to make the final rendering. All of these ten images
1810
are pictures of the same subject matter under different illumination.
Below each image is a set of controls, in space 1820. These
These
controls include Y channel selector 1830 for greyscale pseudocolor modulespace
selec-
tion, together with three color sliders 1840, an overall weighting, 1850, and
a focus
adjust 1860. Focus adjust 1860 blurs or sharpens the image
photoquantigraphically.
To observe the output, another WWW page is visited. Each time that page is
reloaded, the photorendering is produced according to the weights set here in
1800.
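As a hypothetical sketch of the data flow of such a WWW page, the controls under each image might be collected into a record and consumed on each reload of the output page (render() is the weighted-summation sketch given earlier; none of these names appear in the disclosure):

    # hypothetical record of the controls 1830-1860 beneath one of the ten images
    controls = {
        "y_channel": "luminance",      # selector 1830 (pseudocolor modulespace)
        "rgb":       (0.1, 0.1, 1.0),  # the three color sliders 1840
        "weight":    0.8,              # overall weighting 1850
        "focus":     3.0,              # focus adjust 1860, as a blur radius
    }

    def page_reload(all_controls, q_list):
        """Each reload of the output page re-renders from the current weights."""
        colors = [tuple(c["weight"] * w for w in c["rgb"]) for c in all_controls]
        sigmas = [c["focus"] for c in all_controls]
        return render(q_list, colors, sigmas)  # render() as sketched earlier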
OTHER EMBODIMENTS
From the foregoing description, it will thus be evident that the present inven-
tion provides a design for a system that uses a plurality of pictures or
exposures
to produce a picture that is improved, or where there is some extended expressive or
expressive or
artistic capability. As various changes can be made in the above embodiments
and
operating methods without departing from the spirit or scope of the following
claims,
it is intended that all matter contained in the above description or shown in
the
accompanying drawings should be interpreted as illustrative and not in a
limiting
sense.
Variations or modifications to the design and construction of this invention,
within
the scope of the appended claims, may occur to those skilled in the art upon
reviewing
the disclosure herein. Such variations or modifications, if within the spirit
of this
invention, are intended to be encompassed within the scope of any claims to
patent
protection issuing upon this invention.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new back-office solution.

Please note that "Inactive:" events refer to events no longer in use in our new back-office solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Event History, Maintenance Fee and Payment History, should be consulted.

Event History

Description Date
Inactive: IPC expired 2020-01-01
Application Not Reinstated by Deadline 2003-08-04
Time Limit for Reversal Expired 2003-08-04
Inactive: Adhoc Request Documented 2003-05-05
Change of Address Requirements Determined Compliant 2003-03-17
Inactive: Office letter 2003-02-27
Inactive: Office letter 2003-02-14
Change of Address or Method of Correspondence Request Received 2003-02-04
Inactive: Abandoned - No reply to s.30(2) Rules requisition 2002-11-04
Deemed Abandoned - Failure to Respond to Maintenance Fee Notice 2002-08-02
Inactive: S.30(2) Rules - Examiner requisition 2002-07-02
Application Published (Open to Public Inspection) 2002-02-02
Inactive: Cover page published 2002-02-01
Inactive: IPC assigned 2000-09-29
Inactive: First IPC assigned 2000-09-28
Inactive: IPC assigned 2000-09-28
Inactive: IPC assigned 2000-09-28
Inactive: IPC assigned 2000-09-28
Inactive: Filing certificate - RFE (English) 2000-09-13
Inactive: Office letter 2000-09-13
Application Received - Regular National 2000-09-12
Request for Examination Requirements Determined Compliant 2000-08-02
All Requirements for Examination Determined Compliant 2000-08-02

Abandonment History

Abandonment Date Reason Reinstatement Date
2002-08-02

Fee History

Fee Type Anniversary Year Due Date Paid Date
Application fee - small 2000-08-02
Request for examination - small 2000-08-02
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
STEVE MANN
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.




Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Representative drawing 2002-01-07 1 12
Description 2000-08-01 74 3,764
Abstract 2000-08-01 1 42
Claims 2000-08-01 17 676
Drawings 2000-08-01 41 587
Filing Certificate (English) 2000-09-12 1 163
Notice: Maintenance Fee Reminder 2002-05-05 1 120
Courtesy - Abandonment Letter (Maintenance Fee) 2002-09-02 1 182
Second Notice: Maintenance Fee Reminder 2003-02-03 1 114
Courtesy - Abandonment Letter (R30(2)) 2003-01-12 1 167
Notice: Maintenance Fee Reminder 2003-05-04 1 115
Correspondence 2000-09-12 1 9
Correspondence 2003-02-03 1 44
Correspondence 2003-02-13 1 14
Correspondence 2003-02-26 1 26