Patent 2261376 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 2261376
(54) English Title: MEANS AND APPARATUS FOR ACQUIRING, PROCESSING, AND COMBINING MULTIPLE EXPOSURES OF THE SAME SCENE OR OBJECTS TO DIFFERENT ILLUMINATIONS
(54) French Title: MOYEN ET DISPOSITIF D'ACQUISITION, DE TRAITEMENT ET DE COMBINAISON D'EXPOSITIONS MULTIPLES D'UNE SCÈNE OU D'OBJETS SOUS DIVERS ÉCLAIRAGES
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04N 5/232 (2006.01)
  • H04N 5/243 (2006.01)
(72) Inventors :
  • MANN, STEVE (Canada)
(73) Owners :
  • MANN, STEVE (Canada)
(71) Applicants :
  • MANN, STEVE (Canada)
(74) Agent:
(74) Associate agent:
(45) Issued:
(22) Filed Date: 1999-02-01
(41) Open to Public Inspection: 1999-08-02
Examination requested: 1999-02-01
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
2,228,403 Canada 1998-02-02
2,233,047 Canada 1998-03-15
2,235,030 Canada 1998-04-14
2,247,649 Canada 1998-10-13
2,248,473 Canada 1998-10-29

Abstracts

English Abstract





A novel means and apparatus for a new kind of photography is described. In particular, multiple exposures of the same scene or object are acquired by a camera at a fixed location, while at the same time, one or more photoborgs (photographic cyborgs, photographers, lighting technicians, artists, or engineers) taking the picture may freely roam about the scene and differently illuminate various objects in the space visible to the fixed camera. A photoborg typically carries one or more portable light sources, typically interfaced to a wearable computer system, which is connected (wirelessly or otherwise) to a base station computer within or connected to the fixed camera. In this way, a photoborg can control the remote camera which gathers multiple exposures, in a manner analogous to how an artist applies layers of paint to a canvas. Typically a photoborg's wearable computer contains a display (viewfinder) which shows the state of the image, updating the display with each new exposure. An interface (typically taking the form of a chording keyboard built into the handle of a flashlamp, or the like) allows a photoborg to erase any desired exposure, or to change the intensity or color of any of the exposures and interactively see the effect, as it appears from the perspective of the fixed camera. Because of a photoborg's ability to constantly see the small incremental effects of the light sources, the apparatus behaves as a true extension of the photoborg's mind and body.


Claims

Note: Claims are shown in the official language in which they were submitted.





CLAIMS
The embodiments of my invention in which I claim an exclusive property or
privilege are defined as follows:
1. A cybernetic photography system, said cybernetic photography system including:
~ a camera to be placed at a fixed location;
~ an inbound channel, said inbound channel for carrying information from at least one photoborg to said camera;
~ an outbound channel, said outbound channel for carrying information from said camera back to said at least one photoborg;
~ at least one source of illumination, said source of illumination being one of:
- a hand-holdable light source carryable by said photoborg; and
- a wearable light source wearable by said photoborg,
where said cybernetic photography system further includes remote activation means of said camera, said remote activation means operable by said photoborg, and synchronization means, said synchronization means comprising means of causing said source of illumination to produce light during the time interval in which said camera becomes sensitive to light.
2. A cybernetic photography system as described in Claim 1 where said remote activation means includes a switch affixed to said source of illumination, where said switch provides said photoborg with a means of repeated activation of said camera.
3. A cybernetic photography system as described in Claim 2, where said source of illumination is an electronic flashlamp.
4. A cybernetic photography system as described in Claim 1 where said outbound
channel includes a machine-readable signal sent to at least one WearComp
wearable by said at least one photoborg.
5. A cybernetic photography system as described in Claim 1, further including a display, said display for viewing by said photoborg, and said display being responsive to an output of said camera.
6. A cybernetic photography system as described in Claim 5 where said display
means includes means of displaying a result of a photoquantigraphic summation.
7. A cybernetic photography system as described in Claim 5, in which said
display
means is worn by said photoborg.
8. A cybernetic photography system as described in Claim 5, in which said
display
means is affixed to said source of illumination.
9. A cybernetic photography system as described in Claim 5 including means of
updating an image displayed on said display means each time said camera is
activated.
10. A cybernetic photography system as described in Claim 9 where said means of updating said image includes the computation of a photoquantigraphic quantity q(x, y) determined by applying the inverse response function of said camera to a picture output from said camera.
11. A cybernetic photography system as described in Claim 9 where said means of updating said image includes the computation of a photoquantigraphic sum from a picture taken when said camera is activated and at least one other picture taken during previous times said camera was activated.
12. A cybernetic photography system as described in Claim 9 where said means of updating said image includes the computation of a photoquantigraphic vectorspace from a picture taken when said camera is activated and at least one other picture taken during previous times said camera was activated.
13. A photorendering system for computing an output picture from a plurality of input pictures, said plurality of input pictures having been derived from the same subject matter under differing illumination, said photorendering system including the steps of:
~ computation of photoquantigraphic quantities q1, q2, . . . , for each of said input pictures;
~ computation of a weighted sum, q = w1q1 + w2q2 + . . . .
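Purely as an illustration (not part of the claimed subject matter), the two steps of Claim 13 might be sketched in Python as follows; the gamma-curve inverse_response is a hypothetical stand-in for the camera's actual inverse response function:

    import numpy as np

    def inverse_response(picture, gamma=2.2):
        # Hypothetical camera model: recover a photoquantigraphic quantity q
        # from a displayed picture f(q) by inverting an assumed gamma curve.
        return np.clip(picture, 0.0, 1.0) ** gamma

    def photorender(pictures, weights):
        # Claim 13: q = w1*q1 + w2*q2 + ... over differently illuminated
        # input pictures of the same subject matter.
        return sum(w * inverse_response(p) for p, w in zip(pictures, weights))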
14. A cybernetic photography system as described in Claim 1 further including a photorendering system as described in Claim 13.
15. A cybernetic photography system as described in Claim 5, further including
a photorendering system as described in Claim 13 where said display means
includes means of displaying said output picture.
16. A cybernetic photography system as described in Claim 9, further including a photorendering system as described in Claim 13 where said means of updating an image comprises means of displaying said output picture, where said output picture is computed from a picture taken when said camera is activated and at least one other picture taken during previous times said camera was activated.
17. A cybernetic photography system as described in Claim 9 including a method of updating said image comprising steps of:
~ determining from said camera a spatially varying quantity linearly proportional to the photoquantigraphic quantity, q(x, y), over spatial coordinates (x, y) of light falling on the image plane or image sensor of said camera for each of a plurality of exposures;
~ computing a weighted sum q(x, y) over said plurality of exposures, said weighted sum being given by q(x, y) = w1q1(x, y) + w2q2(x, y) + . . . ;
~ applying an essentially semi-monotonic transfer function, f(q), to said sum, q(x, y), to obtain a picture f(q(x, y)), where said essentially semi-monotonic transfer function, f(q), has essentially semi-monotonic slope;
~ displaying said picture f(q(x, y)) on said display.
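Continuing the same hypothetical model, the display-update method of Claim 17 adds a forward transfer function f (here the matching gamma curve) applied to the weighted sum before display:

    import numpy as np

    def update_display(qs, weights, gamma=2.2):
        # Claim 17: weighted sum q(x, y) = w1*q1(x, y) + w2*q2(x, y) + ...
        q = sum(w * qi for qi, w in zip(qs, weights))
        # Semi-monotonic transfer function f(q) renders the sum as a picture.
        return np.clip(q, 0.0, 1.0) ** (1.0 / gamma)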
18. Means and apparatus as described in Claim 1 where said outbound channel comprises a human-readable signal that indicates when the camera has become sensitive to light, and where said signal comprises at least one of:
~ an audible signal reproduced from a signal sent via said outbound channel;
~ an audible signal produced in the vicinity of said camera, where said outbound channel comprises the ability of sound to travel through the air between said camera and said photoborg;
~ a vibrotactile signal perceptible by said photoborg;
~ direct electrical stimulation of the body of said photoborg;
~ a visual signal visible to said photoborg;
~ a negated visual signal, said negated visual signal comprising a lamp turning off while said camera is sensitive to light;
~ an acknowledgement provided by virtue of the display of an image, transmitted over said outbound channel, where said image is responsive to an output of said camera.
19. A photography system including a flashlamp, where said flashlamp may be
held
by a hand of a user of said flashlamp, and where said photography system also
includes a radio transmitter and a radio receiver borne by said flashlamp.
20. A cybernetic flashlamp including means of repeatedly activating a remote camera, where said cybernetic flashlamp further includes means of synchronization with said remote camera.
21. A cybernetic flashlamp including:
~ a light source;
~ an activator for signalling a remote camera to take an exposure each time
said activator is activated;
~ a synchronizer for flashing said light source in synchronism with exposures
of said camera.
22. The flashlamp of Claim 21 including means to disable said light source, while leaving said activator enabled.
23. A cybernetic flashlamp as described in Claim 21 further including means of determining the field of coverage of the illumination of said cybernetic flashlamp.
24. A cybernetic flashlamp system as described in Claim 21, together with remote control means for positioning at least one of a variety of remotely selectable filters in front of or within said camera, where said means of remote control is operable by said photoborg.
25. A cybernetic flashlamp system as described in Claim 21, said cybernetic flashlamp system including a plurality of pushbuttons borne by said source of illumination, and means for remote activation of said camera by pressing at least one of said pushbuttons, where at least three of said pushbuttons select from at least three colors, said colors being either of:
~ the color of a filter as described in Claim 24; and
~ the color assigned to a lightmodule in a photorendering.
26. A cybernetic photography system as described in Claim 1 including means for said photoborg to specify a color choice together with each of a plurality of lightstrokes acquired by said camera, where said color choices may be used in a photorendering process as described in Claim 13.
27. A cybernetic photography system including:
~ a camera for placement at a fixed location;
~ a portable light source;
~ a portable user actuator which, when actuated by a user, sends a signal to said camera, causing said camera to take an exposure;
~ means to synchronize said light source with said camera such that said light source flashes when said camera takes an exposure.
28. The system of Claim 27 wherein said portable user actuator and said portable light source comprise an integral unit.
29. The system of Claim 27 wherein said portable user actuator is voice actuated.
30. A cybernetic photography system, said cybernetic photography system including:
~ a camera to be placed at a fixed location;
~ at least one source of illumination, said source of illumination being one of:
- a hand-held light source carryable by a photoborg; and
- a wearable light source wearable by a photoborg,
where said cybernetic photography system includes means of taking a plurality of pictures while said photoborg directs said source of illumination at different portions of subject matter in view of said camera, and where said source of illumination is activated in synchronization with at least some of said plurality of pictures.
31. A cybernetic photography system as described in Claim 27, where said camera includes means of detecting that subject matter in view of said camera is being illuminated with said source of illumination.
32. A cybernetic photography system as described in Claim 27, where said camera takes at least one picture of said subject matter with no use of said source of illumination, and then where said cybernetic photography system uses said at least one picture of said subject matter to compare with further pictures of said subject matter to determine whether or not said subject matter is being illuminated with said source of illumination.
33. A cybernetic photography system as described in Claim 32 where said cybernetic photography system includes means of recording pictures that are determined to have been pictures of said subject matter illuminated with said source of illumination, and not recording pictures that are determined to have been pictures of said subject matter not illuminated with said source of illumination.
34. A cybernetic photography system as described in Claim 27 where said camera
is a video camera, and where said source of illumination flashes repeatedly at
the frame rate of said video camera.
35. A cybernetic photography system as described in Claim 34, including means of turning said source of illumination on and off, where said source of illumination produces repeated rapid bursts of light when it is turned on, and no light when it is turned off, and where said video camera records while said source of illumination is turned on, and stops recording during at least some of the time for which said source of illumination is turned off.
36. A cybernetic photography system, said cybernetic photography system including:
~ a lock-in camera to be placed at a fixed location;
~ at least one source of illumination, said source of illumination being one of:
- a hand-held light source carried by said photoborg; and
- a wearable light source worn by said photoborg,
where said source of illumination produces a periodically varying level of intensity, and where said cybernetic photography system includes means of taking at least one picture with said lock-in camera.
37. A phlashlamp, where said phlashlamp includes means of producing at least
three flashes of different strengths in rapid succession.
38. A phlashlamp photography system, including a phlashlamp as described in Claim 37, where said phlashlamp includes remote control means for a camera, said remote control means including means for taking at least three pictures in rapid succession, where said at least three pictures are pictures of the same subject matter exposed to different quantities of light.
39. A cybernetic photography system, including at least one flashlamp, where said flashlamp is at least one of:
~ wearable; and
~ hand-holdable,
and where said cybernetic photography system includes means of producing a plurality of flashes of light in rapid succession, where said cybernetic photography system further includes means of remotely activating a camera to take a plurality of pictures in rapid succession, where at least some of said pictures are pictures of subject matter that has been affected by at least one of said flashes of light.
40. A cybernetic photography system including a photorendering system as described in Claim 13 where said cybernetic photography system further includes a virtual control panel presented upon a video display means, and where said virtual control panel comprises lightmodule weight selection means.
41. A cybernetic photography system as described in Claim 40, where said virtual control panel is operable by a Web browser running on a computer connected to the Internet.
42. A cybernetic photography system as described in Claim 40, further including the features of Claim 27, where said video display means is viewable by said photoborg.
43. A cybernetic photography system including color coordinate transformation means, together with brightgrey warning means, said brightgrey warning means including means of indicating image areas that correspond to regions of colorspace at the gamut boundary of the domain of said color coordinate transformation, but not at the gamut boundary of the range of said color coordinate transformation.
44. A cybernetic photography system including lightspace rendering means, and color coordinate transformation means in lightspace coordinates, together with brightgrey reduction means, said brightgrey reduction means including means of identifying regions of colorspace at the gamut boundary of the domain of said color coordinate transformation, but not at the gamut boundary of the range of said color coordinate transformation, said cybernetic photography system including means of adjusting said color coordinate transformation means to reduce the amount of brightgrey image content.
45. A cybernetic photography system as described in Claim 44, where said lightspace rendering is from a plurality of lightmodules, said brightgrey reduction means including means of identifying brightgrey regions in each of said plurality of lightmodules, and where said adjustment of said color coordinate transformation includes separate color coordinate transformations in each of said lightmodules, said separate color coordinate transformations including at least one of:
~ deliberate distortion of color hue to reduce the amount of brightgrey contribution; and
~ deliberate destruction of highlight detail by clipping, to reduce the amount of brightgrey contribution.
46. A cybernetic photography system including color coordinate transformation means, together with brightgrey reduction means, said brightgrey reduction means including means of identifying regions of colorspace at the gamut boundary of the domain of said color coordinate transformation, but not at the gamut boundary of the range of said color coordinate transformation, said cybernetic photography system including means of adjusting said color coordinate transformation means to reduce the amount of brightgrey image content.
47. A cybernetic photography system as described in Claim 46, where said adjustment of said color coordinate transformation includes at least one of:
~ deliberate distortion of color hue; and
~ deliberate destruction of highlight detail by clipping.
48. A cybernetic photography system including color coordinate transformation means, said color coordinate transformation means comprising hue distortion means, said hue distortion means altering the hue of color highlights toward a hue that is one of the process colors of a printing process to be used to print pictures generated by said cybernetic photography system.
49. A cybernetic photography system as described in Claim 48, where said hue distortion means includes at least one of:
~ distortion of blue highlights toward cyan;
~ distortion of red highlights toward magenta;
~ distortion of green highlights toward cyan;
~ distortion of green highlights toward yellow;
~ distortion of violet, purple, or bluish magenta highlights toward magenta.
50. A cybernetic photography system as described in Claim 5 where said display means includes inverse gamut warning means, where said inverse gamut warning means includes means of indicating image areas of a photoquantigraphic summation that correspond to regions of colorspace at the boundary of a domain gamut but not at the boundary of a range gamut.
51. A method for facilitating combining pictures of a given scene or object,
comprising:
~ capturing photoquantigraphic quantities, q1, q2, . . . , one from each of a
plurality of pictures of said given scene or object, at least some of said
pictures taken under different illuminations.
52. The method of Claim 51, further comprising:
~ computing a weighted sum from said photoquantigraphic quantities, said weighted sum given by q = w1q1 + w2q2 + . . . .
53. A photorendering method for computing an output picture from a plurality of input pictures, said plurality of input pictures having been derived from the same scene or object under differing illumination, said photorendering method including:
~ capturing photoquantigraphic quantities q1, q2, . . . , one from each of said plurality of input pictures;
~ computing a weighted sum from said photoquantigraphic quantities, said weighted sum given by q = w1q1 + w2q2 + . . . ;
~ colorspace coordinate transformation of said weighted sum q.
54. A photorendering system as described in Claim 53 further including gamut
boundary management means.
55. A photorendering system as described in Claim 53 where the range of said colorspace coordinate transformation is a CMYK colorspace, and where said colorspace coordinate transformation is followed by steps that include the steps of:
~ application of a pointwise nonlinearity, where said pointwise nonlinearity is semimonotonically increasing, and where the slope of said pointwise nonlinearity is semimonotonically decreasing;
~ quantization.
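For illustration, the post-transformation steps of Claim 55 could look like the sketch below; the square-root curve is only one example of a semimonotonically increasing nonlinearity with semimonotonically decreasing slope, and 256 levels is an assumed quantization depth:

    import numpy as np

    def postprocess(cmyk, levels=256):
        # Pointwise nonlinearity: increasing, with decreasing slope.
        compressed = np.sqrt(np.clip(cmyk, 0.0, 1.0))
        # Quantization to a fixed number of discrete output levels.
        return np.round(compressed * (levels - 1)) / (levels - 1)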
56. A cybernetic photography system as described in Claim 27, where said source of light is wearable.
57. A cybernetic photography system as described in Claim 27, including optics to effectively locate said source of light near the center of a lens of the eye of said photoborg.
58. A cybernetic photography system as described in Claim 27, where said source of light is a phlashlamp, and where said cybernetic photography system includes optics to effectively locate said source of light near the center of a lens of the eye of said photoborg.
59. A wearable photography apparatus, comprising:
~ headgear;
~ a camera borne by said headgear;
~ optics borne by said headgear and arranged to locate the effective center of
projection of said camera near the center of a lens of the eye of the wearer
of said wearable photography apparatus.
60. A wearable photography apparatus as described in Claim 59, where said headgear is a pair of eyeglasses.
61. A wearable cybernetic photography apparatus including a camera, together with optics to effectively locate said camera near the center of a lens of the eye of the wearer of said wearable cybernetic photography apparatus.
62. A cybernetic photography apparatus, including a camera, where said cybernetic photography apparatus is wearable, and where said camera has an effective center of projection in or near the lens of an eye of the wearer of said cybernetic photography apparatus.
63. A cybernetic photography apparatus as described in Claim 59, where said camera is a left camera with effective center of projection in the left eye of the wearer of said cybernetic photography apparatus, and where said cybernetic photography apparatus further includes a right camera with effective center of projection in the right eye of said wearer.
64. A cybernetic photography system as described in Claim 1, together with a source of illumination providing sustained light output where said sustained light output includes means of adjusting a light output level of said source of illumination, and where said outbound channel comprises at least one of the following:
~ an audible tone, originating from the vicinity of said camera, but loud enough to be heard some distance away from said camera;
~ an audible tone, broadcast to said photoborg, and reproduced by an audible transducer on said photoborg's person;
~ a vibrotactile signal reproduced within said source of illumination and felt through a handle or other point of contact between said source of illumination and a photoborg's body;
~ a vibrotactile signal reproduced by a wearable computer or communications apparatus upon the body of a photoborg,
and where said cybernetic photography system further includes remote control means to repeatedly remotely activate said camera, so that a plurality of long exposure pictures can be taken of subject matter with different illumination in each of said plurality of long exposure pictures.
65. Means and apparatus as described in Claim 64 where said remote control means is operable by a switch affixed to said source of light.
66. Means and apparatus as described in Claim 64 where said means of adjusting light output level includes a switch, selecting from among two light output levels, and where said switch also operates said remote control means.
67. Means and apparatus as described in Claim 64 where said means of adjusting light output level includes a spring-loaded lever, and where said spring-loaded lever also operates said remote control means.
68. Means and apparatus as described in Claim 64 where said means of adjusting said light output level includes a squeezable spring-loaded trigger and where said spring-loaded trigger also operates said remote control means when it is squeezed beyond a certain threshold.
69. Means and apparatus as described in Claim 64 where said outbound channel
provides a confirmation that said camera has become sensitive to light, and an
indication of how long said camera remains sensitive to light.
70. A source of illumination, as outlined in Claim 64, together with a data entry device where said data entry device issues a command via said inbound channel to activate said camera in a manner in which information is also passed to said camera to select, specify, or continuously vary during an exposure at least one of:
~ aperture of said camera;
~ degree of sensitivity or gain of said camera;
~ shutter speed of said camera;
~ degree of openness of shutter of said camera;
~ focus of said camera;
~ degree of filtration applied optically or electronically to said camera affecting spectral sensitivity of said camera;
~ degree of filtration applied optically or electronically to said camera affecting sharpness or clarity of said camera.
71. A source of illumination as outlined in Claim 64, together with an input device, where said input device includes at least one spring-loaded switch and where said switch operates said remote control means, and where said camera remains sensitive to light for as long as said switch is depressed.
72. A cybernetic photography system as described in Claim 27, where said portable light source comprises a pushbroom light, said pushbroom light including a plurality of light emitting elements of separately controllable intensity mounted to a frame such that a photoborg may grasp said frame and move about with it, said cybernetic photography system including means of dynamically varying the output level of each of said plurality of light emitting elements.
73. A cybernetic photography system as described in Claim 27, further including worklights, said worklights allowing said photoborg to see, said cybernetic photography system further including means of turning off said worklights during said time interval in which said camera becomes sensitive to light.
74. A cybernetic photography system as described in Claim 1, further including room light controlling means in a working environment such as a photographic studio, where the room lighting itself may be controlled by an electric circuit, said room light controlling means including means of automatically turning said room lighting off during at least one time interval in which said camera becomes sensitive to light.
75. A cybernetic photography system as described in Claim 1, further including at least one indicator lamp fixed in the vicinity of said camera, said indicator lamp viewable by said photoborg when said photoborg is within the field of view of said camera.
76. A cybernetic photography system as described in Claim 1, further including at least one indicator means by which said photoborg can determine whether or not said photoborg is within the field of view of said camera.
77. A cybernetic photography system as described in Claim 1, further including at least one indicator light source fixed in the vicinity of said camera, said indicator light source having an attribute viewable by said photoborg when said photoborg is within the field of view of said camera, and said attribute of said light source not viewable by said photoborg when said photoborg is not within the field of view of said camera.
78. A cybernetic photography system as described in Claim 77, in which said
attribute is a color of said light.
79. A cybernetic photography system as described in Claim 27, where said
cybernetic photography system further includes a hiding test light, and remote
activation means of said hiding test light operable by said photoborg.
80. Apparatus for processing a plurality of exposures of the same scene or object, comprising:
~ image buffers each for storing one of said plurality of exposures;
~ means for obtaining photoquantigraphic quantities, one for each of said plurality of exposures; and
~ means for producing a weighted photoquantigraphic summation of said photoquantigraphic quantities.
81. A cybernetic photography system for acquiring a plurality of pictures of the same subject matter under differently illuminated conditions, said cybernetic photography system including a fixed camera and a plurality of flashlamps, together with means for sequentially activating each of said flashlamps each time one of said plurality of pictures is taken, said each of said flashlamps activated during the time interval in which said camera is sensitive to light.
82. A cybernetic photography system, as described in Claim 81, including means of sequentially firing a plurality of flashlamps, sequencing from one of said flashlamps to the next at a video rate, and where said camera is a video camera, and where said cybernetic photography system further includes means of recording video output from said video camera.
83. Means and apparatus as described in Claim 80 where, prior to computing said weighted photoquantigraphic summation, at least some of said exposures may be photoquantigraphically blurred.
84. The apparatus of Claim 5, where said display is a first display for
viewing by
a photoborg, and further including at least a second display for viewing by a
second photoborg, said second display also being responsive to an output of
said camera.
85. A game or amusement device as described in Claim 84 in which a plurality of photoborgs each have one or more wearable light sensitive devices.
86. A controller for a camera and at least one light source, such that said camera acquires a pair of pictures in rapid succession with at least one picture acquired with illumination from said light source, and at least one other picture acquired without illumination from said light source.
87. An apparatus which includes a camera and light source, where said apparatus includes means for acquiring a pair of images in rapid succession where one image is acquired with greater influence from said light source than the other image, and where said influence is judged in comparison to a somewhat constant degree of illumination which is external and not controllable by the apparatus.
88. An apparatus which includes a camera and a plurality of light sources, where said apparatus includes means of acquiring a plurality of images where said images differ primarily in the relative amount of influence that each of said plurality of light sources has had on each of said images.
89. A flashlamp for use in production of lightvectorspaces, where said flashlamp includes a synchronization input, where said flashlamp is responsive only to every nth signal received by said synchronization input, and where the first m < n synchronization signals are ignored by said flashlamp, and where m and n are user selectable.
90. A flashlamp as described in Claim 89, where n may be set to 2, and where m
may be set to 0 or 1, so that when m = 0 said flashlamp fires on even numbered
pulses and when m = 1 said flashlamp fires on odd numbered pulses.
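The m/n behaviour of Claims 89 and 90 amounts to a pulse divider; a rough model (the implementation details here are assumptions, not taken from the patent):

    class SyncDivider:
        # Fire on every nth synchronization pulse, skipping the first m.
        def __init__(self, n=2, m=0):
            assert 0 <= m < n
            self.n, self.m, self.count = n, m, 0

        def pulse(self):
            # True when the flashlamp should fire on this pulse.
            fire = (self.count % self.n) == self.m
            self.count += 1
            return fire

With n = 2, a divider of this kind fires on even-numbered pulses for m = 0 and odd-numbered pulses for m = 1, which is the two-flashlamp field-interleaving arrangement of Claim 91.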
91. A cybernetic photography system using two flashlamps as described in Claim 90, together with a video camera, where one of said flashlamps is activated when even fields of said video camera are acquired, and the other of said flashlamps is activated when odd fields of said video camera are acquired.
92. A controller for a camera and a plurality of light sources, where said camera acquires a plurality of pictures in rapid succession while said controller activates different combinations of one or more of said light sources during the exposure of each of said plurality of separate pictures.
93. A means of combining a plurality of images of the same subject matter where said images differ primarily due to changes in illumination of said subject matter, and where said means comprises the following steps:
~ application of a pointwise nonlinear function to each of said images, where said function is approximately monotonically increasing and has an approximately monotonically increasing slope;
~ pointwise addition of the results obtained from the above step;
~ applying a different pointwise nonlinearity to said sum, where said different pointwise nonlinearity is approximately monotonically increasing and has an approximately monotonically decreasing slope.
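Read literally, Claim 93 expands each image with an increasing-slope curve, sums pointwise, then compresses with a decreasing-slope curve; in the sketch below the squaring and square-root curves are illustrative choices only:

    import numpy as np

    def combine(images):
        # Step 1: pointwise nonlinearity with increasing slope (e.g. x**2).
        expanded = [np.clip(im, 0.0, 1.0) ** 2 for im in images]
        # Step 2: pointwise addition of the expanded images.
        total = np.sum(expanded, axis=0)
        # Step 3: nonlinearity with decreasing slope (e.g. sqrt), rescaled.
        return np.sqrt(total / len(images))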
94. An apparatus including a motion picture camera with film, videotape, or electronic recording medium, and a plurality of light sources, where said apparatus includes means of sequentially activating said light sources so that each of said light sources periodically flickers or flashes with a period that is an integer fraction of the frame-rate or field-rate of said camera, and where said various light sources reach their peak levels of illumination at different times.
95. Means and apparatus including a motion picture camera and a light source fixed to said camera, together with means of controlling said light source in such a manner that it flashes periodically with a period of one half the field rate or frame rate of said motion picture camera, such that said light source affects even frames or fields to a different degree than it affects odd frames or fields, and where said means and apparatus also includes means of producing a new image sequence from the image sequence acquired by said camera, where said new image sequence is made at half the field or frame rate of the original image sequence by pairwise processing of adjacent pairs of pictures, said pairwise processing including at least one of the following:
~ a photoquantigraphic summation;
~ an implementation of split diffusion in lightvectorspace;
~ calculation of a photoquantigraphic vectorspace.
96. A cybernetic hand-held flashlamp including means of synchronizing the flash from said flashlamp with a remote camera, and further including means of repeatedly activating said remote camera.
97. An apparatus which comprises a means of activating a fixed camera by a remote control attached to a hand-held flash unit, and where each time said remote control is activated, said camera briefly admits light to an image recording medium and said flash unit is activated by said apparatus with the correct timing such that said flash unit illuminates at least a portion of the subject matter of said camera during the brief time that said camera admits light to said image recording medium.
98. An apparatus which includes an electronic camera at a fixed location, and a hand-held light source, where said camera is capable of capturing and integrating a plurality of image captures together into a single image, and where said integration is initiated and terminated using a remote control located in the vicinity of said light source.
99. A cybernetic flashlamp where said cybernetic flashlamp includes a viewfinder means through which a user may look to determine the extent of illumination of said flashlamp.
100. A lightsweep where said lightsweep includes a frame upon which a plurality of light sources is mounted, and means to vary the quantity of illumination produced by each of said light sources through a data entry device affixed to said lightsweep.
101. A camera for use at a fixed location, including a visual indication means by which a person may discern whether or not he or she is within the field of view of said camera, where said means includes sources of light visible from a distance of at least 1000 meters from said camera.

Description

Note: Descriptions are shown in the official language in which they were submitted.


FIELD OF THE INVENTION
Generally this invention pertains to photographic methods, apparatus, and systems involving multiple exposures of the same subject matter to differing illumination.
BACKGROUND OF THE INVENTION
In photography (and in movie and video production), it is desirable to capture a broad dynamic range from the scene. Often the dynamic range of the scene exceeds that which can be captured by the recording medium. Therefore, it is not possible to rely on a light meter or automatic setting on the camera. Even if the photographer takes a reading from various areas of the scene that he/she considers important, it is seldom that the estimate of exposure will lead to an optimum picture. Results from cameras that attempt to do this automatically (e.g. by assuming the central area is important, and maybe measuring a few other image areas) are usually even worse.
Still-photographers attempt to address this problem by a process called bracketing the exposures. This process involves measuring (or guessing) the correct exposure, and then taking a variety of exposures around this value (e.g. overexposing one by a factor of two, one by a factor of four, and underexposing one by a factor of two, etc). From this set of pictures, they select the single picture that has the best overall appearance. A photographer might typically take half a dozen or so pictures of each pose or each scene. These pictures are usually taken in rapid succession, and the aperture is opened one stop (or 1/3 of a stop) between each exposure and the next, or the shutter speed is equivalently adjusted between one exposure and the next. When the pictures are developed they are usually arranged in a row (say left to right) ordered from lightest to darkest, and one of them is chosen by visual comparison to the others. The remaining pictures are usually disposed of or not used at all.
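The arithmetic of this bracketing procedure is simple enough to state exactly; the sketch below uses the factor-of-two steps mentioned above (the particular stop offsets chosen are illustrative):

    def bracket(metered, stops=(-1, 0, 1, 2)):
        # Each stop doubles or halves the metered exposure: -1 underexposes
        # by a factor of two; +1 and +2 overexpose by factors of two and four.
        return [metered * (2.0 ** s) for s in stops]

    # A metered exposure of 1/125 s yields roughly 1/250, 1/125, 1/60, 1/30 s.
    print(bracket(1.0 / 125.0))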
In situations where there is high contrast, extended-response film may be used. Many modern films exhibit an extended response and are capable of capturing a broad dynamic range. Extended response film was invented by Charles Wyckoff, as described in U.S. Pat. No. 3,663,228. In variations of the Wyckoff film in which the different exposures are separately addressable, it is also possible to apply a new imaging processing means and apparatus, as described in U.S. Pat. No. 5,828,793.
Often in an indoor setting, there is a window in the background, and we wish to capture both the indoor foreground (lit by low-power household lamps) and the outdoor scene (which might be lit by bright sunlight). This situation is usually dealt with by adjusting the lighting. Often a fill-flash is used, sometimes leading to unnatural pictures. It is difficult to tell exactly how much fill-flash to use, and excessive fill-flash leads to visually unpleasant results, while insufficient fill-flash fails to reduce the dynamic range of the scene sufficiently. Still-photographers address this problem, again, by bracketing, but now they must bracket over two variables: (1) the exposure for the background lighting, and (2) the exposure for the flash. This is generally done by noting that the shutter speed does not affect the flash exposure but only affects the exposure to background light, while the aperture affects both. Thus the photographer will expose for a variety of both shutter speeds and apertures. Alternatively, a flash with adjustable output may be used, and the photographer will make a variety of exposures attempting to bracket through all possible combinations of flash output and exposure to natural light. While there are many automatic systems that combine "intelligent" flash and light meter functionality, the results are often unacceptable, or at best, still fall short of the results that can be obtained by bracketing over the two variables - flash and natural light.
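Since shutter speed scales only the ambient exposure while flash output scales only the flash exposure (the flash being far shorter than any shutter time), the two-variable bracket is a small grid search; the sketch below models the adjustable-output-flash variant, and the luminance constants and bracket values are illustrative assumptions:

    from itertools import product

    def exposure(shutter, flash_power, ambient_lum=1.0, flash_lum=4.0):
        # Ambient exposure grows with shutter time; flash exposure does not.
        return ambient_lum * shutter, flash_lum * flash_power

    for t, p in product([1/250, 1/60, 1/15], [0.25, 0.5, 1.0]):
        ambient, flash = exposure(t, p)
        print(f"shutter {t:.4f} s, flash {p:.2f}: ambient {ambient:.3f}, flash {flash:.1f}")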
Alternatively, especially in commercial photography, movie production, or video production, great effort is expended to reduce the dynamic range of the scene. Large sheets of light-reducing dark grey transparency material are used to cover windows. Powerful lamps are temporarily set up in office or home interiors. Again, it is very difficult to adjust the balance between the various lamps. Most professional photographers bracket the exposures over a variety of combinations of lamp output levels. As one can imagine, the number of possible permutations grows astronomically with the number of separate lights. Furthermore, the dimension of color is often involved. It is common, for example, in pictures used in annual reports or magazine advertisements, for different colored filters to be placed over each lamp. For example, it is common to cover the lamps in the background with strongly colored (e.g. dark blue) filters. The exact effect is not predictable. Most professional photographers test with Polaroid film first but Polaroid is not good enough for the final use. A high-quality film is inserted in place of the Polaroid, and the same exposure is made there. Because of differences between the response of the two films, it is still necessary to bracket over the balance of the various lights. Furthermore, it is impossible to predict the exact wishes of the client, or other possible end uses of the image, and it is usually necessary to take both "normal" pictures with no filters on the lights, as well as "dramatic" pictures with colored lights. Therefore, it is also common to bracket over colors (e.g. take one picture with a bright yellow background, another with plain white, and another with deep blue, etc). Thus the resulting shot can appear in both a more traditional publication and a more artistic/advertising-related publication. This dual-use can save the photographer from having to do a re-shoot when another use of the image arises, or should the client have a slight change of heart, as is often the case.
As an alternative to using many different lights, it is common in commercial photography to leave the shutter open and move around through the scene with a hand-held flash unit. The photographer aims the flash unit at various parts of the scene and triggers the flash manually. Apart from its economy (only one flash lamp is needed, rather than tens or hundreds of flashlamps that would be needed to create the same effect with a single short exposure), the method is often preferred for certain applications even when more flash lamps are available to the photographer. The method, which is called painting with light, or, more succinctly, lightpainting, has a certain expressive artistic quality, that makes it popular among commercial photographers, particularly in the advertising industry. The reason for this appeal is that the light sources can be placed right in view of the camera. At one instant the photographer may stand right in full view of the camera and point the light to the side, flashing on some object in the scene. The photographer does not show up in the picture because the light is aimed away from his/her body, and the camera only "sees" the object that the flash is aimed at. If that were the only flash of light, the picture would be entirely black except for that one object in the scene. However, the photographer moves through the scene and illuminates many different parts of the scene in a similar way. Thus, using a single flash lamp, the scene can be illuminated in ways that are simply not possible in a single short exposure, even with access to an unlimited number of flash lamps. This is because a plurality of lamps placed in the scene at the same time would illuminate each other, or, for example, light from one flashlamp may illuminate the light stand upon which another flashlamp is attached.
Often, in lightpainting, various colored filters are held over the lamp each time it is flashed.
Soft-focus and diffusion filters are frequently used in commercial photography. These filters create pleasing halos around specular highlights in the scene. They may or may not reduce resolution (e.g. the ability to read a newspaper positioned in the scene), since the image detail can remain yet be seen through a "soft and dreamy" world. It is often desirable to either blur or diffuse some areas of the scene but not others. Sometimes a soft-focus filter with a hole in the middle is used to blur the edges of the image (usually corresponding to background material) while leaving the center (usually the main subject matter) unaffected.
Another creative effect that has become quite popular in commercial photography is called split diffusion. Split-diffusion is created by separating the control of the foreground lighting from the control of the background lighting. The foreground lights are turned on and one exposure is made. The foreground lights are turned off, and the background lights are turned on. A diffusion filter is placed over the lens and a second exposure is made on the same piece of film. The split-diffusion effect may also be created with flash. The foreground flashlamps are activated, the diffusion filter is moved over the lens, and the background flashlamps are then activated.
Split-diffusion is also routinely applied within the context of lightpainting. The diffusion filter is often moved by an assistant, or electrically, back and forth in front of the lens or away from the lens, while the photographer flashes at different parts of the scene, some flashes with and some without the diffusion.
SUMMARY OF THE INVENTION
The invention facilitates a new form of visual art, in which a fixed point of
view is
chosen for the base station camera, and then, once the camera is secured on a
tripod,
a photoborg can walk around and use various sources of illumination to
sequentially
build up an image layer-upon-layer in a manner analogous to paint brushes upon
canvas, and the cumulative effect embodied therein. To the extent that the
artist's
light sources can be made far more powerful than the natural ambient light
levels,
the artist may have a tremendous degree of control over the illumination in
the scene.
The resulting image is therefore a result of what is actually present in the
scene,
together with a potentially very visually rich illumination sculpture
surrounding it.
Typically the illumination sources that the artist carries are powered by
batteries,
and therefore, owing to limitations on the output capabilities of these light
sources,
the art is practiced in spaces that may be darkened sufficiently, or, in the
case of
outdoor scenes, at times when the natural light levels are least.
By "photoborg", what is meant is one who is either a photographic cyborg (cybernetic organism), a lighting technician, a photographer, or an artist using the apparatus of the invention. By virtue of the communications link between the photoborg and the base station, the photoborg may move through the space, including the space in view of the camera, and the photoborg may selectively illuminate objects that are at least partially within the field of view of the camera. Typically the photoborg will produce multiple exposures of the same scene or object. These multiple exposures are typically each stored as separate files, and are typically combined at the base station, either by remote control of the photoborg (e.g. by way of wearable computer remotely logged into the base station computer), or by a director or manager at the base station.
In a typical application, the artist may, for example, position the camera upon a hillside, or on the roof of a building, overlooking a portion of a city. The artist may then roam about the city, walking down various streets, and use the light sources to illuminate various buildings one-at-a-time. Typically, in order that the wearable or portable light sources be of sufficient strength compared to the natural light in the scene (e.g. so that it is not necessary to shut off the electricity to the entire city to darken it sufficiently that the artist's light source be of greater relative brightness) some form of electronic flash is used as the light source. In some embodiments of the invention, an FT-623 lamp is used, housed in a lightweight 30 inch highly polished reflector, with a handle which allows it to be easily held in one hand. The communications infrastructure is established such that the camera is only sensitive to light for a short time period (e.g. typically approximately 1/500 of a second), during the instant that the flash lamp produces light. In this manner a comparatively small lamp (e.g. a lamp and housing which can be held in one hand) may illuminate a large skyscraper or office tower in such a manner that, in the final image, the flashlamp is the dominant light source, compared to fluorescent lights and the like that might have been left turned on upon the various floors of the building, or to moonlight, or light from streetlamps which cannot be easily turned off.
Typically, the photoborg's wearable computer system comprises a visual display which is capable of displaying the image from the camera (typically sent wirelessly over a data communications link from the computer that controls the camera). Typically, also, this display is updated with each new exposure. The display update is typically switchable between a mode that shows only the new exposure, and a cumulative mode that shows a photoquantigraphic summation over time to show the new exposure photoquantigraphically added to previous exposures. This temporally cumulative display makes the device useful to the photoborg because it helps in the envisioning of a completed lightmodule painting. The temporally cumulative display is also useful in certain applications of the apparatus to gaming. For example, a game can be devised in which two players compete against each other. One player may try to paint the subject matter before the camera red, and the other will try to paint the subject matter blue. When the subject matter is an entire cityscape as seen from a camera located on the roof of a tall building, the game can be quite competitive and interesting. Additionally, photoborgs can either work cooperatively on the same team, or competitively, as when two teams each try to paint the city a different color, and "claim" territory with their color. In some embodiments of the game the photoborgs can also shoot at each other with the flashguns. For example, if a photoborg from the "red" team "paints" a blue-team photoborg red, he may disable or "kill" the blue-team photoborg, shutting down his flashgun. In other embodiments, the "kill" and "shoot" aspects can be removed, in which case the game is similar to a game like squash, where the opponents work in a collegial fashion, getting out of each other's way while each side takes turns shooting. The red team flashgun(s) and blue team flashgun(s) can be fired alternately by a free running base-station camera, or they can all fire together. When they fire alternately there is no problem disambiguating them. When they fire together, there is preferably a blue filter over each of the flashguns of the blue team, and a red filter over each of the flashguns of the red team, so that flashes of light from each team can be disambiguated.
The wearable computer is generally controllable by the photoborg through a chording keyboard mounted into the handle of each light source, so that it is not necessary to carry a separate keyboard. In this manner, whichever light source the photoborg plugs into the body-worn system becomes the device for controlling the process. Typically, also, exposures are maintained as separate image files in addition to a combined cumulative exposure that appears on the photoborg's screen. The exposures being in separate image files allows the photoborg to selectively delete the most recent exposure, or any of the other exposures previously combined into the running sum on the screen. This capability is quite useful, compared to the process of painting on canvas, where one must paint over mistakes rather than simply being able to turn off brushstrokes. Furthermore, exposures to light can be adjusted either during the shooting or afterwards, and then re-combined. The capability of doing this during the shooting is an important aspect of the invention, because it allows the photoborg to capture additional exposures if necessary, and thus to remain at the site until a satisfactory final picture is produced. The final picture as well as the underlying dataset of separately adjustable exposures, and the weighting that was selected to generate the final picture, is typically sent wirelessly to other sites (e.g. on the World Wide Web) so that others (e.g. art directors or other collaborators) can manipulate the various exposures and combine them in different ways, and send comments back to the photoborg by email. This additional communication facilitates the collection of additional exposures if it turns out that certain areas of the scene or object could be better served if they were more accurately or more expressively described in the dataset.
Each of these exposures is called a lightstroke. A lightstroke is analogous to an artist's brushstroke, and it is the plurality of lightstrokes that are combined together that give the invention described here its unique ability to capture the way that a scene or object responds to various forms of light.
Furthermore, a particular lightstroke may be repeated (e.g. the same exposure may be repeated in almost exactly the same way, holding the light in the same position, more than once). These seemingly identical lightstrokes may be averaged together to obtain a single lightstroke of improved signal to noise ratio. This signal averaging technique of repeating a given lightstroke may also be generalized to the extent that the lamp output may be varied for each repetition, but otherwise held in the same position and pointed in the same direction at the scene. The resulting collection of differently exposed pictures may be combined to produce a lightstroke that captures a broad dynamic range.
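A sketch of these two combining strategies, under the assumption that each repetition is already available as a photoquantigraphic array q (the function names and the normalization scheme are hypothetical illustrations):

    import numpy as np

    def average_lightstroke(repeats):
        # N nominally identical lightstrokes averaged together: the noise
        # standard deviation drops by roughly sqrt(N).
        return np.mean(repeats, axis=0)

    def merge_varied_output(qs, outputs):
        # Generalization: repetitions taken at different lamp outputs are
        # normalized by relative output before averaging, extending dynamic
        # range (a fuller treatment would downweight clipped pixels).
        return np.mean([q / o for q, o in zip(qs, outputs)], axis=0)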
A typical attribute of the images produced using the apparatus of the invention is that of extreme exposure. Some portions of the image are often deliberately overexposed by as much as 10 f-stops or more, while other areas of the image are deliberately underexposed. In this way, selected features of the scene or object are emphasized. Typically, pictures produced using the apparatus of the invention span a very wide range of colorspace. Typically the deliberate overexposure is combined with very strongly saturated colors, so that the portions of the image extend to the boundaries of the color gamut. Accordingly, what is observed in some areas of the images is extreme shadow detail that would not show up in a normally exposed picture. In other areas of the picture, one might see extreme highlight details that would not show up in a normally exposed picture. Thus in order to capture information pertaining to the extreme dynamic range necessary to be able to render images of such extreme exposure range, lightstrokes of extended dynamic range are extremely useful. Moreover, lightstrokes of extended dynamic range may be useful for other reasons such as the synthesis of split-diffusion effects which become more numerically stable and immune to quantization noise or the like, when the input lightstrokes have extended dynamic range.
Finally, it may, at times, be desirable to have a real or virtual assistant at the camera, to direct/advise the photoborg. In this case, the photoborg's viewfinder, which presents an image from the perspective of the fixed camera, also affords the photoborg a view of what the assistant sees. Similarly, it is advantageous at times that the assistant have a view from the perspective of the photoborg. To accomplish this, the photoborg may have a second camera of a wearable form. Through this second camera, the photoborg allows the assistant to observe the scene from the photoborg's perspective. Thus the photoborg and assistant may collaborate by exchange of viewpoints, as if each had the eyes of the other.
The photoborg's camera may alternatively be attached to and integrated with
the
light source (e.g. flashlamp), in such a way that it provides a preview of the
coverage
of the flashlamp. Thus when this camera output is sent to the photoborg's own
wearable computer screen, a flashlamp viewfinder results. The flashlamp
viewfinder
allows the photoborg to aim the flashlamp, and allows the photoborg to see
what
is included within the cone of light that the flashlamp will produce.
Furthermore,
when viewpoints are exchanged, the assistant at the main camera can see what
the
flashlamp is pointed at prior to activation of the flash.
Typically there is a command that may be entered to switch between local mode (where the photoborg sees the flash viewfinder) and exchanged mode (where the photoborg sees out through the main camera and the assistant at the main camera sees out through the photoborg's typically wearable camera).
In many embodiments of the invention the flashlamp is wearable. The flashlamp may also be an EyeTap (TM) flashlamp. An EyeTap flashlamp is one in which the effective source of light is co-incident with an eye of the wearer of the flashlamp.
One aspect of the invention allows a photographer to use a flashlamp and always end up with the ability to produce a picture where there is just the right proportion of flash in relation to the total exposure, and where the photographer may even change the apparent amount of flash after a set of basis pictures has been taken. Using the apparatus of the invention, the photographer simply pushes a button and the apparatus takes, for example, a picture at a shutter speed of 1/250 sec with the flash, then automatically turns off the flash and quickly takes another picture at 1/30 sec. The look and feel of the system is no different than an ordinary camera, and the fact that two or more pictures are taken need not be evident to those being photographed, or to the photographer, since the flash will only fire once, and the second click of the camera shutter, if it is of a mechanical variety, is seldom perceptible if it happens quickly after the first. Preferably a non-mechanical camera is used so that a possibly distracting double or multiple clicking is not perceptible.
After acquiring this pair of "basis pictures", various combinations of the
flash and
non-flash exposures may be synthesized and displayed on a computer screen,
either
after the camera is brought to a base station for processing, or directly upon
the screen
of a wearable computer that the photographer is using, or perhaps directly
inside the
viewfinder of the camera itself, if it has an electronic viewfinder. The picture that
best matches personal preference may be selected and printed. Thus the desired
ratio
of flash to ambient light can be selected AFTER the basis pictures have been
taken.
Furthermore, color correction can be done on the flash and ambient components
of
the picture separately (automatically or manually). If the picture was taken
in an
office, the greenish cast of the fluorescent lights can be removed without
altering the
face of someone lit mostly by the flash.
Furthermore, the background may be colored for interesting effects. For example, suppose the background is mostly sky. The flash image may be left unaltered,
resulting in a normal color balance for flesh tones, and the sky may be made a
nice
blue color, even though it might have been grey in reality. This effect works
really
nicely for night time portraits where the sky in the background would
otherwise tend
to appear green or dark brown, and changing it to a deep blue by traditional
global
color balance adjustment of the prior art would lend an unpleasant blue cast
to the
faces of the people in the picture.
Each of the two basis pictures may be generated in accordance with a Wyckoff
principle ("definition enhancement") as follows: the flash may be activated
multiple
times. Without loss of generality, consider an example where the flash is
activated 3
times with low, medium and high output levels, and where 3 non-flash pictures
are
also taken in rapid succession with three different exposures as well. Two
basis images
of extended dynamic range are then synthesized from each set of three pictures
using
the Wyckoff principle.
More generally, any number of pictures with any particular ratio of flash and ambient exposure may be collectively used to estimate a two dimensional manifold in the MN-dimensional picture space defined by a picture of dimensions M by N.
The major aspect of this invention involves the Lightpainting method described earlier. The invention permits the photographer to capture the result of exposure to each flash of light (called a "lightstroke", analogous to an artist's brush stroke) separately. The lightstrokes can be electronically combined in various ways before or after the photographer has packed up the camera and left the scene. In lightpainting, photographers often place colored filters over the flash to simulate a scene lit by multiple sources of different colored lights. Using the apparatus of the invention, no filters are needed, because the color of each lightstroke may be assigned electronically after the photographer has left the scene, although optional filters may still be used in addition to electronic colour selection. Therefore, the photographer is not necessarily committed to decisions about the choice of color, or the relative intensity of the various lightstrokes, and is also free to make decisions regarding whether or not to apply, and to what extent to apply, split-diffusion, after leaving the scene.
These collections of lightstrokes are referred to as a "lightspace". The image pairs in the above flash/no-flash example are a special case of a lightspace where the flash picture is one lightstroke and the non-flash picture is another. In the case of black and white (greyscale) images, the lightspace is homomorphically equivalent to a vector space, where the coefficients in the vector sum are a scalar field. This process is a generalization of homomorphic filtering, where a pointwise transfer function is applied to each entire image, a weighted sum is taken, and then the inverse transfer function is applied to this sum. In practice, with typical cameras, a sufficiently pleasing image results if each image is cubed, the results are added together with the desired weighting, and the cube root of the sum is computed. In the case of color images, the vector space is generalized to a module space, for colour coordinate transformations and various filtering, blurring, and diffusion operations. Alternatively the process may be regarded as three separate vector spaces, one for each colour channel.
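A minimal sketch of this homomorphic combination, assuming greyscale NumPy images scaled to [0, 1] and the cube law as the pointwise transfer function:

import numpy as np

def combine_lightstrokes(images, weights):
    # cube each image (generic inverse transfer function), take the
    # weighted sum in lightspace, then return the cube root of the sum
    total = np.zeros(images[0].shape, dtype=np.float64)
    for image, weight in zip(images, weights):
        total += weight * image.astype(np.float64) ** 3
    return np.cbrt(total)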
Another aspect of the invention is that the photographer need not work in
total
darkness, as is typically the case with ordinary lightpainting. With a typical electronic flash, and even with a mechanical shutter (as is used with photographic film), the shutter is open for only 1/500 sec or so for each "lightstroke". Thus the lightpainting can be done under normal lighting conditions (e.g. the room lights may often be left on). This aspect of the invention pertains both to traditional lightpainting (where the invention allows multiple flash-synched exposures to be made on the same piece of film), as well as to the use of separate recording media (e.g. separate film frames or electronic image captures) for each lightstroke. The invention makes use of innovative communications protocols and a user-interface that maintain the illusion that the system is immune to ambient light, while requiring no new skills beyond those of traditional lightpainting. The communications protocols typically include a full-duplex radio communications link so that a button on the flash sends a signal to the camera to make the shutter open, and at the same time, a radio wired to the flash sync contacts of the camera is already "listening" for when the shutter opens. The fact that the button is right on the flash gives the user the illusion that he or she is just pushing the lamp test button of a flash as in normal lightpainting, and the fact that there is really any communications link at all is hidden by this ergonomic user interface.
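The handshake can be sketched as follows, with UDP datagrams standing in for the two radio channels; the address, message strings, and callback names are illustrative assumptions, not the protocol actually specified here:

import socket

BASE = ("192.0.2.1", 9100)   # hypothetical base-station address
# each side creates: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def flash_button_pressed(sock, fire_flash):
    # Inbound channel: ask the base station to open the shutter, then wait
    # for the outbound confirmation (sent when the X contacts close) before
    # firing, so the user perceives a single button press.
    sock.sendto(b"OPEN_SHUTTER", BASE)
    msg, _ = sock.recvfrom(64)
    if msg == b"SHUTTER_OPEN":
        fire_flash()

def base_station_loop(sock, open_shutter, wait_for_x_sync):
    # Runs at the camera: open the shutter on request, then echo the
    # X-sync event back over the outbound channel to trigger the flash.
    while True:
        msg, addr = sock.recvfrom(64)
        if msg == b"OPEN_SHUTTER":
            open_shutter()
            wait_for_x_sync()   # e.g. poll the flash sync contacts
            sock.sendto(b"SHUTTER_OPEN", addr)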
The invention also includes a variety of options for making the lightpainting task easier and more controlled. These include such innovations as a means for a photoborg to determine if he or she can be "seen" by the camera (e.g. means to indicate the extent of the camera's coverage), various compositional aids, means of providing workspace-illumination that has no effect on the picture, and some innovative light sources. Other innovations such as EyeTap cameras, EyeTap light sources, etc., and further means of collaboration among a community of photoborgs are also included in the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail, by way of examples which in no way are meant to limit the scope of the invention, but, rather, these examples will serve to illustrate with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of a typical illumination source used in conjunction with
the
invention, comprising a data entry or command entry device such as a
pushbutton
switch, which, when pressed, causes the lamp to flash, but not directly;
instead the
lamp flashes as a result of a bi-directional communications protocol.
FIG. 2a is a diagram of the camera and base station that receives the signal
from
the light source shown in FIG. 1, wherein the signal causes the camera shutter
to
open, and it is the opening of the camera shutter which sends back a
confirmation
signal to the illumination source of FIG. 1, causing the flash to be
triggered.
FIG. 2b is a diagram of the camera and base station that uses a flash detector
instead of an explicit inbound radio channel for synchronization.
FIG. 3 shows a typical usage pattern of the invention in which the fixed
(static)
nature of the camera and base station is emphasized by way of concentric
circles
denoting radio waves sent to (inbound) it and radio waves sent from (outbound)
it,
and where a hand-held version of the light source depicted in FIG. 1 is
flashed three
times at three different locations.
FIG. 4 shows a detailed view of the user-interface typical of an instrument
depicted
in FIG. 1, where the data entry device comprises a series of pushbutton
switches and
where there is also a data display screen affixed to the source of
illumination.
FIG. 5 shows an example where pictures of dimension 640x480 are captured by
repeated flashing of the apparatus of FIG. 1 at different locations, where the
picture
from each exposure is represented as a point in a 307200 (640x480)
dimensional
photoquantigraphic imagespace.
FIG. 6 shows how some of these pictures, which are called lightvectors when
represented in photoquantigraphic imagespace, may fall in a subspace of the
30'l200
dimensional photoquantigraphic imagespace.
FIG. 7a shows an example of a two dimensional subspace of the 307200 dimensional lightvectorspace, where the corresponding pictures associated with the lightvectors in this space are positioned on the page to correspond with their coordinates in the plane of the page.
FIG. 7b shows how the three pictures along the first row of the
photoquantigraphic
subspace of Fig. 7a are generated.
FIG. 8 shows a photoquantigraphic coordinate transformation, which appears as
a coordinate transformation of the two dimensional space depicted in FIG. 7a.
FIG. 9a shows the method by which pictures are converted to lightvectors by applying a linearizing inverse transfer function, after which the lightvectors are added together (possibly with different weighting) and the resulting lightvector sum is converted back to a picture by way of the forward transfer function (inverse of that used to convert the incoming images to lightvectors).
FIG. 9b shows the calculation of a photoquantigraphic sum in pseudocolor modulespace.
FIG. 9c shows photorendering (painting with lightmodules), e.g. calculation of a photoquantigraphic sum in pseudocolor modulespace.
FIG. 9d shows a phlashlamp made from 8 flashlamps, used to generate some of the lightvectors of Fig. 9c.
FIG. 9e shows lightspace rendering in CMYK colorspace.
FIG. 9f shows the inverse gamut warning and gamut boundary management aspect
of the invention.
FIG. 9g shows an alternate embodiment of vividness enhancement given a limited
color gamut.
FIG. 9h shows how the alternate embodiment of Fig. 9g manifests itself in the
final output lightmodule painting of CMYK printed matter.
FIG. 10a shows a general philter operation, implemented by applying a photoquantigraphic filter (e.g. by converting to lightvectorspace, filtering, and converting back).
FIG. 10b shows the implementation of split diffusion using a philter on one
lightvectorspace quantity and no filter on the other quantity.
FIG. 10c shows an image edit operation, such as a pheathering operation, implemented by applying a photoquantigraphic edit operation (e.g. photoquantigraphic feathering).
FIG. 10d shows a philter operation applied over an ensemble of input images.
FIG. 11 shows how the estimate of a single lightvector (such as v4 of Fig. 5)
may
be improved by analyzing three different but collinear light vectors.
FIG. 12a shows the converse of Fig. 11, namely to illustrate the fact that to generate a Wyckoff set (as strongly colored lightvectors approximately do over their color channels), one desires to begin with a great deal of dynamic range, as might be captured by a Wyckoff set.
FIG. 12b attempts to make this point of Fig. 12a all the more clear by showing
that a strongly colored filter exhibits an approximation to the Wyckoff effect
by virtue
of the different degrees of attenuation in different spectral bands of a color
camera.
FIG. 12c shows a true Wyckoff effect implemented for a scene that is monochromatic and a color camera with strongly colored filter.
FIG. 13a shows the EyeTap (TM) flashlamp or phlashlamp aspect of the invention.
FIG. 13b shows a wide angle embodiment of the EyeTap (TM) flashlamp or
phlashlamp.
FIG. 14a shows an EyeTap (TM) camera with planar diverter.
FIG. 14b shows an EyeTap (TM) camera with curved diverter which is also part
of the optical system for the camera.
FIG. 15 shows an embodiment of the finder light or hiding light, which helps a
photoborg determine where the camera is, or whether or not he or she is hidden
from
view of the camera.
FIG. 16 shows an embodiment of the lightsweep (pushbroom light).
FIG. 17a shows an embodiment of the flash sequencer aspect of the invention.
FIG. 17b shows an embodiment of the invention for acquiring lightvector
spaces,
using special flashlamps that do not require a sequencer controller.
FIG. 18 shows the user interface to a typical session of the Computer Enhanced
Multiple Exposure Numerical Technique (CEMENT) program.
While the invention shall now be described with reference to the preferred embodiments shown in the drawings, it should be understood that the intent is not to limit the invention only to the particular embodiments shown, but rather to cover all alterations, modifications and equivalent arrangements possible within the scope of the appended claims.
In all aspects of the present invention, references to "camera" mean any device or collection of devices capable of simultaneously determining a quantity proportional to the amount of light arriving from a plurality of directions and/or at a plurality of locations.
References to "photography", "photographic", and the like, may also be taken
to include "videography", "videographic", and the like. Thus the final result
may
be a video or other sequence of images. and need not be limited to a single
picture.
Indeed, the term "picture" may mean a motion picture, in addition to just
simply a
still picture.
Similarly, references to "data entry device" shall not be limited to traditional keyboards and pointing devices such as mice, but shall also include input devices more suitable to the "wearable computers" of the invention, as well as to portable devices. Such input devices may include both analog and digital devices as simple as a single pushbutton switch or as sophisticated as a voice controlled, brainwave controlled, respiration controlled, or heart rate controlled device, or devices controlled by a combination of these or other biosignals. The input devices may also include possible inferences made as to when to capture a picture or trigger an event in a manner that does not necessarily require or involve conscious thought or effort.
Moreover, references to "inbound channel" shall not be limited to radio communications devices as depicted in the drawings through the use of the standard international symbol for an antenna, but shall also include communications over wire (twisted pair, coax, or otherwise), infrared communications, or any other communications medium from a user to the camera base station. References to a base station also do not limit it to a station that is permanent or semi-permanent; base stations may include mobile units mounted on wheels or vehicles, and units mounted or carried on other persons.
Similarly, references to "outbound channel" shall not be limited to radio communication, as depicted in the drawings, but may also include other means of communication from the camera to the user, such as the ability of a user to hear the click of a camera shutter, perhaps in conjunction with steps taken to make the sound of the shutter louder, or to add other audible, visual, or the like, events to the opening of the shutter. The "outbound channel" may also include means by which a photoborg can confirm that a camera shutter is open for an extended period of time, or means of making a photoborg aware of the progression of time for which a shutter is open.
The use of "shutter" is not meant to limit the scope of the invention. While
the
drawings depict a mechanical shutter with solenoid, the invention may be (and
is
more often) practiced with electronic cameras that do not have explicit
shutters, but,
rather, the shuttering operation may comprise electronic control of a sensor
array, or
simply the selection of the appropriate frames) from a video sequence. Thus
when
reference is made to the time during which the camera is "sensitive to light",
what is
meant is that there is an intent or action that collects information from the
camera
during that time period, so that this intent or action itself serves to take
the place of
an actual shutter.
Likewise, while the drawings and explanation involve two separate communications channels for the inbound and outbound channels, operating at different radio frequencies, it will be understood that the invention is typically practiced using a single bidirectional communications link implemented via TCP/IP communications protocols between a wearable computer system and a stationary computer at the base station, but even this method of communication is not meant to limit the scope of the invention. The communications channel could comprise, for example, a single piece of string or rope, where the user tugs on the rope to cause a picture to be taken, and the rope is tugged back by the camera to activate the user's light source. Moreover, this communication need not be bidirectional, and may, for example, be implemented simply by having suitable timing between the flash and the camera, so that a signal need only be sent from the flash to the camera, and then the flash may be fired at the appropriate interval, by a timing circuit contained therein, so that there is no explicit outbound communications channel or need for one. In this case, it will be understood that the outbound communications channel may comprise synchronized timing devices, at least one of which is associated with a photoborg's apparatus and at least one of which is associated with a camera at the base station.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS,
WITH REFERENCE TO DRAWINGS
In using the invention to selectively and sequentially illuminate portions of a scene or object, a photoborg (e.g. a photographer, artist, lighting technician, hobbyist, professional or other user of the invention) will typically point a light source at some object or portion of the scene in view of the camera, and issue a command through a wearable computer system, or through a portable system of some sort, to acquire a picture which is affected, at least in part, by the light source.
In the embodiment presented in Fig 1 this command is issued by pressing button 110. Button 110 turns on radio transmitter 120, causing it to send a signal out through its antenna 130. Transmitter 120 is designated as "iTx" where "i" denotes inbound. The inbound pathway is the communications pathway from a photoborg to the base station.
The signal from iTx 120 is received at a base station and remote camera, depicted in Fig 2a, by antenna 210, where it is demodulated by inbound receiver iRx 220. Receiver 220 may be something as simple as an envelope detector, or an LM567 tone decoder that activates a relay. In some embodiments it is a communications protocol running over amateur packet radio, using a terminal node controller in KISS mode (TCP/IP). In a commercially manufactured embodiment, however, it would preferably be a communications channel that does not require a special radio license, or the like. In the simple example illustrated here, inbound receiver 220 activates shutter solenoid 230.
What is depicted in this drawing is, for illustrative purposes, approximately typical of a 1940s press camera fitted with the standard 6 volt solenoid shutter release. Preferably, however, the synchronization does not involve a mechanical shutter, and thus no shutter contacts are actually involved. Instead a sensor array is preferably used. A satisfactory camera having a sensor array is the Kodak (TM) DCS-260.
Continuing with the illustrative embodiment: shutter flash synchronization contacts 240 (denoted "X" in Fig. 2a) activate outbound transmitter oTx 250, causing a signal to be sent out antenna 260. This outbound signal from the base station is received by a photoborg by way of antenna 140. This received signal causes the outbound receiver oRx 150 of the outbound channel to activate an electronic flash, typically via an optocoupler such as a Motorola MOC3020 or the like, which is typically connected to the synchronization terminals 160 (denoted "X" in Fig. 1) of the flash unit 170. An opening 180 on the light source allows light to emerge to illuminate the scene or objects that the source is pointed at.
It should be noted that many older cameras have a so-called "M" sync contact which was meant for firing magnesium flashbulbs. This sync contact fires before the shutter is open. It is often useful to use such a sync contact with the invention, as it may account for delay in the outbound channel, or equivalently allow for a form of pulse compression such as a chirp to be sent over the outbound channel, so that the invention will enjoy a greater robustness to noise and improved signal range. Similarly, where the camera is an electronic imaging device, it may be desirable to design it so that it provides a sync signal in advance of becoming sensitive to light. The advance sync may then be used together with pulse compression.
Typically, the components of Fig 1 are spread out upon the body of the photoborg and incorporated into a wearable computer system or the like, but alternatively, if desired, the entire source may be contained inside a single box, together with all the communications hardware, power source, and perhaps a display means. In this alternative hand-holdable embodiment, a photoborg will then not need to wear any special clothing or apparatus.
A photoborg will typically wear black clothing and hold the light source depicted in Fig 1 using a black glove, although this is not absolutely necessary. Accordingly, the housing of the apparatus 190 will typically be flat black in colour, and might have only two openings, one for the button 110, and one for the light opening 180, thereby hiding the complexity of the contents from the photoborg so as to make the device more intuitive for use by those not technically skilled or inclined.
The purpose of this aspect of the invention, illustrated in Fig. 1 and Fig. 2a, is to obtain a plurality of pictures of the same subject matter, where the subject matter is differently illuminated in each of the pictures. There are various other embodiments of this aspect of the invention, which also allow this process to be performed. For example, the camera set up at the base station may be a video camera, in which case the photoborg can walk around with a flashlamp and flash the lamp at various portions of the subject matter in the scene.
Afterwards, the photoborg or another person can play back the video, and extract the frames of video during which a flashlamp was fired. Alternatively, a computer can be used to analyze the video being played back, and can automatically detect which frames include subject matter illuminated by flash, and can then mark the locations of these frames or separate them from the entire video sequence. If a computer is going to be used to analyze the video afterwards, it may also analyze the video during capture. This analysis would greatly reduce the storage space needed, because the system could just wait until a flashlamp was fired, and then automatically store the pictures in which subject matter was illuminated (in whole or in part) by a flashlamp.
Fig. 2b depicts such a system. Video camera 265 is used to take the pictures. Video camera 265, denoted CAM, is connected to video capture device 270, denoted CAP. The capture device 270 captures video continuously, and sends digitized video to the processor, PROC, 275. A satisfactory connection between CAP 270 and PROC 275 is an IEEE 1394 connection. In particular, a satisfactory unit that can be used as a substitute for both CAP 270 and PROC 275 is a digital video camera such as a SONY PC7, which outputs a digital video signal.
Processor 275 originally captures one or more frames of video from the scene under ambient illumination. If more than one frame is captured, the frames may be photoquantigraphically averaged together. By this means, or by other similar means, a background frame is stored in memory 280, denoted MEM. The stored image can then be compared against further incoming images to see if there are regions that differ. If there is sufficient difference over a region of a new incoming frame of video to overcome a certain noise threshold setting, then the new frame is captured. The comparison frame may also be updated between flashes, in case the ambient light is slowly changing. In this case, PROC 275 processes with an assumption that flashes are occasional, e.g. that there will likely be many frames of video before and after each flash, so that changes in ambient light can be tracked. This tracking can also accommodate sudden changes, as when lights turn on inside a building by timer control, since the changes will be more like a step function, while the flashlamp is more like a Dirac delta measure (e.g. the ambient lights may quickly change state, but they don't usually go on and then off again in a very short time).
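A minimal sketch of such automatic flash detection, assuming frames arrive as NumPy arrays in [0, 1], a cube-law inverse response, and arbitrarily chosen threshold and tracking constants:

import numpy as np

def detect_flash_frames(frames, f_inv=lambda f: f ** 3.0,
                        threshold=0.05, fade=0.02):
    # Yield frames judged to contain flashlamp illumination, comparing each
    # incoming frame against a slowly updated ambient background estimate.
    background = None
    for frame in frames:
        q = f_inv(frame.astype(np.float64))   # compare in lightspace
        if background is None:
            background = q
            continue
        if np.mean(np.abs(q - background)) > threshold:
            yield frame                        # lightstroke candidate
        else:
            # track slow ambient changes between flashes
            background = (1.0 - fade) * background + fade * q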
In this way, there is no need for any explicit radio communications system or the like. The photoborg simply sets up the camera and leaves it running, and then takes an ordinary flashlamp and walks around lighting up different parts of the scene. Optionally the photoborg may have a one-way communications system from the camera so that he can see what effect his flashlamp is having on the scene. This communications system may be as simple as having a television screen at the base station, where the television screen is big enough for him to see from some distance away. Alternatively, an actual transmitter, such as a radio transmitter, may be used to send signals from the base station to a small television or EyeTap (TM) display built into his eyeglasses.
Each image for which a flashlamp illumination was detected may optionally be sent back to the photoborg by way of communications system 285, denoted COMM, and connected to antenna 290. Images due to each flash of light, as well as one or more images due only to ambient light, are saved on disk 295, denoted DISK.
The components of Fig. 2b may be spread out over the Internet, if desired. The base station is free-running and does not require any human operator, although a manager may remotely log into the base station if it is connected to the Internet. In this case a manager can cement the images together into a photorendering, and she can select certain lightvectors of particular interest to send to a photoborg. There may also be more than one photoborg working together on a project.
A satisfactory camera for camera 265 is an ordinary video camera. Preferably,
however, camera 265 would be a specially built camera in which each pixel
functions as
a lock-in amplifier, to lock onto a known waveform emitted from a specially
designed
electronic flashlamp. The camera of this embodiment of the invention is then
referred
to as a lock-in camera.
Fig 3 depicts the typical usage pattern of the source depicted in Fig 1. A fixed camera 300 (depicted here with a single antenna, as may be typical of a system running with a terminal node controller over a TCP/IP communications link) is used together with a hand-held illumination source which is flashed at one location 310, then moved to a new location 320, flashed again, and so on, ending up at its final location 330 where it is flashed for the last time. Alternatively, a number of photoborgs, each carrying a flashlamp, may selectively illuminate the scene.
Fig 4 depicts a view of a self contained illumination source. While the art is most frequently practiced using a wearable computer with head-up display or the like, it is illustrative to consider a self contained unit with a screen right on it (although there is then the problem that the screen is lit up and may spoil a picture if it becomes visible to the camera, whereas a head mounted display painted black and properly fitted to the eye with suitable polarizers will produce much less environmental light pollution).
This source has pushbuttons 410 denoted by color (e.g. "R" for red, where the button may be colored or illuminated in red), "G" for green, etc. These pushbuttons may be wired so that they take exposures directly in the indicated color (e.g. so that pushing 410 R will cause the apparatus to request a red exposure of the camera), or they may be wired so that pushing R marks the system for red, but does nothing until FILM WRITE (W) 420 is pressed. Pressing W will then send a signal to the camera requesting a red exposure, which typically happens via a spinning filter wheel in front of the camera, wherein the camera shutter opens but the flash sync pulse is not sent back right away. Instead the base station waits until the instant that the red filter is in front of the lens and then, at that exact instant, sends back a flash sync pulse, activating flash 430 so that it sends a burst of illumination out opening 440. Alternatively, these color selections may be made electronically, wherein the only difference between pressing, for example, R, and pressing G is in the form of information appended to the header of the image file in the form of a comment. For example, if the captured image is a Portable PixMap (PPM), the image content is exactly the same except that the comment
#dustcolo 1 0 0
is at the beginning of the image file, as opposed to
#dustcolo 0 1 0
if the green button were pressed.
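For illustration, such a comment might be inserted into a raw PPM file as sketched below; the filenames, and the placement of the "#dustcolo" key as the first comment after the magic number, are assumptions:

def tag_ppm_with_color(path, rgb):
    # Insert a '#dustcolo r g b' comment after the magic number (e.g. 'P6')
    # of a PPM file; the image data itself is untouched, as described above.
    with open(path, "rb") as fp:
        data = fp.read()
    magic, rest = data.split(b"\n", 1)
    comment = ("#dustcolo %g %g %g\n" % rgb).encode("ascii")
    with open(path, "wb") as fp:
        fp.write(magic + b"\n" + comment + rest)

# tag_ppm_with_color("v001.ppm", (1, 0, 0))   # red button
# tag_ppm_with_color("v002.ppm", (0, 1, 0))   # green button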
The effect of pressing the red button might show up on screen 450 in the form of an additional red splotch of light upon the scene if, for example, there is a single object illuminated with the white flash of light to which the color red has been assigned. In this way, it is possible to watch the lightpainting build up slowly. In the case of an electronic imaging system, any of the effects of these flashes (called lightstrokes) may be deleted or changed in colour after they have been made (e.g. since the colour choice is just header information in the computer file corresponding to each picture). However, in the case of a film-based camera, it is not possible to change the color, so a great deal of film could be wasted were it not for the preview button on control panel 420. In particular, the preview button (P) performs the operation on a lower resolution electronic camera, to allow the photoborg to see its effect, prior to writing to film (W).
Also, in practice, not every flash exposure need be written to a separate frame of film, and in fact, if desired, all of the flash exposures can be written to a single frame of film, to avoid the need to scan the film and combine the exposures later. In this way, a photoborg can preview (P) over and over again, and then practice several times each lightstroke of a long lightpainting sequence, view the final result, and then execute the exact same lightpainting sequence onto a single frame of film.
In practice, when using film, the art of this invention is practiced somewhere between the two extremes of having a lightpainting on one single frame and having each lightstroke on its own frame. Typically portions of the lightpainting are practiced using P 420 and then written onto a frame of film using several presses of W 420. Then the film is advanced one frame (by pressing all three buttons on panel 420 together, or by some other special signal sent to the camera) and another portion of the lightpainting is completed. A typical lightpainting then comprises fewer than 36 frames of film and can thus be captured on the same negative strip, which simplifies the mathematical analysis of the film once it is scanned, since each lightvector will have undergone the exact same film aging prior to development, as well as the exact same development.
In the situation where film is not being used (e.g. in an embodiment of the invention using a completely electronic camera), lightvectors may still be grouped together if it is decided that they never need to be independently accessed. In this case, a new single image file is created from a plurality of lightvectors. Typically a new file format is needed in order to preserve the full dynamic range, especially if a Wyckoff effect (combining differently exposed lightvectors) is involved. Typically these combined files will take the form of a so-called "Portable DoubleMap (PDM)", and will have a file header of the form
P8
#photoborg 8.5 has image address space v850 to v899
#photoborg 8.5 selected:
#v852.pgm 0 0 1 g *2.5
#v853.pgm 0 0 1 g *2.5
#v854.pgm 0 0 1 g *2.5
#cement to v855.pdm
1536
1024
255
where the P8 indicates image type PDM, lines beginning with the # symbol are comments, and in particular, v852.pgm is the file name of the file containing both the ascii header and the raw binary data, and the numbers following the filename indicate how the image is to be cemented into the viewfinder with the other images. In particular, the first three numbers after the filename indicate the color blue (in RGB), the next symbol, "g", indicates that only the green channel of the image is to be considered (e.g. instead of mapping the whole image to blue, which would only consider the blue channel, the green channel is mapped to blue), and the "*2.5" indicates a photoquantigraphic multiplicative factor (e.g. this lightvector will be boosted to 2.5 times its normal strength). This line together with the next two lines indicates what will be combined to make the new file v855.pdm.
After that, the next two lines indicate the file dimensions, and the last line of the header indicates the default range of the image data. Since PDM files are of type double (e.g. the numbers themselves range up to more than 10^308), this number does not indicate the limit of the data type, but, rather, the limit of the data stored in the datatype (e.g. the maximum entry in the image array).
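A sketch of how such a header might be parsed, assuming the layout shown above (magic "P8", comment lines beginning with "#", then width, height, and default range, followed by the raw binary data):

def read_pdm_header(path):
    # Returns the magic, the cement-recipe comment lines, and the
    # width, height, and default range of the image data.
    fields, comments = [], []
    with open(path, "rb") as fp:
        while len(fields) < 4:                # magic, width, height, range
            line = fp.readline().decode("ascii").strip()
            if line.startswith("#"):
                comments.append(line)         # e.g. "#v852.pgm 0 0 1 g *2.5"
            elif line:
                fields.extend(line.split())
    magic = fields[0]
    assert magic == "P8", "not a PDM file"
    width, height, maxval = int(fields[1]), int(fields[2]), float(fields[3])
    return magic, comments, width, height, maxval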
Each combined (PDM) lightvector (e.g. v855.pdm) of the above file size occupies 36 megabytes of disk space. Therefore, preferably, when a large number of lightvectors are being acquired, the JPEG file format is used to store the individual lightvectors. A new file format is invented for storing combined lightvectors. This new format is a compressed version of the PDM file format, and normally has the file extension ".jdg". Thus the above file would become "v855.jdg" in a combined form.
Fig 5 depicts lightvectors in the multidimensional photoquantigraphic space formed by either converting a picture to lightspace coordinates (by applying the inverse nonlinear transfer function of the camera to the picture) or by shooting with a camera that provides lightspace (linearized) output directly, and then unrolling a picture so taken into a long vector. For example, a picture of pixel dimension 640x480 may be regarded as a single point in a 307200 dimensional lightvectorspace. Likewise, continuous pictures on film may be transformed (by application of the inverse response of film and scanner) to points in infinite dimensional lightvectorspace, but for purposes of illustration, consider the case when the pixel count is finite while the pixel values remain unquantized. The first lightvector, v1 (short for vector of illumination number 1) 510, is depicted together with v2 520, v3 530, etc. It so happens that v2, v3, and v4 are collinear in this example, owing to the fact that they were captured under exactly the same lighting condition (e.g. the light did not move, but only the exposure was varied, e.g. the light output was varied or, equivalently, the camera sensitivity was varied). Similarly, lightvectors v6 560 and v7 570 are collinear and correspond to two pictures that differ only in exposure. As many other lightvectors as there are separate exposures are present, and there is no particular reason why the number of lightvectors need be limited by the dimension of the space, e.g. the ellipsis 580 ("...") denotes a continuation up to v307200, denoted 590, v307201, denoted 591, and beyond to, for example, v999999, denoted 599 in the figure. However, in practice, due to limitations of film there are typically far fewer lightvectors than the dimension of the space, e.g. 36 lightvectors if using a typical 35mm still film camera, or in the case of a motion picture film camera or most electronic cameras, the number of lightvectors is typically not more than 999 in many applications of the invention. Accordingly, in many previous embodiments of the invention where lightvectors were each stored as a separate file on a hard disk, the filenames of these lightvectors were numbered with three digit alphanumeric filenames of the form v000, v001, ... v123, if, for example, there were 124 lightvectors, so that they would list in numerical order on a typical UNIX-based computer using the "ls" command. For each scene or object being photographed, a new directory was created to hold the corresponding set of lightvectors.
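As a sketch, the conversion between a picture and a point in lightvectorspace might read as follows, again assuming the generic cube-law response pair in place of a measured one:

import numpy as np

def to_lightvector(picture, f_inv=lambda f: f ** 3.0):
    # Linearize with the (assumed) inverse camera response, then unroll
    # into one long vector; a 640x480 greyscale picture becomes a point
    # in a 307200 dimensional lightvectorspace.
    return f_inv(picture.astype(np.float64)).reshape(-1)

def to_picture(lightvector, shape, f=np.cbrt):
    # Inverse operation: roll the vector back up and apply the forward response.
    return f(lightvector).reshape(shape)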
Fig 6 depicts some pictures that represent linear combinations of two light sources and are therefore in a two dimensional lightvector subspace of the 307200 dimensional space. Lightvector 610 denotes the vector spanned by 520, 530, and 540 of Fig 5. Lightvector 630 denotes the lightvector spanned by 560 and 570 of Fig 5. Lightvectors 620 and 640, denoted in bold together with 610 and 630, span a two dimensional lightvector subspace.
Fig. 7a depicts two pictures: 710, taken with a slow shutter (long exposure) and no flash to record the natural light, and 720, taken with a fast shutter and flash to record the response of the scene to the flash. In 710, the entire image is properly exposed, while in 720 the foreground objects are properly exposed while the background objects are underexposed. These two images represent two points in the 307200 dimensional space of Fig. 5 and Fig. 6. Any two such noncollinear (e.g. corresponding to differently lit pictures) points span a two dimensional space, depicted by the plane of the paper upon which Fig. 7a is printed. The two axes of this space are the ambient light axis 730 (labeled 740 with numerals 750) and the flash axis 760 (labeled 770 with numerals 780).
The manner in which the other images are calculated will now be described with reference to Fig. 7b, which depicts the top row of images in Fig. 7a, namely images 790, 792, and 794.
The basis image of Fig. 7a denoted 710 is depicted as function f_1 in Fig. 7b. The basis image 720 is denoted as function f_2. These functions are functions of the quantity of light falling on the image sensor due to each of the sources of light. The quantity of light falling on the image sensor due to the natural illumination is q_1(x, y). That due to the flashlamp is q_2(x, y). Thus the picture 710 is given by f_1(x, y) = f(q_1(x, y)). Similarly, picture 720 is given by f_2(x, y) = f(q_2(x, y)). Thus, referring to Fig. 7b, passing f_1 and f_2 through the inverse camera response function f^{-1} results in q_1 and q_2, which are then distributed through vectorspace weights w_11 through w_32. These vectorspace weights are denoted by circles along the signal flow paths in Fig. 7b.
The vectorspace weights w_ij map the bases q_j to lightvectors q_1i to form the first (top) row of pictures depicted in Fig. 7a, according to the following equation:

\[
\begin{bmatrix} f_{11} \\ f_{12} \\ f_{13} \end{bmatrix}
= f\!\left(
\begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \end{bmatrix}
\begin{bmatrix} f^{-1}(f_1) \\ f^{-1}(f_2) \end{bmatrix}
\right)
\tag{0.1}
\]

which corresponds to the operation performed in Fig. 7b. The linear vectorspace spanned by q_1i is called the lightvector space. The nonlinear space spanned by f_1i is called the lightstroke space.
The other two rows of pictures in Fig. 7a are formed similarly. Clearly we need not limit ourselves to a 3 by 3 grid of pictures in Fig. 7a. In general we will have a continuous two-dimensional lightstroke space, from which infinitely many pictures can be generated within a continuous plane like that of Fig. 7a.
Any image that would have resulted from any amount of flash or natural light mixture falls somewhere on the page (plane) depicted in Fig. 7a. For example, at coordinates (0,2), which correspond to zero ambient exposure (fast shutter) and two units of flash, we have an image 790 where the foreground objects are overexposed while the background objects are still underexposed. At (1,2) we have an image 792 in which the foreground is grossly overexposed and the background is normally exposed, while at (2,2) we have an image 794 in which the foreground is so heavily overexposed that these objects are completely white, while the background subject matter is somewhat overexposed. Therefore, lightvectors 710 and 720 are all that are needed to render any of the other lightvectors (and thus to render any of the other images). Thus a photographer who is uncertain exactly how much flash to use may simply capture a small number (at least two) of noncollinear lightvectors (e.g. take two pictures that have different ratios of flash and natural light) and can later render any desired fill-flash ratio. Thus we see that 710 and 720 form a basis for rendering any of the other nine images presented on this page, and in fact any of the other infinitely many images at other coordinates on this two dimensional page. In practice, however, due to noise (quantization noise owing to the fact that the camera may be digital, as well as other forms of noise) and the like, a better image will be rendered with a more accurate form of determining the two lightvectors than just taking one picture for each one (two pictures total). By taking more than just one picture each, a better estimate is possible. For example, taking ten identical pictures for 710 and averaging them together, as well as taking another ten identical pictures for 720 and averaging them together, will result in much better lightvectors. Moreover, instead of merely taking multiple identically exposed images for each lightvector, the process will be better served by taking multiple differently exposed images for each lightvector.
This is in fact the scenario depicted in Fig 5 where, for example, lightvector v2,3,4 is determined from three lightvectors 520, 530, 540, using the Wyckoff principle. The Wyckoff principle is a generalization of signal averaging known to those skilled in the art of image processing.
Finally, a further generalization of signal averaging, which is also a generalization of the Wyckoff principle, is the improved estimation of the lightstroke subspace through capture of lightvectors off the axes. For example, suppose that image 794 was not rendered from 710 and 720, but instead suppose that image 794 was captured directly from the scene with two units of flash and two units of ambient light. From these three pictures, 710, 720, and 794, any other picture in the two dimensional lightvector subspace can be rendered. These three pictures form "basis" images. Since the "bases" are overdetermined (e.g. three vectors that define a plane), there is redundant information, and redundant information assists in combating noise, just as the redundant information of signal averaging with identical exposures (identical points in the 307200 dimensional space) or implementing the Wyckoff principle with collinear lightvectors (collinear points in the 307200 dimensional space) did. Thus the photographer may capture a variety of images of different combinations of flash and natural illumination, and use these to render still others having combinations of flash and natural illumination different from any that were actually taken.
Often it is not possible to have the shutter be fast enough to completely exclude background illumination. In particular, lightpainting is normally practiced in dark places, and it would be desirable that this art could be practiced in places that cannot be darkened completely, as might arise when streetlamps cannot be shut off, or when a full moon is present, or when one might wish to have the comfort and utility of working in an environment that is not totally dark. Accordingly, a coordinate transformation may be applied to all lightvectors with respect to the ambient lightvector. The ambient lightvector may be obtained by simply taking one picture with no activation of flash (to capture the natural light in the scene). In practice, many identical pictures may be taken with no flash, so that photoquantigraphic signal averaging can be used to determine the ambient lightvector. Preferably various differently exposed ambient light pictures are taken to calculate an extended response picture for the ambient light.
The photoquantigraphic subtraction of the ambient light image f_0 from another image f_2 is given by the expression f(f^{-1}(f_2) - f^{-1}(f_0)), where f is the response function of the camera. More generally, an entire lightvector space may be photoquantigraphically coordinate transformed. Typically, when photo-subtracting ambient illumination, it is preferable that the background illumination lightvector actually be determined by a plurality of collinear but different (e.g. of different lengths) lightvectors. In particular, it will be seen later, by way of other examples, that lightvectors are generally determined making use of the Wyckoff principle.
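A sketch of this photo-subtraction, assuming the cube-law response pair; clipping negative light estimates to zero is an added assumption to guard against noise:

import numpy as np

def photo_subtract(f2, f0, f_inv=lambda f: f ** 3.0, f_fwd=np.cbrt):
    # f(f^{-1}(f2) - f^{-1}(f0)): subtract the ambient lightvector in
    # lightspace, then convert the difference back to a picture.
    q = f_inv(f2.astype(np.float64)) - f_inv(f0.astype(np.float64))
    return f_fwd(np.maximum(q, 0.0))   # clip negative estimates (noise)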
An example of a photoquantigraphic coordinate transformation is depicted in Fig 8, where the ambient light axis 810 remains fixed, but the space is sheared along the flash axis owing to the change to a new axis 820, now called "Total illumination" 830. The numerals now extend further 840, owing to the fact that the new coordinates capture the essence of images such as 850, which is now the greatest along axis 820. Mathematically, the example coordinate transformation given in Fig. 8 may be written:

\[
\begin{bmatrix} \text{ambient} \\ \text{total} \end{bmatrix}
= f\!\left(
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} f^{-1}(\text{ambient}) \\ f^{-1}(\text{flash}) \end{bmatrix}
\right)
\tag{0.2}
\]

which, through example, illustrates what is meant by a "photoquantigraphic coordinate transformation".
Fig 9a illustrates what is meant by a photoquantigraphic summation, and illustrates through example one of the many useful mathematical operations that can be performed in lightspace. Images 910 are processed through inverse (f^{-1}) transfer functions 920 to obtain photoquantigraphic measurements (lightvectors) 930. These photoquantigraphic measurements are denoted q_1 through q_n.
Typically the inverse transfer functions 920 expand the dynamic range of the
images, since most cameras have been built to compress the dynamic range of typical scenes onto some recording medium of limited dynamic range. In particular, inverse transfer functions 920 are preferably such that they undo the inherent dynamic range compression process performed by the specific camera in use. In the absence of knowledge about the specific camera being used, the inverse transfer function 920 may be estimated by comparing two or more pictures that differ only in exposure, as described in U.S. Pat. No. 5,828,793. Alternatively, a generic inverse transfer function may be used. A satisfactory generic inverse transfer function is the function f^{-1}(f_i) = f_i^3. Thus a satisfactory operation is to cube each of the incoming images, although it would be preferable to try to actually estimate or determine f^{-1}. The transfer functions drawn in boxes 920 are actually a plot of the function f^{-1}(f_i) = f_i^2, simply because the parabolic shape is one of the easiest concave-upwards plots to draw by hand, and is typical of the shape of many inverse transfer functions, e.g. it is visually similar to the actual shapes of curves typically used. In this illustration, then, every pixel of each incoming image 910 is squared to make a new image 930, which is the lightvector. These squared images are summed 940 and the square root of the sum is computed by transfer function 950 to produce output image 960. Optionally, the photoquantigraphic summation may be a weighted summation, in which case weights 935 may be adjusted as desired.
Suppose that each of the input images in Fig. 9a corresponded to a set of pictures that differed only in illumination, and that each of these pictures corresponded to a picture taken by the apparatus of Fig. 3 with the flashlamp in each of the positions depicted in Fig. 3. Then image 960 would have the visual appearance of an image that would have been taken if the three flashes depicted in Fig. 3 were simultaneously activated at the three locations of Fig 3, rather than in sequence.
It should be noted that merely adding the images together will not produce the desired result, because the images do not record the quantity of light, but, rather, some compressed version of it.
Similarly, the example depicts a squaring, when in fact the actual inverse function needed for most cameras is closer to raising to an exponent between about three (cubing) and five (raising to the fifth power). As stated above, inverse function 920 might cube the images, and forward function 950 might extract the cube root of the sum, or in the case of a typical film scanned by PhotoCD, it has been found by experiment that the exponent of 4.22 for 920 and (1/4.22) for 950 is satisfactory. Moreover, a more sophisticated transfer function other than simply raising images to exponents is often used when practicing the invention presented here. Typically the curves 920 are monotonic and also of monotonic slope. Lastly, if and when cameras are made to directly support the art of this invention, these cameras would provide measurements linearly proportional to the quantity of light received, and therefore the images would themselves embody a lightvector space directly.
In general, the input images will typically be color pictures, and the notion of photoquantigraphic vectorspace implicit in Fig. 9a is replaced with that of photoquantigraphic modulespace. Typically a color camera involves the use of three separate color channels (red, green, and blue). Thus the inverse transfer functions 920 will apply a separate inverse transfer function to each of the three channels. In practice, the three separate inverse transfer functions for a particular camera are quite similar, so it may be possible to apply a single inverse transfer function to each of the three color channels. Once these color inverse transfer functions are applied, then quantities of light 930 are color quantities, and weights 935 are colour weights. In general, weights 935 will be three by three matrices (e.g. matrices with nine elements). Thus instead of a single scalar constant, as with greyscale images, there are nine scalar constants for color images. These constants 935 amount to a generalized color coordinate transformation, with scaling of each of the colour components. The resulting color quantities are then added together, where adder 940 is now a three channel adder. Forward transfer function 950 is also a color transfer function (e.g.
comprises three scalar transfer functions, one for each channel). Output image
960 is
thus a color image.
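A sketch of this color modulespace summation, in which each weight 935 is a three-by-three matrix; the cube-law response pair is again an assumption standing in for the camera's measured curves:

import numpy as np

def modulespace_sum(images, matrices, f_inv=lambda f: f ** 3.0, f_fwd=np.cbrt):
    # images   : list of HxWx3 color arrays
    # matrices : list of 3x3 arrays (generalized color coordinate
    #            transformations, one per lightvector)
    total = np.zeros_like(images[0], dtype=np.float64)
    for image, matrix in zip(images, matrices):
        q = f_inv(image.astype(np.float64))           # per-channel linearization
        total += np.einsum("ij,hwj->hwi", matrix, q)  # 3x3 weight per lightvector
    return f_fwd(total)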
In some cases, it is preferable to completely ignore the color information in the original scene, while still producing a color output image. For example, a photoborg may wish to ignore color information present in some of the lightstrokes, while still imparting a strong color effect in the output. Such lightstrokes will be referred to as pseudocolor lightstrokes. An example of when such a lightstroke is useful is when shooting late at night, when one wishes to have a blue sky background in a picture, and the sky is not blue. For example, suppose that the sky is green, or greenish/reddish brown, as is typically the case for a nighttime sky. A color image of the sky is captured, and converted to greyscale. The greyscale image is converted back to color by repeating the same greyscale entry three times. In this way the file and data type is compatible with color images but contains no color information. Accordingly, it may be colorized as desired; in particular, a weighting causing it to appear in or affect only the blue channel of the output image may be made, notwithstanding the fact that there was little if any blue content in the original color image before it was converted to greyscale. An example in which two greyscale images are combined to produce a pseudocolor image is depicted in Fig. 9b.
Specifically, Fig. 9b depicts this variation of the photoquantigraphic modulespace in which the color coordinate transformation matrices 935 (of Fig. 9a) are:

\[
\begin{bmatrix} w_R \\ w_G \\ w_B \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.299 & 0.587 & 0.114 \\ 0.299 & 0.587 & 0.114 \end{bmatrix}
\tag{0.3}
\]

where the square matrix is formed by repeating the standard YIQ transformation three times. Thus it is clear that this matrix will destroy any color information present in the input image, yet still allow the output image to be colorful (by way of the ability to adjust weights w_R, w_G, and w_B).
In Fig. 9b there is depicted a situation involving two input images, so the corresponding mathematical operation is that of a photoquantigraphic pseudocolor modulespace given by:
\[
f\!\left(
\begin{bmatrix} w_{R1} \\ w_{G1} \\ w_{B1} \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}
\begin{bmatrix}
0.299 & 0.587 & 0.114 \\
0.299 & 0.587 & 0.114 \\
0.299 & 0.587 & 0.114
\end{bmatrix}
f^{-1}(f_1)
+
\begin{bmatrix} w_{R2} \\ w_{G2} \\ w_{B2} \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}
\begin{bmatrix}
0.299 & 0.587 & 0.114 \\
0.299 & 0.587 & 0.114 \\
0.299 & 0.587 & 0.114
\end{bmatrix}
f^{-1}(f_2)
\right)
\tag{0.4}
\]
This mathematical operation can be simplified by just using a greyscale camera. In fact it is often desirable to use only a greyscale camera, and simply paint the scene with pseudocolor lightvectors. This strategy is particularly useful when the scene includes apartment buildings or dwellings, so that an infrared camera and infrared flashlamp may be used. In this way a colorful lightvector painting can be made without awakening or disturbing residents. For example, it may be desired to create a colorful lightvector painting of an entire city, by walking down each street with a flashlamp and flashing light at the houses and buildings along the street, without disturbing the residents. In this situation, a satisfactory camera is the Kodak (TM) DCS-460 in which the sensor array is specially manufactured with no color filters over any of the pixel cells, and with no infrared rejection filter. Such a specially manufactured camera will have a tremendously increased sensitivity compared to the standard DCS-460, allowing a small handheld flashlamp to illuminate a large building.
A satisfactory flashlamp is a specially modified Lumedyne 468 system in which a quartz infrared flashlamp is fitted, and in which an infrared filter is placed over the lamp head reflector. A satisfactory reflector is the Norman-2H sports reflector, which will also fit on the Lumedyne lamp head. Preferably a cooling fan is installed in the reflector to dissipate the excess heat buildup caused by the infrared filter that makes the lamp flashes invisible to the human eye.
In Fig. 9b, what is shown is two input images that have either already been
converted to greyscale, or were greyscale already, on account of their being
taken
with a greyscale system, such as the infrared camera and flashlamp described above. These two greyscale input images are denoted fy1 and fy2 in Fig. 9b. The images then pass through the inverse transfer function, denoted f^-1, producing qy1 and qy2. These quantities contain no color information from the original scene. However, it is desired to colorize them into a color output image. Accordingly, quantity qy1 is spread out into three identical copies, each passing through weights wR1, wG1, and wB1. Similarly, quantity qy2 is spread out into three identical copies, each passing through weights wR2, wG2, and wB2. A total quantity of red is obtained at qR, a total quantity of green at qG, and a total quantity of blue at qB. These total quantities are then converted into a picture by passing them through three separate forward transfer functions f. In practice each of these three transfer functions is similar enough that they may be regarded as identical, but if desired, they may also be calculated independently if there is reason to believe that the camera compresses the dynamic range of its three color channels differently.
In a typical scenario, an image of a building interior may be taken with the
infrared
camera and infrared flashlamp described above. This interior may, for example,
be
the stairwell of an apartment building in which there are glass windows
showing the
stairs to the outside. It is desired to capture an expressive architectural
image of the
building.
A photoborg climbing the stairs flashes a burst of infrared light at each floor, to light up the inside stairs. The images arising from these bursts are captured by an infrared camera fixed outside. The camera is preferably fixed by a heavy cast iron surveyor's tripod registered on three stakes driven into the ground, or the like. After the photoborg has done each floor, the resulting images are photoquantigraphically averaged together as was shown in Fig. 9a. The photoquantigraphic average is the image fy1 depicted in Fig. 9b.
Then the photoborg leaves the building and illuminates the exterior. Again, an
infrared flashlamp is used so as not to awaken or disturb residents of the
building.
A large number of exterior pictures are taken, while the photoborg walks
around
and illuminates the outside concrete structure of the building. These images
of the
exterior are photoquantigraphically averaged to obtain fy2 depicted in Fig.
9b.
In the case of a small building, a single shot may provide sufficient coverage and Signal to Noise Ratio (SNR), but often multiple shots are photoquantigraphically averaged as described.
Then the photoborg selects the weights. A common selection for the weights in the scenario described above is wR1 = 1, wG1 = 1, wB1 = 0, to give the building interior a welcoming yellow appearance, and wR2 = 0, wG2 = 0, wB2 = 1, to give the exterior a "midnight blue" appearance. Thus, although the camera captured no color information from the scene, a colorful expressive image, as might be printed on the cover of an architectural magazine using high quality color reproduction, may be produced.
The above scenario is not entirely ideal because it may be desired to mix color lightstrokes with pseudocolor lightstrokes in the same image. Accordingly, a more preferable scenario is depicted in Fig. 9c.
Fig. 9c depicts a simplified diagram showing only some of the steps involved
in
making a typical lightmodule painting.
The process begins by calculating one or more ambient lightmodules. This estimate is useful either for photoquantigraphic subtraction from each image that will later be taken, or simply to fill in a background level of detail. In the latter case, the ambient lightmodule typically comprises a daytime estimate multiplied by the color blue, added to a nighttime estimate multiplied by the color yellow, and added to the overall image in addition to the lightstrokes made with the photoborg's flashlamp.
There may be more than one ambient lightvector, as indicated here (e.g. one for daytime lighting to create a blue sky in the final picture, and one for nighttime lighting to create yellow lights inside all the buildings in the picture). Sometimes there are hundreds of different ambient lightvectors computed as the sun passes through the sky, so that each time of day provides different shadow conditions from which other desired lightmodule spaces are computed.
In this simple example, it is assumed that only one ambient lightmodule is to be computed. This ambient lightmodule is typically computed as follows: A photoborg first issues a command from his WearComp (wearable computer) to the base station to instruct it to construct an estimate of the background ambient illumination. The computer at the base station directs the camera at the base station to acquire a variety of differently exposed pictures. In this simple example, sixteen pictures are captured at a shutter speed of 1/2000 sec.
These pictures are stored in files with filenames ranging from v000.jpg to v015.jpg. Note that v000.jpg, etc., are not usually lightvectors until they pass through the camera's inverse transfer function, unless the camera already shoots in lightvectorspace (e.g. is a linearized camera). The signal v000 in Fig. 9c denotes the image stored in file v000.jpg, the signal v001 in Fig. 9c denotes the image stored in file v001.jpg, and so on. These sixteen images are photoquantigraphically averaged. By photoquantigraphic averaging, what is meant is that each is passed through an inverse transfer function, f^-1, to arrive at the quantities of light falling on the image sensor, and then these quantities are averaged. These values are denoted q000 through q015 in Fig. 9c. Each of these values may be stored in a double precision image array, although preferably the process is done pixelwise or in smaller blocks so that the amount of memory required in the base station computer is reduced. The average of these sixteen photoquantigraphic quantities is denoted v0-15 in Fig. 9c. It should be noted that average and sum are conceptually identical, and that the extra factor of division by 16 may be incorporated into the weight w0-15 to be described later.
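A minimal sketch of this averaging step follows (illustrative only; the frames are assumed to be loaded already as floating point arrays, and the gamma law is an assumption standing in for the camera's actual f^-1):

    import numpy as np

    def photoquantigraphic_average(frames, gamma=2.2):
        # frames: iterable of equally exposed pictures (v000 .. v015).
        # Accumulate in double precision, frame by frame, so that the
        # base station need not hold all sixteen lightvectors at once.
        total, count = None, 0
        for v in frames:
            q = np.clip(v, 0.0, 1.0).astype(np.float64) ** gamma  # f^-1
            total = q if total is None else total + q
            count += 1
        return total / count      # the average, v0-15, in lightspace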
Then the base station computer continues to instruct the camera to acquire sixteen pictures at a shutter speed of 1/250 sec. The picture signals associated with these sixteen pictures are denoted v016 through v031 in Fig. 9c. These signals are used to estimate the photoquantigraphic quantities q016 through q031. These photoquantigraphic signals are averaged together to arrive at lightmodule v16-31.
Then the base station computer continues to instruct the camera to acquire sixteen pictures at a shutter speed of 1/8 sec. The picture signals associated with these sixteen pictures are denoted v032 through v047 in Fig. 9c. These signals are used to estimate the photoquantigraphic quantities q032 through q047. These photoquantigraphic signals are averaged together to arrive at lightmodule v32-47.
The three lightmodules v0-15, v16-31, and v32-47 are further processed by weighting each of them in accordance with the shutter speeds. Thus v0-15 is multiplied by 2^3 = 8, while v32-47 is multiplied by 2^-5 = 1/32. Lightmodule v16-31 is multiplied by 1 (e.g. it is left as it is, since it has been selected as the reference image).
In this way, all lightmodules are scaled according to the shutter speeds, so that each will be an equivalent estimate of the quantity of light arriving at the image sensor, except for the fact that quantization noise, and other forms of noise, and the like, will tend to cause the highlight detail of v0-15 to be best, while the shadow details will be best captured by lightmodule v32-47.
This preference for highlight detail from v0-15, midtone detail from v16-31, and shadow detail from v32-47 is captured by certainty functions c0-15, c16-31, and c32-47, shown in Fig. 9c. After applying these certainty functions, a weighted summation is made, to arrive at lightmodule signal v0 which is the estimate of the ambient light. Lightmodule v0 is typically a double-precision three-channel (color) array of the same dimensions as the input images. However, v0 is in photoquantigraphic units (which are neither irradiance nor illuminance, but, rather, are characterized by the spectral response in each of the three color bands over which they are taken).
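The scaling and certainty-weighted fusion might be sketched as follows (a sketch only: the bell-shaped certainty functions below are assumptions, chosen merely so that each exposure dominates in the tonal region where it is most trustworthy):

    import numpy as np

    def fuse_ambient(v_fast, v_mid, v_slow):
        # Scale each lightmodule by relative shutter speed
        # (1/2000, 1/250, 1/8 sec), with the middle exposure
        # serving as the reference image.
        q = [v_fast * 8.0, v_mid * 1.0, v_slow / 32.0]

        # Assumed certainty functions c0-15, c16-31, c32-47: each
        # peaks near the exposure region its source captures best.
        def certainty(x, center, width=0.75):
            return np.exp(-((np.log10(x + 1e-12) - center) / width) ** 2)

        c = [certainty(q[0], 0.0),     # favors highlights
             certainty(q[1], -1.0),    # favors midtones
             certainty(q[2], -2.0)]    # favors shadows

        num = sum(ci * qi for ci, qi in zip(c, q))
        return num / (sum(c) + 1e-12)  # ambient lightmodule v0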
Typically, lightmodule v0 is actually computed over many more exposure steps, e.g. 256 pictures at every possible shutter speed the camera is capable of. The gain or sensitivity of the camera may also be varied under program control to obtain far greater and far finer range than just the three different exposure levels illustrated in Fig. 9c. It should also be noted that generally the exposures begin with the higher
shutter speeds and progress downwards, as this ordering has been found to result in far less saturation of the CCD sensor arrays or the like. Otherwise, it is common for entire rows or columns to white out on account of bright lights shining into the camera.
Each exposure, or at least some of the longer exposures, is tested for
whiteout,
to make sure that the exposure is intact. This test is denoted by, for
example, T032
in Fig. 9c, where T032 tests image v032 to make sure that the exposure was not
so
long that a complete row or column was white or washed out beyond the location
of
a bright source of light.
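One possible form of such a test is sketched below (an assumption; the exact criterion is not given here). It simply reports whether any complete row or column of the picture is saturated:

    import numpy as np

    def whiteout(image, saturation=0.98):
        # True if an entire row or column is washed out, as happens
        # when a long exposure is made with bright lights shining
        # into the camera.
        hot = (image >= saturation).all(axis=-1)   # fully saturated pixels
        return hot.all(axis=1).any() or hot.all(axis=0).any()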
After the ambient light quantity v0 is determined, control of the camera is returned to one or more photoborgs who can then select a portion of the scene or objects in view of the camera to illuminate. A photoborg generally illuminates the scene with a flashlamp or phlashlamp. A phlashlamp is a photoquantigraphic flashlamp, as illustrated in Fig. 9d.
Fig. 9d shows the Medusa8 (TM) phlashlamp, which is made from eight ordinary flashlamps. A satisfactory configuration is made from eight of the most powerful Metz (TM) flashlamps mounted to a frame with grip handles M8G. Grips M8G are preferably smooth and easy to grab onto. A satisfactory material for handles M8G is cherry, or other hardwood. Grips M8G allow the photoborg operator to hold the bank of eight flashlamps and aim the entire bank of flashlamps at the subject matter of interest.
One of the grips M8G preferably contains a chording keyboard built into the grip, which allows the photoborg to type commands into a wearable computer system used together with the apparatus of Fig. 9d. When the photoborg has selected subject matter of interest, and aimed the phlashlamp at this subject matter, the photoborg issues an acquire lightmodule command. This command is transmitted to the base station computer, causing four pictures to be taken in rapid succession. Each of these pictures generates a sync pulse transmitted from the base station to the photoborg.
There is contained in the phlashlamp a sequencer computer, M8C, which fires the four flashlamps designated M8F when the first synchronization pulse is received. Alternatively, the sequencing may be performed on the body-worn computer (WearComp) often worn by a photoborg. The sequencing computer M8C fires the two flashlamps M8T when the next sync pulse is received. It then fires the single flashlamp designated M8O when the third sync pulse is received. Finally it fires the flashlamp M8H at half power when the fourth sync pulse is received. In order for this sequencing to take place, the eight flashlamps are connected to the sequencing computer M8C by way of wires M8W leading from each of the "hot shoe" connectors M8HS ordinarily found on many flashlamps. Typically hot shoe connectors M8HS are located on the bottom of the flashlamp bodies M8B. The flashlamp bodies M8B can usually be folded to one side, so that the flashlamp heads M8F, M8T, M8O, and M8H can all be clustered together in approximately the same space.
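The firing schedule can be sketched as a small sequencer loop. In the sketch below (illustrative only), wait_for_sync and fire are hypothetical placeholders for the sync-pulse input and the hot-shoe trigger lines M8W:

    # Firing schedule for the Medusa8 sequencer M8C (a sketch).
    # Each entry: (bank name, number of flashtubes, relative power).
    SCHEDULE = [
        ("M8F", 4, 1.0),   # first sync pulse: four flashlamps
        ("M8T", 2, 1.0),   # second sync pulse: two flashlamps
        ("M8O", 1, 1.0),   # third sync pulse: one flashlamp
        ("M8H", 1, 0.5),   # fourth sync pulse: one lamp at half power
    ]

    def run_sequencer(wait_for_sync, fire):
        # wait_for_sync() blocks until the base station's pulse arrives;
        # fire(bank, power) triggers the named bank of flashtubes.
        for bank, tubes, power in SCHEDULE:
            wait_for_sync()
            fire(bank, power)
        # Net result: four pictures lit at 4x, 2x, 1x, and 0.5x flash output.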
In this way, the Medusa8 (TM) phlashlamp causes there to have been taken a plurality of pictures of different exposure levels. This plurality of pictures (in this case, four differently exposed pictures) is designated v049 through v052 in Fig. 9c.
Alternatively (and often preferably), a phlashlamp comprises a single lamp
head,
with a single flashtube, which is flashed at a plurality of different levels
in rapid
succession, by, for example, rapidly switching differently sized capacitor
banks into
the system. In this way, all flashes of light come from exactly the same
direction, as
opposed to the Medusa8 approach in which flashes of light come from slightly
different
directions, owing to the fact that different flashtubes are being used.
A phlashlamp may be fired repeatedly as a photoborg walks around and illuminates different objects in the scene. Alternatively, several photoborgs carrying phlashlamps may illuminate the scene either at the same time (synchronized to each other), or may take turns firing their respective phlashlamps. For example, another object is selected by another photoborg, and this photoborg aims the phlashlamp at this other object, and another acquire lightmodule command is issued. Another four
pictures are taken in rapid succession, and these are designated as v812 through v815 in Fig. 9c.
Typically each photoborg is given a range of images, so, for example, photoborg1 may have image space from v100 to v199, and photoborg8 will have image filenames v800 to v899. Alternatively, the photoborg's UID and GID may be inserted automatically in each filename header, together with his or her heart rate, physical coordinates, etc., and any other information which may help to automate the process. In this way, the base station may also, through Intelligent Signal Processing, as described in Proc. IEEE, Vol. 86, No. 11, make an inference as to which lightmodules are most important or most interesting to each of the photoborgs.
In the situation depicted in Fig. 9c, photoborg1 has decided to cement his lightvector into the sum with weight w1, while photoborg8 has selected weight w2. Additionally, photoborg8 has decided to cement his contribution into the sum as a greyscale image but with color weight w2. As will be seen, although w2 affects the color of the lightmodule as it appears in the final Output Image, no color information from v2 gets to the Output Image.
The lightmodule from photoborg1 is computed automatically by setting wf = 1/4, wt = 1/2, wo = 1, and wh = 2. In this way, the four-flash image is scaled down four times, the two-flash image down two, and the half-power-flash image up two, so that all four photoquantigraphic estimates q100 to q103 are brought into tonal register. Then certainty functions are applied. The four-flash certainty function, cf, weights the darker pixels in q100 more heavily. The two-flash certainty function ct weights the darker midtones most heavily, while the one-flash certainty function co weights the brighter midtones most heavily. The half-power-flash certainty function ch weights the highlights (brightest areas of the scene) most heavily. The result is that the weighted sum of these four inputs gives lightmodule v1. Lightmodule v1 then continues on toward the total photoquantigraphic sum, with a weighting w1 selected by photoborg1.
In the situation in which a pseudocolor lightmodule is desired, as is illustrated in Fig. 9c with q812 through q815, the color lightmodule v2 is computed just as in the above case, but instead, this lightmodule is converted to greyscale and typecast back to color again as follows: Lightmodule v2 passes through color separator CS and is broken down into separate Red (R), Green (G), and Blue (B) channels. Each of these has a weight associated with it. The weights are designated wR, wG, and wB in Fig. 9c. The default weights are those of the standard YIQ transformation if none are specified by the photoborg.
Color depth is often expressed in bits per pixel, e.g. 8 bit precision is often referred to as "24 bit color" (meaning 24 bits total over all three channels). Likewise, a double precision variable (e.g. a REAL*8 variable, in standard IEEE floating point arithmetic) occupies 64 bits for each of the red, green, and blue channels, and is thus designated as 192 bit color. Hence the designations in Fig. 9c showing where the signals have a color depth of 192 bits. After the color separator CS, the signals in each channel have a depth of 64 bits (greyscale), passing through the weights. After passing through the weights, certainty functions are computed based on exposure in each color band. Thus, for example, if the red channel is overexposed, as is often the case where tungsten lights are concerned, then the highlight details can come from the blue channel. Photoborg8 may also deliberately use a colored gel over a flashlamp in addition to or instead of using a plurality of flashlamps as in using a phlashlamp. For example, a red gel over an ordinary flashlamp with a deliberate overexposure will conveniently overexpose the red channel. Typically the blue channel will be underexposed. The green channel will typically fall somewhere in between.
Accordingly, certainty functions cR, cG, and cB will often help extend the dynamic range of the greyscale image through the process of deriving a greyscale image from a color image. A weighted sum, including weighting by these certainty functions, is produced at v2y, which is still a 64 bits per pixel greyscale image. This image is replicated three times and supplied to color combiner CC. Ordinarily a color combiner will take separate R, G, and B inputs and combine them into a single color image. However, when fed with three identical inputs, color combiner CC simply converts the greyscale image into a datatype that is compatible with color images. The result, v2yc, is a 192 bits per pixel greyscale image in a format in which the subject matter is simply repeated three times. This signal may now be passed through weight w2 where it may be assigned to the final output image. Weight w2 might, for example, be [5 5 0], in which case lightmodule v2yc will appear as yellow in the final image, with a strength five times the default strength.
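The path from color lightmodule v2 to the colorized greyscale signal might be sketched as follows (a sketch only: the per-band certainty rule, which simply distrusts a band as it approaches saturation, is an assumption, as is the name v2yc used here for the typecast result):

    import numpy as np

    def pseudocolor_module(v2, band_weights=(0.299, 0.587, 0.114),
                           w2=(5.0, 5.0, 0.0)):
        # v2: HxWx3 color lightmodule in photoquantigraphic units.
        planes = [v2[..., i] for i in range(3)]    # color separator CS

        # Assumed certainty per band: trust a band less as it saturates.
        def c(p):
            return np.exp(-(p / (p.max() + 1e-12)) ** 2)

        cs = [c(p) for p in planes]
        num = sum(w * ci * p for w, ci, p in zip(band_weights, cs, planes))
        den = sum(w * ci for w, ci in zip(band_weights, cs)) + 1e-12
        v2y = num / den                            # greyscale signal v2y

        # Color combiner CC fed three identical inputs, then the final
        # color weight w2 (e.g. [5, 5, 0] renders the module yellow).
        return np.repeat(v2y[..., None], 3, axis=-1) * np.asarray(w2)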
Ordinarily images are produced in 24 bit Red Green Blue (RGB), and converted to 32 bit CMYK (Cyan, Magenta, Yellow, black) for printing. However, if printing is desired, it will be advantageous to do the conversion to CMYK in lightspace prior to converting back to a 32 bit CMYK picture. Accordingly, the three lightmodules v0, v1, and v2yc are weighted as desired (these final weights are selected for the desired visual effect), a weighted sum is taken in 192 bit color, and converted to 256 bit color CMYK colorspace by the block denoted RGBtoCMYK in Fig. 9c.
Ordinarily there is some color shift in conversion from RGB to CMYK, and most conversion programs are optimized for mid-key subject matter, e.g. fleshtones, or the like. However, a feature of the images produced by the apparatus of the invention is that much of the image content exists at extremes of the color gamut, so it is desirable that, when converting to CMYK colorspace, the resulting image stretch out toward the different boundaries of the CMYK space. The CMYK space is quite different from RGB space in the sense that there are colors that can be obtained in RGB that cannot be obtained in CMYK, and vice-versa. However, what is desired is an image that hits the edges of whatever colorspace it is to exist in. Most notably, color hue fidelity is typically less important than simply the fact that the image should touch the boundaries of the colorspace. Thus it will typically be desired to convert from RGB to LAB, HSV, or HSL space, and then increase the saturation, and then convert to CMYK, where colors will be clipped off for being out of gamut.
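A sketch of this convert-boost-convert strategy follows, using HSV as the intermediate space (illustrative only; colorsys is applied per pixel purely for clarity, and the naive RGB-to-CMYK formula is an assumption standing in for a press-calibrated conversion):

    import colorsys
    import numpy as np

    def boost_saturation(rgb, gain=1.5):
        # rgb: HxWx3 in [0, 1]. Push colors toward the gamut edge by
        # scaling HSV saturation; out-of-gamut colors will be clipped
        # in the subsequent conversion to CMYK.
        out = np.empty_like(rgb)
        for idx in np.ndindex(rgb.shape[:2]):
            h, s, v = colorsys.rgb_to_hsv(*rgb[idx])
            out[idx] = colorsys.hsv_to_rgb(h, min(1.0, s * gain), v)
        return out

    def rgb_to_cmyk(rgb):
        # Naive conversion (an assumption; real presses use profiles).
        k = 1.0 - rgb.max(axis=-1)
        denom = np.where(k < 1.0, 1.0 - k, 1.0)
        cmy = (1.0 - rgb - k[..., None]) / denom[..., None]
        return np.concatenate([cmy, k[..., None]], axis=-1)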
Ironically, it is preferable that colors be clipped off as being out of gamut, rather than having them fall within the color gamut boundaries after having been previously clipped in the old colorspace. Thus the block denoted SAT in Fig. 9c will in fact use the output of block GBM (Gamut Boundary Manager), which detects where the colors were originally at the boundaries of the RGB colorspace. In this way, block SAT will adjust the CMYK input in accordance with where the RGB signals were at their extrema, and ensure that none of these colors get mapped to the interior of the CMYK space. The block denoted SAT performs an optimization in 256 bit lightspace and attempts to map any images that were clipped to a clipped part of the new gamut.
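The division of labor between GBM and SAT might be sketched as follows. This is only a crude stand-in for whatever optimization the SAT block actually performs: the boundary test and the push-to-the-edge rule below are assumptions:

    import numpy as np

    def gamut_boundary_mask(rgb, tol=1e-3):
        # GBM: flag pixels that sat at the RGB gamut boundary,
        # i.e. any channel pinned at zero or at full scale.
        lo = (rgb <= tol).any(axis=-1)
        hi = (rgb >= 1.0 - tol).any(axis=-1)
        return lo | hi

    def sat_block(cmyk, boundary):
        # SAT: where GBM flagged a boundary color, push the CMYK value
        # back out to the edge of CMYK space -- crudely, by removing
        # black ink and rescaling the dominant ink to full strength.
        out = cmyk.copy()
        cmy = out[..., :3]
        peak = cmy.max(axis=-1, keepdims=True)
        pushed = np.clip(cmy / np.maximum(peak, 1e-12), 0.0, 1.0)
        out[..., :3] = np.where(boundary[..., None], pushed, cmy)
        out[..., 3] = np.where(boundary, 0.0, out[..., 3])  # no grey ink
        return out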
Fig. 9f depicts a color coordinate transformation from domain DOM to range RAN, where the domain DOM may, for example, be an RGB colorspace, or a higher dimensional colorspace as might be captured using a filter wheel over a camera. The range RAN is typically a CMYK colorspace suitable for normal printing, or another colorspace, such as the Hexachrome (TM) colorspace described in U.S. Pat. No. 5,734,800, "Six-color process system", issued Nov. 29, 1994, invented by Richard Herbert and Al DiBernardo, and assigned to Pantone, Inc.
Ordinarily the two colorspaces have different color gamuts, so that there will
be
some colors in the domain DOM that will get clipped (distorted) when converted
to
the range RAN. Colors that are clipped are denoted CL in Fig. 9f.
Conversely, there will also be colors BC that are not necessarily distorted by the colorspace conversion, but were at the gamut boundaries in the domain DOM and exist inside the boundaries in the range RAN. Consider two colors in lightspace, BC1 and BC2, where BC1 is just at the boundary of the domain DOM, while BC2 is beyond the domain boundary DOM. The camera will map both of these colors to BC1, since BC2 is beyond its gamut. For example, both may be bright blue, and both may get mapped to RGB = [0 0 1]. However, in the conversion to the range RAN, both will appear within the boundary of what the new colorspace could achieve.
Colors BC1 and BC2 represent single pixels, in isolation. In practice, however, it is evident from a picture, by the context of surrounding pixels, when a region of the picture goes beyond the gamut of the colorspace. For example, an extremely bright red light in a picture will often register as white, and then have a yellow halo around it, and then bloom out to red further out. When such an image is rendered anywhere other than at the boundary of the new colorspace RAN, the appearance is not visually appealing, and will be referred to as "brightgrey". The term brightgrey denotes colors that should be bright but register as greyish, for example, colors that were bright and vibrant in DOM, but may appear greyish in RAN, especially when RAN is a CMYK colorspace or the like. For example, a bright magenta in RGB may register as a dull greyish magenta in CMYK, even though the color is not distorted. In fact it is the very fact that the color is not distorted that is the problem, e.g. since CMYK is capable of producing a very strong magenta, there is a perception of the magenta being weak when it is faithfully reproduced in CMYK. Instead of faithfully reproducing it in CMYK, it is preferable, within the context of the invention, to distort the magenta from its original RGB value to something that is much stronger than it was in RGB. Typically this may be done by intensifying the magenta and reducing the amount of cyan, or the like, that might be causing the magenta to appear brightgrey. (Cyan and black tend to darken certain colors.)
When the camera is a lightspace camera, e.g. one that implements a Wyckoff effect, or is otherwise based on a plurality of differently exposed images, it is possible to determine the actual quantity of light arriving in each of the three color spectral bands, and therefore it is possible to identify colors that are outside the RGB colorspace one would ordinarily have for taking a picture.
When these colors would be further distorted by clipping, in conversion to the new colorspace, the appearance is not so bad as when they would fall in the interior of the new colorspace, so the emphasis of this invention is to address the darkgrey colors (colors denoted BC1, or BC2 having been clipped to BC1, and then existing in the interior of RAN).
Most notably, there are two ways, within the context of the present invention, to obtain a vibrantly colored lightmodule painting:
• use a plurality of input images, preferably differing only in exposure, to calculate each lightvector, and then do all calculations and colorspace conversions in lightspace, prior to converting back to an image by applying a pointwise nonlinearity, f;
• accept the fact that incoming lightstrokes will have been limited by domain DOM, and attempt to stretch them out in colorspace so that regions such as BC1 will be stretched out further toward the boundaries of the range RAN (e.g. BC1 would move out toward BC2 or beyond).
It is understood that this second method will involve some distortion of the
colors,
and it is understood that this distortion is acceptable because often the
apparatus of
the invention is used to create expressive lightmodule paintings in which
colors are
not natural to begin with.
Fig. 9c includes the saturation booster SAT and the Gamut Boundary Manager GBM. The effect of the SAT block with the GBM input is to ensure that, for example, a portion of the image that a photoborg deliberately overexposed by 12 f-stops and then mapped through a dark blue filter weight, e.g. w1 = [0 0 2^-12], will not come out with a greying effect in the CMYK space. It is not uncommon to deliberately overexpose by a dozen or so f-stops when using a dark blue (e.g. pure blue as in RGB = [0, 0, 1]) filter. Ordinarily such an image is shot overexposed in order to deliberately blow away any appreciable detail. Thus a textured door, or rough wall, will have an appearance as if a blob of deep blue paint were splashed on the image to obliterate any detail. Such an image creates the visual percept of something that is extremely bright. Thus should it land anywhere but at the outer edge of the CMYK gamut, it will create a very unsightly appearance. This appearance is hard to describe, other than by saying it looks "bright bluish grey". Obviously such a bright splotch of lightmodule paint should not be printed in any way that contains grey (e.g. contains black ink in CMYK). Thus SAT together with GBM must ensure, at all costs, that such a color maps to something at the outer boundary of CMYK space, even if it means that the hue must be shifted. Indeed, it is preferable that the hue does shift. For example, it would be preferable that the blue be shifted to pure cyan, rather than risk having it fall anywhere but at the extreme outer boundary of CMYK space.
It is understood and expected that additional information will be lost when converting to CMYK. In fact, it is the very fact that methods of converting from RGB to CMYK of the prior art try to preserve information that leads to this problem. Thus an important aspect of the present invention is a means of converting from RGB to CMYK where hue fidelity is of relatively little importance, and where maintaining detail in the image is of relatively little importance compared to the importance of maintaining extremely bright vibrant colors.
Once the image has been adjusted in 256 bit CMYK lightspace, so that all colors that were bright and vibrant in the input image are also bright and vibrant in the CMYK lightspace (even if it was necessary to distort their hues, or destroy large amounts of highlight detail to do so), then the lightspace is passed through a nonlinearity f which compresses its dynamic range. The nonlinearity f may be the forward transfer function of the camera itself, or some other desired transfer function that compresses the dynamic range of the image in a slightly different way. After passing through f, the result is quantized to 32 bit color, so that it can be saved in a standard CMYK file format, such as TIFF. In this way, it can be sent either to a digital press, such as a Heidelberg digital press, or it can be used to make four color separation films which in turn can be used to prepare four metal plates for a traditional printing press. The resulting image will thus have very rich vibrant colors, and exhibit no noticeable quantization contouring (e.g. have no solarized appearance or contour line appearance). Typically the resulting images, when printed on a high quality press, such as is used for a magazine cover, will have a much richer tonal range, and much better color, than is possible with photographic film, because of the capabilities of the lightspace processing of the invention.
In practice, only some of the lightstrokes are offenders, contributing to or containing darkgrey portions. Accordingly, it is preferable to alter only the offending lightvectors, or to alter the worst offenders more severely. Accordingly, Fig. 9e depicts a plurality of quantities of light arriving from a cybernetic photography system. These quantities are denoted qRGB0, qRGB1, ... qRGBN and are linearly proportional, in each of a plurality of spectral bands (typically at least three spectral bands, such as red, green, and blue), to the scene radiance integrated with the spectral response in each of these spectral bands. Such quantities are referred to as photoquantigraphic.
These quantities, qRGB0, qRGB1, ... qRGBN, may be arrived at by applying the inverse camera response function, f^-1, to each of a plurality of pictures, or alternatively, a photoquantigraphic camera may be constructed, in which the output of the camera is in these photoquantigraphic units.
Each of these photoquantigraphs is typically due to a different source of light; e.g. qRGB0 might be a photoquantigraph taken in the daytime, qRGB1 a long exposure photoquantigraph taken at night, and qRGBN taken with a flashlamp over a short (e.g. 1/500 sec) exposure.
Each photoquantigraph is first converted immediately to CMYK. In Fig. 9e, the conversion from qRGB0 to CMYK is denoted by CMYK0, and the result of the conversion is denoted by qCMYK0; the conversion from qRGB1 to CMYK is denoted by CMYK1, and the result of the conversion is denoted by qCMYK1; ... the conversion from qRGBN to CMYK is denoted by CMYKN, and the result of the conversion is denoted by qCMYKN. Typically CMYK0, CMYK1, ... CMYKN are identical conversion processes even though the input photoquantigraphs (and hence the outputs) are typically different. Each of these conversion processes is done independently in such a way as to minimize brightgrey, and thus contribute to a vibrant lightmodule painting. Thus qCMYK0 is generated from qRGB0 by also looking at the gamut boundaries. Gamut boundary manager 0, denoted GBM0, looks at the gamut boundaries of qRGB0, with particular emphasis on where the gamut boundaries are reached in the domain colorspace but not the range. Thus GBM0 controls SAT0 to resaturate and expand the gamut of qCMYK0, as well as deliberately distort the hue, and deliberately truncate highlight detail as needed to boost the brightgrey regions out to the edges of the new CMYK gamut. Similarly, GBM1 controls SAT1 to resaturate and expand the gamut of qCMYK1, ... and GBMN controls SATN to resaturate and expand the gamut of qCMYKN. The gamut boundary managers and saturation boosters are also responsive to the overall photoquantigraphic sum, e.g. to the sum in lightspace after it has passed through a forward transfer function, fCMYK. The forward transfer function fCMYK is semimonotonically increasing, but is level or concave down (has nonpositive second derivative) in each of the four C, M, Y, and K channels.
Fig. 9g depicts an alternative to the method depicted in Fig. 9f for ensuring vivid colors in lightmodule paintings.
Referring to Fig. 9g, there are six color axes: a red axis, denoted RE, a yellow axis, denoted YE, a green axis, denoted GR, a cyan axis, denoted CY, a blue axis, denoted BL, and a magenta axis, denoted MA. These axes are arranged roughly in the order of a color wheel, with approximate hue going around in a counterclockwise direction.
The example in this figure pertains to the typical CMYK printing process, but may equally well apply to a Hexachrome (TM) or other printing process, or to a process involving touchplates or other colorspaces. Thus, without loss of generality, the process is explained in the context of the gamut of ordinary CMYK printing. CMYK printing provides vivid yellow, cyan, and magenta colors because these colors are the actual ink colors. The vividness of these colors is expressed by the solid black dots in Fig. 9g, as far out from the origin as possible, on each of the YE, CY, and MA axes.
Ordinarily, yellow and magenta inks mix together to produce vivid reds that are satisfactory, although not as vivid as the yellow and magenta from which they are made. Yellow and cyan inks mix to produce a dull green. Cyan and magenta mix to produce an extremely dull blue, a blue that fails to capture the same vivid blue typical of photographic prints in RGB space. These limits on the colors are depicted as open circular outlines on each of the RE, GR, and BL axes of Fig. 9g.
Curve 990 shows the locus of maximum vividness of the colors in the colorspace, and thus establishes a boundary outside which the colorspace cannot represent colors. The colorspace is not necessarily a linear colorspace, and thus the curve 990 is not simply a triangle made of three straight lines, but, rather, it is a curve of somewhat irregular shape. Moreover, to be as general as possible, the term "vividness" is used, rather than terms like saturation, luminosity, or value. The term "vividness" means approximately how intense and vibrant the colors appear to a person looking at the picture. Thus this diagram depicts the manner in which blues often look dull and greyish in CMYK printing, and greens are not quite as bad as blues, but still appear somewhat dull. Likewise, reds, yellows, cyans, and magentas are very bright. Furthermore, there is a wide range of colors between yellow and magenta, and a little beyond each of these, that appear vibrant, but the colors around cyan, for example, only appear vibrant when they are close to cyan, and drop off dramatically in vibrancy as the hue shifts away from cyan.
In the present invention, the accuracy of the hues is of little importance, so long as the colors are vibrant. Thus an aspect of the embodiment of the color management disclosed in Fig. 9g is that the hue is selectively distorted in areas where an image is to be more intensely colored.
For example, suppose that a flashlamp is used to place a blob of light upon some vegetation, such as vines growing up the side of a building. It is desired to have the lightmodule of this blob of light be multiplied by the coefficient green, so that it shows up as a green blob of light in the final lightmodule painting, but it is also desired that the subject matter be irregularly illuminated. It is often a characteristic of the method of the invention that the blobs of light are of irregular intensity. Accordingly, it is typically desired to have a region where the image may be overexposed by some ten f-stops, and then have the light taper out gradually from there, to pure black at the edges. In this way the light hits all levels of exposure in between as we move from the central "hot spot" of the blob of light to the outer edges.
Normally this coloring might be achieved by having green at the hot spot, and then using black ink to dilute this down toward the black areas. With the invention, however, what is desired is that mid to dark areas be in the normal green hue, but that bright areas experience a hue shift, so that the "hot" spot is either yellow or cyan, while the bright areas near it are somewhere between green and cyan or somewhere between green and yellow.
The locus of points taken in this colorspace is shown as 992 for a greenish yellow, and as 993 for a greenish cyan. This deliberate distortion of the highlight hues causes the viewer of the finished lightmodule painting to perceive a more vivid green, on account of the color-shifted hot spot.
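A sketch of this exposure-dependent hue shift for a green lightmodule follows, using HSV hue as a stand-in for the informal vividness axes of Fig. 9g (the blend curve, which moves only the strongest exposures off the green hue, is an assumption):

    import colorsys
    import numpy as np

    GREEN_HUE, CYAN_HUE, YELLOW_HUE = 1.0 / 3.0, 0.5, 1.0 / 6.0

    def hue_shift_highlights(q, max_q, toward=CYAN_HUE):
        # q: HxW photoquantigraphic intensity of a green lightmodule.
        # Dark-to-mid areas stay green; the hot spot slides toward
        # cyan (locus 993) or yellow (locus 992) as exposure rises.
        t = np.clip(q / max_q, 0.0, 1.0) ** 4     # only highlights move
        hue = (1.0 - t) * GREEN_HUE + t * toward
        val = np.clip(q / max_q, 0.0, 1.0)
        out = np.empty(q.shape + (3,))
        for idx in np.ndindex(q.shape):
            out[idx] = colorsys.hsv_to_rgb(hue[idx], 1.0, val[idx])
        return out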
For blues, a similar procedure is followed, in which the hot spots are arrived at by hue-shifting either toward cyan as denoted by 994, or toward magenta as denoted by 995. In this case, cyan is the preferable hue shift, denoted by locus of points 994. Thus when illuminating an object blue (e.g. painting a blue lightmodule) with a great deal of deliberate overexposure in certain places, those regions of overexposure are shifted into cyan. Further exposure may shift to white, if desired. Thus, in a picture in which one sees the bare flashbulb, the flashbulb may be rendered as white, and the halo around it (extremely bright still) as cyan, and then further around that, bright blue, and then, as we fall back into the shadows, dark blue, etc. In this way, the apparent range of light is much greater than might otherwise be the case.
In the case of red, a shift is not necessary, as denoted by 991, but a shift of bright red areas toward magenta is often still desirable because this creates a very pleasing effect in which the red seems somewhat more vivid and powerful.
Again, since there is a deliberate use of very weird colors in most lightmodule paintings, the fact that the colors are so distorted in hue is hardly evident to the viewer of these images, except that the colors simply appear more vivid.
Furthermore, when the original lightmodules are captured as Wyckoff sets (e.g. with a phlashlamp, or otherwise with each lightmodule having been derived from a plurality of different collinear lightmodules), a greater degree of manipulation is possible in colorspace, and the extent to which colorspace can be deliberately distorted in a very severe manner is greatly increased.
When using the apparatus of the invention, it is recognized that many everyday scenes or objects contain redundant information. A person's face, for example, is somewhat symmetric, and thus we may deliberately overexpose one side of the face, and underexpose the other side, and then present this extended response image in a pseudocolor form, or in a deliberately hue-distorted form, in order to display this dynamic range. Similarly, a brick wall, for example, contains many repeated and nearly identical bricks. Thus a picture of the wall could be grossly overexposed in one area, and underexposed in other areas, and would then better convey the information to the viewer. These observations are in fact consistent with traditional photography in the sense that many professional photographers light a person's face a little more on one side than the other. However, within the context of the invention disclosed here, the desire is to have this unevenness in lighting be much more extreme than it might be in traditional photography of the prior art. The extreme unevenness in lighting is both for artistic effect, as well as to introduce a kind of Wyckoff effect into the scene or objects, so that an image that is both expressive as well as revealing of the scene content will result. Accordingly, with reference to Fig. 9h, there is depicted the contribution to a lightvector painting, in CMYK colorspace, in which the contribution arises from a single lightmodule. This lightmodule will typically be the result of rapid bursts of light from a phlashlamp, or from a similar means of obtaining a plurality of collinear lightmodules of different exposure strengths, and then combining these together to produce the single lightmodule. In this depiction there is a contour 997c of a blob of light flashed onto the surface of a smooth white wall or the like. This contour 997c depicts the locus of points beyond which all pixels are approximately black. Thus 996 depicts a black point in the image.
Assume, without loss of generality, that the coefficient of the lightmodule contribution depicted in Fig. 9h is blue. Thus inside contour 997c, pixels may contain a mixture of black and blue. In particular, in the region between contours 997c and 998c, pixels such as 997 contain a mixture of black and blue.
998c depicts a contour inside of which there are no black pixels, e.g. inside of which no black ink will be used in the printing. Accordingly, pixel 998 consists primarily of blue ink, with no black ink, and little cyan or magenta ink. Thus point 998 is blue.
999c depicts a contour inside of which there is primarily cyan. Point 999 is a point from the highlight (bright) area of the lightstroke, and is thus pure cyan. Therefore, the visual effect is a perception that this entire blob of light inside contour 997c is very bright, and this perception of vividness of color arises from the hue distortion toward cyan in the highlight region 999c.
Note that in actual practice the transitions are gradual, and we do not see any contour lines. Instead what we see in the final CMYK print is a smooth and continuous gradation from black to cyan that has the appearance of a very bright blue. In particular, the Wyckoff effect implicit in the capture of a plurality of collinear lightmodules with which to synthesize the single lightmodule ensures that there is sufficient dynamic range in the input lightmodule to render the output in this deliberately distorted colorspace in such a way that there are no contour lines or solarization effects, and that there is no visible evidence of quantization noise or the like.
Fig. 10a depicts a photoquantigraphic filter, which will be referred to as a "philter". A philter is obtained by first computing the photoquantigraphic quantity 1030, denoted q, from the input image 1010, by way of applying the inverse response function of the camera, 1020. Then an ordinary filter, such as a linear time invariant filter, 1040, is applied. The result, 1045, is passed through camera response function 1050, to produce output image 1060.
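In code, a philter is simply an ordinary filter wrapped between the inverse response function and the response function. In the sketch below (illustrative only), a Gaussian blur from scipy stands in for the ordinary filter 1040, and the gamma law again stands in for the camera response:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def philter(image, sigma=3.0, gamma=2.2):
        # For color images, avoid smearing across the channel axis.
        s = (sigma, sigma, 0) if image.ndim == 3 else sigma
        q = np.clip(image, 0.0, 1.0) ** gamma            # 1020: f^-1, q is 1030
        q_f = gaussian_filter(q, sigma=s)                # 1040: ordinary filter
        return np.clip(q_f, 0.0, None) ** (1.0 / gamma)  # 1050: output 1060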
Fig. 10b depicts an implementation of split diffusion in lightvectorspace.
Split
diffusion is useful because it is often desired that some lightvectors will
contribute in
a blurry fashion while others will contribute in a sharper fashion. More
generally, it is
often desirable that there will be different filters associated with different
lightvectors
or sets of lightvectors.
Referring to Fig. 10b, one or more lightvectors to be blurred, 1031, are passed through blurring filter 1040. It is assumed that lightvector(s) 1031 is (are) already in lightvectorspace (e.g. already passed through an inverse transfer function, or taken with a camera that outputs in lightspace).
The output of blurring filter 1040 is added to one or more other lightvectors
by
adder 1042, and the result is passed through a semimonotonic function of
nonpositive
second derivative 1050, giving an output image 1060.
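A sketch of split diffusion, reusing the assumptions above: only the designated lightvectors are blurred before the sum in lightvectorspace, and the output nonlinearity stands in for block 1050:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_diffusion(q_blurry, q_sharp, sigma=5.0, gamma=2.2):
        # q_blurry, q_sharp: lists of lightvectors already in
        # lightvectorspace (as assumed for 1031 in Fig. 10b).
        def blur(q):
            s = (sigma, sigma, 0) if q.ndim == 3 else sigma
            return gaussian_filter(q, sigma=s)                 # filter 1040
        total = sum(blur(q) for q in q_blurry) + sum(q_sharp)  # adder 1042
        return np.clip(total, 0.0, None) ** (1.0 / gamma)      # 1050 -> 1060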
A drawback of traditional image editing, where a feather radius is used to
move
pieces of an image from one place to another, is that there is a "brightgrey"
effect
when a bright area overlaps with a dark area. Typically, the bright area gets
added
to the dark area in regions of overlap, and what results is a grey area with
clipped
highlight details. Clipping is visually acceptable at the upper boundary of
greyvalues,
but when values are clipped and then reduced from white to grey, the
appearance of
bright lights or the like (such as in a region of the picture where bright
lights were
shining into the camera) is unacceptable when mixed with dark areas. For
example,
a "brightgrey" rendering of a portion of the scene where there was a bare
lightbulb
blasting light into the camera at very high intensity is not acceptable.
Accordingly, the notion of philters may be used to build up a complete image processing toolkit, which might, for example, include image editing tools, etc. These tools can all be adapted to lightspace, so that, for example, during image editing, the selection of a region may be done in lightvectorspace, so that if a feather radius is used, the feathering happens in lightvectorspace. Such a feathering operation will be referred to as a pheathering operation.
The pheathering operation is depicted in Fig. 10c. Here the philter operation is denoted 1041, and comprises the editing of the image in lightvectorspace.
When filtering operations, editing operations, or split diffusion operations are tonally drastic, e.g. when one wishes to perform strong sharpening or blurring operations on images, often the effects of limited dynamic range become evident. In this case, it is preferable that the inputs have extended dynamic range. Accordingly, Fig. 10d shows an example of a set of collinear lightvectors 1010, 1011, and 1012 which are processed by Wyckoff Principle block 1025, which implements the Wyckoff Principle as described in U.S. Pat. No. 5,828,793. The result, qTOT, denoted 1031 in the figure, is passed through the filter 1040. Since filter 1040 is operating in lightvectorspace, it is a philter. The result is then converted to an image with semimonotonic function of nonpositive second derivative 1050, giving an output image 1060.
Fig. 11 gives an illustration of the Wyckoff principle, and in particular, the fact that taking a plurality of differently exposed pictures gives rise to a decomposition of the light falling on the image sensor into a plurality of collinear lightvectors, denoted by W1, W2, and W3 in this figure.
Typically when practicing the invention, very strong deliberate overexposure is used for at least some of the lightvectors (in greyscale images) or lightmodules (in color images). For example, a photoborg may deliberately overexpose a section of the image and then apply a very strong color such as pure red or pure blue to this lightvector. Portions spilling over into the adjacent color channels will thus be moderately exposed, or underexposed. Thus there is an inverse Wyckoff effect in the rendering of the
lightvector into the sum. Accordingly, Fig. 12a shows this inverse Wyckoff effect, in which a Wyckoff set is captured and passed through a combiner (synthesis) to generate a Composite image. The Composite image is then split up. This split up is inherent in the use of a strongly colored lightmodule coefficient. For example, using a bright red lightmodule coefficient of RGB = [1.0, 0.1, 0.01] will result in an approximation to the Wyckoff effect in the output, in which the blue channel will contain an underexposed version of the image that will show much of the highlight detail in the image, as denoted WBLUE in Fig. 12a. Similarly, the green channel of the output image may be moderately exposed, as denoted WGREEN in Fig. 12a. Finally, the red channel of the output image will be overexposed, since the lightmodule coefficient was bright red. This overexposed channel is denoted WRED in Fig. 12a.
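The channel-splitting effect of a strong coefficient can be sketched directly (illustrative only; the coefficient ordering and the gamma-law response are the assumptions used throughout these sketches):

    import numpy as np

    def render_strong_red(q_total, coeff=(1.0, 0.1, 0.01), gamma=2.2):
        # q_total: HxW high dynamic range lightvector (Wyckoff composite).
        # A bright red coefficient splits it into an overexposed red
        # channel (WRED), a midtone green channel (WGREEN), and an
        # underexposed blue channel holding highlight detail (WBLUE).
        q_rgb = q_total[..., None] * np.asarray(coeff)
        image = np.clip(q_rgb, 0.0, None) ** (1.0 / gamma)
        return np.clip(image, 0.0, 1.0)  # saturation yields the white core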
Fig. 12b illustrates how this effect happens in a real filter. A red filter allows red light, R, to pass through with very little attenuation. Green light, G, experiences greater attenuation. In the case of a pure red filter, typically blue light, B, will experience even more attenuation.
Fig. 12c illustrates how the Wyckoff effect and inverse Wyckoff effect operate in the context of lightspace rendering. Suppose that the portion of the subject matter being imaged is white, and we are imaging it through a red filter.
A ray of weak white light, W1, will pass through the filter and emerge as red, since the filter is a red filter. A ray of strong white light, W2, will pass through the filter and emerge as yellow, Yo, since enough green light will get through to produce an appreciable amount of green exposure, and the green and red together form yellow. The yellow output Yo will likely have a red halo around it, as blooming into adjacent pixels or sensor elements will be weaker than the central beam, and will thus only expose the red channel.
A ray of really strong white light, W3, will pass through the filter and emerge as white, Wo, since it will be strong enough to saturate all three spectral bands of the sensor (assuming a three band RGB sensor). Although the red component is stronger than the others, all components are strong enough to saturate the respective sensors to their maximum value. The white output Wo will likely have a yellow halo around it, and, further out, a red halo, as light spilling over to other adjacent pixels or sensor elements will be weaker than the central beam, and will create behaviour similar to that of Yo further out, and Ro still further out.
It will be understood that to render this kind of effect, it will not be sufficient to just have a normal picture and computationally apply a red virtual filter to it in lightmodulespace, but, rather, it will be preferable to capture a picture of extremely broad dynamic range so that this inverse Wyckoff effect can be synthesized, resulting in a natural looking image in which the red channel is extremely overexposed, the green channel is moderately exposed, and the blue channel is possibly underexposed. Such a picture will appear white in areas of overexposure, yellow in areas of moderate exposure, and red in areas of weaker exposure (and of course dark red or black in areas of still weaker exposure).
Accordingly, an important aspect of the invention is photorendering in
lightmod-
ulespace, with lightmodules being derived from a phlashlamp or the like.
Another
important aspect of the invention is the application of various philters to
lightvectors
of extended response.
Fig. 13a depicts an EyeTap (TM) flashlamp. The EyeTap flashlamp produces rays of light that effectively emanate from an eye of a photoborg. Light source 1310 is collimated with optics 1320, and aimed at diverter 1340. Diverter 1340 is typically a mirror or beamsplitter. Diverter 1340 creates an image of light source 1310 as if it originated from an eye 1331 of a photoborg. Preferably the light effectively originates from the center of the lens 1330 of the eye 1331. Optionally, an aiming aid 1350 reflects off the back of beamsplitter or mirror 1340. If 1340 is a mirror, it should be a two-sided mirror if there is an aiming aid 1350. Aiming aid 1350 may be an aremac, projector, television, or the like, which serves as a viewfinder for aiming the EyeTap flashlamp apparatus of the invention.
Fig. 13b depicts a wide-angle embodiment in which eye 1331 is a right eye, so optional aiming aid 1350 can extend behind the eye, to the right side of the face of a photoborg using the apparatus of the invention.
Fig. 14a depicts an EyeTap (TM) camera system. An EyeTap camera system provides a camera with effective center of projection co-incident with the center of the lens 1330 of an eye 1331 of the user of the EyeTap camera system. Preferably the EyeTap camera system is wearable.
Rays of light from subject matter 1300 are diverted by diverter 1340, and pass through optics 1313 to form an image on sensor array 1311, which is connected to a camera control unit (CCU) 1312. Preferably diverter 1340 is a beamsplitter so that it does not appreciably obstruct the vision of the user of the apparatus. Optionally, optics 1313 may be controlled by focus control unit (FCU) 1314.
The EyeTap camera system, in some embodiments, may include a second similar apparatus for a second eye of the user. In this way, a binocular video signal may be captured, depicting exactly what the user sees.
The image from the EyeTap camera system may be transmitted as live video to a remote manager so that she can experience what the user experiences. Typically the user is a photoborg, who may also communicate with a remote manager.
Optionally the EyeTap camera system may also include a display means which
may show the output of a remote fixed camera at a remote base station.
Fig. 14b depicts an alternate embodiment of the EyeTap camera system. A curved diverter 1341 serves also as at least part of the image forming optics. A satisfactory curved diverter is a toroidal mirror, which forms an image on sensor array 1311 without the need for a separate lens, or with only a small correction lens needed. Typically, diverter 1341 forms an image with considerable distortion. Distortion is acceptable, so long as the image is sharp. Distortion is acceptable because CCU 1312 is connected to a coordinate transformation means 1315 which corrects for the distortion. Thus output 1316 is free of distortion notwithstanding distortion that may have been introduced by the use of a curved diverter. Preferably the diverter and sensor array are aligned in such a way as to meet the EyeTap criterion in which the effective location of the camera is the eye 1331 of the user, as closely as possible. The effective center of projection of the camera should match closely with the location of the center of the lens 1330 of the eye 1331. This embodiment of the EyeTap camera can be made with reduced size and weight, and reduced cost.
Eyewear in which the apparatus of the invention may be built is preferably protective of the eyes against excessive exposure to light, as might happen if a flashlamp of the invention is fired into the eyes of a photoborg, especially when large flashlamps are used to light up tall skyscrapers in a large cityscape. Accordingly, eyewear should incorporate an automatic darkening feature such as is typical of the quickshade (TM) welding glasses, or Crystal Eyes (TM) 3-D glasses. An automatic darkening feature can either be synched to the flashlamps, or be triggered by photocells or a wearable camera that is part of certain embodiments of the invention.
Fig. 15 shows an embodiment of a finder light or hiding light. The finder light is used to find where the camera is located, particularly when shooting a large cityscape, where the camera might, for example, be located on the roof of a building a few hundred meters away. In this case, a light source 1510 may be remotely activated. Together with optics 1520 and field of view limiter 1521, a very bright light is produced by rays 1511, which partially pass through a 45 deg. beamsplitter and are wasted as rays 1512, and partially reflected as rays 1513. Alternatively, the light source may be placed next to the camera and facing in the same direction, if the losses of a beamsplitter are unacceptable. A satisfactory light source is a 1000 watt halogen lamp, or arc lamp, which can be detected from among other lights in a large city by way of the fact that a photoborg has remote control of it. Alternatively, lamp 1510 may be a flashlamp that the photoborg can remotely flash, in which case it is also quite visible from a great distance, notwithstanding other bright lights in an urban setting.
In addition to helping to find the camera, the finder light can also be used to determine if one is within its field of coverage. For this purpose, camera 1500 has the same field of view as the light source, so that one can make this determination. In some embodiments, barrier 1521 is a colored filter, so that the light appears a different color when one is within the field of view of the camera, but can still be seen when one is outside the camera's field of view.
At close range, the light is strong enough to light up the scene, and thus also functions as a worklight so that a photoborg can see where he or she is going. Preferably, in this use, another worklight off-axis is used, so that the camera finder light is not on continuously long enough to attract insects toward the camera, which would degrade the image in the time immediately following the shutting off of light 1510.
As a hiding light, light 1510 can be illuminated so that a photoborg can see whether he or she is casting a visible shadow. A visible shadow indicates that he or she does not blend into the background; blending requires, for example, black clothing against a long-range open space behind the photoborg, and such a state is readily visible by the finder light at close range.
Fig. 16 shows the lightsweep apparatus of the invention. A row of lamps (as few as 5 or 7 lamps, but preferably more, such as 16 or 32 lamps) is sequenced as it is moved through space, during which time the shutter of the camera is either held open in a long exposure, or rapidly acquires multiple exposures which are later photoquantigraphically summed. During this time, the lamps on frame 1600 are turned on and off. In the figure, the letter "A" has just been drawn in mid-air by the device, and lamp 1601 is still on, while lamp 1602 has turned off. The path of frame 1600 through space leaves behind a ribbon of light in the photograph. For example, element 1610 persists even though the frame is no longer there.
Typically the device is used with graphics rather than text. For example, a
circle
may be drawn using sin and cos lookup tables. A solid filled-in circle is
often drawn
in mid-air, often not directly into the camera, but, instead, pointing away
from the
camera so that it is only seen indirectly as its effect of illumination. In this way, frame 1600 can be used to synthesize any arbitrary shape of light, such as a softbox in mid air (if a rectangle is chosen), or a light more like an umbrella diffuser if a circle is chosen.
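As an illustration of the sin and cos lookup approach, the sketch below computes which lamps on frame 1600 to switch on at each step of the sweep so that the swept row traces a circle. This is a minimal sketch only: the lamp count, step count, and function name are hypothetical, and a real controller would drive actual lamp hardware rather than return index lists.

    import math

    NUM_LAMPS = 32   # lamps along frame 1600 (vertical axis of the drawing)
    NUM_STEPS = 64   # sweep positions (horizontal axis of the drawing)

    def circle_pattern(radius=0.8):
        # Tabulate the circle once with sin/cos, then, for each sweep step,
        # collect the lamps whose height matches a circle point whose
        # horizontal position falls within that step.
        table = [(radius * math.cos(2 * math.pi * k / 256),
                  radius * math.sin(2 * math.pi * k / 256)) for k in range(256)]
        pattern = []
        for step in range(NUM_STEPS):
            x = 2.0 * step / (NUM_STEPS - 1) - 1.0   # sweep position in [-1, 1]
            on = set()
            for cx, cy in table:
                if abs(cx - x) < 1.0 / NUM_STEPS:
                    on.add(int(round((cy + 1.0) / 2.0 * (NUM_LAMPS - 1))))
            pattern.append(sorted(on))
        return pattern

Replacing the circle table with a rectangle outline would yield the softbox shape mentioned above.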
Rather than program the shape of light a priori, it is sometimes preferable to simply sequence the lamps while recording the scene at video frame rates, and then use photoquantigraphic weighted summation, setting weights to zero to achieve the equivalent effect of turning off certain lights.
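A photoquantigraphic weighted sum operates in linear light, not directly on pixel values. The sketch below assumes, for illustration, that frames can be linearized with a simple power law; in practice the camera's actual response function would be estimated and inverted. The function name and the gamma value are illustrative assumptions:

    import numpy as np

    def photoquantigraphic_sum(frames, weights, gamma=2.2):
        # Undo the (assumed) display gamma, sum in linear light, re-encode.
        # A zero weight is equivalent to never having fired that lamp.
        linear = [np.asarray(f, dtype=np.float64) ** gamma for f in frames]
        total = sum(w * f for w, f in zip(weights, linear))
        return np.clip(total, 0.0, 1.0) ** (1.0 / gamma)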
Fig. 17a shows a lamp sequencer of the invention, in which processor 1700 captures images from camera 1500 while it controls a sequence of lamps 1701, 1702, 1703, ... . Subject matter 1720 may be a person or include people, in which case lamps 1701, 1702, 1703, ... are preferably flashlamps and camera 1500 is preferably a high-speed video camera; or subject matter 1720 may be a still life scene, in which case lamps 1701, 1702, 1703, ... may be ordinary tungsten lamps or the like, and camera 1500 an ordinary digital still camera or the like.
In the former case, wires 1730 are flash sync cables, while in the second case, wires 1730 may be the actual power cords for the lamps. In either case, no preparation of lamps is needed and ordinary lamps may be used. Thus the innovation of the invention is in processor 1700, which may include a computer controlled multichannel light dimmer, or a flash sequencer.
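The control loop in processor 1700 can be summarized as: fire one lamp, grab one frame, repeat. The following minimal sketch uses stub classes standing in for the real dimmer or flash-sequencer and camera interfaces; all class and method names here are hypothetical:

    class Lamp:
        def __init__(self, channel):
            self.channel = channel
        def fire(self):
            # Stand-in for a flash sync pulse or a dimmer channel going high.
            print("firing lamp on channel", self.channel)

    class Camera:
        def capture(self):
            # Stand-in for grabbing one frame from camera 1500.
            return "frame"

    def capture_lightvectors(camera, lamps):
        # One frame per lamp: each frame is one lightvector of the scene.
        frames = []
        for lamp in lamps:
            lamp.fire()
            frames.append(camera.capture())
        return frames

    frames = capture_lightvectors(Camera(), [Lamp(ch) for ch in range(1, 6)])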
In the situation illustrated here, five pictures of the same subject matter are captured, and in each of the five pictures, the subject matter is differently illuminated. These five pictures are then passed to a lightspace rendering program which allows for the generation of a lightmodule painting. Typically in a studio setting, there are preferred default settings for the lightvectors. For example, the lightmodule weight for the picture corresponding to lamp 1703 is typically set to blue, and split diffusion is used to run it through a photoquantigraphic blurring filter prior to the computation of a photoquantigraphic sum.


The apparatus of Fig. 17a may be used for the production of still pictures or for the production of motion pictures. When still pictures are being produced, ordinarily the lamps are sequenced through only once. When a motion picture is being produced, camera 1500 is a high speed motion picture camera, and the lamps are sequenced through periodically. In this example, since there are five lamps, the motion picture camera must shoot at a frame rate or field rate at least five times the desired output frame rate or field rate. For example, if we desire a motion picture shot at 24 frames per second, then the motion picture camera must shoot at least 120 frames per second. Each set of five pictures, corresponding to one cycle around the lamps, is used to photoquantigraphically render a single frame of output picture.
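The required capture rate, and the grouping of captured frames into per-output-frame lightvector sets, follow directly from the lamp count. A minimal sketch (constant and function names are illustrative):

    NUM_LAMPS = 5
    OUTPUT_FPS = 24
    capture_fps = NUM_LAMPS * OUTPUT_FPS   # 120 frames per second, as above

    def lightvector_sets(frames, n=NUM_LAMPS):
        # Each consecutive run of n captured frames is one cycle around the
        # lamps, and photoquantigraphically renders one output frame.
        return [frames[i:i + n] for i in range(0, len(frames) - n + 1, n)]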
In the case of motion pictures, camera 1500 may be mobile, if desired, and lamps 1701, 1702, 1703 may also be mobile, if desired. In this case, preferably motion picture camera 1500 will be an even higher speed motion picture camera than necessary. For example, if it is a 240 frames per second camera, it can cycle through all five lights, and then wait a brief interval, before cycling through once again. In this way, there are fewer misregistration artifacts. Additionally, or alternatively, a registration algorithm can be applied to the images from camera 1500 to compensate for the fact that the subject matter may have changed slightly from the time the first lamp 1701 was fired to the time the fifth lamp was fired.
Fig. 17b shows a system in which a lightvectorspace is generated without explicit use of any controller. Instead, special flashlamps 1801, 1802, 1803, ... are used. These are all connected directly to the camera. If camera 1500 is a video camera, all the flashlamps may be supplied with the video signal to lock onto. If the camera 1500 is a still picture camera, then all the flashlamps may simply receive a flash sync signal from the camera.
Wires 1730 from each of the flashlamps are connected to an output 1840 of camera 1500. Alternatively, some of the flashlamps may be daisy chained to others, e.g. by connections 1831, since all the flashlamps only need to be connected in parallel,
and no longer need separate connections to any central controller.
Alternatively the
connection may be wireless, and each flashlamp may act as a slave unit.
Ordinarily, in the prior art, all flashlamps would fire simultaneously when camera 1500 took a picture. However, in the context of the present invention, flashlamps 1801, 1802, 1803, ... are special flashlamps that can be set to respond only to every Nth pulse, starting at pulse M, where N and M are user-selectable. In this case, all flashlamps may be set to N=5 when we are using 5 flashlamps. In general, N is set to the desired number of lightvectors. Then the values for M are selected, e.g. lamp 1801 is set to M=1, lamp 1802 to M=2, and so on. Thus lamp 1801 fires once every 5 pulses, starting on the first pulse, lamp 1802 fires once every 5 pulses starting on the second pulse, and so on.
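The trigger rule for such a flashlamp is simple modular arithmetic on the count of sync pulses. A minimal sketch (the function name is illustrative; a real flashlamp would evaluate this in firmware on each incoming sync pulse):

    def should_fire(pulse, N, M):
        # Fire on pulses M, M+N, M+2N, ...; pulses are counted from 1.
        return pulse >= M and (pulse - M) % N == 0

    # Five lamps with N=5 and M=1..5: exactly one lamp fires per pulse.
    for pulse in range(1, 11):
        print(pulse, [m for m in range(1, 6) if should_fire(pulse, 5, m)])

Setting N=2 instead makes lamps with M=1 fire on odd pulses and lamps with M=2 fire on even pulses, matching the two-dimensional lightvectorspace example that follows.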
The number of lamps can be greater than the number of lightvectors. For example, we may set N=2 on each lamp, so that, for example, three of them will fire on even numbered pulses and the other two will fire on odd numbered pulses. This setting would give us a two-dimensional lightvectorspace.
In any case, the novelty is in the design of the flashlamps when using the system depicted in Fig. 17b. Thus the camera can be an ordinary camera, and the user simply purchases the desired number of special flashlamps of the invention. Since many flashlamps already have menus and means for adjusting and programming various settings, it is not hard to manufacture flashlamps with the capabilities of the invention.
Ideally the flashlamps may each contain a slave sensor, infrared sensor, radio receiver, or the like, so that they can operate within the context of the invention without the need for wires connecting them. If wires are to be used to power the separate lamps, the necessary synchronization signals may be sent over these power lines.
Fig. 18 shows a typical session using the lightspace rendering (photoquantigraphic rendering) system. Preferably this system is on the Internet so that it can be accessed by any of the photoborgs by way of a WearComp (wearable computer) system. Additionally, one or more remote managers can also visit this site. Accordingly, a preferred embodiment is a WWW page implementation.
Here, ten pictures are shown on a WWW browser 1800. These ten have been selected from a set of 1000 pictures, by visiting another WWW page upon which a selection process is done. All of the photoborgs have agreed that these ten images are the ones they wish to use to make the final rendering. All of these ten images 1810 are pictures of the same subject matter under different illumination.
Below each image is a set of controls 1820. These controls include a Y channel selector 1830 for greyscale pseudocolor modulespace selection, together with three color sliders 1840, an overall weighting 1850, and a focus adjust 1860. Focus adjust 1860 blurs or sharpens the image photoquantigraphically.
To observe the output, another WWW page is visited. Each time that page is reloaded, the photorendering is produced according to the weights set here in 1800.
OTHER EMBODIMENTS
From the foregoing description, it will thus be evident that the present invention provides a design for a system that uses a plurality of pictures or exposures to produce a picture that is improved, or where there is some extended expressive or artistic capability. As various changes can be made in the above embodiments and operating methods without departing from the spirit or scope of the following claims, it is intended that all matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense.
Variations or modifications to the design and construction of this invention,
within
the scope of the appended claims, may occur to those skilled in the art upon
reviewing
the disclosure herein. Such variations or modifications, if within the spirit
of this
invention, are intended to be encompassed within the scope of any claims to
patent
protection issuing upon this invention.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title                           Date
Forecasted Issue Date           Unavailable
(22) Filed                      1999-02-01
Examination Requested           1999-02-01
(41) Open to Public Inspection  1999-08-02
Dead Application                2003-04-16

Abandonment History

Abandonment Date  Reason                       Reinstatement Date
2000-02-21        R30(2) - Failure to Respond  2000-11-21
2002-04-16        R30(2) - Failure to Respond

Payment History

Fee Type                                                Due Date    Amount Paid  Paid Date
Request for Examination                                             $200.00      1999-02-01
Application Fee                                                     $150.00      1999-02-01
Reinstatement - failure to respond to examiners report              $200.00      2000-11-21
Maintenance Fee - Application - New Act 2               2001-02-01  $50.00       2000-11-21
Maintenance Fee - Application - New Act 3               2002-02-01  $50.00       2001-11-13
Maintenance Fee - Application - New Act 4               2003-02-03  $50.00       2002-12-02
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
MANN, STEVE
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description    Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Claims                  2000-11-21         23               966
Representative Drawing  1999-08-11         1                7
Abstract                1999-02-01         1                43
Description             1999-02-01         66               3,325
Cover Page              1999-08-11         1                55
Claims                  1999-02-01         21               804
Drawings                1999-02-01         35               514
Correspondence          1999-03-16         1                17
Assignment              1999-02-01         2                117
Prosecution-Amendment   1999-10-20         4                9
Prosecution-Amendment   2000-11-21         27               1,200
Prosecution-Amendment   2001-10-16         4                131
Fees                    2002-12-02         1                18
Fees                    2001-11-13         1                72
Fees                    2000-11-21         1                74