Patent 2209610 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies between the text and image of the Claims and Abstract are due to differing posting times. The text of the Claims and Abstract is posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 2209610
(54) English Title: OPTICAL RANGE AND SPEED DETECTION SYSTEM
(54) French Title: SYSTEME OPTIQUE DE CALCUL DE LA DISTANCE ET DE LA VITESSE
Status: Deemed expired
Bibliographic Data
(51) International Patent Classification (IPC):
  • G01C 3/14 (2006.01)
  • G01C 3/08 (2006.01)
  • G01C 23/00 (2006.01)
  • G01P 3/36 (2006.01)
  • H04N 7/18 (2006.01)
(72) Inventors :
  • NASH, LAWRENCE V. (United States of America)
  • HARDIN, LARRY C. (United States of America)
(73) Owners :
  • HARDIN, LARRY C. (United States of America)
(71) Applicants :
  • HARDIN, LARRY C. (United States of America)
(74) Agent: OYEN WIGGS GREEN & MUTALA LLP
(74) Associate agent:
(45) Issued: 2001-03-27
(86) PCT Filing Date: 1995-01-18
(87) Open to Public Inspection: 1996-07-25
Examination requested: 1998-02-18
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US1995/000819
(87) International Publication Number: WO1996/022537
(85) National Entry: 1997-07-04

(30) Application Priority Data: None

Abstracts

English Abstract




A passive optical speed and distance measuring system includes a pair of
camera lenses (14, 18) positioned along a common baseline (60) a predetermined
distance apart and controlled by an operator to capture images of a target at
different times. The camera lenses (14, 18) are focused on light sensitive
pixel arrays which capture target images at offset positions in the line scans
of the pixel arrays. A video signal processor with a computer (30) determines
the location of the offset positions and calculates the range to the target by
solving the trigonometry of the triangle formed by the two camera lenses (14,
18) and the target. Once the range to the target is known at two different
times the speed of the target is calculated.


French Abstract

Cette invention concerne un système optique passif de mesure de la vitesse et de la distance, comportant deux objectifs de caméra (14, 18) placés le long d'une ligne principale commune (60), séparés l'un de l'autre par une distance prédéterminée et commandés par un opérateur afin d'acquérir les images d'une cible à des moments distincts. Les objectifs de caméra sont dirigés vers des réseaux de pixels sensibles à la lumière qui acquièrent les images selon des positions décalées lors des balayages de lignes des réseaux de pixels. Une unité de traitement de signal vidéo reliée à un ordinateur (30) détermine l'emplacement des positions décalées et calcule la distance jusqu'à la cible par trigonométrie à partir du triangle que forment les deux objectifs de caméra (14, 18) et la cible. Une fois la distance à la cible connue en deux moments distincts, on peut calculer la vitesse de la cible.

Claims

Note: Claims are shown in the official language in which they were submitted.




I CLAIM:

1. An electro-optical range finding system
comprising:
(a) a pair of optical lenses, each lens in
said pair of optical lenses oriented along
a line of sight towards a target, said
pair of optical lenses positioned a
predetermined width apart along a common
baseline;
(b) at least one light sensitive device
responsive to light from each lens in said
pair of optical lenses for forming first
and second one-dimensional images of said
target on first and second linear pixel
arrays;
(c) a video correlator for comparing said
first and second one-dimensional images on
said first and second linear pixel arrays
to find an offset pixel shift between said
first and second one-dimensional images on
said first and second linear pixel arrays,
respectively; and
(d) a calculator for determining a range to
the target as a trigonometric function of
said offset pixel shift and said predetermined
width between said pair of optical
lenses.

2. The electro-optical system of claim 1
wherein there is a separate light sensitive device for
each optical lens in said pair of optical lenses.

3. The electro-optical range finding system
of claim 2 wherein said light sensitive devices are video
cameras, each video camera containing a charge coupled
device.

4. The electro-optical range finding system
of claim 1 wherein said video correlator includes a
computer programmed with a correlation algorithm for
finding a global null of sums of differences between said
first and second one-dimensional images to determine said
offset pixel shift.

5. In an electro-optical ranging device, a
method of determining a range to a target object
comprising the steps of:
(a) providing first and second optical lenses
spaced a predetermined distance apart
along a common baseline, said optical
lenses having overlapping fields of view;
(b) directing light from said first and second
optical lenses onto at least one optically
sensitive device;
(c) electronically scanning said light
sensitive device in a region thereon
responsive to said first optical lens to
establish a one-dimensional template image
along a predetermined scan line;
(d) successively electronically scanning said
light sensitive device in a region thereon
responsive to said second optical lens to
determine a plurality of one-dimensional
images;
(e) comparing each of said plurality of
one-dimensional images with said
one-dimensional template image to determine
which of said plurality of one-dimensional
images most closely correlates with said
template image;
(f) determining a line distance between said
template image and said one-dimensional
image which most closely correlates with
said template image; and
(g) calculating a range to said target object
as a function of said line distance and
said predetermined distance.

6. The method of claim 5 wherein said light
sensitive device is electronically scanned in a direction
perpendicular to said base line.

7. The method of claim 6 wherein said optical
lenses are mounted a predetermined vertical distance
apart along a vertical common baseline.

8. An electro-optical system for measuring a
range to a moving target comprising:
(a) a stationary pair of optical lenses, each
lens in said pair of optical lenses
oriented along a line of sight towards the
target, said pair of optical lenses positioned
a predetermined width apart along a
common base line;
(b) at least one light sensitive device
responsive to light from each lens in said
pair of optical lenses for forming first
and second one-dimensional images of said
target, said first and second one-dimensional
images being simultaneously
formed on respective first and second
linear pixel arrays;
(c) a video correlator for comparing said
first and second one-dimensional images on
said first and second linear pixel arrays
to find an offset pixel shift between said
first and second linear pixel arrays, said
offset pixel shift being proportional to
an offset distance needed to produce
coincidence between said first and second
one-dimensional images; and
(d) a calculator for determining said range to
the target as a trigonometric function of
said offset distance and said predetermined
width between said pair of optical
lenses.

9. The electro-optical system of claim 8
wherein each lens in said pair of optical lenses is
associated with a separate light sensitive device.

10. The electro-optical system of claim 8
wherein the range is calculated according to the formula
R=b/2 TAN (90-kd), where R is equal to said range, b is
equal to said predetermined width, d is equal to said
offset pixel shift and k is a proportionality constant.

11. The electro-optical system of claim 8
further including speed determining means comprising
control means for obtaining a first range measurement (R1)
at a time T1, and a second range measurement (R2) at a
time T2, and calculator means for determining a speed of
the target based upon the formula SPEED=(R2-R1)/(T2-T1).

12. The electro-optical system of claim 8
wherein the lenses in said pair of optical lenses are
oriented along respective lines of sight that are
parallel to each other.

13. The electro-optical system of claim 8
wherein said video correlator compares every Nth pixel in
said first and second linear pixel arrays, where N is a
number greater than or equal to 2.

14. The electro-optical system of claim 8
wherein said video correlator determines said offset
pixel shift by finding differences in light intensities
at a plurality of offset pixel positions between the
first and second one-dimensional images and locates an
offset position that provides a least amount of
differences in light intensities.

Description

Note: Descriptions are shown in the official language in which they were submitted.

OPTICAL RANGE AND SPEED DETECTION SYSTEM

Technical Field
The following invention relates to a system for
ranging and speed detection, and more particularly
relates to an optical system for determining range and
speed that is passive and does not require an energy
transmitter to accomplish its purpose.

Background Art
Most speed detection systems require a
transmitter to transmit energy towards a moving target
which is reflected back to a receiver. Various schemes
are then provided for measuring the time of transmission
and the return of the energy in order to calculate the
range to the target and its speed. Radar is a primary
example of this technique, and radar guns are conven-
tionally used by law enforcement agencies for traffic
control. The problem with radar as a traffic control
device is that target acquisition and measurement are
ambiguous. It frequently cannot be determined which
target out of a multitude of possible targets is
responsible for generating any particular speed indica-
tion. Another problem is that radar can be detected by
receivers tuned to the proper frequency. Laser ranging
systems are also available but such systems are also
detectable at the target and are prohibitively expensive.
In the past there have been attempts to design
purely optical speed measuring systems, but all suffer
from one or more defects regarding accuracy or cost of
implementation. For example, passive optical systems are
available which calculate an oncoming object's velocity
by acquiring images at two different times and comparing
the relative sizes of the images in the field of view as
a function of time. Examples of such devices are shown
in the U.S. patents to Goodrich No. 4,257,703, Abel
No. 3,788,201 and Michalopoulous et al. No. 4,847,772.

Other prior art devices utilize trigonometric
relationships by capturing an image at different times at
known marker positions. Such systems are shown in Tyssen
et al. No. 4,727,258 and Young et al. No. 4,135,817.
These systems, however, require that the time of capture
of an image be synchronized with the appearance of the
target object at a known marker position. This is not
always practical and sometimes requires that the cameras
be spaced widely apart or placed at different locations.
These and other prior art passive optical speed
detection systems are generally overly complex and/or
impractical or require external markers and the like.
What is needed, therefore, is a practical, compact, low
cost, optical speed and/or distance detecting system
which can be used at any desired location with a minimum
of set-up time and complexity.

Disclosure of the Invention
According to the present invention, a passive
optical speed and distance measuring system is provided
which includes a pair of video camera lenses pointed
along respective lines of sight towards a target. The
lenses are positioned along a common baseline a prede-
termined distance apart. A timer causes both camera
lenses to capture a first target image in the field of
view of each lens at a first time T1 and also at a later
time T2. The images are transferred to a range measuring
means which determines a distance R1 from the common
baseline to the target at a time T1 and for determining
the distance R2 from the baseline to the target at a time
T2. A calculating means determines the speed of the
target by solving, for example, the equation, speed =
(R2 - R1)/(T2 - T1) or by using a linear regression method
employing additional ranges Rn determined at times Tn.
Preferably, the lines of sight of each of the camera
lenses are parallel, and the lenses may be included in
separate cameras, or, through the use of mirrors and
prisms, each lens may be focused on a common image detec-
tion device such as a charge coupled device (CCD). Also,
the cameras are generally of the type that provide an
image which is electronically scanned in a pixel array of
light sensitive devices where the light intensity values
of the pixels in the array may be digitized and stored
for processing by a computer.
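
As a minimal illustration of the speed step only, assuming two range measurements R1 and R2 (however obtained) tagged with their capture times T1 and T2 in consistent units, the calculation reduces to the following sketch (names are illustrative):

    def speed_from_two_ranges(r1, t1, r2, t2):
        """Target speed from two range measurements; positive means a receding target."""
        return (r2 - r1) / (t2 - t1)

    # Example: 300.0 m at T1 = 0.0 s and 297.5 m at T2 = 0.1 s gives -25.0 m/s,
    # i.e. a target approaching at 25 m/s.
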
The first and second camera lenses each focus
images found in their respective fields of view on a
pixel map. Contained in the pixel map is at least a
partial image of the target which is found in the field
of view of each camera lens. The range measuring means
includes comparison means for correlating the position of
the target image on the first pixel map with the same
corresponding target image on the pixel map of the second
camera lens. This correlation is used to determine the
angle between the target and the respective lines of
sight of the first and/or second camera lenses.
The pixel map may comprise a single line scan
of video data and the comparison means includes means for
determining the differences of video data intensities at
each respective pixel location in the respective pixel
maps to find the correlation between the target images.
This correlation will occur at a global null which will
be the position giving a minimum intensity difference
between the pixel maps. This pixel location is a linear
function of the angle between the target image and the
line of sight of at least one of the cameras.
The video data contained in the pixel map may
be converted to digital data for processing by a computer
program. The computer program calculates the absolute
value differences in intensity between pixels in the
pixel maps from the first and second video cameras at
successively offset shift positions between the two pixel
maps. Alternatively, preprocessing circuitry may be used
to shift the pixel maps relative to each other to create
a list of the values at all possible pixel offsets. The
computer may then search the list for the global null.
The first and second lenses may be mounted in a
camera array which further includes a third camera for
sighting and target acquisition. The first and second
cameras generally should have narrow fields of view to
avoid optical background clutter, but the third camera
may have a wide field of view for target acquisition.
Additionally, all three cameras may be mechanically
connected so that the first and second camera lenses may
be slaved to rotational motion by the third (target
acquisition) camera. The third camera may further
include an alphanumeric video generator for providing
predetermined message data upon acquisition of a selected
target by an operator. Such data could include, for
example, the time and date and other information about
the target.
Alternative methods of determining the offset
angle of the target image relative to the line of sight
may include detecting the edges of the target as the
target changes its position relative to the optical back-
ground. Since the background does not change over most
of the field of view as the target moves, signal process-
ing means may be used to discriminate between a moving
edge and static background video data.
Generally the cameras and/or camera lenses are
mounted side-by-side along a horizontal line which is
parallel to the line scan of the light sensitive device
in the focal plane of the lenses. However, if desired,
the light sensitive element may be oriented so that the
line scan direction is perpendicular to the baseline.
The signal processing to determine the target image
offset from the line of sight in this case is slightly
different from that which is used when the light sensi-
tive element is scanned parallel to the baseline. In this embodiment a video line scan of one of the cameras
serves as a "template" and whole line scans of the other
camera are compared to the template line to establish
image correlation, thus determining the offset. This
method is more accurate than determining the offset by
comparison of overlapped fractions of a line but requires
more memory and computational time.
Determination of the difference in pointing
angles to the target between the two cameras (the offset
angle) allows a computer associated with this system to
accurately estimate the distance to the target. The
computer does this by solving the essential trigonometry
of the camera configuration. Because the two camera
lenses are mounted along a common baseline a predeter-
mined distance apart, lines extending from each of the
camera lenses to the target form a triangle. With both
offset angles known, a trigonometric solution that solves
for the distance from the target to the center of the
baseline can be calculated. This measurement is per-
formed at least two different times and the difference in
range to the target as a function of the time difference
may be used to calculate the speed of the target.
The foregoing and other objectives, features,
and advantages of the invention will be more readily
understood upon consideration of the following detailed
description of the invention, taken in conjunction with
the accompanying drawings.

Brief Description of the Drawings
FIG. 1 is a simplified block schematic diagram
of the system of the invention.
FIG. 2 is a block schematic diagram of the
video camera subsystem shown in FIG. 1.
FIG. 3 is a block schematic diagram of the
control and computational subsystem shown in FIG. 1.
FIG. 4 is a simplified flow chart diagram of a
computer program used to calculate the range and speed of
a moving target.

FIG. 4a is a schematic illustration of the
optical relationships in the system using separate
cameras with the line scans parallel to the baseline.
FIG. 5 is a schematic illustration of the
geometry of pixel maps which are used to calculate the
angles between the target center and the lines of sight
of the camera lenses.
FIG. 6 is a schematic diagram of the overall
geometry of the optical range and speed detecting system
of the present invention.
FIG. 7 is a schematic diagram illustrating the
relationship between the system geometry of FIG. 6 and
the pixel maps of FIG. 5.
FIG. 8 is a schematic diagram illustrating the
method by which the system calculates the target offset.
FIG. 9a-9f are a flow chart diagram
illustrating the method of calculating the range and
speed to the target using the system of FIG. 4.
FIG. 10 is a waveform diagram illustrating a
second method of determining an offset angle between the
cameras and the line of sight, termed the edge detection
method.
FIGS. 11a-11c are a flow chart diagram
illustrating the edge detection capture method
illustrated in FIG. 10.
FIG. 12 is a schematic view of the fields of
view of right and left camera lenses arranged in the
system illustrated in FIG. 7.
FIG. 13 is a schematic diagram of overlapping
fields of view of camera lenses arranged so that the CCD
line scan is perpendicular to a vertical baseline.
FIG. 14 is a schematic representation of the
pixel frame maps produced by camera lenses arranged along
a vertical baseline as illustrated in FIG. 13.
FIG. 15 is a schematic representation of the
geometry of the perpendicular line scan camera
arrangement with upper and lower camera lenses oriented
along a vertical baseline.
FIG. 16 is a schematic flow chart diagram
illustrating a method of calculating the pixel offset
when the cameras are arranged along the vertical baseline
as illustrated in FIG. 15.
FIG. 17 is a schematic diagram in perspective
view showing an embodiment of the invention using a single
camera having dual side-by-side lenses scanning the CCD
parallel to the baseline.
FIG. 18 is a block schematic diagram of a video
preprocessor to be used in an alternative embodiment of
the invention.
FIG. 19 is a schematic diagram showing a
perspective view of another embodiment of the invention
using a single camera having dual lenses mounted for
scanning the CCD perpendicular to the baseline.
FIG. 20 is a flow chart diagram similar to
FIG. 4 illustrating a linear regression method for
calculating speed.
FIG. 21 is a schematic diagram of a
two-dimensional memory array containing range and time
data.
FIGS. 22a and 22b are flow chart diagrams
illustrating a linear regression calculation for the
method shown in FIG. 20.

Best Modes for Carrying Out the Invention
Referring to FIG. 1, the invention includes a
video camera subsystem and video display 10 connected to
a control and computational subsystem 12. The camera
subsystem 10 provides left- and right-hand camera video
to the control subsystem 12 and the control subsystem
supplies alphanumeric video to the video camera sub-
system. FIG. 2, which shows an expanded block diagram of the video camera subsystem, includes a narrow field-of-
view lens 14 which provides an optical image to a
left-hand video camera 16. A second narrow field-of-view
camera lens 18 provides an optical image to a right-hand
master video camera 20. The right-hand video camera
includes sync which is supplied to the left-hand video
camera 16 and to a sighting video camera 22 which is also
slaved to the right-hand video camera 20. The sighting
video camera 22 includes a wide field-of-view lens 24 for
target acquisition.
All of the cameras 16, 20 and 22 are of the
type that include a pixel matrix array of light sensitive
devices such as a CCD. As such the pixel array is
scanned electronically in horizontal or vertical lines to
provide video data which represents light intensity at
each scanned pixel location. In addition, the output of
the sighting video camera 22 provides video to a video
mixer 26 which may receive an alphanumeric video input.
The wide field-of-view image of the camera 22 along with
the alphanumeric video may be displayed on a video
display 28.
FIG. 3 is an expanded view of the control and
computational subsystem 12. The subsystem 12 consists
primarily of a personal computer 30 (shown in dashed
outline) which may be any type of IBM-compatible per-
sonal computer. The computer 30 includes frame grabbers
32 and 34 for the left-hand camera video and the right-
hand camera video, respectively. An alphanumeric gener-
ator 36 is slaved to the sync provided by the right-hand
video camera 20. The computer includes a computer bus 38
which couples the frame grabber 32, the alphanumerics
generator 36 and the frame grabber 34 to a disk memory
40, random access memory 42 and a CPU/IO unit 44.
External to the computer 30 the computer bus 38 is also
coupled to a real time clock and calendar 46, operator's
control unit 48 and a printer 50.
The system geometry is illustrated in FIG. 6.
The left-hand camera lens 14 is situated at a point 52
and the right-hand camera lens 18 is situated at a point
54. The wide field-of-view lens 24 is located midway
between points 52 and 54 at a point 56. The target is
located at a point 58 in the field of view of all three
cameras. Preferably the respective lines of sight of the
narrow field-of-view cameras located at points 52 and 54
are parallel as indicated by the dashed lines in FIG. 6.
An angle θLH indicates the angle between the target point
58 and the line of sight of the camera located at 52.
Similarly, θRH indicates the angle between the line of
sight and the target for the right-hand camera. The two
cameras are situated along a baseline 60 having a length
"b" where the center wide field-of-view camera lens 14 is
located midway between the two narrow field-of-view
cameras 14, 18 at a distance b/2 along the baseline.
From this point the range to the target point 58 is
indicated as "R." A triangle having sides "a," "b" and
"c" is thus formed between the target location 58 and the
right- and left-hand camera locations 52 and 54. This
triangle includes internal angles α (alpha) at the left-
hand camera location, γ (gamma) at the right-hand loca-
tion and β (beta) at the target location. The system
determines the range R to the target by solving the
trigonometry of the aforementioned triangle. In order
to determine the speed of the target, it performs this
calculation at two different times and divides the
difference in range by the elapsed time between
measurements.
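
One way to carry out that trigonometric solution is sketched below, assuming the two offset angles θLH and θRH (in degrees, measured from each camera's line of sight to the target) and the baseline length b are already known, and that both lines of sight are perpendicular to the baseline so that α = 90° - θLH and γ = 90° - θRH. The function name is illustrative; the final expression matches the range formula given later for the edge detection embodiment.

    import math

    def range_from_offset_angles(theta_lh_deg, theta_rh_deg, b):
        """Range R from the midpoint of baseline b to the target (FIG. 6 geometry)."""
        alpha = math.radians(90.0 - theta_lh_deg)   # internal angle at the left-hand camera
        gamma = math.radians(90.0 - theta_rh_deg)   # internal angle at the right-hand camera
        beta = math.pi - alpha - gamma              # internal angle at the target
        # Law of sines: a / sin(alpha) = b / sin(beta) = c / sin(gamma)
        a = b * math.sin(alpha) / math.sin(beta)    # side from right-hand camera to target
        c = b * math.sin(gamma) / math.sin(beta)    # side from left-hand camera to target
        # Distance from the midpoint of the baseline to the target
        return math.sqrt((a * math.sin(gamma)) ** 2 + (c * math.cos(alpha) - b / 2.0) ** 2)

For small, equal offset angles this reduces to the simpler expression R = (b/2) tan(90° - θ) used with the parallel line-of-sight correlation method described next.
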
FIG. 4 illustrates in flow chart diagram form
the essential system operational method. After "START,"
the system, at block 62, captures a linear stereoscopic
image of the target. This image consists of a single
line scan of video data captured by the left-hand and
right-hand lenses simultaneously. This linear scan is
conducted at times T1 and T2. At block 64 the system
calculates the sums of the differences in pixel intensity
between the right-hand and left-hand scan lines, respec-
tively, at each of the two times for the total possible
number of pixel offset positions between the two scan
lines. At block 66 the system performs a search of the
sums of differences list for nulls and a null list is
made. At block 68 the system searches for a global null
and records the pixel offset location at which it was
found. A global null is the offset location that yields
the minimum intensity difference between images. This
offset location is proportional to an angle between the
line of sight of at least one camera and the target. At
block 70 the system calculates the range to the target
based upon the number of pixels offset required to
achieve the global null. At block 72 the process is
repeated for the second line pair scan and at block 74
the system calculates the target speed by solving the
equation:
S = (R2 - R1) / (T2 - T1)
This method is graphically illustrated with respect to
the drawings in FIG. 4a and FIG. 5. FIG. 4a illustrates
the field of view overlap region at the range to the
target, R (defined by points S, P, E and M). WR is the
width of this region. The width, WR, is a fraction of a
line, approaching unity as the range approaches infinity.
The point where WR goes to zero (M) is the absolute
minimum operating range (Rmin). Note that the overlap
region width (the line segment between points E and S) is
imaged on both detector arrays, but at different loca-
tions on each array. FIG. 5 represents the two line pair
scans from FIG. 4a where the upper pixel scan 76 repre-
sents the left-hand camera lens pixel map and the lower
scan 78 represents the right-hand camera lens pixel map.
The dark pixel in each scan represents the calculated
location of the target and the shift of distance 2d is
the offset position where a global null occurs. This
embodiment of the invention presumes that at least a
portion of the target can be found within the field of
view common to both lenses. This is shown graphically in
FIG. 4A.

FIG. 7 represents a special case for the
generic system geometry shown in FIG. 6. Referring again
to FIG. 5, the system determines the distance d which is
one-half the pixel offset between the pixel maps 76 and
78. It does this by correlating the two lines. Correla-
tion occurs at a total offset 2d as shown in the lower
portion of FIG. 5 where the right-hand camera lens pixel
map 78a has been offset with respect to the left-hand
camera lens pixel map 76a to produce a total offset equal
to 2d. When pixel intensities are compared at this off-
set, a global null will be produced which represents a
minimum in the absolute differences in pixel intensities
between the pixel maps at this offset location. The
reason for this is that at the global null the target
image is the dominant object common to both camera fields
of view. The process is akin to shifting one of two
overlapping fields of view visually to produce a coinci-
dent field of view. The pixel offset needed to achieve
this is then directly proportional to the angle between
at least one of the camera lenses and the target image
relative to its original line of sight.
This method is graphically illustrated in
FIG. 7. A ray from the target 58a at point P makes an
angle θ between line c and the left-hand camera lens line
of sight. This ray passes through the left-hand camera
lens 80 and is focused on a pixel in the line pixel map
76 located in the focal plane 82 of the lens 80. Simi-
larly, on the right side, a ray from point P on the
target 58a extends along line "a" to a line pixel map 84
which is located in the focal plane 86 of the right-hand
camera lens 88. Both pixel maps are located in their
_ respective focal planes at the focal length "FL" of
lenses 80 and 88 which are identical. At the focal
length FL, the ray from the target extending along line
"c" or line "a" makes an angle ~ with the camera line of
sight. This angle is the same angle as the angle between
the lines c and a, respectively, and line R which
represents the range to the target 58A. The calculation
step outlined in block 70 of FIG. 4 calculates the range
to the target by first determining the value of angle θ.
Angle θ is determined by finding the shift, in number of
pixels, at which the absolute differences in pixel
intensities between corresponding pixels in the two pixel maps are
at a minimum. This is equivalent to rotating the right-
hand pixel map about an axis at the right-hand lens 88
corresponding to an angle which is equal to two times
θ. This imaginary rotation of the pixel map is shown
graphically in FIG. 7 wherein pixel map 84A has been
rotated through this angle so that the dark (target
image) pixel in each map is at the same location. The
dashed line at lens 88A indicates the revised "rotated"
position of the right-hand camera lens.
In actuality the lenses remain pointed along
their respective lines of sight. It is the mathematical
processing, however, that performs the mechanical equiva-
lent of an imaginary camera rotation through this angle.
The dimension d shown on FIG. 7 represents the pixel
offset distance from the center of the pixel map at which
image correlation occurs. At ranges where R is much
greater than the length of the baseline b, the tangent
function of θ is linear and the offset distance d is then
directly proportional to θ. This relationship provides
the means of calculating the range R as will be explained
below.
Referring to FIGS. 9a through 9f, a flow chart
diagram is shown which describes the method by which the
control and computational subsystem measures the target
speed. Once system start-up is initiated, computer
software under the control of the CPU/IO unit 44 in the
computer 30 enters a loop (blocks 87, 89) which waits for
a "measure speed" command. The command is initiated by
the operator's control unit 48 which may include a simple
push button switch associated with the wide angle field-
of-view camera 22. Once a desired target has been
located by the operator and the button pushed, a "command
received" flag is set (block 90) which initiates the
simultaneous capture of video in the left-hand and right-
hand frame grabbers 32 and 34, respectively (block 92).
Next, the video lines that are captured are transferred
from each frame grabber 32, 34 to the RAM 42 in the
computer 30 (block 94). Next, both video lines are
tagged with a time T1 representing a time of capture
(block 96). If the transfer is the first transfer, the
program loops back to block 92 to repeat the sequence
(blocks 98 and 100). After the second video line-pair
transfer, two video line-pairs are stored in computer RAM
tagged with times T1 and T2, respectively. Referring to
FIG. 9b, the software then sets a variable OS (offset)
equal to zero (block 102). Another variable, ACC, is
also set equal to zero (block 104). A third variable,
PIX, is made equal to OS plus 1 (block 106). The
accumulator variable, ACC, is then made equal to ACC plus
an expression which represents the absolute value of the
difference in intensities between the left-hand pixel map
and the right-hand pixel map at a predetermined offset
position (block 108). The PIX variable is then incre-
mented by one (block 110) and a loop is formed so that
the calculation will be repeated until the variable PIX
is equal to NPIX which represents the total number of
pixels in the line of video (block 112). In a conven-
tional video camera of this type there will be 512 pixels
in a single horizontal line of video.
Once all of the values of ACC have been
calculated, a new variable is defined QUO1 (OS,1) which
is made equal to the offset OS (block 114). A second new
variable QUO1 (OS,2) is made equal to ACC divided by
NPIX-OS (block 116). This is done to normalize the abso-
lute value of the differences in intensities between the
two pixel maps. The reason for this is that as
the offset is shifted (refer to FIG. 8), there are fewer
and fewer pixels in the overlap region. FIG. 8
represents the left-hand and right-hand pixel memory maps
stored in RAM and designated 77 and 79, respectively.
The arrows pointing between the memory maps 77 and 79
represent the summation of the absolute differences of
the intensities at the pixel locations which are now
represented as addresses in the memory map that are in
the current overlap region. It can be seen that as the
offset increases, the overlap region gets smaller. The
variable OS is increased by one until the offset has
reached a maximum which theoretically could be equal to
the number of pixels, but, as a practical matter, is
limited to about 20% of a line (about 100 pixels) for a
typical 512 pixels/line camera. When this maximum is
reached, the software then operates on the variables QUO1
(OS,1) and QUO1 (OS,2).
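
A compact sketch of the sums-of-differences loop just described, assuming the two captured scan lines are equal-length sequences of pixel intensities; the direction in which one map is shifted relative to the other, and the 20% ceiling on the offset, are assumptions taken from the discussion above:

    def sums_of_differences(left_line, right_line, max_offset=None):
        """Build the QUO1 list of (offset, normalized sum of absolute differences)."""
        npix = len(left_line)
        if max_offset is None:
            max_offset = npix // 5              # about 20% of a 512-pixel line
        quo1 = []
        for os in range(max_offset + 1):
            acc = 0
            for pix in range(os, npix):         # pixels remaining in the overlap region
                acc += abs(left_line[pix] - right_line[pix - os])
            # Normalize by the shrinking overlap so larger offsets are not favoured.
            quo1.append((os, acc / (npix - os)))
        return quo1
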
In order to find the null location which will
ultimately represent the distance d which in turn repre-
sents the pixel offset, X is first made equal to zero and
OS is made equal to 1 (block 122). X is a null counter
variable and OS is the address of the second sum in the
sum of differences list which is the QUO1 array. QUO1 is
a two dimensional array containing the sums of differ-
ences and the offset values at which they were calcu-
lated. This array has now been normalized as indicated
above at block 116. Referring to blocks 124 and 126,
each sum in the QUO1 list is compared to the adjacent
sums in the list. If a sum is less than its adjacent
sums, it is recorded as a null, that is, X is incremented
as indicated at block 128. The null (labeled NULL) is
then placed in a two dimensional array along with its
offset value (block 130). The list pointer OS is incre-
mented at block 132 and the process is repeated until the
last unambiguous NULL candidate (the next to last sum in
the list) is evaluated as indicated at block 134.
FIG. 9d describes a search routine which
searches for a global NULL. The first NULL value in the
NULL list is placed in a register named GLNUL. The
corresponding offset is placed in a register named GNOS
(block 136). The NULL list pointer NL is then set to the
second value in the list where NL equals 2 (block 138).
This NULL value is compared to the previous NULL value
(block 140), and if the NULL value is less than the
previous NULL value, it replaces that value in the GLNUL
register and its offset replaces the value in the GNOS
register (block 142). NL is then incremented (block 145)
and the process is repeated until all X values in the
list have been tested (block 146). The NULL value that
survives in the GLNUL register will be the lowest value in
the NULL list, and its corresponding offset in the GNOS
register represents the pixel offset to the global NULL.
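
Continuing the sketch above, the null search of FIG. 9c and the global-null search of FIG. 9d can be expressed as follows, with quo1 being the (offset, normalized sum) list just built:

    def find_global_null(quo1):
        """Return the pixel offset (GNOS) of the global null, or None if no null exists."""
        # A null is a sum smaller than both of its neighbours in the list.
        nulls = [quo1[i] for i in range(1, len(quo1) - 1)
                 if quo1[i][1] < quo1[i - 1][1] and quo1[i][1] < quo1[i + 1][1]]
        if not nulls:
            return None
        gnos, _ = min(nulls, key=lambda entry: entry[1])   # lowest null wins
        return gnos
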
Knowing the pixel offset, the range to the
target can now be calculated. The method for doing so is
shown in FIG. 9e. The global NULL offset from the first
line pair is selected (block 148). Next, distance d is
defined as the offset of the global NULL divided by two
(block 150). Next, an angle θ is made equal to d times a
proportionality constant KPIX (block 152). The trigo-
nometric equation which yields the range is then solved
where R, which is the range, is equal to b/2 tan (90° - θ)
(block 154). This is performed for the first range
calculation and the program then loops back and performs
it for the second range calculation (blocks 156 and 158).
If the second range calculation has been performed (block
160), the range value is stored in memory (block 162).
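
The range step of FIG. 9e then reduces to a few lines; the per-pixel angular constant (written KPIX above) is treated here as a known calibration value in degrees per pixel, which is an assumption of this sketch:

    import math

    def range_from_global_null(gnos, b, kpix_deg_per_pixel):
        """Range for the parallel line-of-sight case: R = (b/2) tan(90 deg - theta)."""
        d = gnos / 2.0                          # block 150: half the total pixel offset
        theta = kpix_deg_per_pixel * d          # block 152: offset angle proportional to d
        return (b / 2.0) * math.tan(math.radians(90.0 - theta))

With, say, b = 0.5 m, KPIX = 0.01 degree per pixel and a global-null offset of 40 pixels, this gives a range of roughly 72 m; the values are purely illustrative.
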
Speed calculation is shown in FIG. 9f. The
time of capture is fetched from storage for the first and
second frame pairs, respectively, which represent the
values T1 and T2 (block 164). The speed is then calcu-
lated as the change in the values R1 to R2 as a function
of the difference in times (block 166). The speed may
then be stored and displayed along with the calendar date
and other alphanumeric information generated by the real
time clock and calendar 46 (block 168). Once this
occurs, the "command received" flag is cleared and the
program returns to start (block 170).
A second embodiment of the invention is shown
in FIGS. 10 and 11a through 11c. The hardware configura-
tion for this embodiment is the same as described above
in FIGS. 1, 2 and 3. What is different about this alter-
native embodiment, termed the "edge detection" method, is
the way in which software present in the computer 30
controls the acquisition of video data from the cameras
and processes that data to determine the angle between
the right- and left-hand cameras and the target.
Referring to FIG. 10, the underlying theory of
the edge detection method is that a moving target will
change position in the overlapping fields of view of the
two narrow field-of-view cameras while the background
will remain essentially the same. By eliminating back-
ground video data, the right-hand and left-hand edges of
the moving target may be determined and the angle to the
target may then be calculated. In FIG. 10 a line of
video data termed LINE 1 shows an image in the field of
view of the right- and left-hand cameras. LINE 2 is a
video image of the same field of view but taken at a
later time. As the diagram shows, the video line images
are nearly identical except for the apparent movement of
an object which is presumed to be a moving target image.
If LINE 2 is subtracted from LINE 1, the background video
information is eliminated and the edges of the moving
target are seen as video pulses that exceed predetermined
positive and negative thresholds. This locates the edges
of the moving target, and, armed with this information,
the system can calculate the location of the center of
the moving target and thus the angle to the target.
This method is illustrated in the flow chart
diagram shown in FIGS. 11a through 11c. At start (block
172) the cameras capture left-hand and right-hand images
of the target (block 174). The time of capture is
recorded (block 176) and the process is repeated at three
different times until three line pairs are captured
(blocks 178 and 180). The system then subtracts the
first two images pixel by pixel to form a differential
line image (block 182). Next, the root mean square (rms)
value of each differential line is calculated (block
184). The same process then occurs for the second and
third line image pair (blocks 186 and 188). The system
then sets positive and negative thresholds for each
differential line image (blocks 190 and 192) as a func-
tion of the rms values. From these operations a "thresh-
olded" differential line is obtained (block 194). Up to
this point the system has obtained data representing a
differential video line image that corresponds to the
bottom waveform of FIG. 10. The system must then
calculate the position of respective left-hand and right-
hand edges. It does so by starting with the first pixel
in the line and determining whether that pixel is above
the positive threshold or below the negative threshold
(blocks 196, 198 and 200). Depending upon the decisions
at blocks 198 and 200, pixel values are either given a
positive 1, a negative 1, or a zero value (blocks 202,
204 and 206). This process repeats until the last pixel
on the line is reached (blocks 208 and 210). Referring
to FIG. llc, once the pixel values have been obtained and
stored in memory, the software searches for the first
non-zero pixel (block 212) and this pixel number is
recorded as a first edge of the target image (block 214).
The system then searches for the last non-zero pixel
(block 216) and records this pixel number as the second
edge of the target image (block 218). The target image
center location may then be calculated (block 220). This
calculation yields a pixel offset number which may then
be converted into an angle in the manner described in
FIG. 9e (block 222). Next, the range to the target for
each differential line pair may be calculated (block 224)
and, with this information, the system is able to
calculate the speed (block 226) in the manner shown in
FIGS. 9e and 9f.
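
A sketch of the differential-line edge detection just described, assuming two scan lines captured at different times are given as equal-length lists of intensities, and assuming the thresholds are set at plus and minus the rms value of the differential line (the exact threshold rule is a choice made here, not taken from the flow chart):

    import math

    def target_center_from_edges(line_1, line_2):
        """Pixel index of the moving target's centre, or None if no edges are found."""
        diff = [a - b for a, b in zip(line_1, line_2)]      # background cancels, edges remain
        rms = math.sqrt(sum(v * v for v in diff) / len(diff))
        # +1 above the positive threshold, -1 below the negative threshold, else 0.
        marked = [1 if v > rms else -1 if v < -rms else 0 for v in diff]
        nonzero = [i for i, v in enumerate(marked) if v != 0]
        if not nonzero:
            return None
        first_edge, last_edge = nonzero[0], nonzero[-1]     # first and last non-zero pixels
        return (first_edge + last_edge) / 2.0               # centre of the target image
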
It should be noted that the edge detection
method does not assume that the target image includes a
point centered midway between the left-hand and right-
hand cameras. In fact, the more general geometric
relation shown in FIG. 6 applies. Thus, in order to
determine the range R, θLH and θRH must both be known.
In calculating these angles, the only approximation made
is that θ is directly proportional to the pixel offset
which represents the midpoint between the two target
edges. This will hold true when R is much greater
than b.
The angle θ for both the right-hand and left-
hand cameras is calculated by determining the displace-
ment of the target image center from the line of sight.
The line of sight for both the right-hand and left-hand
cameras is the pixel position which corresponds to the
center of the image line scan. By simply counting from
the center of the line scan to the calculated position of
the center of the target image, a distance d can be
obtained for both the right-hand and left-hand cameras
and θ will then be equal to a proportionality constant
multiplied by the distance d. It should also be observed
that with reference to FIG. 6 angle α is equal to 90°
minus θLH and γ is equal to 90° minus θRH. The range
is, therefore, calculated according to the following
expression:
R = √[(a sin γ)² + (c cos α - b/2)²]
These calculations are performed by the software at
blocks 222 and 224 of FIG. 11c. The speed is then calcu-
lated according to the method previously described above
at block 226.
The edge detection method works best for
objects travelling parallel to the baseline b across the
field of view of the cameras. However, the system will
also work for objects entering the field of view in a
direction perpendicular to the baseline or at an obtuse
angle. Adjustments in the calculation algorithm may be
made to include a trigonometric function which accounts
for differences in the angular orientation of the moving
target with respect to the field of view. This will be a
known quantity which can be represented by a constant,
since for applications such as traffic control, the
moving objects travel in an undeviating straight line
across the field of view.
Yet a third embodiment of the invention is
shown in FIGS. 13, 14, 15 and 16. Referring to FIG. 12
it can be seen that in the preferred embodiment of the
invention, the video scan lines for the left-hand and
right-hand camera lenses are parallel to the baseline.
(The fields of view of the left-hand and right-hand
camera lenses are offset vertically for purposes of
illustration only.) A different configuration is shown
in FIG. 13 in which the cameras are mounted along a
vertical baseline a predetermined distance apart and the
scan lines extend perpendicular to the baseline. The
cameras could also be mounted in the horizontal plane but
in such a case the scan lines on the light sensitive
device would be rotated 90°. The difference in
performance is illustrated in FIG. 14. In this embodi-
ment the pixel maps consist of complete lines of video
data instead of fractions of lines. Resolution is higher
for this embodiment because a complete line of video data
takes the place of a fraction of a line that can be as
short as 100 pixels. There may be as many as 489 lines
having at least 500 pixels per line in conventional
camera systems which would employ this technique. As
with the single line scan embodiment, an offset distance
d will be determined which represents an offset from the
center of the field of view to the estimated location of
the target image. Although resolution of the global null
for this embodiment is better, processing time is slower
because of the need to scan an entire frame consisting of
as many as 489 lines, and the memory for storing the
video data must be made larger in order to accommodate
the additional information.
FIG. 15 shows the system configured with an
upper camera 230 and a lower camera 232 mounted along a
vertical baseline and separated by a distance b. The
upper camera 230 includes a field of view that extends
from L1 UPR to L489 UPR. The lower camera includes a
field of view that extends from L1 LWR to L489 LWR. At
each range there is an overlap region which is shown
between the vertical arrows on the right side of the
drawing. The drawing shows the overlap region at a range
R2. The dashed lines extending from the upper and lower
cameras 230 and 232, respectively, indicate the position
of an arbitrarily chosen scanning line, termed herein-
after a "template" line. This line is labeled L TEMP
(LWR) in FIG. 15. The template line is an image scan
line containing at least a portion of the target image at
any arbitrary range. The pixel intensities from all
lines (the entire frame) of the upper camera are compared
to the template line. (Only the pixel intensities from
the template line of the lower camera are mapped in com-
puter memory.) The upper camera output is then shifted
and compared, line by line, to the single template line
from the lower camera frame map until the image line
locations correlate. This occurs at a total shift of 2d
lines. This relationship is illustrated as shown in
FIG. 14 where a selected line in the upper camera field
of view is shifted a distance of 2d lines in order to
correlate with the template (dark) line in the lower
camera field of view. The software processing for this
embodiment is very similar to that shown in FIGS. 9a-9f.
The exception is that the flow chart diagram of FIG. 16
is substituted in this case for the flow chart diagram of
FIG. 9b.
Referring to FIG. 16 the offset is made equal
to the template line (block 234). ACC is set equal to 0
(block 236) and PIX is set equal to 1 (block 238). ACC
is then made equal to the absolute difference between an
arbitrary line in the upper field of view and the
template line (block 240). The variable PIX is incre-
5 mented and the process is repeated until PIX equals NPIX
(blocks 242 and 244). The variable PIX is an array
pointer variable common to both arrays and thus points to
the like numbered pixel in both arrays. The line pointer
for the upper map is the variable OS whereas the line
pointer for the lower array is a constant (TEMP). QUO1
is a two dimensional array containing the sums of differ-
ences and the offset values at which they were calcu-
lated. Note that the sums of differences in this case do
not need to be normalized since all are made up of the
same number of differences. The process proceeds until
the maximum possible offset is reached and all values of
QUO1 have been calculated (blocks 246, 248, 250 and 252).
The program then goes to FIG. 9C to calculate the nulls
and to find the global null. The only difference will be
in the value of the proportionality constant KPIX which
is now made a function of the distance between the
centers of adjacent lines instead of between the centers
of adjacent pixels.
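
Expressed as a sketch, the template-line correlation of FIG. 16 compares one lower-camera line against every line of the upper-camera frame; the frame is assumed to be a list of scan lines of equal length, no normalization is needed because every comparison spans a full line, and a simple minimum stands in here for the null search of FIGS. 9c and 9d:

    def line_offset_by_template(upper_frame, template_line):
        """Line offset at which the upper-camera frame best matches the template line."""
        sums = []
        for os, upper_line in enumerate(upper_frame):
            acc = sum(abs(u - t) for u, t in zip(upper_line, template_line))
            sums.append((os, acc))              # QUO1-style (offset, sum of differences)
        best_os, _ = min(sums, key=lambda entry: entry[1])
        return best_os
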
In addition to the three foregoing signal
processing methods for operating on the captured video
data, several hardware modifications are also possible
without departing from the spirit of the invention.
The right- and left-hand cameras used to
produce the line images for processing according to the
three methods described above may be consolidated into a
single camera. FIG. 17 shows a schematic view of such an
arrangement. In FIG. 17 a left-hand lens 260 and a
right-hand lens 262 focus light through a pair of reflec-
tive prisms 264 and 266, respectively. These prisms
direct light onto a stacked prism arrangement 268 having
an upper and lower prism 268a and 268b, respectively.
The upper and lower prisms 268a and 268b reflect the
light onto different regions of a single CCD line or area
camera chip 270. The configuration of FIG. 17 is useful
primarily for the single line scanning method of FIG. 7
and the edge detection method of FIG. 10.
A configuration appropriate to the multiple
line scanning method is shown in FIG. 19. In this figure
the lenses are mounted side-by-side but the line scanning
is conducted in a plane perpendicular to the plane of the
drawing. Light from the object enters a left-hand lens
272 and is reflected off of mirrors 274 and 276 onto a
CCD 278. A small area in the top of the field of view of
the lens 272 is chosen and dedicated to the left-hand
lens. This area is shown between the arrows and repre-
sents the template line scanning area. The rest of the
CCD 278 may be scanned by light from the right-hand lens
280 reflected from mirror 282 and prism 284. Thus, for
both the embodiments of FIG. 17 and FIG. lg a single
camera may be constructed which includes two lenses
mounted side-by-side or one on top of the other along a
common baseline and the light from both lenses may be
imaged onto a single light sensitive device.
The signal processing for at least a portion
of the "baseline perpendicular" multiple line scanning
embodiment (FIGS. 13-16) may also be employed as shown in
FIG. 18. In FIG. 18 a digital preprocessor replaces the
functions performed by the computer in FIG. 16. Addi-
tionally, the computer hardware complement of FIG. 3 is
altered so that frame grabbers 32 and 34 are no longer
needed. This digital preprocessor may be constructed
from commonly available off-the-shelf components and
significantly reduces the computational requirements of
the computer. Range measurements can, therefore, be made
more frequently thus improving the accuracy of the speed
computation. Also, less RAM is required in the computer.
The sums of differences calculations described in the
flow chart of FIG. 16 are performed by the preprocessor
of FIG. 18 and the results are transferred to a computer
interface 318. The upper camera, termed the template
camera 300, transfers video to an eight bit analog-to-
digital converter 302. The output of the eight bit ADC
is provided to a 512 (pixel) by eight bit memory 304.
The lower or frame camera 306 transfers video to its
eight bit ADC 308. Both cameras can be RS 170 standard
video cameras synchronized by an RS 170 sync generator
310. Clocking and timing circuits 312 cause digital
video data representing the template line to be stored in
the memory and clocked out later and summed with the
output of ADC 308 in a sums of differences computation
device termed a complementor/adder/accumulator 314.
Computation device 314 is programmed to operate in
accordance with the program outlined on FIG. 16 and
provides the two dimensional sums of differences array to
489 x 8-bit memory 316. These data are transferred to a
computer interface 318. The computer then performs the
null search routine and range and speed calculations
described in FIGS. 9c through 9f.
The speed calculation for the invention has
been described with reference to a pair of range measure-
ments made at times T1 and T2. For situations requiring
more accuracy, however, multiple range measurements Rn may
be made at multiple times Tn and a linear regression
algorithm may be applied to the multiple range measure-
ments in order to determine the speed of the target. The
linear regression method is an accepted and widely used
statistical method for fitting a straight line to a
plurality of points plotted in cartesian coordinates. In
this system the range would be the ordinate and time
would be the abscissa. The linear regression method may
be implemented by storing the measured ranges Rn and their
associated times Tn in a two dimensional array in computer
memory. Speed may then be determined based upon the
magnitude and slope of the line that is fitted to the
group of cartesian coordinates. If the algebraic sign of
the slope is positive, the target is receding. If
negative, the target is approaching. A confidence level
which is based upon the square of the range coordinates
may also be calculated.
Referring to FIG. 21, an array of range
measurements taken at a plurality of times is shown in
chart form. The range measurements are shown in column 1
and comprise R1, R2 . . . Rn taken at the times shown in
column 2 of T1, T2 . . . Tn. The number of the measure-
ment is designated by the row number and extends from 1
to RCMAX. The row number indicates an address in memory
at which the two dimensional column array is stored.
FIG. 20 shows a general flow chart diagram of
the process of the invention using a linear regression
method to calculate the velocity of the moving target.
This chart is similar to the chart of FIG. 4. Blocks 62,
64, 66, 68 and 70 are the identical process steps shown
in FIG. 4. The difference from the process illustrated
in FIG. 4 begins at block 330. At this point the process
of blocks 62 through 70 is repeated at later times to
obtain a plurality of range and time measurements Rn at
times Tn. At least three such measurements are obtained
(and more could be obtained) depending upon the accuracy
desired. Blocks 332, 334 and 336 illustrate the linear
regression calculation that yields information regarding
the speed of the target. In block 332, a linear regres-
sion calculation is undertaken to fit the best straight
line to the cartesian data points represented by range
and time. In block 334 the slope of this line is calcu-
lated. The speed of the target is directly proportional
to this slope. At block 336 a confidence level may be
calculated. The results of the calculations of blocks
332, 334 and 336 are displayed at block 338.
Referring now to FIGS. 22A and 22B, a
mathematical method implemented by the computer 30 is
shown which performs the calculations of blocks 332, 334
and 336. At block 340, variables X and Y are initial-
ized. A variable entitled "ROW" is set at 1 in
block 342. Using variables X, Y and ROW, a linear
regression straight line fit for the range and time
coordinates represented by variables X and Y is calcu-
lated in block 344 in the classical manner. In block
346, the variable ROW is incremented by 1. At block 348,
the program loops back to block 344 as long as the vari-
able ROW is not equal to RCMAX plus 1. When ROW is equal
to RCMAX plus 1, the program next calculates R based upon
the sums of variables X and Y (block 350). At block 352,
a confidence level represented by R2 is calculated. The
calculation in block 354 represents the slope of the
straight line which represents the linear fit to the
cartesian coordinates X, Y. The speed is calculated in
block 356 by simply multiplying the slope by a constant.
In block 358, the speed and confidence level are
displayed.
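
A sketch of the linear-regression speed estimate, assuming the stored measurements are two lists holding the times Tn (abscissa) and ranges Rn (ordinate) in consistent units; the slope of the fitted line is the speed (any unit-conversion constant of block 356 is omitted) and the squared correlation coefficient serves as the confidence level:

    import math

    def speed_by_linear_regression(times, ranges):
        """Least-squares fit of range against time; returns (speed, r_squared)."""
        n = len(times)
        sum_x, sum_y = sum(times), sum(ranges)
        sum_xx = sum(x * x for x in times)
        sum_yy = sum(y * y for y in ranges)
        sum_xy = sum(x * y for x, y in zip(times, ranges))
        # Slope of the fitted line: positive = receding target, negative = approaching.
        slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
        r_num = n * sum_xy - sum_x * sum_y
        r_den = math.sqrt((n * sum_xx - sum_x ** 2) * (n * sum_yy - sum_y ** 2))
        r_squared = (r_num / r_den) ** 2 if r_den else 0.0
        return slope, r_squared
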
In order to save computation time for certain
applications in which fine resolution of the target is
not required, correlation between pixel maps may be done
at every nth pixel in the overlap region instead of at
each pixel offset. The minimum overlap region is about
100 pixels and can extend to 512 pixels. When correla-
tion is performed only with each nth pixel, the selected
value of n represents a trade off between the maximum
spatial frequency resolvable with the algorithm and the
time required to perform the algorithm. For a given
processor, this is a trade off between the accuracy of
each range measurement and the number of range measure-
ments that can be made in a given time. Thus, when time
exists to make more than 2 range measurements on a
target, the linear regression technique described above
can be used.
In order to implement this variation, the
variable PIX plus 1 in block 110 of FIG. 9b would be
changed to PIX plus PIXSTEP where PIXSTEP equals n. In
block 116, the right hand side of the equation would
change to PIXSTEP * ACC/(NPIX minus OS). In FIG. 16, block
242, the variable PIX plus 1 would change to PIX plus
PIXSTEP. This technique provides a coarser resolu-
tion of the target but is much faster since only every
nth pixel must be correlated. This permits a larger
number of range measurements which would yield greater
range accuracy where a technique such as linear
regression is used.
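
The every-nth-pixel variant only changes the stride of the inner loop in the earlier sums-of-differences sketch; a hedged version, with PIXSTEP as the chosen value of n:

    def sums_of_differences_strided(left_line, right_line, pixstep, max_offset=None):
        """Coarser, faster variant: only every PIXSTEP-th pixel in the overlap is compared."""
        npix = len(left_line)
        if max_offset is None:
            max_offset = npix // 5
        quo1 = []
        for os in range(max_offset + 1):
            acc = sum(abs(left_line[pix] - right_line[pix - os])
                      for pix in range(os, npix, pixstep))
            # Scale by PIXSTEP so the normalized sums remain comparable.
            quo1.append((os, pixstep * acc / (npix - os)))
        return quo1
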
Although the invention has been described with
reference to detection systems for detecting the speed of
moving vehicles, it should be understood that the inven-
tion described herein has much broader application, and
in fact may be used to detect the range to a stationary
object, the speed of any moving object and/or relative
motion between moving or stationary objects. For
example, the invention may be incorporated in a robotics
manufacturing or monitoring system for monitoring or
operating upon objects moving along an assembly line.
Another important application is a ranging device used in
conjunction with a weapons system for acquiring and
tracking a target. Yet another application is a spotting
system used to detect camouflaged objects which may be in
motion against a static background. The edge detection
embodiment disclosed above is especially useful for this
purpose. Other possible uses and applications will be
apparent to those skilled in the art.
The terms and expressions which have been
employed in the foregoing specification are used therein
as terms of description and not of limitation, and there
is no intention, in the use of such terms and expres-
sions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that
the scope of the invention is defined and limited only by
the claims which follow.





Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Title Date
Forecasted Issue Date 2001-03-27
(86) PCT Filing Date 1995-01-18
(87) PCT Publication Date 1996-07-25
(85) National Entry 1997-07-04
Examination Requested 1998-02-18
(45) Issued 2001-03-27
Deemed Expired 2004-01-19

Abandonment History

There is no abandonment history.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document - section 124 $100.00 1997-07-04
Application Fee $150.00 1997-07-04
Maintenance Fee - Application - New Act 2 1997-01-20 $50.00 1997-07-04
Maintenance Fee - Application - New Act 3 1998-01-20 $50.00 1997-07-04
Request for Examination $200.00 1998-02-18
Maintenance Fee - Application - New Act 4 1999-01-18 $50.00 1999-01-15
Maintenance Fee - Application - New Act 5 2000-01-18 $75.00 2000-01-07
Final Fee $150.00 2000-12-13
Maintenance Fee - Application - New Act 6 2001-01-18 $75.00 2001-01-05
Maintenance Fee - Patent - New Act 7 2002-01-18 $75.00 2002-01-03
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
HARDIN, LARRY C.
Past Owners on Record
NASH, LAWRENCE V.
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Drawings 1997-07-04 26 426
Abstract 1997-07-04 1 56
Claims 1997-07-04 5 160
Description 1997-07-04 26 1,337
Cover Page 2001-02-16 1 51
Representative Drawing 2001-02-16 1 7
Cover Page 1997-10-06 1 51
Representative Drawing 1997-10-06 1 9
Prosecution-Amendment 1999-07-21 1 31
Prosecution-Amendment 1999-11-15 1 24
Prosecution-Amendment 1998-02-18 1 40
PCT 1997-07-04 52 2,267
Correspondence 2000-12-13 1 46
Assignment 1997-07-04 3 157