Patent 1254651 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 1254651
(21) Application Number: 455898
(54) English Title: STEREOPHOTOGRAMMETRIC SURVEYING AND INTERPRETING METHOD, AND INTERPRETING APPARATUS
(54) French Title: METHODE ET DISPOSITIF STEREOPHOTOGRAMMIQUE D'ARPENTAGE ET D'INTERPRETATION DES LEVES
Status: Expired
Bibliographic Data
Abstracts

English Abstract


ABSTRACT
A new digital procedure for generating and processing scanner-
images is presented. The terrain is scanned by an opto-electronic three-
line scan camera from aircraft, missiles or spacecraft. Three linear
sensor arrays are arranged in the focal plane of the camera objective per-
pendicularly to the flight course. Each sensor array produces an image
strip of the covered terrain according to the push broom principle. Nodes
of the digital elevation model (DEM) to be computed are selected in the
middle image strip whose object planes are nearly vertical. The conjugate
image points in the other two image strips are determined by area correlation
methods. The coordinates of all these image points and a few control points
are inserted into a least squares adjustment for computing the orientation
parameters of the camera along its entire flight course and the coordinates
of the digital elevation model. Rasterplots of ortho- and stereo-ortho-
photos are produced after the digital rectification of the image strips,
utilizing the nodes of the DEM grid.


Claims

Note: Claims are shown in the official language in which they were submitted.


THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:

1. In a stereophotogrammetric surveying and evaluation
method to obtain orientation data of a camera flying over a terrain,
and a digital terrain elevation model, which camera is not
attitude controlled, said camera comprising three sensor lines
A, B, C, arranged substantially transversely to the flight direc-
tion in the image plane of the camera for continuous line by line
scanning of the terrain flown over, and generation of over-
lapping line image strips AS, BS, CS, taken always from a
different perspective, the line images each consisting of a
plurality of adjacent image points, the improvement comprising:
(a) always using all three line sensors and thereby
generating three overlapping line image strips AS, BS, CS,
(b) synchronizing the line image generation of the
sensor lines A, B, C and registering the consecutive numbers N of
the line images,
(c) selecting in one of the line image strips image points,
arranged mesh-like and corresponding to the points of the digital
terrain elevation model and determined by means of area correlation
in the two other image strips the corresponding (homologous)
image points and their image coordinates x, y and the associated
line image numbers NA, NB, NC,
(d) determining by means of spatial intersection the
digital elevation model coordinates X, Y, Z using the approxi-
mately known flight movements for each digital elevation model
point with its associated line image numbers NA, NB, NC, the
approximate orientation parameters of said camera and the image
point coordinates xA, yA and xB, yB and xC, yC, establishing
beam intersection conditions for image point beams, belonging
to each digital elevation model point, which image point beams
are defined by the digital altitude model point, the positions
of the perspective center of said camera corresponding to the
line image numbers NA, NB, NC, and the image points, located on
the sensor lines A, B, C with the respective x and y coordinates,
wherein the orientation parameters are represented as functions
of update points which are arranged in certain intervals along
the flight path, that error equations are established according
to indirect observations, and that the most probable and final
values of the orientation parameters in the update points and the
digital elevation model coordinates are determined by means
of a least-squares adjustment process.

2. A method according to claim 1, characterized in
that all image points within a digital elevation model mesh
are transformed by means of transformation procedures from the
line strips AS, BS, CS into the ground plane, wherein for the
transformation of a digital elevation model mesh always the
corresponding digital elevation model points in the image strips
AS, BS, CS and their point positions PA, PB, PC, projected into
the ground plane, are used and in this manner orthophotos and
stereo partners are produced.


Description

Note: Descriptions are shown in the official language in which they were submitted.



The invention relates to a stereophotogrammetric
surveying and evaluation method to obtain orientation data of
a camera flying over a terrain, and of a digital terrain elevation
model, using three sensor lines A,B,C, which are arranged trans-
versely or obliquely to the flight direction, in the image plane
of the camera for the continuous line by line scanning of the
terrain flown over, and generation of three overlapping line image
strips As, Bs, Cs which are taken always from a different perspec-
tive, the line images each consisting of a plurality of adjacent
image points.
It is known that with opto-electronic cameras in which
three sensor lines A, B, C are associated with an optic element
(German Offenlegungsschrift 29 40 871 published on April 23, 1981
in the name of Messerschmitt-Bölkow-Blohm GmbH) and their lines
are arranged transversely to the flight direction or (according to
German Offenlegungsschrift 30 43 577 published on June 3, 1982 in
the name of Messerschmitt-Bölkow-Blohm GmbH) at a specific
angle to one another, simultaneously three image strips As, Bs, Cs
can be produced. By conjunction of a new line image to the line
image sequence which is already assumed as known in its
orientation, this line image conjunction can be extended as desired.
The generation and supply of this line image conjunction, assumed
as known, is however still connected with difficulties and also
the stepwise addition of always one new line image is not
advantageous from the viewpoint of error theory.
It is therefore the task of the invention to provide a
method of the type described above with which the entire line
image conjunction and the surveyed terrain surface can be
unconditionally reconstructed with information, originating
exclusively from this image survey itself, in a uniform
closed method.
When here the reconstruction of the camera orientation
along the flight path and the terrain is mentioned, then the
following terms and definitions are used which are in part
already known from stereophotogrammetry:
a) Under the outer orientation of a camera one
understands its position coordinates X0, Y0, Z0 and its three
tilt components ω, φ, κ at the time of the instantaneous
exposure. Basically these definitions are also valid for the
electro-optical line camera wherein for each line image number
N an orientation set consisting of six orientation parameters
is provided. The line duration is electronically exactly set
and is in the magnitude of approximately 1/100 sec.
Each line period N generates synchronously three image
lines of the image strips As, Bs, Cs and, due to the exactly
known line period, an exact moment of exposure results for each
consecutive line number N. The orientation parameters of the
camera change with the consecutive line number N corresponding
to the flight movement, and from the line number N the
orientation parameters of the camera can be approximately
determined when the flight movements are approximately known.
It is not necessary for the present task that the
orientation parameters of each line period N are determined because
the change of the orientation parameters results more or less
continuously and it is therefore sufficient to determine the six
orientation parameters along the flight path at specific
intervals, i.e. update intervals. These points are called below
the update points. Orientation parameters located in between
could be determined if required as a function of the orientation
parameters of the adjacent update points, for instance by linear
interpolation. The magnitude of the orientation interval depends
on the "waviness" of the flight movements and the desired accuracy
of reconstruction.
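As a rough illustration of the interpolation just described, the following Python sketch linearly interpolates the six orientation parameters between update points; the update-point spacing, the parameter values and the function name are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: linear interpolation of the six orientation parameters
# between update points, as suggested above.  All numbers are invented.
import numpy as np

def interpolate_orientation(update_lines, update_params, n):
    """update_lines: sorted 1-D array of line numbers N_j of the update points.
    update_params: array of shape (len(update_lines), 6) holding
                   (X0, Y0, Z0, omega, phi, kappa) per update point.
    n: line image number for which an orientation set is wanted.
    Returns the six linearly interpolated orientation parameters."""
    update_lines = np.asarray(update_lines, dtype=float)
    update_params = np.asarray(update_params, dtype=float)
    # Interpolate each of the six parameters independently.
    return np.array([np.interp(n, update_lines, update_params[:, k])
                     for k in range(6)])

# Example: update points every 500 line periods.
lines = [0, 500, 1000]
params = [[0.0, 0.0, 3000.0, 0.0, 0.0, 0.0],
          [50.0, 2500.0, 3005.0, 0.01, -0.005, 0.002],
          [95.0, 5000.0, 2998.0, -0.01, 0.004, 0.001]]
print(interpolate_orientation(lines, params, 730))
```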
b) The same applies for the terrain to be reconstructed
which is represented by a so-called digital terrain elevation or
altitude model. It consists of regularly or irregularly arranged
ground points whose horizontal coordinates X, Y and their vertical
coordinates Z are to be determined. The point density to be
selected depends on the "waviness" of the terrain and the accuracy
requirement with which the terrain is to be represented. Also
in this case, ground points located in between can be determined
if required by interpolation.
Each ground point and consequently also each digital
elevation model point, to be selected during evaluation, is
projected during flying over with the camera at three different
moments or consecutive line periods NA, NB, NC at three differ-
ent locations (with always one orientation set, consisting of six
parameters) on the three sensor lines A, B, C and there generates
an image point to which always the image coordinates x, y in the
image plane are assigned (Figure 1).
It is the task of the invention to determine merely
from these image coordinate pairs x, y and their associated
consecutive line period numbers N the orientation parameters X0,
Y0, Z0, ω, φ, κ of the update points and the terrain coordinates
X, Y, Z of the digital elevation model points, as well as to
produce the rectified plan position of each individual image point
(orthophotos) and rectified stereographs (stereo-partners).
According to the invention there is provided, in a
stereophotogrammetric surveying and evaluation method to obtain
orientation data of a camera flying over a terrain, and a digital
terrain elevation model, which camera is not attitude controlled,
said camera comprising three sensor lines A, B, C, arranged
substantially transversely to the flight direction in the image
plane of the camera for continuous line by line scanning of the
terrain flown over, and generation of overlapping line image strips
As, Bs, Cs, taken always from a different perspective, the line
images each consisting of a plurality of adjacent image points,
the improvement comprising:
(a) always using all three line sensors and thereby genera-
ting three overlapping line image strips As, Bs, Cs,
(b) synchronizing the line image generation of the sensor
lines A, B, C, and registering the consecutive numbers N of the
line images,
(c) selecting in one of the line image strips image points,
arranged mesh-like and corresponding to the points of the digital
terrain elevation model and determined by means of area correlation
in the two other image strips the corresponding (homologous)
image points and their image coordinates x, y and the associated
line image numbers NA, NB, NC,
(d) determining by means of spatial intersection the
digital elevation model coordinates X, Y, Z using the approxi-
mately known flight movements for each digital elevation model
point with its associated line image numbers NA, NB, NC, the
approximate orientation parameters of said camera and the image
point coordinates xA, yA and xB, yB and xC, yC, establishing
beam intersection conditions for image point beams, belonging
to each digital elevation model point, which image point beams
are defined by the digital altitude model point, the positions
of the perspective center of said camera corresponding to the
line image numbers NA, NB, NC, and the image points, located on
the sensor lines A, B, C with the respective x and y coordinates,
wherein the orientation parameters are represented as functions
of update points which are arranged in certain intervals along
the flight path, that error equations are established according
to indirect observations, and that the most probable and final
values of the orientation parameters in the update points and the
digital elevation model coordinates are determined by means
of a least-squares adjustment process.
The invention is described in greater detail with refer-
ence to the accompanying drawings, in which:
Figure 1 shows the surveying process with a three sensor
line camera. The terrain point (digital elevation model point) P
is imaged in the line period NA onto the sensor line A (image
coordinates xA, yA), and in the line periods NB and NC on the
sensor lines B (xB, yB) or C (xC, yC).
Figure 2 shows the three line image strips As, Bs, Cs
with the individual image lines and the digital altitude or ele-
vation model image points, selected in the strip Bs and corre-
lating in the strips As and Cs.
Figure 3 shows a digital elevation model point with
its points PA, PB, PC, projected onto the ground plane.
Figure 4 shows schematically the image coordinates of
the digital photogrammetric system.
Figures 5(a) and (b) are side elevation and plan views,
respectively, of the computer model of the digital photogrammetric
system.
Accordingly in a first process a) three image strips are
taken synchronously with the opto-electronic line camera, wherein
a cycle counter registers the consecutive line periods or image
line numbers N of the synchronized sensors A, B and C and the
discrete image point signals, associated with the line images, are
preferably stored in a digital manner.
In a second process b), preferably in the image strip
BS generated by the center sensor line B, mesh-like arranged

image points which correspond to the digital elevation model
points are selected according to specific criteria with the
computer and, by means of area correlation in the two other
image strips AS and Cs, always the corresponding image points
of the same digital elevation model point are found (Figure 2).
The following is additionally commented on this: each individual
image point of a line image strip, for instance As, is always
defined by the line image number N and a discrete sensor element
(sensor pixel) which lies on the sensor line and whose brightness
signal is registered. Since the position of the sensor line in

the image plane of the camera and the position of each pixel
in the sensor line is known, the x and y coordinates in the image
plane can be calculated for each sensor pixel. The image strips
therefore are composed in the flight direction of the consecutive
line images N and in the line direction of the consecutive
pixels of a line or a sensor and result in an image point matrix

with which the surface correlation takes place. The pixel number
within the respective sensor line defines clearly the image

coordinates x, y and the line image number N defines the position
of the camera for this moment of exposure and the associated
orientation parameters. As a result of this process, there is a
list which comprises usually for each digital elevation model point
three (at the beginning and the end of the strip only two)
points, associated with the sensor lines A, B or C, and their
coordinate pairs x, y as well as the assigned line image numbers
NA, NB, NC.
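The result list described above can be pictured, purely for illustration, as one record per digital elevation model point holding up to three image observations; the Python structure below is an assumed sketch of such a record, not the patent's own data format.

```python
# Minimal sketch (illustrative only) of the correlation result list:
# for each selected DEM point, up to three image observations on the
# sensor lines A, B, C with their line image numbers and coordinates.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImageObservation:
    sensor: str        # 'A', 'B' or 'C'
    line_number: int   # consecutive line image number N
    x: float           # image coordinate x in the image plane
    y: float           # image coordinate y in the image plane

@dataclass
class DemPointRecord:
    point_id: int
    observations: List[ImageObservation] = field(default_factory=list)

# One DEM point imaged on all three sensor lines (invented values):
record = DemPointRecord(
    point_id=17,
    observations=[
        ImageObservation('A', 1203, -0.4, 23.1),
        ImageObservation('B', 1251,  0.0, 22.8),
        ImageObservation('C', 1298,  0.5, 22.5),
    ],
)
```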
In a third process c), based on the approximately

known flight movements (for instance a constant speed, constant
flight direction and flight altitude and a normal flight attitude
ω = φ = κ = 0 are assumed) for each digital elevation model point
from the associated line image numbers or moments of exposure
NA, NB, NC approximate orientation parameter sets (always X0, Y0,
Z0, ω, φ, κ) are calculated and with those and with the associated
image point coordinates xA, yA, and xB, yB and xC, yC
approximate digital elevation model coordinates X, Y, Z are
calculated with the aid of the spatial intersection.
In a fourth process d), beam intersection conditions

for each digital elevation model point are established, wherein
one can either utilize the so-called coplanarity conditions
or the colinearity equations. They contain the observed image
coordinates x and y, the approximately calculated orientation
parameters for the associated moment of exposure N which are

represented as functions of the orientation parameters in the

update points, and the approximate digital elevation model
coordinates. Error equations according to indirect observations
are established in a known manner and, by a least-squares adjustment,
the most probable and final values of the orientation parameters
and the digital elevation model points in any local system of
coordinates and scale are determined. By introducing a few
so-called control points, this model can be inserted in a known
manner into a superposed geodetical or geographical system of
coordinates and can be orientated absolutely.
It is to be noted that, as already described, to
each digital elevation model point usually three image beams
or rays are assigned which are defined by this digital elevation
model point, the positions X0, Y0, Z0 of the perspective center
(lens) of the camera in the three moments of exposure NA, NB and
Nc, as well as the corresponding three image points x, y on the
sensor lines A, B, C. The condition that the beams intersect in
the digital elevation model point can be mathematically fulfilled
with the so-called coplanarity condition or with the aid of the
so-called colinearity equations.
With the aid of the digital elevation model coordin-
ates found in this manner and the corresponding image coordinates
in the image strip Bs, now all image points of the image strip
BS can be transformed mesh-like onto the correct distortion-
free ground plane position by means of known transformation
methods (Figure 3).
In the same manner all image points of the image
strips As and Cs can be transformed so that distortion-free
stereo partners result. This takes place in that the digital
elevation model points P are projected by means of an oblique
parallel projection onto the ground plane according to PA and PC
(Figure 3). Due to the mesh-like transformation of all image
points of the strips AS and Cs into this ground plane position,
distortion-free stereographs are produced. For calculation
of the transformation parameters always the mesh points in the
image strips As, Bs, Cs and the points PA, PB, PC, projected
into the ground plane, are used.
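As a hedged illustration of this mesh-wise rectification, the sketch below fits a simple affine transformation per mesh from corresponding points (mesh corners in an image strip and their positions projected into the ground plane) and applies it to interior image points; the affine model, the numbers and the helper names are assumptions for illustration only, not the patent's exact transformation.

```python
# Hedged sketch: per-mesh affine rectification from corresponding points.
import numpy as np

def fit_affine(img_pts, ground_pts):
    """img_pts, ground_pts: arrays of shape (n, 2), n >= 3, of corresponding
    points (image x, y -> ground X, Y).  Returns a 2x3 affine matrix."""
    img_pts = np.asarray(img_pts, float)
    ground_pts = np.asarray(ground_pts, float)
    # Design matrix for X = a0 + a1*x + a2*y and Y = b0 + b1*x + b2*y.
    A = np.hstack([np.ones((len(img_pts), 1)), img_pts])
    coeff_x, *_ = np.linalg.lstsq(A, ground_pts[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, ground_pts[:, 1], rcond=None)
    return np.vstack([coeff_x, coeff_y])           # shape (2, 3)

def apply_affine(T, img_pts):
    img_pts = np.asarray(img_pts, float)
    A = np.hstack([np.ones((len(img_pts), 1)), img_pts])
    return A @ T.T

# Mesh corners in one image strip and their ground-plane positions (invented):
corners_img = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
corners_gnd = [[500.0, 800.0], [525.0, 802.0], [526.0, 827.0], [501.0, 825.0]]
T = fit_affine(corners_img, corners_gnd)
print(apply_affine(T, [[0.5, 0.5]]))               # centre of the mesh
```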
To perform the interpretation method according to
the invention a stereograph screen is connected with a computer
which performs the area correlation. On the screen, parallel
to the automatic correlation process, the point selection in the
image strip BS and the point correlation in the image strips AS
and Cs runs along and is visibly shown for the operator. The
operator consequently has the opportunity to follow the corre-
lation process and, in case of difficulties, can apply correct-
ions by means of interactive intervention and/or if necessary
start the interrupted correlation process again.
This visual interactive intervention is also necessary
in order to
a) initiate the start of correlation,
b) identify and mark control points,
c) be able to undertake interpretative evaluation of the objects.

The stereograph screen representation takes place either
a) by superposition of the two partial stereo images in complementary
colors (for instance red and green) on one color image screen and
observation with corresponding absorption filters and generation of
one floating mark which can be moved relative to the image in two
dimensions (x, y) and wherein the partial images can be moved with
respect to one another in both directions (x, y), or
b) by representation of the two partial stereo images on two halves
of an image screen or two separate image screens and generation of
two floating marks, assigned to the partial images, which can be
moved together or at least one floating mark alone relative to the
image in both directions (x and y). The observation takes place in
this case with the aid of a stereo optic.

The floating marks are generated by means of a special
floating mark generator from the generating electron beam and
mark in the image storage the respectively set image point.
The floating marks are positioned either by the computer or
manually by the observer. The floating marks in both images
can be moved together and relative to one another by means of
appropriate setting elements (for instance rollers, manual
wheels etc). Basically it makes no difference whether the
marks are moved in the images or whether the marks-remain in
the center on the image screen and the images are moved re-
lative to the stationary marks by so-called panning or scrolling.
The latter solution is generally the better since the head of
the observer can always remain in the same position.



Such an image screen is generally important for the
interpretative and measuring evaluation wherein either the
stereo floating marks are guided and set by the observer or
can be positioned by computer.


With appropriate hardware interfaces, this image screen
can also be used for a real-time observation of the photographed
object.




Hard copy printers and image writing instruments,
known per se, permit the issuance of graphic results. The
digital data can be put out with alphanumeric printers, magnetic tapes
and disks.
In the aircraft, flying body or satellite the camera system
(three line stereo camera) is, if necessary, installed with a high density
digital tape recorder, known per se.
For real-time transmission, the image data can also be given
by telemetry to the ground station and there be stored on high density
digital tapes. Subsequently, either in real-time or off line, the conver-
sion of the high density digital tape data into computer compatible data
takes place and the automatic and interactive evaluation, further process-
ing, storage and optionally output of the image data with the aid of the
stereograph screen and the graphic and digital storage and output units.
Procedural stages for the DPS Process
Survey
1. Synchronous surveying of the terrain with three sensor lines
A, B, C according to the "push-broom" process (see "Advanced Scanners and
Imaging Systems for Earth Observations" pp. 309 - 405, NASA Working Group
Report, 1973, Washington D.C.).
The three sensor lines A, B, C are driven with the same
cycle N and in keeping with this produce the image strips As, Bs, Cs. In
each instance, each cycle generates one image line for the three image
strips.
The image cycle N has a cycle periodicity of Δt in the order
of magnitude of milliseconds. Each cycle number N thus always describes an
image line in the three image strips. However, in each instance it also
describes a position of the camera or platform, respectively, since the
camera platform moves over the terrain.
The position and the inclination of the camera, which vary with
the (cycle) time, are characterized by the "orientation parameters." These
orientation parameters are as follows:

- the three spatial coordinates XN, YN, ZN of the camera station
(more precisely, the centre of projection of the lens);
- the three angles of inclination ωN (lateral inclination, roll
angle), φN (longitudinal inclination, angle of pitch), κN (angle of yaw)
of the camera,
wherein the subscript N stands for the current value of the cycle time N.
These orientation parameters can be in the form of a single
vector p.
In the final analysis, not all the orientation parameters for
the total image cycles N are calculated, but only those which in each
instance are at a specific interval of several cycles ΔN from each
other. These orientation points are designated Pj (see Figure 5).
The interval between these orientation points is determined by
the evenness with which the platform moves and the requirement for pre-
cision. The closer together these points are located, the greater is the
precision of model reconstruction, and the greater the computation time
that is involved.
The six orientation parameters can be interpolated linearly
or non-linearly between the points Pj.
The image data, i.e., the radiation (brightness) values of the
image points that are read line by line and serially, are stored on magnetic
tape. Each image line is identified by the cycle number N.
Interpretation
2. Within the middle image strip BS the computer uses prescribed
criteria (line and pixel interval, adequate contrast) in more or less
regular, mesh-like arrangement, to determine the pixels of the terrain
points Pi that are to be determined, the coordinates of which are to be
calculated in the course of the interpretation. The terrain coordinates
Xi, Yi, Zi of a DEM (Digital Elevation Model) point Pi can be represented
in toto as a vector k. The computer locates the corresponding pixels of
the DEM-point Pi in the strips A and C by area correlation. This
procedure of mesh-like action and correlation is familiar (see Panton, D.J.,
1978, "A Flexible Approach to Digital Stereo Mapping," Photogrammetric
Engineering and Remote Sensing, Vol. 44, No. 12, pp. 1499 - 1512).
This pixel correlation results in each instance in the line
number N and the pixel number m of the pixels that are identified in the
three image strips As, Bs, Cs that are associated with a DEM-point. Since
the precise position of the sensor lines A, B, C in the image plane of
the camera is known, and since the pixel interval of the sensor lines
is also known, it is possible to compute the precise image coordinates
xA, yA, and xB, yB and xC, yC from the pixel numbers mA, mB, mC.
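A minimal sketch of this pixel-to-image-coordinate conversion is given below; the sensor-line offsets, the pixel pitch and the centring convention are assumed values chosen only to make the example concrete.

```python
# Illustrative sketch: pixel number m on a sensor line -> image coordinates.
# The offsets, pitch and line length below are assumptions, not patent data.
SENSOR_Y_OFFSET = {'A': -20.0, 'B': 0.0, 'C': 20.0}   # mm, position of each
                                                      # sensor line in the
                                                      # image plane (assumed)
PIXEL_PITCH = 0.010       # mm per sensor element (assumed)
PIXELS_PER_LINE = 4096    # elements per sensor line (assumed)

def image_coordinates(sensor, pixel_number):
    """Return image-plane coordinates (x, y) in mm for a sensor line
    ('A', 'B' or 'C') and pixel number m counted along that line."""
    # x runs along the sensor line, centred on the optical axis;
    # y is fixed by where the line lies in the image plane.
    x = (pixel_number - PIXELS_PER_LINE / 2) * PIXEL_PITCH
    y = SENSOR_Y_OFFSET[sensor]
    return x, y

print(image_coordinates('A', 2311))
```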
3. The movement of the flight is known in approximation (e.g.,
rectilinear flight at a specified altitude Z at a specified speed; the
angles of inclination ω, φ, κ can first be set at approximately zero). For
this reason, the approximate orientation parameters of the camera can be
cited at any desired time. Each DEM-point is surveyed from three dif-
ferent stations, the image cycle numbers of which, NA, NB, NC have already
been determined. The approximated orientation data for the camera in
these three instant survey standpoints can be computed from the approximated
flight data and these cycle numbers N, and the cycle period Δt (the path
interval that corresponds to a cycle interval is known on the basis of
flight speed and cycle periodicity Δt). The approximated terrain coordin-
ates of the DEM-points Pi can be calculated from the approximated orientation
parameters and the image coordinates determined in Pos. 2 by simple inter-
section or by proportional computation (see Albertz/Kreiling, "Photo-
grammetrisches Taschenbuch," pp. 214 - 215, Herbert Wichmann Verlag,
Karlsruhe).
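The simple intersection mentioned here can be pictured as finding the point closest, in the least-squares sense, to the three approximate image rays; the Python sketch below uses that generic ray-intersection formulation with invented numbers and is not the patent's exact computation.

```python
# Hedged sketch: approximate DEM-point coordinates as the least-squares
# intersection of three rays, each given by an approximate perspective
# centre and a direction vector.
import numpy as np

def intersect_rays(centres, directions):
    """centres: (n, 3) perspective centres; directions: (n, 3) ray directions.
    Returns the 3-D point minimising the summed squared distances to the rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(np.asarray(centres, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three rays from the stations N_A, N_B, N_C towards one terrain point:
centres = [[0.0, 0.0, 3000.0], [0.0, 600.0, 3000.0], [0.0, 1200.0, 3000.0]]
directions = [[10.0, 620.0, -3000.0], [10.0, 20.0, -3000.0], [10.0, -580.0, -3000.0]]
print(intersect_rays(centres, directions))        # about (10, 620, 0)
```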
4. There are two possibilities for determining the orientation
parameters and the terrain coordinates of the DEM-points Pi:
a) for each two beams that intersect at a point Pi, coplanarity
equations (see pages 208 - 211 of the Albertz/Kreiling publication) are
set up, which is to say that in general (up to the beginning and the end
of the strips) there are three beam combinations of two beams each avail-
able for each point Pi, and as a consequence three coplanarity equations
can be set up. These coplanarity equations contain the approximated values
of the orientation parameters p as well as the determined image coor-
dinates x, y. The orientation parameters p are determined in a comparison
process, and the precise terrain coordinates of the points Pi are determined
thereby and with the image coordinates as described in Section 3.
b) In the second method both the orientation parameters p and the
terrain coordinates k are determined in a simultaneous comparison process.
The departure points are the so-called colinearity equations (see Albertz/
Kreiling, "Photogrammetrisches Taschenbuch," pp. 204 - 205, Herbert Wichmann
Verlag, Karlsruhe).
The colinearity equations contain the orientation parameters
p, the terrain coordinates k, and the image coordinates x and y. The co-
linearity equations are the linear equations of those beams that contain
the image point, the centre of projection or centre of perspective, and the
terrain point. Normally, three such pairs of equations can be set up for x
and y for each terrain point Pi. Only procedure b) will be described below.
5. Setting up the error equations according to the so-called
"conditional observations":
Three image rays EA, EB, EC are associated with each terrain point Pi
(Figure 4). Two colinearity equations are set up with the help of the
approximated terrain coordinates Xi, Yi, Zi and the approximated orientation
parameters XN, YN, ZN, ωN, φN, κN, from which the image coordinates x̄ and
ȳ are calculated, respectively:



(Xi N) Qll ~ i N) Q21 ( i N) Q31
x = ~ = F (P, K)
( i N) Q13 ( i N) Q23 ( i N) Q33 x
(1)

_ ~ _
\~

        (Xi - XN) Q12 + (Yi - YN) Q22 + (Zi - ZN) Q32
ȳ  =  -----------------------------------------------  =  Fy (p, k)     (2)
        (Xi - XN) Q13 + (Yi - YN) Q23 + (Zi - ZN) Q33


Q11 ... Q33 are the coefficients of the rotation matrix which contain
the camera inclinations ω, φ, κ (see Albertz/Kreiling, p. 204).
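For illustration, the sketch below evaluates the collinearity functions Fx and Fy of equations (1) and (2) for one camera station and one terrain point; the rotation order used to build Q11 ... Q33 from ω, φ, κ, and the omission of a principal-distance scale factor, are assumptions made only for this example.

```python
# Minimal sketch of the collinearity functions Fx, Fy of eqs. (1) and (2),
# assuming Q11..Q33 are the elements of a rotation matrix built from the
# inclinations omega, phi, kappa (rotation order assumed).
import numpy as np

def rotation_matrix(omega, phi, kappa):
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx          # assumed order of rotations

def collinearity(p, k):
    """p = (XN, YN, ZN, omega, phi, kappa): orientation of the camera station.
    k = (Xi, Yi, Zi): terrain coordinates of the DEM point.
    Returns the calculated image coordinates (x_bar, y_bar) of eqs. (1)-(2)."""
    XN, YN, ZN, omega, phi, kappa = p
    Q = rotation_matrix(omega, phi, kappa)
    d = np.array([k[0] - XN, k[1] - YN, k[2] - ZN])
    denom = Q[0, 2] * d[0] + Q[1, 2] * d[1] + Q[2, 2] * d[2]   # Q13, Q23, Q33
    x_bar = (Q[0, 0] * d[0] + Q[1, 0] * d[1] + Q[2, 0] * d[2]) / denom
    y_bar = (Q[0, 1] * d[0] + Q[1, 1] * d[1] + Q[2, 1] * d[2]) / denom
    return x_bar, y_bar

print(collinearity((0.0, 0.0, 3000.0, 0.0, 0.0, 0.0), (100.0, 250.0, 0.0)))
```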
Naturally, these image coordinates x̄ and ȳ that are calculated
with the approximated orientation parameters p and the approximated terrain
coordinates k deviate from the measured values for x and y. At this stage
the orientation parameters p and the terrain coordinates k are varied by a
comparison method according to the method of least squares to the point
that the sum of the squares of the residual differences v between the measur-
ed image coordinates x, y and the calculated image coordinates x̄, ȳ


vx = x̄ - x = Fx (p, k) - x                                        (3)

vy = ȳ - y = Fy (p, k) - y                                        (4)

is reduced to a minimum

[vv] → min.                                                       (5)

(see Jordan/Eggert/Kneissel, "Handbuch der Vermessungskunde," Vol. I,
pp. 590 - 600, J.B. Metzlersche Verlagsbuchhandlung, Stuttgart, 10th Ed.,
1961). According to this, the above-mentioned error equations (3) and (4)
can be written as follows as general matrices and vectors:

v = Ap + Bk - H = Mx - H                                          (6)

In the above, v stands for the residual error, A, B, and M are the coeffi-
cient matrices, x is the vector that contains all the unknown p and k that
are to be determined, and H is the vector of the so-called absolute elements.
From this, one sets out the so-called "normal equations":

(Mᵀ G M) x = Mᵀ G H                                               (7)

wherein G is the so-called "weight matrix" of the observations,
i.e., a precision factor. These normal equations are resolved in
the already familiar manner by the so-called "Gaussian Algorithm"
(see Jordan/Eggert/Kneissel, "Handbuch der Vermessungskunde",
Vol. I, pp. 428 - 450). A FORTRAN computer program for solving
equational systems of this type can be found, for example, in
J. Scharf, "FORTRAN fuer Anfaenger," pp. 125 - 128, Verlag R.
Oldenbourg, Wien Muenchen, 1978.
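As a small illustration of this adjustment step, the sketch below forms and solves the normal equations (7) for a toy observation system; numpy's dense solver stands in for the cited Gaussian algorithm, and the numbers are invented.

```python
# Hedged sketch of the adjustment in eqs. (6) and (7): given the coefficient
# matrix M, the weight matrix G and the vector H of absolute elements, form
# and solve the normal equations (M^T G M) x = M^T G H.
import numpy as np

def solve_normal_equations(M, G, H):
    """Return the least-squares estimate x of the unknowns p and k."""
    M = np.asarray(M, float)
    G = np.asarray(G, float)
    H = np.asarray(H, float)
    N = M.T @ G @ M          # normal-equation matrix
    n = M.T @ G @ H          # right-hand side
    return np.linalg.solve(N, n)

# Toy example: four observation equations, two unknowns, unit weights.
M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
G = np.eye(4)
H = [1.1, 1.9, 3.1, -0.8]
x = solve_normal_equations(M, G, H)
residuals = np.asarray(M) @ x - np.asarray(H)     # v = Mx - H, cf. eq. (6)
print(x, residuals)
```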





Representative Drawing

Sorry, the representative drawing for patent document number 1254651 was not found.

Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.


Title Date
Forecasted Issue Date 1989-05-23
(22) Filed 1984-06-05
(45) Issued 1989-05-23
Expired 2006-05-23

Abandonment History

There is no abandonment history.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $0.00 1984-06-05
Registration of a document - section 124 $100.00 1998-10-22
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
LFK-LENTFLUGKORPERSYSTEME GMBH
Past Owners on Record
DAIMLER-BENZ AEROSPACE AG
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


List of published and non-published patent-specific documents on the CPD.

If you have any difficulty accessing content, you can call the Client Service Centre at 1-866-997-1936 or send them an e-mail at CIPO Client Service Centre.


Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Drawings                1993-09-03           4                  80
Claims                  1993-09-03           2                  78
Abstract                1993-09-03           1                  25
Cover Page              1993-09-03           1                  15
Description             1993-09-03           18                 631