Summary of Patent 2404654

Third-Party Information Liability Disclaimer

Some of the information on this Web page has been provided by external sources. The Government of Canada assumes no responsibility for the accuracy, currency or reliability of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Availability of the Abstract and Claims

The display of differences in the text and image of the Claims and Abstract depends on the date the document was published. The texts of the Claims and Abstract are displayed:

  • when the application is open to public inspection;
  • when the patent is granted (issuance).
(12) Patent Application: (11) CA 2404654
(54) French Title: PROCEDE ET SYSTEME DE FUSION D'IMAGES
(54) English Title: METHOD AND SYSTEM FOR FUSING IMAGES
Status: Dead
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04N 5/225 (2006.01)
  • H04N 5/262 (2006.01)
(72) Inventors:
  • OSTROMEK, TIMOTHY (United States of America)
(73) Owners:
  • L-3 COMMUNICATIONS CORPORATION (United States of America)
(71) Applicants:
  • LITTON SYSTEMS, INC. (United States of America)
(74) Agent: R. WILLIAM WRAY & ASSOCIATES
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2001-04-13
(87) Open to Public Inspection: 2001-11-08
Examination requested: 2006-03-29
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2001/012260
(87) International Publication Number: WO2001/084828
(85) National Entry: 2002-09-27

(30) Application Priority Data:
Application No. Country/Territory Date
09/561,260 United States of America 2000-04-27

Abstracts

French Abstract (translated)

The present invention relates to a system for fusing images, comprising sensors (102 and 104) that produce sets of image data. An information processor (108) receives and samples the sets of image data to produce sample data for computing a fused image array. A display system (110) receives the fused image array and displays a fused image produced from it. The first of the four steps of the image fusion method of this invention is to receive sets of image data produced by sensors (102 and 104). The second step is to sample the sets of image data to produce sample data. The third step is to compute a fused image array from the sample data. The fourth step is to display, on a display system (110), a fused image produced from the fused image array. The first of the four steps for computing a fused image array is to sample sets of image data produced by sensors (102 and 104) to produce sample data. The second step is to determine image fusion metrics from the sample data. The third step is to compute weighting factors from the image fusion metrics. The fourth step is to compute a fused image array from the weighting factors, the fused image array being used to produce the fused image.


English Abstract

A system for fusing images comprises sensors (102 and 104) for generating sets
of image data. An information processor (108) receives and samples the sets of
image data to generate sample data for computing a fused image array. A
display (110) receives the fused image array and displays a generated fused
image. Step 1 of 4 for fusing images receives sets of image data generated by
sensors (102 and 104). Step 2 samples the sets of image data to produce sample
data. Step 3 computes a fused image array from the sample data. Step 4
displays a fused image generated from the fused image array on a display
(110). Step 1 of 4 for computing a fused image array samples sets of image
data generated from sensors (102 and 104) to produce sample data. Image fusion
metrics from the sample data are determined in step 2. Step 3 calculates
weighting factors from the image fusion metrics. Step 4 computes a fused image
array from the weighting factors, wherein the fused image array is used to
generate the fused image.

Claims

Note: The claims are shown in the official language in which they were submitted.





WHAT IS CLAIMED IS:

1. A system for fusing images, the system comprising:
a. two or more sensors for generating two or more sets of image data;
b. an information processor for receiving and sampling the sets of image data to generate sample data and for computing a fused image array from the sample data; and
c. a display for receiving the fused image array and displaying a fused image generated from the fused image array.

2. The system of Claim 1 further comprising a field programmable gate array coupled to the sensors and the information processor.

3. The system of Claim 1 further comprising one or more converters operable to convert analog signals from the sensors to digital data for use by the information processor.

4. The system of Claim 1 wherein the fused image array assigns a value to one or more pixels of the fused image, wherein the value describes the relative weights of the sets of image data.





5. A method for fusing images, the method comprising:
a. receiving two or more sets of image data generated by two or more sensors;
b. sampling the sets of image data to produce sample data;
c. computing a fused image array from the sample data; and
d. displaying a fused image generated from the fused image array.

6. The method of Claim 5, wherein the sampling step further comprises sampling with a fixed array pattern.

7. The method of Claim 5, wherein the sampling step further comprises sampling with a varied array pattern.

8. The method of Claim 5, wherein the sampling step further comprises sampling randomly.

9. The method of Claim 5, wherein the computing step further comprises determining one or more image fusion metrics, wherein the image fusion metrics are values assigned to one or more pixels of the sample data.

10. The method of Claim 5, wherein the computing step further comprises calculating one or more weighting factors from the image fusion metrics, wherein the weighting factors are values assigned to one or more pixels of the fused image.

11. The method of Claim 10, wherein the computing step further comprises calculating the weighting factors by interpolation of the image fusion metrics.

12. The method of Claim 10, wherein the computing step further comprises calculating the weighting factors by linear interpolation of the image fusion metrics.





13. The method of Claim 5, further comprising performing the foregoing steps
automatically using an information processor.





14. A method for computing a fused image array, the method comprising:
a. sampling the sets of image data generated from two or more sensors to produce sample data;
b. determining one or more image fusion metrics from the sample data, wherein the image fusion metrics are values assigned to one or more pixels of the sample data;
c. calculating one or more weighting factors from the image fusion metrics, wherein the weighting factors are values assigned to one or more pixels of a fused image; and
d. computing a fused image array from the weighting factors, wherein the fused image array is used to generate the fused image.

15. The method of Claim 14, wherein the fused image array describes the relative weights of the sets of image data at one or more pixels of the fused image.

16. The method of Claim 14, wherein the sampling step further comprises sampling with a fixed array pattern.

17. The method of Claim 14, wherein the sampling step further comprises sampling with a varied array pattern.

18. The method of Claim 14, wherein the sampling step further comprises sampling randomly.

19. The method of Claim 14, wherein the calculating step further comprises calculating the weighting factors by interpolation of the image fusion metrics.

20. The method of Claim 14, wherein the calculating step further comprises calculating the weighting factors by linear interpolation of the image fusion metrics.






21. The method of Claim 14, further comprising performing the foregoing steps
automatically using an information processor.

Description

Note: The descriptions are shown in the official language in which they were submitted.



CA 02404654 2002-09-27
WO 01/84828 PCT/US01/12260
METHOD AND SYSTEM FOR FUSING IMAGES
TECHNICAL FIELD OF THE INVENTION
This invention relates generally to the field of electro-optical systems and
more specifically to a method and system for fusing images.


BACKGROUND OF THE INVENTION
Image fusion involves combining two or more images produced by two or
more image sensors into one single image. Producing one image that mitigates
the
weak aspects of the individual images while retaining the strong ones is a
complicated
task, often requiring a mainframe computer. Known approaches to image fusion
have
not been able to produce a small, lightweight system that consumes minimal
power.
Known approaches to fusing images require a great deal of computing power.
To illustrate, suppose that two image sensors each have the same pixel arrangement. Let N_h be the number of horizontal pixels, and let N_v be the number of vertical pixels, such that the total number of pixels per sensor is N_h · N_v. Let the frame rate of the display be Ω_d, expressed in Hz. The time τ_d allowed for processing each frame is given by:

τ_d = 1 / Ω_d

All processing for each displayed image must be done within this time to keep the system operating in real time. To calculate the processing time per pixel, first compute the total number of pixels of both sensors. Given that each image sensor has the same pixel arrangement, the total number of pixels for both sensors is:

2 · N_h · N_v

The processing time τ_p per pixel is given by:

τ_p = τ_d / (2 N_h N_v) = 1 / (Ω_d · 2 N_h N_v)

The processing time τ_p per pixel is the maximum amount of time allotted per pixel to calculate a display pixel from the two corresponding sensor pixels, while allowing for real-time processing. For example, given an average system where Ω_d = 30 Hz, N_h = 640, and N_v = 480, the processing time τ_p per pixel is:

τ_p = 108.5 ns
For handheld or portable applications, processor speed Ω_p is limited to about 150 MHz. A maximum of approximately two instructions per cycle is allowed in current microprocessors and digital signal processors. The time required per instruction τ_i is given by:

τ_i = 1 / (2 Ω_p) = 3.33 ns

The number of instruction cycles allowed for each pixel in real-time processing is given by:

N_i = τ_p / τ_i = 32 instruction cycles

Thirty-two instruction cycles per pixel is often not a sufficient number of cycles, especially considering that a simple floating-point divide could easily require 10 to 100 instruction cycles to complete. Practical image fusion systems generally require over 100 instruction cycles per pixel, and sophisticated image fusion algorithms often require over 1,000 instruction cycles per pixel. Consequently, current image fusion systems are confined to mainframe computers.
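As a quick sanity check on the budget arithmetic above, the figures can be reproduced in a few lines. This is an editorial sketch, not part of the patent; note that the quoted 108.5 ns corresponds to the time per display pixel, τ_d / (N_h · N_v), within which both corresponding sensor pixels must be processed.

```python
# Real-time budget sketch using the figures quoted in the text.
omega_d = 30.0            # display frame rate (Hz)
N_h, N_v = 640, 480       # pixels per sensor
omega_p = 150e6           # portable processor clock (Hz)

tau_d = 1.0 / omega_d                # time allowed per frame (~33.3 ms)
tau_p = tau_d / (N_h * N_v)          # time per display pixel (~108.5 ns)
tau_i = 1.0 / (2 * omega_p)          # ~two instructions per cycle -> ~3.33 ns
n_cycles = int(tau_p / tau_i)        # instruction cycles available per pixel

print(round(tau_p * 1e9, 1), n_cycles)   # 108.5 32
```

This confirms the roughly 32-cycle budget the text compares against later.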
While known approaches have not been applied to handheld or portable
applications, the challenges in the field of image fusion have continued to
increase
with demands for small, lightweight systems that consume minimal power.
Therefore, a need has arisen for a new method and system for fusing images.


SUMMARY OF THE INVENTION
In accordance with the present invention, a method and system for fusing
images are provided that substantially eliminate or reduce disadvantages and
problems associated with previously developed systems and methods.
A system for fusing images is disclosed. The system comprises two or more
sensors for generating two or more sets of image data. An information
processor
receives and samples the sets of image data to generate sample data and
computes a
fused image array from the sample data. A display receives the fused image
array and
displays a fused image generated from the fused image array.
A four-step method for fusing images is disclosed. Step one calls for
receiving sets of image data generated by sensors. Step two provides for
sampling the
sets of image data to produce sample data. In step three, the method provides
for
computing a fused image array from the sample data. The last step calls for
displaying a fused image generated from the fused image array.
A four-step method for computing a fused image array is disclosed. Step one
calls for sampling sets of image data generated from sensors to produce sample
data.
Step two provides for determining image fusion metrics from the sample data.
Step
three calls for calculating weighting factors from the image fusion metrics.
The last
step provides for computing a fused image array from the weighting factors,
wherein
the fused image array is used to generate the fused image.
A technical advantage of the present invention is that it computes the fused
image from sampled sensor data. By sampling the sensor data, the invention
reduces
the number of instruction cycles required to compute a fused image. Reducing
the
number of instruction cycles allows for smaller, lightweight image fusion
systems that
consume minimal power.


BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and for further
features and advantages, reference is now made to the following description,
taken in
conjunction with the accompanying drawings, in which:
FIGURE 1 is a system block diagram of one embodiment of the present
invention;
FIGURE 2 is a flowchart demonstrating one method of fusing images in
accordance with the present invention;
FIGURE 3A illustrates sampling with a fixed array pattern in accordance with
the present invention;
FIGURE 3B illustrates sampling with a varied array pattern in accordance
with the present invention;
FIGURE 3C illustrates sampling randomly in accordance with the present
invention; and
FIGURE 4 illustrates a method of computing weighting factors in accordance
with the present invention.


DETAILED DESCRIPTION OF THE DRAWINGS
FIGURE 1 is a system block diagram of one embodiment of the present
invention. In this embodiment, a sensor A 102 and a sensor B 104 detect one or
more
physical objects 106 in order to generate image data to send to an information
processor 108, which fuses the sets of image data to produce a single fused
image to
be displayed by a display 110. The sensor A 102 detects the physical objects
106 and
generates sensor data, which is sent to an amplifier A 112. Amplifier A 112
amplifies
the sensor data and then sends it to an analog-to-digital converter A 114. The
analog-
to-digital converter A 114 converts the analog sensor data to digital data,
and sends
the data to an image buffer A 116 to store the data. The sensor B operates in
a similar
fashion. The sensor B 104 detects the physical objects 106 and sends the data
to
amplifier B 118. The amplifier B 118 sends amplified data to an analog-to-
digital
converter B 120, which sends converted data to an image buffer B 122. A field
programmable gate array 124 receives the data generated by the sensor A 102
and the
sensor B 104. The information processor 108 receives the data from the field
programmable gate array 124. The information processor 108 generates a fused
image from the sets of sensor data, and uses an information processor buffer
126 to
store data while generating the fused image. The information processor 108
sends the
fused image data to a display buffer 128, which stores the data until it is to
be
displayed on the display 110.
FIGURE 2 is a flowchart demonstrating one method of image fusion in accordance with the present invention. The following steps may be performed automatically using an information processor 108. The method begins with step 202, where two or more image sensors generate two or more sets of image data. As above, suppose that there are two image sensors, each with the same pixel arrangement. Let N_h be the number of horizontal pixels, and let N_v be the number of vertical pixels, such that the total number of pixels per sensor is N_h · N_v. The sensors may comprise, for example, visible light or infrared light image detectors. Assume that detectable variations in the proportion of the fused image computed from one set of image data and from the other set of image data occur in time τ_s, where:

τ_s > 1 / Ω_d


Hence, the computation of the proportion does not need to be performed at each frame. Also, assume that the information required to form a metric that adjusts the system to a given wavelength λ proportion can be derived from fewer than N_h · N_v pixels.
The method then proceeds to step 204, where the sets of image data are sampled to produce sample data. FIGURES 3A, 3B, and 3C illustrate three methods of sampling image data in accordance with the present invention. FIGURE 3A illustrates sampling with a fixed array pattern. The sampled pixels (i, j) 302, 304, 306, 308, 310, 312, 314, 316, and 318 may be described by:

i = p Δ_h, where p = 1, 2, ..., Int(N_h / Δ_h)

and

j = q Δ_v, where q = 1, 2, ..., Int(N_v / Δ_v)

One possible arrangement is to have Δ_h = 2 for the horizontal difference between one sampled pixel and the next, and Δ_v = 2 for the vertical difference between one sampled pixel and the next. The groups of pixels 320 and 322, each with two sampled pixels (302 and 304, and 308 and 310, respectively), are sampling blocks. FIGURE 3B illustrates sampling with a varied array pattern. FIGURE 3C illustrates random sampling. A sequence of sampling patterns may also be used, repeating after any given number of sampling cycles, or never repeating, as in a random pattern for each continued sampling cycle.
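The fixed array pattern above amounts to taking every Δ-th pixel in each direction. A minimal editorial sketch (the function name and the toy 8×8 dimensions are illustrative, not from the source):

```python
# Fixed array pattern sampling: sampled pixels (i, j) with i = p*Delta_h and
# j = q*Delta_v, for p = 1..Int(N_h/Delta_h) and q = 1..Int(N_v/Delta_v).
def fixed_array_samples(N_h, N_v, delta_h, delta_v):
    return [(p * delta_h, q * delta_v)
            for p in range(1, N_h // delta_h + 1)
            for q in range(1, N_v // delta_v + 1)]

samples = fixed_array_samples(8, 8, 2, 2)   # Delta_h = Delta_v = 2, as in the text
print(len(samples), samples[0])             # 16 (2, 2)
```

With Δ_h = Δ_v = 2, a quarter of the pixels are sampled, which is what makes the reduced instruction budget computed later possible.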
Referring again to FIGURE 2, in steps 206 to 210, a fused image array is computed from the sample data. In step 206, image fusion metrics are calculated from the sample data. The image fusion metrics are values assigned to the pixels of the sample data. These values, for example, may give the relative weight of the data from each sensor, such that the data from the sensor that produces the better image is given more weight. Or, these values may be used to provide a control for the production of, for example, a false color image. All the pixels may be assigned the same metric, β, or each sample pixel may be assigned its own metric, β_ij, where the subscript ij designates the pixel in the ith row and jth column.
In step 208, weighting factors α_ij, where the subscript ij designates the pixel in the ith row and jth column, are calculated from the image fusion metrics. The weighting factors are values assigned to the pixels of the fused image. The weighting factors may be computed by, for example, linear interpolation of the image fusion metrics.
FIGURE 4 illustrates a method of computing weighting factors in accordance with the present invention. For example, suppose that the sample data was sampled using a fixed array pattern, where every fifth point 402, 404, 406, and 408 is sampled in both the horizontal and vertical direction, that is, Δ_h = Δ_v = Δ = 5. A sampling block 410 comprises two sampled points 402 and 404. The weighting factors α_1j of the first row may be computed in the following manner. First, an incremental value for the first row in the horizontal direction, δ_h1, is calculated using the following formula:

δ_h1 = (β_16 − β_11) / Δ

Then, the weighting factors between β_11 and β_16 in the horizontal direction are calculated using the following formula:

α_1j = β_11 + δ_h1 (j − 1)

The weighting factors in the vertical direction between β_11 and β_61 are calculated in a similar manner, using the following equations:

δ_v1 = (β_61 − β_11) / Δ

α_i1 = β_11 + δ_v1 (i − 1)
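The row interpolation above can be sketched directly. The β values here are invented for illustration; only the formulas come from the text:

```python
# Linear interpolation of weighting factors along the first row:
# delta_h1 = (beta_16 - beta_11) / Delta, alpha_1j = beta_11 + delta_h1*(j - 1).
Delta = 5
beta_11, beta_16 = 0.2, 0.7              # metrics at the two sampled pixels (illustrative)
delta_h1 = (beta_16 - beta_11) / Delta
alpha_row = [beta_11 + delta_h1 * (j - 1) for j in range(1, Delta + 2)]
# alpha_row starts at beta_11 and ramps linearly, reaching beta_16 at j = Delta + 1
```

The vertical direction works the same way with δ_v1 and the row index i.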
Referring again to FIGURE 2, the method then proceeds to step 210, where a fused image array, which is used to generate a fused image, is computed from the weighting factors. An array of weighting factors α_ij generates the following fused image array:

V_ij^(d) = V_ij^(A) · α_ij + V_ij^(B) · (1 − α_ij)

where i ∈ {1, ..., N_h}, j ∈ {1, ..., N_v}, the superscript (d) denotes display, (A) denotes sensor A, and (B) denotes sensor B, and V_ij corresponds to the voltage at pixel (i, j).

The fused image array describes the relative weights of the data from each sensor. Weighting factor α_ij gives the relative weight of the voltage from sensor A at pixel (i, j); weighting factor (1 − α_ij) gives the relative weight of the voltage from sensor B at pixel (i, j). This example shows a linear weight; other schemes, however, can be used. The method then proceeds to step 212, where the fused image generated from the fused image array is displayed on a display 110.
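The blend is a per-pixel convex combination of the two sensors' voltages. A toy 2×2 sketch (all voltages and weights invented for illustration):

```python
# Fused image array: V(d)_ij = V(A)_ij * alpha_ij + V(B)_ij * (1 - alpha_ij).
VA = [[1.0, 0.0], [0.5, 0.8]]       # sensor A voltages (illustrative)
VB = [[0.0, 1.0], [0.5, 0.2]]       # sensor B voltages (illustrative)
alpha = [[1.0, 0.0], [0.5, 0.75]]   # weighting factors (illustrative)

VD = [[VA[i][j] * alpha[i][j] + VB[i][j] * (1.0 - alpha[i][j])
       for j in range(2)] for i in range(2)]
# alpha = 1 passes sensor A straight through; alpha = 0 passes sensor B through
print(VD[0])   # [1.0, 1.0]
```

An α of 0.5 averages the two sensors equally, which is the degenerate case of a single global metric β.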
By sampling the image data, this embodiment allows for more instruction cycles to calculate β_ij for each sampled pixel. To calculate the number of instruction cycles available for each sampled pixel, first calculate the total number of instruction cycles per sampled pixel, and then subtract the number of cycles per pixel needed to sample the pixels and to compute the image fusion metrics and the fused image array. For example, assume that data is sampled using fixed array sampling. The total number of instructions for each sampled pixel is given by:

τ_s / τ_i

where τ_s is the processing time per sampled pixel, which is given by:

τ_s = 1 / (Ω_d · 2 n_h n_v)

where n_h and n_v are the number of sampled pixels in the horizontal direction and in the vertical direction, respectively. Sampling each sampling block, without double counting borders, requires about (Δ + 1)[2(Δ − 1) + 6] instruction cycles. Each sampling block contains two sampled pixels, so each sampled pixel loses (1/2)(Δ + 1)[2(Δ − 1) + 6] instruction cycles per pixel. Computing the fused image array from the weighting factors requires approximately four instruction cycles for each calculation, that is:

4 N_h N_v / (n_h n_v)

Therefore, the number of instruction cycles left per pixel for calculating the image fusion metrics β_ij is:

N_i = τ_s / τ_i − (1/2)(Δ + 1)[2(Δ − 1) + 6] − 4 N_h N_v / (n_h n_v)


Using the values given above, Ω_d = 30 Hz, N_h = 640, N_v = 480, Δ_h = Δ_v = 5, n_h = 128, and n_v = 96, the number of instruction cycles is computed to be:

N_i = 269 instruction cycles

This is a dramatic improvement compared with the 32 cycles allotted in conventional methods. The extra cycles may be used for more complex calculations of β_ij or for other features. Moreover, if β is assumed to be the same for all pixels, even more cycles may be available to determine it, allowing for a more sophisticated manipulation.
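The 269-cycle figure can be reproduced from the quantities above. This editorial check uses the text's rounded τ_i ≈ 3.3 ns; with the exact 1/(2 · 150 MHz) value the result comes out a few cycles lower:

```python
# Cycle budget per sampled pixel, minus sampling overhead and fused-array cost.
omega_d = 30.0
N_h, N_v = 640, 480       # sensor pixels
n_h, n_v = 128, 96        # sampled pixels
Delta = 5
tau_i = 3.3e-9            # rounded per-instruction time from the text

tau_s = 1.0 / (omega_d * 2 * n_h * n_v)                # time per sampled pixel
total = tau_s / tau_i                                  # ~411 cycles available
sampling = 0.5 * (Delta + 1) * (2 * (Delta - 1) + 6)   # 42 cycles per sampled pixel
fusing = 4 * N_h * N_v / (n_h * n_v)                   # 100 cycles per sampled pixel
N_i = int(total - sampling - fusing)                   # cycles left for the metrics
print(N_i)   # 269
```

Sampling roughly one pixel in 25 (Δ = 5 in each direction) is what turns a 32-cycle budget into a 269-cycle one.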
Although an embodiment of the invention and its advantages are described in detail, a person skilled in the art could make various alterations, additions, and omissions without departing from the spirit and scope of the present invention as defined by the appended claims.

Representative Drawing
A single figure which represents the drawing illustrating the invention.

Administrative Statuses

For a better understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Statuses, Maintenance Fees and Payment History, should be consulted.

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2001-04-13
(87) PCT Publication Date 2001-11-08
(85) National Entry 2002-09-27
Request for Examination 2006-03-29
Dead Application 2009-12-11

Abandonment History

Abandonment Date Reason Reinstatement Date
2008-12-11 R30(2) - Failure to Respond

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Registration of a document $100.00 2002-09-27
Application Fee $300.00 2002-09-27
Maintenance Fee - Application - New Act 2 2003-04-14 $100.00 2003-04-14
Maintenance Fee - Application - New Act 3 2004-04-13 $100.00 2004-04-13
Maintenance Fee - Application - New Act 4 2005-04-13 $100.00 2005-04-11
Request for Examination $800.00 2006-03-29
Maintenance Fee - Application - New Act 5 2006-04-13 $200.00 2006-04-10
Maintenance Fee - Application - New Act 6 2007-04-13 $200.00 2007-04-02
Maintenance Fee - Application - New Act 7 2008-04-14 $200.00 2008-04-14
Maintenance Fee - Application - New Act 8 2009-04-14 $200.00 2009-04-14
Registration of a document $100.00 2012-09-17
Registration of a document $100.00 2013-08-22
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
L-3 COMMUNICATIONS CORPORATION
Past Owners on Record
LITTON SYSTEMS, INC.
NORTHROP GRUMMAN GUIDANCE AND ELECTRONICS COMPANY, INC.
OSTROMEK, TIMOTHY
Past owners that do not appear in the "Owners on Record" list will appear in other documentation within the application.
Documents



Document Description Date (yyyy-mm-dd) Number of Pages Image Size (KB)
Representative Drawing 2003-01-23 1 7
Cover Page 2003-01-23 1 45
Abstract 2002-09-27 1 65
Claims 2002-09-27 5 98
Drawings 2002-09-27 2 39
Description 2002-09-27 10 335
PCT 2002-09-27 2 79
Assignment 2002-09-27 8 247
PCT 2003-03-10 1 37
Fees 2003-04-14 1 33
PCT 2002-09-28 3 145
Fees 2004-04-13 1 33
Fees 2005-04-11 1 30
Prosecution-Amendment 2006-03-29 1 40
Fees 2006-04-10 1 38
Prosecution-Amendment 2006-11-29 1 27
Fees 2007-04-02 1 37
Prosecution-Amendment 2008-06-11 3 71
Fees 2008-04-14 1 44
Correspondence 2012-10-24 1 18
Assignment 2012-09-17 15 1,231
Assignment 2013-01-02 2 58
Correspondence 2013-01-15 1 16
Assignment 2013-08-22 2 56