Patent Summary 2822150

(12) Patent: (11) CA 2822150
(54) French title: PROCEDE ET SYSTEME POUR FUSIONNER DE MULTIPLES IMAGES
(54) English title: METHOD AND SYSTEM FOR FUSING MULTIPLE IMAGES
Status: Granted and Issued
Bibliographic Data
Abstract


A method and system is provided for combining information from a plurality of source images to form a fused image. The fused image is generated by the combination of the source images based on both local features and global features computed from the source images. Local features are computed for local regions in each source image. For each source image, the computed local features are further processed to form a local weight matrix. Global features are computed for the source images. For each source image, the computed global features are further processed to form a global weight vector. For each source image, its corresponding local weight matrix and its corresponding global weight vector are combined to form a final weight matrix. The source images are then weighted by the final weight matrices to generate the fused image.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
I claim:
1. A method of producing a fused image, comprising the steps of:
a) receiving, at a computing device, a plurality of source images;
b) determining, at the computing device, a local feature matrix for each of the source images, for a feature of each of the source images;
c) determining, at the computing device, a local weight matrix for each of the source images, using the local feature matrix associated with the source image;
d) determining, at the computing device, a global weight vector for each of the source images;
e) determining, at the computing device, a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and
f) using the final weight matrices for the source images to combine the source images into a fused image.
2. The method of claim 1 wherein in step b) a plurality of scales are included when determining the local feature matrix for each of the source images.
3. The method of claim 2 wherein in step b) a local contrast value is determined for each of the source images.
4. The method of claim 3, wherein the local contrast value is modulated by a sigmoid-shaped function.
5. The method of claim 3 wherein each local contrast value has a magnitude, and the magnitude is used to determine the local feature matrix.
6. The method of claim 5 wherein in step c), $\hat{C}_k^n$ denotes a local feature matrix for the kth source image at the nth scale, and $\hat{C}_k^n$ is expressed as:

$$[\hat{C}_k^n]_{ind} = \begin{cases} [\tilde{C}_k^n]_{ind}, & n = 0; \\ \mathrm{func}\left([\tilde{C}_k^n]_{ind},\, \big[[\hat{C}_k^{n-1}]_{\downarrow f}\big]_{ind}\right), & 1 \le n \le N_c - 1; \end{cases}$$

wherein $N_c$ is a predefined number of scales used in determining the local feature matrices; $\tilde{C}_k^n$ is a matrix of the magnitudes of the local contrast values; $[M]_{ind}$ denotes an element of a matrix $M$, where the location of the element in the matrix $M$ is indicated by an index vector $ind$; $[M]_{\downarrow f}$ denotes an operator of down-sampling a matrix $M$ by a factor $f$ in each dimension; and $\mathrm{func}(\cdot,\cdot)$ is a set function.
7. The method of claim 6 wherein $\mathrm{func}(\cdot,\cdot)$ is a maximum function for selection of the maximum of first and second input values, so that $\hat{C}_k^n$ is expressed as:

$$[\hat{C}_k^n]_{ind} = \begin{cases} [\tilde{C}_k^n]_{ind}, & n = 0; \\ \max\left([\tilde{C}_k^n]_{ind},\, \big[[\hat{C}_k^{n-1}]_{\downarrow f}\big]_{ind}\right), & 1 \le n \le N_c - 1. \end{cases}$$
8. The method of claim 3 wherein in step b) the local contrast values are expressed as:

$$[C_k^n]_{ind} = \begin{cases} [\psi \otimes G_k^n]_{ind}, & \text{if } [\phi \otimes G_k^n]_{ind} < \theta; \\ [\psi \otimes G_k^n]_{ind} \,/\, [\phi \otimes G_k^n]_{ind}, & \text{otherwise;} \end{cases}$$

wherein $C_k^n$ denotes the matrix of local contrast values determined for a kth source image $I_k$ at a nth scale; $[M]_{ind}$ denotes an element of a matrix $M$, where the location of the element in the matrix $M$ is indicated by an index vector $ind$; $\psi$ denotes a filter; $\phi$ denotes a low-pass filter; $\otimes$ denotes an operator of convolution; $G_k^n$ denotes the nth scale of the kth source image; and $\theta$ is a free parameter.
9. The method of claim 3 wherein color saturation is used with the local contrast values.

10. The method of claim 1, wherein in step c) a plurality of scales are included when determining the local weight matrix for each of the source images.
11. The method of claim 10 wherein a local similarity matrix is constructed to store similarity values between adjacent image regions.
12. The method of claim 11 wherein a local similarity pyramid is constructed for each local similarity matrix.
13. The method of claim 12 wherein a set function is used to compute the local similarity pyramids.
14. The method of claim 12 wherein a binary function is used to compute the local similarity pyramids.
15. The method of claim 14 wherein the binary function is:

$$[W_d^{n+1}]_{ind} = \min\left([W_d^n]_{2*ind},\, [W_d^n]^d_{2*ind}\right);$$

wherein $W_d^n$ denotes the similarity matrix for the direction $d$ at the nth scale; $[M]_{ind}$ denotes an element of a matrix $M$, where a location of this element in the matrix $M$ is indicated by an index vector $ind$; $[M]^d_{ind}$ denotes a matrix element that is closest to the matrix element $[M]_{ind}$ in the direction $d$; and $*$ denotes the operator of multiplication.
16. The method of claim 1 wherein in step d):
d.1) at least a global feature is determined for at least one source image;
d.2) the global feature is transformed using a pre-defined function; and
d.3) a global weight vector is constructed for each source image using the transformed global feature.
17. The method of claim 1 wherein in step e) the local weight matrix and global weight vector for each source image are combined using a predetermined combination function.
18. The method of claim 16 wherein the global feature is an average luminance of all of the source images.
19. The method of claim 18 wherein the average luminance, $\bar{L}$, is transformed to obtain two global weights denoted by $V_0$ and $V_1$, by two non-linear functions:

$$V_0 = \alpha_0 + \beta_0 \exp(-\bar{L})$$
$$V_1 = \alpha_1 + \beta_1 \exp(-\bar{L})$$

wherein $\alpha_0$, $\alpha_1$, $\beta_0$, and $\beta_1$ are free parameters.
20. The method of claim 19 wherein the global weight vector $V_k$ for the kth source image is constructed by concatenating $V_0$ and $V_1$, so that $V_k = (V_0, V_1)$.
21. The method of claim 19 wherein a final weight matrix $X_k$ is computed by combining the global weight vectors ($V_k$'s) and the local weight matrices ($U_k$'s) using:

$$X_k = [V_k]_0\, U_k - [V_k]_1$$

wherein $[T]_i$ denotes the ith element in a vector $T$.
22. The method of claim 16 wherein the global feature is an average luminance of each source image.
23. The method of claim 22 wherein a global weight, $V_{2,k}$, is calculated by:

$$V_{2,k} = \begin{cases} \delta, & \text{if } \bar{L}_k > \eta; \\ 0, & \text{otherwise;} \end{cases}$$

wherein $\bar{L}_k$ denotes the average luminance of the kth source image, and $\delta$ and $\eta$ are free parameters.

24. The method of claim 23 wherein the global weight vector $V_k$ for the kth source image is constructed by concatenating $V_0$, $V_1$, and $V_{2,k}$, so that $V_k = (V_0, V_1, V_{2,k})$, wherein $V_0$ and $V_1$ are global weights determined by two non-linear functions:

$$V_0 = \alpha_0 + \beta_0 \exp(-\bar{L})$$
$$V_1 = \alpha_1 + \beta_1 \exp(-\bar{L})$$

and $\alpha_0$, $\alpha_1$, $\beta_0$, and $\beta_1$ are free parameters.
25. The method of claim 24 wherein a final weight matrix $X_k$ is computed by combining the global weight vectors ($V_k$'s) and the local weight matrices ($U_k$'s) using:

$$X_k = [V_k]_0\, U_k - [V_k]_1 + [V_k]_2$$

and wherein $[T]_i$ denotes the ith element in a vector $T$.
26. A system for fusing images, comprising:
a) a computing device having a processor and a memory; the memory configured to store a plurality of source images;
b) means for fusing an image, comprising:
i. means for determining a local feature matrix for each of the source images, for a feature of each of the source images;
ii. means for determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image;
iii. means for determining a global weight vector for each of the source images;
iv. means for determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and
v. means for using the final weight matrices for the source images to combine the source images into a fused image.
27. The system of claim 26 further comprising an image source for providing the source images.
28. The system of claim 27 wherein the image source is an image sensor.
29. The system of claim 28 wherein the means for fusing an image is stored within processor executable instructions of the computing device.
30. The system of claim 28 wherein the means for fusing an image are within a software module.
31. The system of claim 28 wherein the means for fusing an image are within a hardware module in communication with the processor.

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHOD AND SYSTEM FOR FUSING MULTIPLE IMAGES
FIELD OF THE INVENTION
[0001] The present invention relates to the field of image processing, and in particular to a method and system for combining information from a plurality of images to form a single image.
BACKGROUND OF THE INVENTION
[0002] Image fusion has been found useful in many applications. A single recorded image from an image sensor may contain insufficient details of a scene due to the incompatibility between the image sensor's capture range and the characteristics of the scene. For example, because a natural scene can have a high dynamic range (HDR) that exceeds the dynamic range of an image sensor, a single recorded image is likely to exhibit under- or over-exposure in some regions, which leads to detail loss in those regions. Image fusion can solve such problems by combining local details from a plurality of images recorded by an image sensor under different settings of an imaging device, such as under different exposure settings, or from a plurality of images recorded by different image sensors, each of which captures some but not all characteristics of the scene.
[0003] One type of image fusion method known in the art is based on multi-scale decomposition (MSD). Two types of commonly used MSD schemes include pyramid transform, such as Laplacian pyramid transform, and wavelet transform, such as discrete wavelet transform. Images are decomposed into multi-scale representations (MSRs), each of which contains an approximation scale generated by low-pass filtering and one or more detail scales generated by high-pass or band-pass filtering. The fused image is reconstructed by inverse MSD from a combined MSR.
[0004] Another type of image fusion method known in the art computes local features at the original image scale, and then, by solving an optimization problem, generates the fused image or the fusion weights, which are to be used as weighting factors when the images are linearly combined. Another type of image fusion method divides images into blocks and generates a fused image by optimizing one or more criteria within each block.
[0005] Another type of method that can achieve a similar effect as image fusion methods do when fusing images taken under different exposure settings is the two-phase procedure of HDR reconstruction and tone mapping. An HDR image is reconstructed from the input images, and then the dynamic range of this HDR image is compressed in the tone mapping phase. However, the above types of methods may impose high spatial computational cost and/or high temporal computational cost, or introduce artifacts into a fused image due to non-linear transformations of pixel values or due to operations performed only in small local regions.

[0006] Accordingly, what is needed is a method and system that effectively and efficiently combines useful information from images, especially in the case of fusing images taken under different exposure settings.
SUMMARY OF THE INVENTION
[0007] A method of producing a fused image is provided, including the steps of: providing a plurality of source images; determining a local feature matrix for each of the source images, for a feature of each of the source images; determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image; determining a global weight vector for each of the source images; determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and using the final weight matrices for the source images to combine the source images into a fused image.

[0008] A plurality of scales may be included when determining the local feature matrix for each of the source images. A local contrast value may be determined for each of the source images, the local contrast value modulated by a sigmoid-shaped function. Each local contrast value has a magnitude, and the magnitude may be used to determine the local feature matrix.
[0009] Color saturation may be used with the local contrast values. A plurality of scales may be included when determining the local weight matrix for each of the source images. A local similarity matrix may be constructed to store similarity values between adjacent image regions. A local similarity pyramid may be constructed for each local similarity matrix. A binary function or set function may be used to compute the local similarity pyramids.

[0010] One or more global features may be determined for at least one source image; the global feature may be transformed using a pre-defined function; and a global weight vector may be constructed for each source image using the transformed global feature.

[0011] The local weight matrix and the global weight vector for each source image may be combined using a predetermined combination function. The global feature may be an average luminance of all of the source images or an average luminance of each source image.
[0012] A system for fusing images is provided, including: a computing device having a processor and a memory; the memory configured to store a plurality of source images; means for fusing an image, having: means for determining a local feature matrix for each of the source images, for a feature of each of the source images; means for determining a local weight matrix for each of the source images, using the local feature matrix associated with the source image; means for determining a global weight vector for each of the source images; means for determining a final weight matrix for each of the source images, using the local weight matrix and global weight vector associated with the source image; and means for using the final weight matrices for the source images to combine the source images into a fused image.

[0013] An image source may provide the source images. The image source may be an image sensor. The means for fusing an image may be stored within processor executable instructions of the computing device. The means for fusing an image may be within a software module or a hardware module in communication with the processor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a flow chart illustrating the procedure of combining a plurality of images into a single fused image according to an embodiment of the invention.

[0015] FIG. 2 is a flow chart illustrating the procedure of computing local feature matrices according to an embodiment of the invention.

[0016] FIG. 3 is a flow chart illustrating the procedure of computing local weight matrices according to an embodiment of the invention.

[0017] FIG. 4 is a flow chart illustrating the procedure of computing global weight vectors according to an embodiment of the invention.

[0018] FIG. 5 is a block diagram illustrating an image fusion system according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] FIG. 1 illustrates the procedure 100 of combining a plurality of source images into a single fused image. An "image" herein refers to a matrix of image elements. A matrix can be a one-dimensional (1D) matrix or a multi-dimensional matrix. Examples of an image element include but are not limited to a pixel in the 1D case, a pixel in the two-dimensional (2D) case, a voxel in the three-dimensional (3D) case, and a doxel, or dynamic voxel, in the four-dimensional (4D) case. A "source image" herein refers to an image to be inputted to an image fusion procedure. A "fused image" herein refers to an image that is the output of an image fusion procedure.

[0020] For the ease of exposition, the description hereafter generally directs to the case wherein source images are taken under different exposure settings, and the source images are 2D images. However, with no or minimal modifications, the method and system according to the invention can be applied in cases in which source images are taken under other settings or by different imaging devices, such as images taken under different focus settings, images taken by different medical imaging devices, and images taken by one or more multispectral imaging devices; or in cases in which source images are in dimensions other than 2D.
[0021] In step 105, a plurality of source images is obtained from one or more image sensors or from one or more image storage devices. The source images are of the same size; if not, they can be scaled to the same size. In step 110, for each source image, a local feature matrix is computed. A local feature herein represents a certain characteristic of an image region, such as the brightness in an image region or the color variation in an image region. An "image region" herein refers to a single image element or a group of image elements. Each element of a local feature matrix is a numerical value that represents a local feature. In step 115, for each source image, a local weight matrix is computed using the local feature matrices from step 110. A "weight matrix" herein refers to a matrix, in which each element is a numerical value that corresponds to an image region in a source image and determines the amount of contribution from that image region to a fused image. In step 120, for each source image, a global weight vector is computed. In step 125, for each source image, its local weight matrix and its global weight vector are combined to form a final weight matrix.
[0022] In step 130, the fused image is generated by combining the source images based on the final weight matrices. Such combination can be performed as a weighted average of the source images using the final weight matrices as weighting factors. Let $K$ denote the number of source images, where $K \ge 2$. Let $I_k$ denote the kth source image, where $0 \le k \le K-1$. Let $X_k$ denote the kth final weight matrix, which is of the same size as the source images. Let $\hat{I}$ denote the fused image. Let $\odot$ denote the operator of element-wise multiplication. Then, the weighted average for forming the fused image can be expressed using the following equation:

$$\hat{I} = \sum_{k=0}^{K-1} X_k \odot I_k \qquad (1)$$

If the value of an image element in the fused image exceeds a pre-defined dynamic range, it can be either scaled or truncated to meet that range.
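As a minimal sketch (not part of the original patent text) of the weighted combination in Equation 1: the function and variable names below are hypothetical, and it assumes the source images and final weight matrices are float arrays of identical shape with values normalized to [0, 1].

```python
import numpy as np

def combine(sources, weights):
    """Weighted average of the source images (Equation 1)."""
    fused = np.zeros_like(sources[0], dtype=np.float64)
    for img, w in zip(sources, weights):
        fused += w * img  # element-wise multiplication, then accumulation
    # Truncate elements that exceed the pre-defined dynamic range.
    return np.clip(fused, 0.0, 1.0)
```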
[0023] Although in FIG. 1, step 120 is depicted to be performed after step 110 and step 115, it can be performed before step 115 or before step 110, or in parallel with step 110 or step 115 or with both.
[0024] Computing Local Feature Matrices

[0025] FIG. 2 further illustrates step 110 of computing local feature matrices. One or more local features can be considered, depending on the characteristics of the source images and on the individual application scenarios. Examples of a local feature include but are not limited to local contrast in an image region, color saturation in an image region, hue in an image region, brightness/luminance in an image region, color contrast in an image region, average local contrast in an image region, average color saturation in an image region, average hue in an image region, average luminance in an image region, average color contrast in an image region, variation of local contrast in an image region, variation of color saturation in an image region, hue variation in an image region, luminance variation in an image region, variation of color contrast in an image region, and color variation in an image region. The local feature computation (step 110) is performed in a multi-scale fashion, which captures local features at different scales. In addition, because the total number of image elements in source images can be very large, local feature matrices computed in a multi-scale fashion also help to reduce computational cost in subsequent processing. An alternative is to perform the local feature computation only at a single scale, but this excludes the benefit of using local features from different scales and may incur higher computational cost in subsequent processing.
[0026] The computation of local feature matrices is first performed for each source image at its original scale, with reference to step 205. When a single local feature is considered, there is only one local feature matrix computed for each source image in step 205. When multiple local features are considered, a local feature matrix is initially computed for each local feature in each source image in step 205. In step 210, a coarser-scale image is computed for each source image. In step 215, a single local feature matrix (in the case that one local feature is considered) or multiple local feature matrices (in the case that multiple local features are considered) are computed at the current coarser scale. In step 220, the current coarser-scale local feature matrix or matrices are updated using information from those at the previous finer scale. Steps 210, 215, and 220 are repeated until a pre-defined or pre-computed number of scales are reached in step 225. Step 230 checks whether multiple local features are considered. If multiple local features are considered, then, for each source image, its multiple local feature matrices at the current coarsest scale are combined into a single local feature matrix, as depicted in step 235. Finally, the local feature matrices at the current coarsest scale are normalized in step 240.
[0027] For example, in an embodiment of the invention, step 110, the step of computing local feature matrices, can be performed as follows, considering a single local feature.

[0028] Local contrast is used as the single local feature, which is applicable to both color and grayscale source images. Local contrast represents the local luminance variation with respect to the surrounding luminance, and local details are normally associated with local variations. Therefore, taking local contrast in the luminance channel as a local feature helps to preserve local details. Alternatively, local luminance variation can be considered alone without taking into account the surrounding luminance. However, in this way, image regions with the same amount of local luminance variation are treated equally regardless of their surrounding luminance. Therefore, some image regions with high local contrast may not be effectively differentiated from other image regions with low local contrast, which may impair the quality of the fused image. Local contrast is normally defined as the ratio between the band-pass or high-pass filtered image and the low-pass filtered image. However, under such a definition, under-exposed image regions, which are normally noisy, may produce stronger responses than well-exposed image regions. This makes under-exposed image regions contribute more to the fused image and reduces the overall brightness. Thus, if the response from the low-pass filter in an image region is below a threshold $\theta$, the response from the band-pass or high-pass filter in that image region, instead of the ratio, is taken as the local contrast value in order to suppress noise. This computation of local contrast values is performed for each source image at its original scale, with reference to step 205.
[0029] For a grayscale image, its intensities or luminance values are normalized to the range [0,1], and then the computation of local contrast values is directly performed on the normalized luminance values. For a color image, a grayscale or luminance image can be extracted, and then the computation of local contrast values is performed on this extracted grayscale image. There are various known methods to extract the luminance image from a color image, such as converting the color image to the LHS (luminance, hue, saturation) color space and then taking the "L" components.
[0030] Let $C_k$ denote the matrix of local contrast values computed for the kth source image $I_k$ at its original scale, i.e., the 0th scale. Let $[M]_{ind}$ denote an element of a matrix $M$, where the location of this element in the matrix $M$ is indicated by an index vector $ind$. For example, in the 2D case, $ind$ can be expressed as $ind = (i, j)$, and then $[M]_{i,j}$ represents the element in the ith row and jth column in a matrix $M$. Let $\psi$ denote a band-pass or high-pass filter, such as the Laplacian filter. Let $\phi$ denote a low-pass filter, such as the Gaussian filter. Let $\otimes$ denote the operator of convolution. Then, the computation of local contrast values in step 205 can be expressed as the following equation:

$$[C_k]_{ind} = \begin{cases} [\psi \otimes I_k]_{ind}, & \text{if } [\phi \otimes I_k]_{ind} < \theta; \\ [\psi \otimes I_k]_{ind} \,/\, [\phi \otimes I_k]_{ind}, & \text{otherwise.} \end{cases} \qquad (2)$$

[0031] The response of a band-pass filter can be approximated by the difference between the original image and the response of a low-pass filter. Therefore, the computation of local contrast values in step 205 can also be expressed as:

$$[C_k]_{ind} = \begin{cases} [I_k]_{ind} - [\phi \otimes I_k]_{ind}, & \text{if } [\phi \otimes I_k]_{ind} < \theta; \\ \left([I_k]_{ind} - [\phi \otimes I_k]_{ind}\right) / [\phi \otimes I_k]_{ind}, & \text{otherwise.} \end{cases} \qquad (3)$$

The matrix of the magnitudes of the local contrast values, denoted by $\tilde{C}_k$, can be taken as one local feature matrix at the original scale.
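A minimal sketch (not from the patent) of the thresholded local contrast, using the difference-from-low-pass approximation of Equation 3; the values of `sigma` and `theta` are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast(lum, sigma=2.0, theta=0.02):
    """Equation 3: where the low-pass response falls below theta, keep
    the band-pass response itself; elsewhere, take the ratio."""
    low = gaussian_filter(lum, sigma)   # phi convolved with I_k (low-pass)
    band = lum - low                    # approximate band-pass response
    # np.where evaluates both branches, so guard the division.
    return np.where(low < theta, band, band / np.maximum(low, 1e-12))
```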
[0032] In step 210, a coarser-scale image of each source image is computed by downsampling the image by a pre-defined factor $f$ in each dimension. Normally, $f$ is taken to be two (2) or a power of two (2). For example, when $f = 2$, the downsampling can be performed as filtering the image with a low-pass filter and then selecting every other element in each dimension, or as directly selecting every other element in each dimension. Let $N_c$ denote the pre-defined number of scales used in computing local feature matrices, where $N_c \ge 2$. Let $G_k^n$ denote the nth scale of the kth source image, where $0 \le n \le N_c - 1$. Let $[M]_{\downarrow f}$ denote the operator of downsampling a matrix $M$ by a factor $f$ in each dimension. Then, computing a coarser scale of each source image can be expressed by the following equation:

$$G_k^n = \begin{cases} I_k, & n = 0; \\ [G_k^{n-1}]_{\downarrow f}, & 1 \le n \le N_c - 1. \end{cases} \qquad (4)$$
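A short sketch (not from the patent) of Equation 4 with $f = 2$: low-pass filter, then keep every other element in each dimension. The Gaussian width is an assumed value.

```python
from scipy.ndimage import gaussian_filter

def image_pyramid(img, n_scales, sigma=1.0):
    """G_k^0 .. G_k^{N_c - 1}: repeated filter-then-decimate (Equation 4)."""
    scales = [img]
    for _ in range(n_scales - 1):
        smoothed = gaussian_filter(scales[-1], sigma)
        scales.append(smoothed[::2, ::2])  # select every other element
    return scales
```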
[0033] In step 215, the same local feature computation performed in step 205 is performed, except that now the computation is performed at a coarser scale of each source image. Let $C_k^n$ denote the matrix of local contrast values computed for the kth source image $I_k$ at its nth scale. Then, $C_k^n$ can be computed using the following equation:

$$[C_k^n]_{ind} = \begin{cases} [\psi \otimes G_k^n]_{ind}, & \text{if } [\phi \otimes G_k^n]_{ind} < \theta; \\ [\psi \otimes G_k^n]_{ind} \,/\, [\phi \otimes G_k^n]_{ind}, & \text{otherwise.} \end{cases} \qquad (5)$$

[0034] The response of a band-pass filter can be approximated by the difference between the original image and the response of a low-pass filter. Therefore, the computation of local contrast values in step 215 can also be expressed as:

$$[C_k^n]_{ind} = \begin{cases} [G_k^n]_{ind} - [\phi \otimes G_k^n]_{ind}, & \text{if } [\phi \otimes G_k^n]_{ind} < \theta; \\ \left([G_k^n]_{ind} - [\phi \otimes G_k^n]_{ind}\right) / [\phi \otimes G_k^n]_{ind}, & \text{otherwise.} \end{cases} \qquad (6)$$

The matrix of the magnitudes of the local contrast values at the nth scale, denoted by $\tilde{C}_k^n$, can be taken as one local feature matrix at the nth scale, where $0 \le n \le N_c - 1$.
[0035] In steps 205 and 215, an alternative to directly taking the matrix $\tilde{C}_k^n$ of the magnitudes of the local contrast values as a local feature matrix at the nth scale is taking a modulated version of $\tilde{C}_k^n$, in order to further suppress noise in image regions with high luminance variations. This modulation can be performed by applying a sigmoid-shaped function to $\tilde{C}_k^n$. One such sigmoid-shaped function is the logistic psychometric function proposed in García-Pérez and Alcalá-Quintana 2007 (The transducer model for contrast detection and discrimination: formal relations, implications, and an empirical test, Spatial Vision, vol. 20, nos. 1-2, pp. 5-43, 2007), in which case $\tilde{C}_k^n$ is modulated by the following equation:

$$[\tilde{C}_k^n]_{ind} = 0.5 + \frac{0.5}{1 + \exp\left(-\left(\log\left([\tilde{C}_k^n]_{ind}\right) + 1.5\right) / 0.11\right)} \qquad (7)$$

where $\exp(\cdot)$ is an exponential function and $\log(\cdot)$ is a logarithmic function.
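A one-line rendering (not from the patent) of Equation 7; the small epsilon added inside the logarithm guards against zero magnitudes and is my addition.

```python
import numpy as np

def modulate(mag, eps=1e-12):
    """Logistic modulation of contrast magnitudes (Equation 7)."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(np.log(mag + eps) + 1.5) / 0.11))
```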

[0036] In order to obtain the best representative information from finer scales, $\tilde{C}_k^n$ is updated using the information from $\hat{C}_k^{n-1}$ for any $n \ge 1$, as depicted in step 220. This information update can be performed using a set function. A "set function" herein refers to a function, the input of which is a set of variables. When the input set to the set function has two elements, a binary function can be used. A "binary function" herein refers to a set function, the input of which is an ordered pair of variables. Let $\hat{C}_k^n$ denote the updated matrix of the magnitudes of the local contrast values at the nth scale. $\hat{C}_k^n$ can be taken as one local feature matrix at the nth scale, where $0 \le n \le N_c - 1$. Let $\mathrm{func}(\cdot,\cdot)$ denote a set function. Then, $\hat{C}_k^n$ can be expressed using the following equation:

$$[\hat{C}_k^n]_{ind} = \begin{cases} [\tilde{C}_k^n]_{ind}, & n = 0; \\ \mathrm{func}\left([\tilde{C}_k^n]_{ind},\, \big[[\hat{C}_k^{n-1}]_{\downarrow f}\big]_{ind}\right), & 1 \le n \le N_c - 1. \end{cases} \qquad (8)$$

For example, $\mathrm{func}(\cdot,\cdot)$ can be a maximum function, which is to choose the maximum from two input values. Then, $\hat{C}_k^n$ can be expressed using the following equation:

$$[\hat{C}_k^n]_{ind} = \begin{cases} [\tilde{C}_k^n]_{ind}, & n = 0; \\ \max\left([\tilde{C}_k^n]_{ind},\, \big[[\hat{C}_k^{n-1}]_{\downarrow f}\big]_{ind}\right), & 1 \le n \le N_c - 1. \end{cases} \qquad (9)$$
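A sketch (not from the patent) of Equations 8-9 with func = max. It assumes the magnitude matrices were produced by the same factor-2 decimation as the image pyramid, so the down-sampled previous scale lines up element-for-element with the current one.

```python
import numpy as np

def update_features(mags):
    """Carry the strongest finer-scale response down the pyramid
    (Equation 9). mags[n] is the magnitude matrix at scale n."""
    updated = [mags[0]]
    for n in range(1, len(mags)):
        prev = updated[-1][::2, ::2]   # previous updated scale, down-sampled by 2
        updated.append(np.maximum(mags[n], prev))
    return updated
```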
[0037] Steps 210, 215, and 220 are repeated until $N_c$, a pre-defined number of scales, is reached (step 225). Because a single local feature is considered, after step 230, step 240 is performed. The normalization is an element-wise normalization across all local feature matrices at the current coarsest scale. Let $Y_k^n$ denote the normalized local feature matrix associated with the nth scale of the kth source image. Then, $Y_k^{N_c-1}$ denotes the output from step 240 for the kth source image. The computation of $Y_k^{N_c-1}$ can be expressed using the following equation:

$$[Y_k^{N_c-1}]_{ind} = \frac{[\hat{C}_k^{N_c-1}]_{ind}}{\sum_{m=0}^{K-1} [\hat{C}_m^{N_c-1}]_{ind}} \qquad (10)$$
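The element-wise normalization of Equation 10 is a one-liner; here is a sketch (not from the patent), with an epsilon guard of my own against an all-zero denominator.

```python
import numpy as np

def normalize_across_images(mats, eps=1e-12):
    """Divide each matrix by the element-wise sum over all K matrices
    (Equation 10), so the weights at each position sum to one."""
    total = np.sum(mats, axis=0) + eps
    return [m / total for m in mats]
```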
[0038] As another example, in an alternative embodiment, step 110, the step of computing local feature matrices, can be performed as follows, considering two local features.

[0039] For color source images, two local features can be considered: local contrast and color saturation. Since local contrast only works in the luminance channel, using local contrast alone may not produce satisfactory results for color images in some cases, for example where high local contrast is achieved at the cost of low colorfulness or color saturation. Objects captured at proper exposures normally exhibit more saturated colors. Therefore, color saturation can be used as another local feature complementary to local contrast for color source images.

[0040] The computation of local contrast values in steps 205, 210, 215, 220, and 225 is the same as that described in the previous embodiment, where a single local feature is considered in step 110 of computing local feature matrices. The computation of color saturation values in steps 205, 210, 215, 220, and 225 and the computation in steps 230, 235, and 240 are described below.
[0041] Let $S_k^n$ denote the matrix of color saturation values computed for the kth source image $I_k$ at the nth scale. Let $Rd_k^n$, $Gr_k^n$, and $Bl_k^n$ denote the red channel, the green channel, and the blue channel of the kth source image $I_k$ at the nth scale, respectively. Then, in steps 205 and 215, $S_k^n$ can be computed for each source image following the color saturation definition in the LHS color space, as expressed in the following equation:

$$[S_k^n]_{ind} = 1 - \frac{\min\left([Rd_k^n]_{ind},\, [Gr_k^n]_{ind},\, [Bl_k^n]_{ind}\right)}{\left([Rd_k^n]_{ind} + [Gr_k^n]_{ind} + [Bl_k^n]_{ind}\right)/3} \qquad (11)$$

Other alternatives include but are not limited to the definition of color saturation in the HSV (hue, saturation, value) color space and the definition of color saturation in the CIELAB (International Commission on Illumination 1976 L*a*b*) color space.
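A sketch (not from the patent) of Equation 11, the LHS-style saturation, for an (H, W, 3) RGB float array; the epsilon guard for black pixels is my addition.

```python
import numpy as np

def color_saturation(rgb, eps=1e-12):
    """1 - min(R, G, B) / mean(R, G, B) per pixel (Equation 11)."""
    return 1.0 - rgb.min(axis=2) / np.maximum(rgb.mean(axis=2), eps)
```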
[0042] In step 210, a coarser-scale image of each source image is computed in the same way as described in the previous embodiment. Steps 210, 215, and 220 are repeated until $N_c$, a pre-defined number of scales, is reached in step 225. Because two local features are considered, after step 230, step 235 is performed. The two local feature matrices for each source image at the coarsest scale are combined using a binary function $\mathrm{comb}(\cdot,\cdot)$. For example, $\mathrm{comb}(\cdot,\cdot)$ can be the multiplication function. Let $Cf_k^{N_c-1}$ denote the combined local feature matrix for the kth source image at the $(N_c-1)$th scale. Then, $Cf_k^{N_c-1}$ can be computed using the following equation:

$$[Cf_k^{N_c-1}]_{ind} = \mathrm{comb}\left([\hat{C}_k^{N_c-1}]_{ind},\, [S_k^{N_c-1}]_{ind}\right) \qquad (12)$$

[0043] In step 240, element-wise normalization is performed on the combined local feature matrices. The normalized local feature matrix $Y_k^{N_c-1}$ associated with the $(N_c-1)$th scale of the kth source image can be computed using the following equation:

$$[Y_k^{N_c-1}]_{ind} = \frac{[Cf_k^{N_c-1}]_{ind}}{\sum_{m=0}^{K-1} [Cf_m^{N_c-1}]_{ind}} \qquad (13)$$
[0044] Computing Local Weight Matrices

[0045] FIG. 3 further illustrates step 115 of computing local weight matrices. The computation is performed in a hierarchical manner in order to achieve higher computational speed with lower memory usage. An alternative is to perform the computation only at a single scale without the hierarchy, but this may incur higher computational cost. In step 305, one or more local similarity matrices are computed at the original scale (i.e., the 0th scale). Each element in a local similarity matrix is a similarity value, which represents the degree of similarity between adjacent image regions in the source images.

[0046] In step 310, for each local similarity matrix computed from step 305, a local similarity pyramid is constructed. The local similarity matrix at a coarser scale of a local similarity pyramid is computed by reducing its previous finer scale. Although standard downsampling schemes can be used, a reduction scheme that respects boundaries between dissimilar image regions in the source images is preferred, as will be described below. The local similarity pyramids have the same height (i.e., number of scales), which can be pre-defined or pre-computed. The height of the local similarity pyramids, denoted by $N_s$, is larger than $N_c$, the pre-defined number of scales used in step 110 of computing local feature matrices. A local similarity matrix at the nth scale of a local similarity pyramid corresponds to the local feature matrices at the nth scale, where $0 \le n \le N_c - 1$.
[0047] In step 315, the local feature matrices at scale $N_c - 1$ from step 110 are further reduced to a coarser scale taking into account the local similarity matrix or matrices. In step 315, a local feature matrix at the current scale is first updated based on the local similarity matrix or matrices at the same scale, and is then reduced in spatial resolution. Although standard downsampling schemes can be used, the purpose of using the local similarity matrix or matrices in step 315 is to respect boundaries between dissimilar image regions in the source images. An element in a coarser-scale local feature matrix can be computed as a weighted average of its corresponding matrix elements in the finer-scale local feature matrix. The weighting factors used in such weighted average can be determined based on the local similarity matrix or matrices at the finer scale. Step 315 is repeated until scale $N_s - 1$, the coarsest scale of the local similarity pyramids, is reached (step 320).

[0048] In step 325, the local feature matrices at the current scale are smoothed. Smoothed local feature matrices help to remove unnatural seams in the fused image. For example, the smoothing can be performed by applying one or more of the following schemes: a low-pass filter, such as a Gaussian filter; a relaxation scheme, such as Gauss-Seidel relaxation; or an edge-preserving filter, such as a bilateral filter. A smoothing scheme that uses the local similarity matrix or matrices may be used, for the same reason as mentioned above for step 315, which is to respect boundaries between dissimilar image regions in the source images. At each scale, a smoothing scheme can be applied zero, one, or more times. Step 330 checks whether the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached. If the 0th scale is not reached, step 335 is performed; otherwise, step 340 is performed.

[0049] In step 335, the local feature matrices at the current scale are expanded to a finer scale, and this finer scale becomes the current scale for subsequent processing. A standard upsampling scheme can be used for this expansion operation, which is to perform interpolation between adjacent matrix elements. In addition, a scheme that employs the local similarity matrix or matrices can be used, in order to respect boundaries between dissimilar image regions in the source images. An element in a finer-scale local feature matrix can be computed as a weighted average of its corresponding matrix elements in the coarser-scale local feature matrix. The weighting factors used in such weighted average can be determined based on the local similarity matrix or matrices at the finer scale. Steps 325 and 335 are repeated until the 0th scale is reached (step 330). In step 340, the finest-scale local feature matrices are normalized to form the local weight matrices.

[0050] For example, in an embodiment of the invention, step 115, the step of computing local weight matrices, can be performed as follows.
[0051] In step 305, each local similarity matrix captures local similarities along one direction. One or more similarity matrices can be used. For $d$-dimensional images, any direction in the $d$-dimensional space can be considered, such as the horizontal direction, the vertical direction, and the diagonal directions. Let $W_d^n$ denote the similarity matrix for the direction $d$ at the nth scale. Let $[M]^d_{ind}$ denote the matrix element that is closest to the matrix element $[M]_{ind}$ in the direction $d$. Then, $W_d^n$ can be computed using the following equation:

$$[W_d^n]_{ind} = \exp\left(-\frac{\sum_{k=0}^{K-1} \mathrm{dist}\left([I_k]_{ind},\, [I_k]^d_{ind}\right)}{\sigma}\right) \qquad (14)$$

where $\mathrm{dist}(\cdot,\cdot)$ denotes a binary function that computes the distance of two input entities, and $\sigma$ is a free parameter. For example, $\mathrm{dist}(\cdot,\cdot)$ can be the function of computing Euclidean distance, the function of computing Manhattan distance, or the function of computing Mahalanobis distance. $W_d^n$ can be computed based on all source images, as expressed in Equation 14, or it can be computed based on some of the source images.
[0052] For the 2D case, two local similarity matrices that capture local similarities along two directions can be considered: a horizontal similarity matrix that stores the similarity values between adjacent image regions along the horizontal direction, and a vertical similarity matrix that stores the similarity values between adjacent image regions along the vertical direction. Let $W_h^n$ denote the horizontal similarity matrix at the nth scale. Then, $W_h^n$ can be computed using the following equation:

$$[W_h^n]_{i,j} = \exp\left(-\frac{\sum_{k=0}^{K-1} \mathrm{dist}\left([I_k]_{i,j},\, [I_k]_{i,j+1}\right)}{\sigma}\right) \qquad (15)$$

Let $W_v^n$ denote the vertical similarity matrix at the nth scale. Then, $W_v^n$ can be computed using the following equation:

$$[W_v^n]_{i,j} = \exp\left(-\frac{\sum_{k=0}^{K-1} \mathrm{dist}\left([I_k]_{i,j},\, [I_k]_{i+1,j}\right)}{\sigma}\right) \qquad (16)$$

The horizontal and vertical similarity matrices can be computed based on all source images, as expressed in Equation 15 and Equation 16, or they can be computed based on some of the source images.
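A sketch (not from the patent) of Equations 15-16 at the original scale, using the squared difference as dist — one of several distances the text allows — over grayscale sources; the value of sigma is illustrative.

```python
import numpy as np

def similarity_matrices(sources, sigma=0.2):
    """Wh[i, j] links (i, j) to (i, j+1); Wv[i, j] links (i, j) to
    (i+1, j). Distances are summed over all K source images."""
    dh = sum((img[:, :-1] - img[:, 1:]) ** 2 for img in sources)
    dv = sum((img[:-1, :] - img[1:, :]) ** 2 for img in sources)
    return np.exp(-dh / sigma), np.exp(-dv / sigma)
```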
[0053] In step 310, for each local similarity matrix computed from step 305, a local similarity pyramid is constructed. A coarser-scale similarity matrix can be computed using a set function. When the input set to the set function has two elements, a binary function can be used. One such binary function can be the minimum function, and then the reduction scheme for reducing local similarity matrices at the nth scale, where $0 \le n \le N_s - 2$, can be expressed in the following equation:

$$[W_d^{n+1}]_{ind} = \min\left([W_d^n]_{2*ind},\, [W_d^n]^d_{2*ind}\right) \qquad (17)$$

where $*$ denotes the operator of multiplication.

[0054] For the 2D case, the horizontal similarity matrix at a coarser scale of the horizontal similarity pyramid and the vertical similarity matrix at a coarser scale of the vertical similarity pyramid can be computed using a binary function. One such binary function can be the minimum function, and then the reduction scheme for reducing local similarity matrices at the nth scale, where $0 \le n \le N_s - 2$, can be expressed in the following equations:

$$[W_h^{n+1}]_{i,j} = \min\left([W_h^n]_{2i,2j},\, [W_h^n]_{2i,2j+1}\right) \qquad (18)$$
$$[W_v^{n+1}]_{i,j} = \min\left([W_v^n]_{2i,2j},\, [W_v^n]_{2i+1,2j}\right) \qquad (19)$$
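A sketch (not from the patent) of the minimum-based reduction of Equations 18-19; the trimming for odd-sized matrices is an implementation detail of mine.

```python
import numpy as np

def reduce_similarity(W, horizontal):
    """Coarser similarity = min of the two finer similarities it spans
    (Equations 18-19)."""
    a = W[::2, ::2]
    b = W[::2, 1::2] if horizontal else W[1::2, ::2]
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    return np.minimum(a[:h, :w], b[:h, :w])
```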
[0055] In step 315, the local feature matrices at scale $N_c - 1$ from step 110 are further reduced to a coarser scale taking into account the local similarity matrices. Rather than the previously described reduction schemes, a modified version of the restriction operator of the multigrid linear system solver in Grady et al. 2005 (A geometric multigrid approach to solving the 2D inhomogeneous Laplace equation with internal Dirichlet boundary conditions, IEEE International Conference on Image Processing, vol. 2, pp. 642-645, 2005) can be used as the reduction scheme for the local feature matrices. This reduction scheme for reducing a local feature matrix $Y_k^n$ at the nth scale, where $N_c - 1 \le n \le N_s - 2$, is expressed in three equations (Equations 20-22) for the 2D case, each computing a coarser-scale element as a similarity-weighted average of its corresponding finer-scale elements, with the weighting factors drawn from $W_h^n$ and $W_v^n$ and normalized by the sums of the weights. [Equations 20-22 are illegible in this copy and are not reproduced here.] Padding to $W_h^n$, $W_v^n$, and $Y_k^n$ can be added when a matrix index is out of range. Step 315 is repeated until scale $N_s - 1$ is reached (step 320).
[0056] In step 325, the local feature matrices at the nth scale can be smoothed based on the local similarity matrices at that scale using the following equation for the 2D case:

$$[Y_k^n]_{i,j} = \frac{[W_h^n]_{i,j-1}\,[Y_k^n]_{i,j-1} + [W_h^n]_{i,j}\,[Y_k^n]_{i,j+1} + [W_v^n]_{i-1,j}\,[Y_k^n]_{i-1,j} + [W_v^n]_{i,j}\,[Y_k^n]_{i+1,j} + \gamma\,[Y_k^n]_{i,j}}{[W_h^n]_{i,j-1} + [W_h^n]_{i,j} + [W_v^n]_{i-1,j} + [W_v^n]_{i,j} + \gamma} \qquad (23)$$

where $\gamma$ is a free parameter.
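A sketch of the similarity-weighted smoothing of Equation 23, written as a single Jacobi-style sweep (an assumption on my part; the patent also allows Gauss-Seidel relaxation and other filters). It assumes Wh has shape (H, W-1) and Wv has shape (H-1, W), matching the similarity matrices above; boundary neighbours simply contribute nothing.

```python
import numpy as np

def smooth(Y, Wh, Wv, gamma=1.0):
    """One smoothing sweep: each element becomes a similarity-weighted
    average of its four neighbours plus gamma times itself (Equation 23)."""
    num = gamma * Y
    den = np.full_like(Y, gamma)
    num[:, 1:]  += Wh * Y[:, :-1];  den[:, 1:]  += Wh   # left neighbour
    num[:, :-1] += Wh * Y[:, 1:];   den[:, :-1] += Wh   # right neighbour
    num[1:, :]  += Wv * Y[:-1, :];  den[1:, :]  += Wv   # upper neighbour
    num[:-1, :] += Wv * Y[1:, :];   den[:-1, :] += Wv   # lower neighbour
    return num / den
```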
[0057] Step 330 checks whether the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached. If the 0th scale is not reached, step 335 is performed; otherwise, step 340 is performed.
In step 335, the local feature matrices at the nth scale are expanded to a finer scale. Rather than the previously described expansion schemes, a modified version of the prolongation operator of the multigrid linear system solver in Grady et al. 2005 (A geometric multigrid approach to solving the 2D inhomogeneous Laplace equation with internal Dirichlet boundary conditions, IEEE International Conference on Image Processing, vol. 2, pp. 642-645, 2005) can be used as an expansion scheme for the local feature matrices. This expansion scheme for expanding a local feature matrix $Y_k^n$ at the nth scale, where $1 \le n \le N_s - 1$, is expressed in four equations for the 2D case. Elements at coincident positions are copied directly:

$$[Y_k^{n-1}]_{2i,2j} = [Y_k^n]_{i,j} \qquad (24)$$

The remaining elements are interpolated as similarity-weighted averages of their neighboring elements, with the weighting factors drawn from the finer-scale similarity matrices $W_h^{n-1}$ and $W_v^{n-1}$ and normalized by the sums of the weights (Equations 25-27). [Equations 25-27 are illegible in this copy and are not reproduced here.] Padding to $W_h^{n-1}$, $W_v^{n-1}$, and $Y_k^{n-1}$ can be added when a matrix index is out of range.
[0058] Steps 325 and 335 are repeated until the finest scale of the local similarity pyramids (i.e., the 0th scale) is reached (step 330). In step 340, element-wise normalization is performed on the finest-scale local feature matrices to form the local weight matrices. The local weight matrix $U_k$ associated with the 0th scale of the kth source image can be computed using the following equation:

$$[U_k]_{ind} = \frac{[Y_k^0]_{ind}}{\sum_{m=0}^{K-1} [Y_m^0]_{ind}} \qquad (28)$$
[0059] Computing Global Weight Vectors and Computing Final Weight Matrices

[0060] FIG. 4 further illustrates step 120 of computing global weight vectors. In step 405, one or more global features are computed for individual source images and/or for some source images and/or for all source images. A global feature herein represents a certain characteristic of one or more source images. Examples of a global feature include but are not limited to the average luminance of one, some, or all source images; the average color saturation of one, some, or all source images; the difference between the average luminance of one non-empty set of source images and that of another non-empty set of source images; and the difference between the average color saturation of one non-empty set of source images and that of another non-empty set of source images. In step 410, the global features are transformed using pre-defined function(s). Unlike local weight matrices, which only affect local characteristics of the fused image, these global weight vectors affect global characteristics of the fused image. In step 415, a global weight vector for each source image can be constructed by concatenating the global weights associated with that source image. The elements of a global weight vector can be stored as a vector or in a matrix.

[0061] In order for both local features and global features to contribute to the fused image, the local weight matrices and the global weight vectors are combined using a pre-defined function to form a final weight matrix for each source image, as depicted in step 125 in FIG. 1.
[0062] For example, in an embodiment of the invention, steps 120 and 125 can be performed as follows, where the global weight vectors contribute to enhanced global contrast of the fused image.

[0063] In step 405, the average luminance of all source images is computed. Let $\bar{L}$ denote the average luminance of all source images. In step 410, $\bar{L}$ can be transformed by two non-linear functions, which results in two global weights, denoted by $V_0$ and $V_1$. The two non-linear functions can take the following forms:

$$V_0 = \alpha_0 + \beta_0 \exp(-\bar{L}) \qquad (29)$$
$$V_1 = \alpha_1 + \beta_1 \exp(-\bar{L}) \qquad (30)$$

where $\alpha_0$, $\alpha_1$, $\beta_0$, and $\beta_1$ are free parameters. Let $V_k$ denote the global weight vector for the kth source image. In step 415, $V_k$ can be constructed by concatenating $V_0$ and $V_1$, i.e., $V_k = (V_0, V_1)$.

[0064] In step 125, the global weight vectors ($V_k$'s) and the local weight matrices ($U_k$'s) are combined to generate the final weight matrices ($X_k$'s). Let $[T]_i$ denote the ith element in a vector $T$. The combination function can take the following form:

$$X_k = [V_k]_0\, U_k - [V_k]_1 \qquad (31)$$

[0065] In this embodiment, the contribution of the global feature $\bar{L}$ to the fused image is enhanced global contrast. If $\bar{L}$ is low, it indicates that the amount of under-exposed image regions in the source images may be large. Hence, in step 410, Equation 29 results in a larger $V_0$ and Equation 30 results in a larger $V_1$, when $\beta_0$ and $\beta_1$ are positive. When global weight vectors are combined with local weight matrices in step 125 using Equation 31, $[V_k]_0$ helps to increase the global contrast of a fused image, i.e., to extend the dynamic range of a fused image; and $[V_k]_1$ helps to select the middle portion of the extended dynamic range, avoiding over- or under-exposure in a fused image.
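A sketch (not from the patent) of Equations 29-31; all four parameter values are illustrative placeholders, not values from the patent.

```python
import numpy as np

def final_weights(U_list, mean_lum, a0=1.0, b0=1.0, a1=0.0, b1=0.1):
    """Global weights from the average luminance of all sources
    (Equations 29-30), folded into each local weight matrix (Equation 31)."""
    v0 = a0 + b0 * np.exp(-mean_lum)   # larger for darker image sets
    v1 = a1 + b1 * np.exp(-mean_lum)
    return [v0 * U - v1 for U in U_list]
```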

[0066] As another example, in an alternative embodiment, steps 120 and 125 can be performed as follows, where global weight vectors contribute to both enhanced global contrast and enhanced brightness in the fused image.

[0067] In step 405, the average luminance of each source image and the average luminance of all source images are computed. Let $\bar{L}_k$ denote the average luminance of the kth source image, and let $\bar{L}$ still denote the average luminance of all source images. In step 410, $\bar{L}$ is transformed in the same way as previously described, which results in two global weights, $V_0$ and $V_1$. Let $V_{2,k}$ denote the global weight computed from $\bar{L}_k$. $V_{2,k}$ can be computed by transforming $\bar{L}_k$ using the following function:

$$V_{2,k} = \begin{cases} \delta, & \text{if } \bar{L}_k > \eta; \\ 0, & \text{otherwise;} \end{cases} \qquad (32)$$

where $\delta$ and $\eta$ are free parameters. In step 415, $V_k$ can be constructed by concatenating $V_0$, $V_1$, and $V_{2,k}$, i.e., $V_k = (V_0, V_1, V_{2,k})$.

[0068] In step 125, a final weight matrix $X_k$ can be computed by combining the global weight vectors ($V_k$'s) and the local weight matrices ($U_k$'s) in the following way:

$$X_k = [V_k]_0\, U_k - [V_k]_1 + [V_k]_2 \qquad (33)$$

[0069] In this embodiment, the contribution of the global feature $\bar{L}$ to the fused image is the same as that in the previous embodiment, i.e., enhanced global contrast. The contribution of the global feature $\bar{L}_k$ to the fused image is enhanced brightness. The $V_{2,k}$'s computed from Equation 32 favor those image regions from source images with higher average luminance values, i.e., from brighter source images, when $\delta$ is positive. When global weight vectors are combined with local weight matrices in step 125 using Equation 33, image regions with more local features in those brighter source images can receive higher weights from the final weight matrices, so that those image regions look brighter in a fused image.
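The brightness-enhancing variant of Equations 32-33 differs from the previous sketch only by the per-image bonus $V_{2,k}$; again, the parameter values below are placeholders of mine, not from the patent.

```python
import numpy as np

def final_weights_bright(U_list, lum_each, mean_lum,
                         a0=1.0, b0=1.0, a1=0.0, b1=0.1,
                         delta=0.05, eta=0.4):
    """Equation 33: X_k = [V_k]_0 * U_k - [V_k]_1 + [V_k]_2, where
    [V_k]_2 = delta only when the k-th image's average luminance
    exceeds eta (Equation 32)."""
    v0 = a0 + b0 * np.exp(-mean_lum)
    v1 = a1 + b1 * np.exp(-mean_lum)
    return [v0 * U - v1 + (delta if Lk > eta else 0.0)
            for U, Lk in zip(U_list, lum_each)]
```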
[0070] A System

[0071] FIG. 5 depicts an exemplary image fusion system 500. The image fusion system 500 includes image source(s) 505, a computing device 510, and display(s) 545. The image source(s) 505 can be one or more image sensors or one or more image storage devices. Examples of the computing device 510 include but are not limited to a personal computer, a server, a computer cluster, and a smart phone. The computing device 510 includes three interconnected components: processor(s) 515, an image fusion module 520, and memory 525. Processor(s) 515 can be a single processor or multiple processors. Examples of a processor include but are not limited to a central processing unit (CPU) and a graphics processing unit (GPU). Memory 525 stores source images 530, fused image(s) 535, and intermediate data 540. Intermediate data 540 are used and/or generated by the processor(s) 515 and the image fusion module 520. The image fusion module 520 contains processor executable instructions, which can be executed by the processor(s) 515 one or more times to compute fused image(s) 535 from the source images 530 following the image fusion procedure 100. The image fusion module 520 may be incorporated as hardware, software, or both, and may be within computing device 510 or in communication with computing device 510.
[0072] The system 500 functions in the following manner. The image source(s) 505 sends source images 530 to the computing device 510. These source images 530 are stored in the memory 525. The image fusion module 520 uploads instructions to the processor(s) 515. The processor(s) 515 executes the uploaded instructions and generates fused image(s) 535. The fused image(s) 535 is then sent to display(s) 545.
[0073] The above-described embodiments have been provided as examples, for clarity in understanding the invention. A person with skill in the art will recognize that alterations, modifications and variations may be effected to the embodiments described above while remaining within the scope of the invention as defined by the claims appended hereto.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Statuses

Event History

Description Date
Common Representative Appointed 2019-10-30
Common Representative Appointed 2019-10-30
Requirements for Appointment of Agent - Deemed Compliant 2016-11-01
Inactive: Official Letter 2016-11-01
Inactive: Official Letter 2016-11-01
Requirements for Revocation of Appointment of Agent - Deemed Compliant 2016-11-01
Request for Revocation of Appointment of Agent 2016-10-25
Request for Appointment of Agent 2016-10-25
Granted by Issuance 2016-04-12
Inactive: Cover Page Published 2016-04-11
Pre-grant 2016-01-18
Inactive: Final Fee Received 2016-01-18
Notice of Allowance Sent 2015-08-18
Letter Sent 2015-08-18
Notice of Allowance Sent 2015-08-18
Inactive: QS Passed 2015-08-12
Inactive: Approved for Allowance (AFA) 2015-08-12
Amendment Received - Voluntary Amendment 2015-08-06
Inactive: Examiner's Requisition - Rule 30(2) 2015-07-08
Inactive: Examiner's Requisition - Rule 29 2015-07-08
Inactive: Report - No QC 2015-07-07
Letter Sent 2015-07-06
Amendment Received - Voluntary Amendment 2015-06-18
All Requirements for Examination - Deemed Compliant 2015-06-18
Request for Examination Received 2015-06-18
Advanced Examination Requested - PPH 2015-06-18
Advanced Examination Deemed Compliant - PPH 2015-06-18
Requirements for Request for Examination - Deemed Compliant 2015-06-18
Inactive: Official Letter 2015-06-16
Maintenance Fee Request Received 2015-06-03
Inactive: Cover Page Published 2015-02-02
Application Published (Open to Public Inspection) 2015-01-26
Inactive: Filing Certificate - No RFE (English) 2013-08-15
Inactive: Filing Certificate - No RFE (English) 2013-08-13
Inactive: First IPC Assigned 2013-08-12
Inactive: IPC Assigned 2013-08-12
Application Received - Regular National 2013-08-05
Inactive: Pre-classification 2013-07-26

Abandonment History

There is no abandonment history.

Maintenance Fees

The last payment was received on 2015-06-18.


Fee History

Fee Type Anniversary Due Date Date Paid
Filing fee - standard 2013-07-26
2015-06-03
MF (application, 2nd anniv.) - standard 02 2015-07-27 2015-06-18
Request for examination - standard 2015-06-18
Final fee - standard 2016-01-18
MF (patent, 3rd anniv.) - standard 2016-07-26 2016-07-11
MF (patent, 8th anniv.) - standard 2021-07-26 2017-06-19
MF (patent, 4th anniv.) - standard 2017-07-26 2017-06-19
MF (patent, 6th anniv.) - standard 2019-07-26 2017-06-19
MF (patent, 7th anniv.) - standard 2020-07-27 2017-06-19
MF (patent, 5th anniv.) - standard 2018-07-26 2017-06-19
MF (patent, 9th anniv.) - standard 2022-07-26 2022-07-18
MF (patent, 10th anniv.) - standard 2023-07-26 2023-07-20
Owners on Record

The current and past owners on record are displayed in alphabetical order.

Current Owners on Record
RUI SHEN
Past Owners on Record
None
Past owners that do not appear in the "Owners on Record" list will appear in other documents on record.
Documents



Document Description | Date (yyyy-mm-dd) | Number of Pages | Image Size (KB)
Claims 2013-07-25 6 172
Abstract 2013-07-25 1 21
Drawings 2013-07-25 5 105
Representative drawing 2014-12-29 1 13
Cover page 2015-02-01 1 42
Claims 2015-06-17 6 170
Description 2013-07-25 24 976
Claims 2015-08-05 6 170
Cover page 2016-02-24 2 47
Representative drawing 2016-02-24 1 12
Filing certificate (English) 2013-08-14 1 156
Maintenance fee reminder 2015-03-29 1 110
Acknowledgement of request for examination 2015-07-05 1 187
Commissioner's notice - Application found allowable 2015-08-17 1 161
Fees 2015-06-02 1 31
Courtesy - Office letter 2015-06-15 1 27
Fees 2015-06-17 1 24
Request for examination 2015-06-17 16 531
Examiner requisition 2015-07-07 3 230
Amendment 2015-08-05 4 123
Correspondence 2016-01-17 3 111
Fees 2016-07-10 1 25
Correspondence 2016-10-24 4 120
Courtesy - Office letter 2016-10-31 2 98
Courtesy - Office letter 2016-10-31 2 96
Maintenance fee payment 2022-07-17 1 24