
Enhanced image fusion using directional contrast rules in fuzzy transform domain

Abstract

In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional contrast based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into original-size blocks using the inverse FTR. These inverse-transformed blocks are further fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

Background

Image fusion is the process of combining different images to increase the amount of significant information. These images are obtained either from different imaging modalities or from a single modality. Many image fusion methods have been developed in the literature (James and Dasarathy 2014). Recently, fuzzy logic based image fusion methods (Seng et al. 2010; Kayani et al. 2007) have been gaining the interest of researchers. Fuzzy logic has been shown to provide a basis for the approximate description of different functions. Motivated by fuzzy logic and system modeling, Perfilieva (2006) introduced the fuzzy transform (FTR/F-transform), which maps a set of functions in one space into a finite-dimensional vector in another space. Researchers have successfully applied the FTR in many applications, including image compression, image fusion, image denoising and time series analysis. Perfilieva and Dankova (2008) proposed a simple FTR based image fusion algorithm using one-level decomposition and a complete F-transform based image fusion algorithm using higher-level decomposition. The maximum absolute value, corresponding to the least degraded part of the input images, was used as the fusion operator. However, these algorithms could not be applied successfully to fuse the images as such: the fused image obtained was of poor contrast because the FTR components corresponding to the smoother parts of the image were not within the range of the fusion operator. To obtain a fused image of good contrast, Perfilieva et al. therefore modified the original images by enhancing their contrast and then applied the proposed algorithm to the newly obtained input images. This paper proposes an image fusion method that fuses the original images directly, with the directional contrast enhanced in the FTR domain. The rest of the paper is organized as follows: the second section gives the literature review, the third section presents the proposed method, results are given and discussed in the fourth section and, finally, conclusions are drawn in the fifth section.

Literature review

The main aim of image fusion methods (Piella 2003) is to preserve all salient, interrelated and relevant information present in the input images without introducing any inconsistency, noise or artifacts into the fused image. An important requirement for successful fusion is accurate geometric alignment of the input images, which requires proper matching of image coordinates; this can be achieved through a process known as image registration (Bhattacharya and Das 2011). Commonly used spatial domain pixel-level algorithms (Zoran 2009) include averaging based, select maxima or minima based, intensity hue saturation (IHS) transform based, principal component analysis (PCA) based and Brovey transform (BT) based fusion methods. In the transform domain, the input images are first transformed into the frequency domain, fusion then takes place according to some fusion rules in that domain, and finally an inverse transform yields the fused image. Qu et al. (2001) proposed the modulus maxima of the wavelet coefficients at each point as a fusion rule for producing a fused image. The modulus maxima based fusion rule extracts sharp signal transitions and singularity features but is also sensitive to noise and artifacts. Some authors have also proposed image fusion based on the multiwavelet transform, which possesses many desirable properties such as orthogonality, symmetry and smoothness. Liu et al. (2010) proposed that either a gradient based or a weighted average based method can be used for determining the fused low frequency coefficients, while an algorithm based on maximum value, directional contrast or a classification scheme can be used for determining the fused high frequency coefficients. On the other hand, Tongzhou et al. (2009) proposed a feature-based fusion rule to fuse the original sub-images; a combination of four different fusion rules (average, addition, principal component selection and select maxima) was used to fuse the coefficients of the low frequency and high frequency sub-bands. Since the FTR has good approximation properties and is successful in preserving true image edges, researchers (Vajgl et al. 2012; Dankova and Valăsek 2006) have also proposed fusion of multiple images using the fuzzy transform. Motivated by these properties and the various advantages of the FTR, this paper proposes fusion in the FTR domain with directional contrast.

FTR, introduced by Perfilieva (2005, 2006), is a powerful transformation technique that is capable of preserving features, especially for fuzzy models. It has been successfully applied to a wide range of applications such as image fusion (Perfilieva and Dankova 2008; Vajgl et al. 2012; Dankova and Valăsek 2006), image compression (Perfilieva and Baets 2010; Martino et al. 2008), noise removal, data analysis and the solution of differential and integral equations (Ezzati and Mokhtari 2012). The FTR maps a set of functions defined on a closed interval into a finite (say N) dimensional vector space. It has the advantage of producing a simple and unique representation of the original function, which can be used in its place and makes complex computations easier. FTR is as useful as traditional transforms such as the wavelet and Fourier transforms, but it has a potential advantage over them: it can use several basis functions of different shapes, whereas the wavelet transform utilizes a single mother wavelet to define all basis functions and the Fourier transform uses only a single kind of basis function, namely \(e^{j\omega x}\) (Patanè 2011).
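As a concrete illustration of these ideas, the following is a minimal one-dimensional sketch of the discrete direct and inverse F-transform over a uniform triangular fuzzy partition. It is not the implementation used in this paper; the function names, the uniform partition and the parameter choices are our own, chosen only to make the sketch self-contained.

```python
import numpy as np

def triangular_partition(n_points, n_basis):
    """Uniform fuzzy partition of sample indices 0..n_points-1 with
    triangular (hat) basis functions."""
    x = np.arange(n_points)
    nodes = np.linspace(0, n_points - 1, n_basis)
    h = nodes[1] - nodes[0]
    # A[k, i] = membership degree of sample i in the k-th basis function
    return np.clip(1.0 - np.abs(x[None, :] - nodes[:, None]) / h, 0.0, None)

def ftr_direct(f, A):
    """Direct F-transform: each component is the weighted mean of f
    with respect to one basis function."""
    return (A @ f) / A.sum(axis=1)

def ftr_inverse(F, A):
    """Inverse F-transform: reconstruction as a linear combination of
    the components with the same basis functions."""
    return F @ A

# Example: represent a signal of 100 samples by 10 F-transform components.
signal = np.sin(np.linspace(0, 2 * np.pi, 100))
A = triangular_partition(len(signal), 10)
components = ftr_direct(signal, A)          # finite-dimensional representation
reconstruction = ftr_inverse(components, A) # smooth approximation of the signal
```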

The performance of image fusion algorithms is usually bounded by two factors: the quality of the algorithm itself and the quality of the registration results (He et al. 2010). A multimodal image registration and fusion module (MIRF) is proposed in Ghantous and Bayoumi (2013); MIRF is able to automatically register and fuse images using a multi-resolution decomposition based on the Dual-Tree Complex Wavelet Transform (DT-CWT). As noted above, accurate geometric alignment obtained through proper matching of image coordinates remains an important requirement for successful fusion (Petrovic and Xydeas 2005).

Discrete cosine harmonic wavelet (DCHWT) based image fusion retains performance and visual image quality with reduced computation (Kumar 2013). A fused image for which the largest number of measures achieve their desirable values is considered to be of better quality. Many objective measures have been developed in the literature for assessing the performance of image fusion algorithms. The measures generally used are based on the amount of information that has been transferred from the input images into the fused image (Kotwal and Chaudhuri 2013; Haghighat et al. 2011; Arathi and Soman 2009; Wang et al. 2004; Zhang et al. 2011; Liu and Laganiere 2007).

The multilevel Dual-Tree Complex Wavelet Transform (DT-WT) is also a comparable method, but it requires the design of special filters with desirable properties: an approximate half-sample delay property, perfect reconstruction (orthogonal or bi-orthogonal), finite support, vanishing moments (good stop band) and linear phase characteristics. Moreover, since the DT-WT involves complex coefficients, processing both the real and imaginary parts increases the computational complexity and the memory requirement, thereby increasing the cost of the fusion method (Singh and Khare 2014).

A fusion framework for multimodal medical images based on the non-subsampled contourlet transform (NSCT) is proposed in Wang et al. (2013), in which the low-frequency information of the image is represented sparsely in order to extract the salient features of the images. The calculation cost of the sparse representation is limited by processing non-overlapping blocks, although the sparse coding stage still adds to the overall complexity of the fusion.

The shift-invariant shearlet transform (SIST) method can efficiently capture both spatial feature information and functional information content. Moreover, unlike the averaging and maximum schemes, the dependencies of the SIST coefficients across scales and between sub-bands are fully considered in its fusion rule, so more information from the source images can be transferred into the fused image (Wang et al. 2014).

The contrast feature measures the difference between the intensity value at a pixel and those of its neighbouring pixels; this is presented as directive contrast in the NSCT domain method (Bhatnagar et al. 2013). Two different fusion rules are used there, by which more information can be preserved in the fused image with improved quality. For this reason, in our proposed method the images are fused according to a directional contrast based rule and a select maximum based rule. The proposed fusion algorithm is also compared subjectively as well as objectively with MIRF (Ghantous and Bayoumi 2013), DCHWT (Kumar 2013), multilevel DT-WT (Singh and Khare 2014), NSCT (Wang et al. 2013), SIST (Wang et al. 2014) and directive contrast in NSCT (Bhatnagar et al. 2013).

Proposed method

Fusion rules must be selected carefully in order to obtain a fused image of good quality. In this work a directional contrast rule in the fuzzy transform (FTR) domain is proposed. Contrast enhancement is based on emphasizing the difference of brightness in an image to improve its perceptual quality (Gonzalez and Woods 2002; Peli 1990). The spatial content is equally important for defining the contrast. Using this property we consider two frequency bands, one high and one low, where each frequency band is a function of the contrast. We define metrics to measure the contrast enhancement, and luminance/brightness to measure the image quality of the contrast-enhanced images. The proposed method is based on the fusion of two different-tone images, achieved using the fuzzy technique described in Hanmandlu et al. (2003). The block diagram of the proposed method is illustrated in Fig. 1, and the block diagram for fuzzy transformation and defuzzification is presented in Fig. 2.

Fig. 1 Block diagram of proposed algorithm

Fig. 2 Block diagram for fuzzy transformation and defuzzification

The performance of the proposed method is evaluated using quantitative measures and subjective perceptual image quality evaluation. The high and low frequency components in a \((2w_1 + 1) \times (2w_2 + 1)\) window are calculated, where \(w_1\) and \(w_2\) are positive integers, and the values with maximum and minimum contrast are chosen as the fused transformed components. The maximum and minimum contrast values are used to compute the normalized value required by the proposed algorithm. The contrast of an image can be defined as,

$$R = \frac{{L - L_{B} }}{{L_{B} }} = \frac{{L_{H} }}{{L_{B} }}$$
(1)

where R is the contrast of the image, L is the local grey level, \(L_B\) is the local brightness of the background (corresponding to the low frequency component), and \(L_H = L - L_B\) corresponds to the high frequency component. After one level of wavelet decomposition there are four frequency bands, namely three high frequency components \(D^{H}_{i,j}\), \(D^{V}_{i,j}\) and \(D^{D}_{i,j}\) (the "foreground", i.e. horizontal, vertical and diagonal details) and one low frequency component \(C_{i,j}\) (the "background"). The values of i and j depend on the image block; the exact relationship is given in the proposed algorithm. Because of the orthogonality of the decomposition, there is no correlation between the high frequency components ("foreground") and the low frequency component ("background"), and so the improvement obtained using directional contrast is reasonable.
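As a rough illustration of Eq. (1), the contrast map can be computed by estimating the local background as a windowed mean. In the proposed method the background corresponds to the low frequency component; the box-filter background used below is only an assumption made to keep the sketch self-contained.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, w1=1, w2=1, eps=1e-6):
    """Contrast R = (L - L_B) / L_B = L_H / L_B of Eq. (1), with the local
    background L_B estimated as the mean over a (2*w1+1) x (2*w2+1) window."""
    L = img.astype(float)
    L_B = uniform_filter(L, size=(2 * w1 + 1, 2 * w2 + 1))  # background estimate
    L_H = L - L_B                                           # high-frequency part
    return L_H / (L_B + eps)                                # eps avoids division by zero
```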

Proposed algorithm

Input images X and Y are initially divided into blocks of size M × N. Since images generally contain different types of spatial degradation that disrupt their smoothness, each M × N block of both images is fuzzy transformed into sub-blocks (SB) of sizes \((m_1 \times n_1)\), \((m_2 \times n_2)\) and \((m_3 \times n_3)\) using the FTR. Fusion is performed for each block using the following steps. Let \(SB_X\) refer to a sub-block of image X and \(SB_Y\) to the corresponding sub-block of image Y. The DWT is applied to these sub-blocks. After one level of wavelet decomposition there are four frequency bands, namely three high frequency components \(D^{H}_{i,j}\), \(D^{V}_{i,j}\) and \(D^{D}_{i,j}\) (the "foreground", i.e. horizontal, vertical and diagonal details) and one low frequency component \(C_{i,j}\) (the "background"). The components of the fused sub-block are represented as \(SB_{Fused} \rightarrow [C^{F}_{i,j}, D^{HF}_{i,j}, D^{VF}_{i,j}, D^{DF}_{i,j}]\), where the superscript F refers to the fused image: \(C^{F}_{i,j}\) is the low frequency component of the fused image and \(D^{HF}_{i,j}\), \(D^{VF}_{i,j}\), \(D^{DF}_{i,j}\) are its high frequency components in the horizontal, vertical and diagonal directions respectively.

$$C_{i,j}^{F} = \frac{C_{i,j}^{SB_{X}} + C_{i,j}^{SB_{Y}}}{2}$$
(2)
$$D_{i,j}^{HF} = FTA \left(D_{i,j}^{{H_{{SB_{{l_{X} }} }} }} ,D_{i,j}^{{H_{{SB_{{l_{Y} }} }} }} \right)$$
(2a)
$$D_{i,j}^{VF} = FTA \left(D_{i,j}^{{V_{{SB_{{l_{X} }} }} }} ,D_{i,j}^{{V_{{SB_{{l_{Y} }} }} }} \right)$$
(2b)
$$D_{i,j}^{DF} = FTA \left(D_{i,j}^{{D_{{SB_{{l_{X} }} }} }} ,D_{i,j}^{{D_{{SB_{{l_{Y} }} }} }} \right)$$
(2c)
  • i = 1, 2, …, m1, j = 1, 2, …, n1 and l = m1 × n1 for subblocks of size m1 × n1.

  • i = 1, 2, …, m2, j = 1, 2, …, n2 and l = m2 × n2 for subblocks of size m2 × n2

  • i = 1, 2, …, m3, j = 1, 2, …, n3 and l = m3 × n3 for subblocks of size m3 × n3

Here l indexes the sub-blocks of the input images X and Y, and i, j specify the location; the window is centred at (i, j).
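The following sketch shows one possible reading of Eqs. (2)–(2c) for a single pair of co-located sub-blocks, using PyWavelets for the one-level DWT. The FTA operator is approximated here by per-coefficient selection of the larger directional contrast (high-frequency magnitude over the low-frequency background); this stand-in is our own assumption and not the exact fuzzy-transform-based operator of the paper.

```python
import numpy as np
import pywt

def fuse_subblocks(sb_x, sb_y, wavelet="db1", eps=1e-6):
    """Fuse two co-located sub-blocks: average the low frequency band (Eq. 2)
    and select high frequency coefficients by larger directional contrast
    (Eqs. 2a-2c, with a contrast-based stand-in for the FTA operator)."""
    cA_x, (cH_x, cV_x, cD_x) = pywt.dwt2(sb_x, wavelet)
    cA_y, (cH_y, cV_y, cD_y) = pywt.dwt2(sb_y, wavelet)

    cA_f = 0.5 * (cA_x + cA_y)                                   # Eq. (2)

    def pick(d_x, d_y):
        # directional contrast: high-frequency magnitude over the local background
        keep_x = np.abs(d_x) / (np.abs(cA_x) + eps) >= np.abs(d_y) / (np.abs(cA_y) + eps)
        return np.where(keep_x, d_x, d_y)

    cH_f = pick(cH_x, cH_y)                                      # Eq. (2a)
    cV_f = pick(cV_x, cV_y)                                      # Eq. (2b)
    cD_f = pick(cD_x, cD_y)                                      # Eq. (2c)
    return pywt.idwt2((cA_f, (cH_f, cV_f, cD_f)), wavelet)
```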

In the proposed technique, fuzzy intensification is applied on the basis of optimizing the directional contrast using the fuzzy transformation. A Gaussian membership function transforms the saturation and intensity histograms of the HSV colour model. The fuzzifier and intensification parameters are evaluated automatically for the input colour image by optimizing the contrast in the fuzzy domain, and it is observed that the "index of fuzziness" decreases with enhancement. The RGB colour model has been found unsuitable for enhancement because its colour components are not decoupled. In the HSV colour model, on the other hand, the hue (H), i.e. the colour content, is separated from the saturation (S), which can be used to dilute the colour content, and from V, the intensity (value) of the colour. By preserving H and changing only S and V, it is possible to enhance a colour image (Ezzati and Mokhtari 2012; Kumar 2013); therefore the image must be converted from RGB to HSV for this purpose, which also preserves the hue of the image. A Gaussian type membership function is used to model the S and V properties of the image. This membership function uses only one fuzzifier and is evaluated by maximizing the fuzzy contrast, which is the cumulative fuzzy variance about the crossover point. We have considered one image set, Room, for the demonstration; a clear improvement is seen as far as the details and the restoration of colours are concerned.

Fuzzy transformation algorithm

  • Step 1: Calculate normalized value of each input pixel contrast using

$$x_{norm} \left( {i,j} \right) = \frac{{(x\left( {i,j} \right) - x_{min} )}}{{(x_{max} - x_{min} )}}$$
(7)

\(x_{max}\) and \(x_{min}\) are the maximum and minimum pixel contrast values in each block.

\(x\left( {i,j} \right) = \frac{1}{C_{i,j}}\sqrt{(D_{i,j}^{H})^{2} + (D_{i,j}^{V})^{2} + (D_{i,j}^{D})^{2}}\), where x(i, j), the absolute value of the image gradient, is taken as a simple indicator of the image contrast. The contrast enhancement (CE) operator can be represented as \(CE = \frac{1}{i \times j}\left( {\mathop \sum \nolimits_{i} \mathop \sum \nolimits_{j} D} \right)\).

  • Step 2: Calculate the crossover membership value of each block using

$$\mu_{crossover} = \frac{{(1 + \frac{{x_{min} }}{{x_{max} }})}}{{2(1 - \frac{{x_{min} }}{{x_{max} }})}}$$
(8)
  • Step 3: Fuzzify the image using the following steps

If \(0 < x_{norm(i,j)} < \mu_{crossover} ,\) then

$$f_{img} \left( {i,j} \right) = \frac{{x_{norm}^{{2^{r} }} (i,j)}}{{(1 - \mu_{crossover} )^{{2^{(r - 1)} }} }}$$
(9)

where r is the radius of the Gaussian membership function and \(f_{img} \left( {i,j} \right)\) is the final fuzzified image.

else, if \(\mu_{crossover} \le x_{norm} (i,j) < 1\), then

$$f_{img} \left( {i,j} \right) = 1 - \frac{{x_{norm}^{{2^{r} }} \left( {i,j} \right)}}{{\left( {1 - \mu_{crossover} } \right)^{{2^{{\left( {r - 1} \right)}} }} }}$$
(10)
  • Step 4: The pixel intensity of the defuzzified image \(df_{img}(i,j)\) is obtained using,

$$df_{img} \left( {i,j} \right) = f_{img} \left( {i,j} \right) \times \left( {x_{max} - x_{min} } \right)$$
(11)
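A direct transcription of Steps 1–4 into code could look as follows. This is a sketch only: the small eps terms guarding against division by zero are our additions, x is the block of contrast values from Step 1, and r is the radius of the Gaussian membership function.

```python
import numpy as np

def fuzzy_transform_block(x, r=1, eps=1e-12):
    """Apply Steps 1-4 (Eqs. 7-11) to one block of contrast values x."""
    x_min, x_max = x.min(), x.max()
    x_norm = (x - x_min) / (x_max - x_min + eps)                            # Eq. (7)

    ratio = x_min / (x_max + eps)
    mu_cross = (1 + ratio) / (2 * (1 - ratio) + eps)                        # Eq. (8)

    lower = x_norm ** (2 ** r) / ((1 - mu_cross) ** (2 ** (r - 1)) + eps)   # Eq. (9)
    f_img = np.where(x_norm < mu_cross, lower, 1 - lower)                   # Eq. (10)

    return f_img * (x_max - x_min)                                          # Eq. (11)
```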

Select maximum fusion rule

The fused transformed sub-blocks are inverse transformed back into original-size blocks using the inverse FTR. These inverse-FTR blocks are then fused using a select maximum based fusion rule to produce a final fused block of size M × N. After enhancing the images using directional contrast, they are fused using the discrete wavelet transform (DWT), which extracts various features of the images at different levels. The select maximum fusion rule is applied as follows,

  • Step 1: Obtain sub-block decomposition of both images.

  • Step 2: Apply fusion rule as

$$F_{M \times N} \left( {i,j} \right) = \left\{ {\begin{array}{*{20}c} {InvF^{{SB_{m1 \times n1} \left( {i,j} \right) }} ,\quad {\text{if}}\quad InvF^{{SB_{m1 \times n1} \left( {i,j} \right) }} \ge (InvF^{{SB_{m2 \times n2} \left( {i,j} \right)}} \;{\text{and}}\quad InvF^{{SB_{m3 \times n3} (i,j)}} )} \\ {InvF^{{SB_{m2 \times n2} (i,j)}} ,\quad {\text{if}}\quad InvF^{{SB_{m2 \times n2} (i,j)}} \ge (InvF^{{SB_{m1 \times n1} \left( {i,j} \right) }} \;{\text{and}}\quad InvF^{{SB_{m3 \times n3} (i,j)}} )} \\ {InvF^{{SB_{m3 \times n3} (i,j)}} ,\quad {\text{if}}\quad InvF^{{SB_{m3 \times n3} (i,j)}} \ge (InvF^{{SB_{m1 \times n1} \left( {i,j} \right) }} \;{\text{and}}\quad InvF^{{SB_{m2 \times n2} (i,j)}} )} \\ \end{array} } \right.$$
(12)

where \(InvF^{{SB_{m1 \times n1} \left( {i,j} \right)}}\) represents the inverse FTR of the sub-block of size \(m_1 \times n_1\). Taking the maximum of the inverse FTR values ensures that the dominant features of the input images are incorporated as completely as possible in the final fused image.
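In code, Eq. (12) amounts to a pixel-wise maximum over the three inverse-FTR reconstructions, assuming they have already been brought back to the common M × N block size (a minimal sketch, not the authors' implementation):

```python
import numpy as np

def select_maximum(inv_f1, inv_f2, inv_f3):
    """Eq. (12): keep, at each pixel, the largest of the three inverse-FTR
    blocks obtained from the m1 x n1, m2 x n2 and m3 x n3 decompositions."""
    return np.max(np.stack([inv_f1, inv_f2, inv_f3], axis=0), axis=0)
```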

Inverse FTR

The inverse FTR is computed from the DWT coefficients, which extract various features of the images at different levels, by choosing between \(I_{Xcoef}\) and \(I_{Ycoef}\), the DWT coefficients of image X and image Y respectively.

Apply fusion rule as,

$$fused_{coef} = \left\{ {\begin{array}{ll} {I_{Xcoef} } & {if \quad I_{Xcoef} > I_{Ycoef} } \\ {I_{Ycoef} } & {otherwise} \\ \end{array} } \right.$$
(13)

The DWT provides directional information and introduces no blocking artifacts, thereby giving better perceptual image quality. Finally, the inverse DWT of the fused coefficients is taken to obtain the fused block, and the final image F is reconstructed from these fused \(F_{M \times N}\) blocks.
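A short sketch of this coefficient-level step, again using PyWavelets for a one-level DWT, is given below; it is a simplified stand-in in which the block partitioning and any multi-level options of the full method are omitted.

```python
import numpy as np
import pywt

def dwt_max_fusion(img_x, img_y, wavelet="db1"):
    """Eq. (13): keep, coefficient by coefficient, the larger DWT coefficient
    of the two (contrast-enhanced) images, then invert the transform."""
    cA_x, det_x = pywt.dwt2(img_x, wavelet)
    cA_y, det_y = pywt.dwt2(img_y, wavelet)
    cA_f = np.where(cA_x > cA_y, cA_x, cA_y)
    det_f = tuple(np.where(dx > dy, dx, dy) for dx, dy in zip(det_x, det_y))
    return pywt.idwt2((cA_f, det_f), wavelet)
```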

Results and discussions

Different images are fused to evaluate the performance of the proposed algorithm. The fusion algorithm decomposes the input images into non-overlapping blocks of size 8 × 8 and then fuzzy transforms them into sub-blocks of size 3 × 3, 5 × 5 and 7 × 7. From the results it is observed that the small sub-blocks are at a coarser resolution level, representing approximation information such as the texture of the input images, whereas the larger sub-blocks are at a higher resolution level, containing detail information such as edges and boundaries. The proposed method succeeds in fusing both the approximation and the finer details of the input images into the fused image. Experimentally, a 3 × 3 window size has been found to be the most effective in terms of the reported entropy values. Since the FTR is capable of preserving the monotonicity and Lipschitz continuity (Perfilieva and Baets 2010) of a function, and hence true image edges, the proposed fusion method provides better fusion results. Figure 3 shows the qualitative comparison of various fusion methods. The visual results indicate that the proposed algorithm produces a fused image of better quality, with the important information well preserved. Figures 4 and 5 show the pixel intensity histograms without and with contrast enhancement.

Fig. 3 Qualitative results from different image fusion methods

Fig. 4 Pixel intensity without contrast enhancement (CE)

Fig. 5 Pixel intensity with increased contrast

Performance measures

Performance measures such as edge strength \((Q_{XY}^{F})\) proposed by Petrovic and Xydeas (2005), fusion loss \((FL_{XY}^{F})\) (Kumar 2013), fusion artifacts \((FA_{XY}^{F})\) (Kotwal and Chaudhuri 2013), entropy \((H_{XY}^{F})\) (Haghighat et al. 2011), standard deviation \((S_{XY}^{F})\) (Haghighat et al. 2011), feature mutual information \((FMI_{XY}^{F})\) (Arathi and Soman 2009), fusion factor \((FF_{XY}^{F})\) (Wang et al. 2004), fusion symmetry \((FS_{XY}^{F})\) (Wang et al. 2004), structural similarity index measure \((SSIM_{XY}^{F})\) (Zhang et al. 2011) and feature similarity index measure \((FSIM_{XY}^{F})\) (Liu and Laganiere 2007) are widely used for evaluating fusion methods. A fused image for which the largest number of measures achieve their desirable values is considered to be of better quality. Many objective measures have been developed in the literature for assessing the performance of image fusion algorithms, but none has been adopted as a standard; the main reason is the difficulty of defining an ideal fused image. The measures generally used are based on the amount of information that has been transferred from the input images into the fused image.

Edge strength

The edge strength \((Q_{XY}^{F})\) measure, proposed by Petrovic and Xydeas (2005), determines the relative amount of edge information that is transferred from the input images into the fused image. It is computed from the edge preservation values \(Q_{XF}(i,j)\) and \(Q_{YF}(i,j)\) for images X and Y as:

$$Q_{XY}^{F} = \frac{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} [Q_{XF} \left( {i,j} \right)w_{X} \left( {i,j} \right) + Q_{YF} \left( {i,j} \right)w_{Y} \left( {i,j} \right)]}}{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} [w_{X} \left( {i,j} \right) + w_{Y} \left( {i,j} \right)]}}$$
(14)

where \(w_{X}(i,j)\) and \(w_{Y}(i,j)\) are the weights assigned to \(Q_{XF}(i,j)\) and \(Q_{YF}(i,j)\) respectively. A large value of \(Q_{XY}^{F}\) indicates better edge information in the fused image.

Fusion loss

In practice, not all of the information present in the input images is transferred into the fused image; some of it is necessarily lost in the fusion process. This loss is computed as a perceptually weighted summation of the local fusion loss, defined as \((1 - Q_{XF}(i,j))\) and \((1 - Q_{YF}(i,j))\) for images X and Y respectively, over the locations where the gradient strength in the fused image is weaker than in the input images. Mathematically, fusion loss \((FL_{XY}^{F})\) (Kumar 2013) is defined as:

$$FL_{XY}^{F} = \frac{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} p(i,j)[(1 - Q_{XF} \left( {i,j} \right))w_{X} \left( {i,j} \right) + (1 - Q_{YF} \left( {i,j} \right))w_{Y} \left( {i,j} \right)]}}{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} [w_{X} \left( {i,j} \right) + w_{Y} \left( {i,j} \right)]}}$$
(15)

where,

$$p\left( {i,j} \right) = \left\{ {\begin{array}{ll} {1, \quad if\,s_{F} \left( {i,j} \right) < s_{X} \left( {i,j} \right) or \, s_{F} \left( {i,j} \right) < s_{Y} \left( {i,j} \right)} \\ {0, \quad else} \\ \end{array} } \right.$$

where \(s_{X}(i,j)\), \(s_{Y}(i,j)\) and \(s_{F}(i,j)\) represent the gradient strength at location \((i,j)\) in \(X\), \(Y\) and \(F\) respectively.

Fusion artifacts

The fusion process itself often creates unwanted artifacts in the fused image, which may affect the performance of certain fusion applications. These artifacts are computed as a perceptually weighted summation of the fusion noise at locations where the fused gradient strength is stronger than in both input images. Mathematically, fusion artifacts (Kotwal and Chaudhuri 2013) are calculated as:

$$FA_{XY}^{F} = \frac{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} q(i,j)[(1 - Q_{XF} \left( {i,j} \right))w_{X} \left( {i,j} \right) + (1 - Q_{YF} \left( {i,j} \right))w_{Y} \left( {i,j} \right)]}}{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} [w_{X} \left( {i,j} \right) + w_{Y} \left( {i,j} \right)]}}$$
(16)

where \(q\left( {i,j} \right) = \left\{ {\begin{array}{ll} {1,} & {if \quad s_{F} \left( {i,j} \right) > s_{X} \left( {i,j} \right)\; and \;\; s_{F} \left( {i,j} \right) > s_{Y} \left( {i,j} \right)} \\ {0,} & {else} \\ \end{array} } \right.\)

Entropy

Entropy \((H_{XY}^{F})\) (Haghighat et al. 2011) is an important statistical parameter that measures the quantity of information contained in an image; a larger entropy value indicates more information content. Mathematically,

$$H _{XY}^{F} = - \mathop \sum \limits_{k = 0}^{L - 1} p_{k} log_{2} \left( {p_{k} } \right)$$
(17)

where L is the number of gray levels in the image and \(p_k\) is the probability associated with the kth gray level of image F.
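For instance, the entropy of an 8-bit image can be computed from its normalized histogram as follows (a straightforward sketch of Eq. (17); the bin settings are our own assumptions):

```python
import numpy as np

def entropy(img, levels=256):
    """Eq. (17): Shannon entropy of the gray-level histogram of an image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                      # zero-probability levels contribute nothing
    return float(-np.sum(p * np.log2(p)))
```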

Standard deviation

Standard deviation \((S_{XY}^{F})\) (Haghighat et al. 2011) is defined as the square root of the variance and indicates the amount of detail in an image by measuring its contrast level. A large standard deviation means that the image has a higher degree of clarity and contrast. Mathematically,

$$S_{XY}^{F} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} (f(i,j) - \mu )^{2} }}{M \times N}}$$
(18)

where \(f(i,j)\) is the intensity of the pixel in the ith row and jth column and \(\mu\) is the mean of image F.

Feature mutual information

The feature mutual information \((FMI_{XY}^{F})\) metric, proposed by Haghighat (Arathi and Soman 2009), calculates the amounts of feature information \(FI_{XF}\) and \(FI_{YF}\) transferred from X and Y into F respectively. Mathematically,

$$FMI_{XY}^{F} = FI_{XF} + FI_{YF}$$
(19)

where, \(FI_{XF} = \mathop \sum \nolimits_{F,X} p_{FX} (i,j,k,l)log_{2} \frac{{p_{FX} (i,j,k,l)}}{{p_{F} (i,j)p_{X} (k,l)}}\) and \(FI_{YF} = \mathop \sum \nolimits_{F,Y} p_{FY} (i,j,k,l)log_{2} \frac{{p_{FY} (i,j,k,l)}}{{p_{F} (i,j)p_{Y} (k,l)}}\) where \(p_{FX}\) (\(p_{FY}\)) are the joint distribution function between \(X (Y)\) and \(F\).

Fusion factor

The fusion factor \((FF_{XY}^{F})\) (Wang et al. 2004) measures the amount of mutual information between each individual input image and the fused image. A large value of \(FF_{XY}^{F}\) means that a sufficient amount of information has been transferred from the source images to the fused image. Mathematically,

$$FF_{XY}^{F} = MI_{XF} + MI_{YF}$$
(20)

where \(MI_{XF} (MI_{YF} )\) are the mutual information between \(X (Y)\) and F.
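A simple histogram-based estimate of these quantities is sketched below; the bin count and the estimator are our own choices for illustration.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information between two images, estimated from the joint
    gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

def fusion_factor(x, y, fused):
    """Eq. (20): FF = MI(X, F) + MI(Y, F)."""
    return mutual_information(x, fused) + mutual_information(y, fused)
```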

Fusion symmetry

The fusion symmetry parameter \((FS_{XY}^{F})\) (Wang et al. 2004) indicates the symmetry of the fusion process with respect to the input images. The smaller the value of \(FS_{XY}^{F}\), the better the performance of the fusion process. Mathematically,

$$FS_{XY}^{F} = \frac{{(MI_{XF} - MI_{YF} )}}{{2(MI_{XF} + MI_{YF} )}}$$
(21)

Structural similarity index measure

The structural similarity index measure \((SSIM_{XY}^{F})\) (Zhang et al. 2011) determines the structural similarity between two images while taking into account the characteristics of the human visual system. The SSIM of images X and F is given as

$$SSIM_{XY}^{F} = \frac{{\mathop \sum \nolimits_{j = 1}^{W} SSIM(x_{j, } f_{j} )}}{W}$$
(22)

where W is the total number of windows chosen in the image and \(SSIM(x_{j}, f_{j})\) is the similarity between the jth local windows \(x_j\) and \(f_j\) of X and F, given by

$$SSIM\left( {x_{j}, f_{j}} \right) = \frac{(2\mu_{x_{j}} \mu_{f_{j}} + k_{1}^{2} L^{2})(2\sigma_{x_{j} f_{j}} + k_{2}^{2} L^{2})}{(\mu_{x_{j}}^{2} + \mu_{f_{j}}^{2} + k_{1}^{2} L^{2})(\sigma_{x_{j}}^{2} + \sigma_{f_{j}}^{2} + k_{2}^{2} L^{2})}$$
(23)

where L is the dynamic range of the pixel values, \(\mu_{x}\) and \(\mu_{f}\) are the local means, \(\sigma_{x}^{2}\) and \(\sigma_{f}^{2}\) are the variances and \(\sigma_{xf}\) is the cross-covariance for windows x and f respectively. Similarly, \(SSIM_{YF}\) of Y and F can be obtained. The overall structural similarity index measure of images X, Y and F is then defined as:

$$SSIM_{XY}^{F} = avg(SSIM_{XF} , SSIM_{YF} )$$
(24)
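Using the structural_similarity function from scikit-image, the overall measure of Eq. (24) can be sketched as follows (assuming 8-bit grayscale inputs of equal size):

```python
from skimage.metrics import structural_similarity as ssim

def ssim_fusion_measure(x, y, fused, data_range=255):
    """Eq. (24): average of SSIM(X, F) and SSIM(Y, F)."""
    return 0.5 * (ssim(x, fused, data_range=data_range)
                  + ssim(y, fused, data_range=data_range))
```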

Feature similarity index measure

The feature similarity index measure \((FSIM_{XY}^{F})\), proposed by Liu and Laganiere (2007), measures the similarity between a pair of images based on the combination of phase congruency (PC) and gradient magnitude (GM). The former provides information about local structures in an image and the latter provides the contrast information. The feature similarity index is defined as:

$$FSIM_{XF} = \frac{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} S_{XF} \left( {i,j} \right){ \hbox{max} }[PC_{X} \left( {i,j} \right),PC_{F} \left( {i,j} \right) ]}}{{\mathop \sum \nolimits_{i = 1}^{M} \mathop \sum \nolimits_{j = 1}^{N} { \hbox{max} }[PC_{X} \left( {i,j} \right),PC_{F} \left( {i,j} \right) ]}}$$
(25)

where \(PC_{X}\) and \(PC_{F}\) are the phase congruency values determined for X and F respectively, and \(S_{XF}(i,j)\) is the local similarity value. Similarly, the feature similarity index \(FSIM_{YF}\) for the grayscale images Y and F can be obtained. The overall feature similarity index is then defined as:

$$FSIM_{XY}^{F} = avg(FSIM_{XF} , FSIM_{YF} )$$
(26)

Values of the various objective measures obtained with the different methods are compared in Fig. 6. High values of edge strength, entropy, standard deviation, feature mutual information, fusion factor, structural similarity index measure and feature similarity index measure, together with low values of fusion loss, fusion artifacts and fusion symmetry, imply a better quality of fused image. The contrast information of the source images is emphasized and enhanced in the proposed method; consequently, the fusion rules based on maximum directional contrast enhance the contrast, local characteristics and details of the source images.

Figure 6a, d compares the edge strength and entropy metrics. It is observed that with the proposed method the edge information is preserved and there are fewer contrast distortions. The existing methods (Kumar 2013; Singh and Khare 2014; Wang et al. 2013; Bhatnagar et al. 2013) also use wavelet decomposition to obtain wavelet coefficients, but they yield a lower entropy of the fused image than the proposed method. This is because, for wavelet decomposition of the source images, a proper selection of the mother wavelet is vital; otherwise distortion and noise appear in the output image. In our method, since higher-valued wavelet coefficients carry salient information about the images such as edges and corners, the maximum selection rule is chosen for fusion. Figure 6b compares fusion loss. The method of Kumar (2013) shows comparable results, but in that method loss is incurred due to the sparse representation of the directional details rather than the fine details. Figure 6c compares fusion artifacts, where Kumar (2013) and Wang et al. (2014) show results comparable with the proposed method. In the SIST method the cross-scale and inter sub-band dependencies are fully considered in the fusion rule, but maximum and minimum fusion rules are not; this means that if noise is stronger at a corner or an edge it receives a weighted summation, which degrades the image quality. In the other wavelet based methods the low-frequency coefficients are fused by averaging, so the fused coefficients are the average of the corresponding coefficients of the source images, while the high-frequency coefficients are fused by choosing the absolute maximum.

Figure 6e compares standard deviation. The results of the DCHWT method are comparable to those of the proposed method, but that method has a shift-variance problem at the cost of an over-complete signal representation. Figure 6f compares feature mutual information. Larger absolute values of the high-frequency coefficients correspond to sharper brightness in the image and lead to salient features such as edges, lines and region boundaries; however, they are very sensitive to noise, so noise may be taken as useful information and the actual information in the fused image misinterpreted. Careful selection of the high-frequency coefficients is therefore necessary to ensure correct information interpretation. Figure 6g, h compares fusion factor and fusion symmetry respectively, both of which are related to the mutual information content of the image; the proposed method shows better values overall. Figure 6i, j compares the structural similarity index measure and the feature similarity index measure; some methods show similar performance in some cases and poor performance in others. From these results it is concluded that the metrics attain their best values with the proposed method. Thus, both the subjective and the objective comparisons prove the superiority of the proposed algorithm.

Fig. 6 Quantitative comparison of various fusion methods. a Edge strength \((Q_{XY}^{F})\), b fusion loss \((FL_{XY}^{F})\), c fusion artifacts \((FA_{XY}^{F})\), d entropy \((H_{XY}^{F})\), e standard deviation \((S_{XY}^{F})\), f feature mutual information \((FMI_{XY}^{F})\), g fusion factor \((FF_{XY}^{F})\), h fusion symmetry \((FS_{XY}^{F})\), i structural similarity index measure \((SSIM_{XY}^{F})\), j feature similarity index measure \((FSIM_{XY}^{F})\)

Conclusion

This paper proposes an image fusion method based on a contrast based fusion rule in the FTR domain. The capability of the FTR to preserve the monotonicity and Lipschitz continuity of a function helps in efficient reconstruction of the fused image. The choice of a directional contrast rule to fuse the FTR components and a select maximum rule to fuse the inverse-FTR components extracts all the prominent information present in the input images and provides a more informative fused image. Results obtained with the proposed algorithm on a set of images are compared visually as well as quantitatively with those obtained using other standard and recent methods. The fused image obtained using the proposed method contains richer features and more detailed information than the other fused images. Both the visual and the quantitative results prove the superiority of the proposed algorithm.

References

  • Arathi T, Soman K (2009) Performance evaluation of information theoretic image fusion metrics over quantitative metrics. In: International conference on advances in recent technologies in communication and computing, (ARTCom’09), Oct 27–28, pp 225–227. doi: http://doi.ieeecomputersociety.org/10.1109/ARTCom.2009.192

  • Bhatnagar G, Wu Q, Liu Z (2013) Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans Multimedia 15(5):1014–1024

  • Bhattacharya M, Das A (2011) Multimodality medical image registration and fusion techniques using mutual information and genetic algorithm based approaches. Software Tools and Algorithms for Biological Systems. Adv Exp Med Biol 696:441–449

  • Dankova M, Valăsek R (2006) Full fuzzy transform and the problem of image fusion. J Elect Eng 7(12):82–84

  • Ezzati R, Mokhtari F (2012) Numerical solution of fredholm integral equations of the second kind by using fuzzy transforms. Int J Phys Sci 7(10):1578–1583

  • Ghantous M, Bayoumi M (2013) MIRF: a multimodal image registration and fusion module based on DT-CWT. J Sign Process Syst 71(1):41–55

  • Gonzalez RC, Woods RE (2002) Digital image processing. Prentice Hall, Upper Saddle River

  • Qu G, Zhang D, Yan P (2001) Medical image fusion by wavelet transform modulus maxima. Opt Express 9(4):184–190

  • Haghighat MBA, Aghagolzadeh A, Seyedarabi H (2011) A non-reference image fusion metric based on mutual information of image features. Comput Electr Eng 37(5):744–756

  • Hanmandlu M, Jha D, Sharma R (2003) Color image enhancement by fuzzy intensification. Pattern Recogn Lett 24:81–87

  • He C, Liu Q, Li H, Wang H (2010) Multimodal medical image fusion based on IHS and PCA. Procedia Eng 7(1):280–285

  • James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:4–19

  • Kayani B, Mirza AM, Bangash A, Iftikhar H (2007) Pixel and feature level multiresolution image fusion based on fuzzy logic. In: Sobh T (ed) Innovations and advanced techniques in computer and information sciences and engineering, Springer, Berlin, ISBN: 978-1-4020-6267-4, pp 129–132

  • Kotwal K, Chaudhuri S (2013) A novel approach to quantitative evaluation of hyperspectral image fusion techniques. Inf Fusion 14(1):5–18

  • Kumar BS (2013) Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. SIViP 7(6):1125–1143

  • Liu Z, Laganiere R (2007) Phase congruence measurement for image similarity assessment. Pattern Recogn Lett 28(1):166–172

  • Liu Y, Yang J, Sun J (2010) PET/CT medical image fusion algorithm based on multiwavelet transform. 2nd international conference on advanced computer control (ICACC), 27–29 March, vol 2, pp 264–268

  • Martino F Di, Loia V, Perfilieva I, Sessa S (2008) An image coding/decoding method based on direct and inverse fuzzy transforms. Int J Approx Reason 48(1):110–131

  • Patanè G (2011) Fuzzy transform and least-squares approximation: analogies, differences, and generalizations. Fuzzy Sets Syst 180(1):41–54

  • Peli E (1990) Contrast in complex images. J Opt Soc Am A 7(10):2030–2040

  • Perfilieva I (2005) Fuzzy transforms. In: Transactions on rough sets II. Lecture Notes in Computer Science, vol 3135, pp 63–81

  • Perfilieva I (2006) Fuzzy transforms: theory and applications. Fuzzy Sets Syst 157(8):993–1023

  • Perfilieva I, Baets B De (2010) Fuzzy transforms of monotone functions with application to image compression. Inf Sci 180(17):3304–3315

  • Perfilieva I, Dankova M (2008) Image fusion on the basis of fuzzy transforms. In: Computational intelligence in decision and control, world scientific proceedings series on computer engineering and information science, Computational intelligence in decision and control proceedings of the 8th international FLINS conference Madrid, Spain, 21–24 Sep, vol 1, pp 471–476

  • Petrovic V, Xydeas C (2005) Objective image fusion performance characterisation. Tenth IEEE international conference on computer vision (ICCV 2005), 17–21 Oct, vol 2, pp 1866–1871

  • Piella G (2003) A general framework for multiresolution image fusion: from pixels to regions. Inf Fusion 4(4):259–280

  • Seng CH, Bouzerdoum A, Tivive FH, Amin M G (2010) Fuzzy logic-based image fusion for multi-view through-the-wall radar. International conference on digital image computing: techniques and applications (DICTA) Sydney, NSW, pp 423–428, 1–3 Dec. doi:10.1109/DICTA.2010.78

  • Singh R, Khare A (2014) Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach. Inf Fusion 19:49–60

  • Tongzhou Z, Yanli W, Haihui W, Hongxian S, Shen G (2009) Approach of medical image fusion based on multiwavelet transform. Chinese Control and Decision Conference, (CCDC’09), Guilin, 17–19 June 2009, pp 3411–3416. doi: 10.1109/CCDC.2009.5191886

  • Vajgl M, Perfilieva I, Hod’akova P (2012) Advanced f-transform-based image fusion. In: Advances in fuzzy systems—Special issue on fuzzy functions, relations, and fuzzy transforms, vol 2012, Article no. 4, Jan 2012

  • Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  • Wang J, Peng J, Feng X, He G, Wu J, Yan K (2013) Image fusion with non-subsampled contourlet transform and sparse representation. J Electron Imag 22(4):043019

  • Wang L, Li B, Tian LF (2014) Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift invariant shearlet coefficients. Inf Fusion 19:20–28

  • Zhang L, Zhang D, Mou X (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20(8):2378–2386

  • Zoran LF (2009) Quality evaluation of multiresolution remote sensing images fusion. UPB Sci Bull Series C 71:38–52


Authors’ contributions

AN conducted the analysis and calculations, analysed the data, and wrote the manuscript. HGR supervised the experimental work and the manuscript preparation. Both authors read and approved the final manuscript.

Acknowledgements

Dr Nandal is no longer working at National Institute of Technology, Hamirpur, but was working there at the time that the research was carried out.

Competing interests

Both authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Amita Nandal.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Nandal, A., Rosales, H.G. Enhanced image fusion using directional contrast rules in fuzzy transform domain. SpringerPlus 5, 1846 (2016). https://doi.org/10.1186/s40064-016-3511-8


Keywords