Enhanced image fusion using directional contrast rules in fuzzy transform domain
Amita Nandal^{1, 2} and
Hamurabi Gamboa Rosales^{3}
Received: 26 May 2016
Accepted: 11 October 2016
Published: 22 October 2016
Abstract
In this paper a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional contrast based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into blocks of the original size using the inverse FTR. These inverse-transformed blocks are finally fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Background
Image fusion is the process of combining different images to increase the amount of significant information. These images are obtained either from different imaging modalities or from a single modality. Different image fusion methods have been developed in the literature (James and Dasarathy 2014). Recently, fuzzy logic based image fusion methods (Seng et al. 2010; Kayani et al. 2007) have been gaining the interest of researchers. Fuzzy logic has been shown to provide a basis for the approximate description of different functions. Motivated by fuzzy logic and system modeling, Perfilieva (2006) introduced the fuzzy transform (FTR/F-transform), which maps a set of functions in one space into a finite-dimensional vector in another space. Researchers have successfully applied the FTR in many applications, including image compression, image fusion, image denoising and time series analysis. Perfilieva and Dankova (2008) proposed a simple FTR based image fusion algorithm using one-level decomposition and a complete F-transform based image fusion algorithm using higher-level decomposition. The maximum absolute value, corresponding to the least degraded part of the input images, was used as the fusion operator. However, these algorithms could not be applied successfully to fuse images: the fused image obtained was of poor contrast, since the FTR components corresponding to the smoother parts of the image were not within the range of the fusion operator. To obtain a fused image of good contrast, Perfilieva et al. therefore modified the original images by enhancing their contrast and then applied the proposed algorithm to the newly obtained input images. This paper proposes an image fusion method that fuses the original images as such, where the directional contrast is enhanced using the FTR. The rest of the paper is organized as follows: the second section gives a literature review, the third section presents the proposed method, results are given and discussed in the fourth section and, finally, conclusions are drawn in the fifth section.
Literature review
The main aim of image fusion methods (Piella 2003) is to preserve all salient, interrelated and relevant information present in the input images without introducing any inconsistency, noise or artifacts into the fused image. An important requirement for successful fusion of input images is accurate geometric alignment, which requires proper matching of image coordinates. This can be achieved through a process known as image registration (Bhattacharya and Das 2011). Commonly used spatial-domain pixel-level algorithms (Zoran 2009) include fusion methods based on averaging, select-maxima or select-minima, the intensity-hue-saturation (IHS) transform, principal component analysis (PCA) and the Brovey transform (BT). In the transform domain, the input images are first transformed into the frequency domain, fusion then takes place according to some fusion rules in the transform domain, and finally the inverse transform is applied to obtain the fused image. Qu et al. (2001) proposed the modulus maxima of the wavelet coefficients at each point as a fusion rule for producing a fused image. The modulus-maxima based fusion rule extracts sharp signal transitions and singularity features but is also sensitive to noise and artifacts. Some authors have also proposed image fusion based on the multi-wavelet transform, which possesses many desirable properties such as orthogonality, symmetry and smoothness. Liu et al. (2010) proposed that either a gradient based or a weighted-average based fusion method can be used for determining the fused low-frequency coefficients, while an algorithm based on the maximum value, directional contrast or a classification scheme can be used for determining the fused high-frequency coefficients. On the other hand, Tongzhou et al. (2009) proposed a feature-based fusion rule to fuse the original sub-images.
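To make the spatial-domain baseline concrete, the sketch below implements one of the listed pixel-level rules, PCA based fusion, for two co-registered grayscale images. The function name, weighting heuristic and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pca_fusion(img_x, img_y):
    """Fuse two co-registered grayscale images with PCA-derived weights.

    The weights come from the dominant eigenvector of the 2x2 covariance
    matrix of the two images' pixel values (a standard PCA based
    pixel-level fusion rule, not the paper's method)."""
    data = np.stack([img_x.ravel(), img_y.ravel()])
    cov = np.cov(data)                 # 2x2 covariance of the pixel pairs
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # dominant eigenvector, made non-negative
    w = v / v.sum()                    # normalize to fusion weights
    return w[0] * img_x + w[1] * img_y

# Toy example on two random 4x4 blocks
rng = np.random.default_rng(0)
a = rng.random((4, 4))
b = rng.random((4, 4))
fused = pca_fusion(a, b)
```

Because the weights are non-negative and sum to one, the fused result is a per-pixel convex combination of the inputs.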
A combination of four different fusion rules (average, addition, principal component selection and select maxima) was used to fuse the coefficients of the low-frequency and high-frequency sub-bands. Since directional contrast using the FTR has good approximation properties and is successful in preserving true image edges, researchers (Vajgl et al. 2012; Dankova and Valăsek 2006) have also proposed fusing multiple images using the fuzzy transform. Motivated by these properties and the various advantages of the FTR, this paper proposes fusion in the FTR domain with directional contrast.
The FTR, introduced by Perfilieva (2005, 2006), is a powerful transformation technique that is capable of preserving features, especially for fuzzy models. It has been successfully applied to a wide range of applications such as image fusion (Perfilieva and Dankova 2008; Vajgl et al. 2012; Dankova and Valăsek 2006), image compression (Perfilieva and Baets 2010; Martino et al. 2008), noise removal, data analysis and the solution of differential and integral equations (Ezzati and Mokhtari 2012). The FTR establishes a correspondence between a set of functions on a closed interval and a finite (say N) dimensional vector space. It has the advantage of producing a simple and unique representation of an original function that, when used in place of the original function, makes complex computations easier. The FTR is as useful as traditional transforms such as the wavelet transform and the Fourier transform, but it has a potential advantage over them: it can use several basis functions of different shapes, whereas the wavelet transform uses a single mother wavelet to define all basis functions and the Fourier transform uses only a single kind of basis function, namely e^{jωx} (Patanè 2011).
The performance of image fusion algorithms is usually bounded by two factors: the quality of the algorithm and the quality of the registration results (He et al. 2010). A multimodal image registration and fusion module (MIRF) is proposed in Ghantous and Bayoumi (2013); MIRF is able to automatically register and fuse images using a multiresolution decomposition based on the Dual-Tree Complex Wavelet Transform (DTCWT). As noted above, accurate geometric alignment of the input images is a prerequisite for successful fusion (Petrovic and Xydeas 2005).
The performance and visual quality of the image are retained using discrete cosine harmonic wavelet transform (DCHWT) based image fusion with reduced computation (Kumar 2013). A fused image for which the maximum number of measures achieve their desirable values is considered to be of better quality. Many objective measures have been developed in the literature for assessing the performance of image fusion algorithms. The measures generally used for evaluating the performance of fusion algorithms are based on the amount of information that has been transferred from the input images into the fused image (Kotwal and Chaudhuri 2013; Haghighat et al. 2011; Arathi and Soman 2009; Wang et al. 2004; Zhang et al. 2011; Liu and Laganiere 2007).
The multilevel Dual-Tree Complex Wavelet Transform (DTWT) is also a comparable method, but it requires the design of special filters with desirable properties: an approximate half-sample delay property, perfect reconstruction (orthogonal or biorthogonal), finite support, vanishing moments (a good stop band) and linear phase characteristics. Moreover, since the DTWT involves complex coefficients, processing both the real and imaginary coefficients increases the computational complexity and the memory requirement, thereby increasing the cost of the fusion method (Singh and Khare 2014).
A fusion framework for multimodal medical images based on the non-subsampled contourlet transform (NSCT) is proposed in Wang et al. (2013), in which the low-frequency information of the image is represented sparsely in order to extract the salient features of the images. Furthermore, the calculation cost of the sparse-representation fusion is reduced by processing non-overlapping blocks, although this blocking adds complexity to the fusion scheme.
The shift-invariant shearlet transform (SIST) method can efficiently capture both spatial feature information and functional information content. Moreover, unlike the average and maximum schemes, the dependencies of the SIST coefficients across scales and between sub-bands are fully considered in its fusion rule, so more information from the source images can be transferred into the fused image (Wang et al. 2014).
The contrast feature measures the difference between the intensity value at a pixel and those of its neighbouring pixels; it is presented as directive contrast in the NSCT domain method (Bhatnagar et al. 2013). For fusion, two different rules are used, by which more information can be preserved in the fused image with improved quality. For this reason, in our proposed method the images are fused according to a directional contrast based rule and a select-maximum based rule. The proposed fusion algorithm is also compared, both subjectively and objectively, with MIRF (Ghantous and Bayoumi 2013), DCHWT (Kumar 2013), multilevel DTWT (Singh and Khare 2014), NSCT (Wang et al. 2013), SIST (Wang et al. 2014) and directive contrast in NSCT (Bhatnagar et al. 2013).
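The neighbourhood-difference idea behind the contrast feature can be sketched as a simple local contrast map. Note this Peli-style luminance contrast is only an illustration of the concept; the directive contrast of Bhatnagar et al. (2013) is defined on NSCT sub-band coefficients, and the 3x3 window and epsilon here are our choices.

```python
import numpy as np

def local_contrast(img, eps=1e-6):
    """Local contrast map: |pixel - mean of its 3x3 neighbourhood|
    divided by that mean, so that deviations from the local background
    stand out regardless of absolute brightness."""
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box mean computed from the nine shifted copies of the image
    mean = np.zeros_like(img, dtype=float)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            mean += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    mean /= 9.0
    return np.abs(img - mean) / (mean + eps)

img = np.array([[10, 10, 10, 10],
                [10, 10, 10, 10],
                [10, 10, 200, 10],
                [10, 10, 10, 10]], dtype=float)
c = local_contrast(img)   # the bright pixel dominates the contrast map
```

A fusion rule driven by such a map would keep, at each location, the source whose contrast is larger.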
Proposed method
Proposed algorithm

The direct FTR components are computed block-wise, with
i = 1, 2, …, m1, j = 1, 2, …, n1 and l = m1 × n1 for sub-blocks of size m1 × n1;
i = 1, 2, …, m2, j = 1, 2, …, n2 and l = m2 × n2 for sub-blocks of size m2 × n2;
i = 1, 2, …, m3, j = 1, 2, …, n3 and l = m3 × n3 for sub-blocks of size m3 × n3.
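The block-wise pipeline described in the abstract (split both images into non-overlapping blocks, fuse each pair of blocks, reassemble) can be sketched as follows. The FTR-domain directional-contrast rule itself is abstracted behind `fuse_fn`, and the variance-based stand-in rule used in the example is our illustration, not the paper's rule.

```python
import numpy as np

def fuse_blocks(img_x, img_y, block, fuse_fn):
    """Skeleton of the block-wise pipeline: tile both co-registered
    images into non-overlapping blocks of size `block`, fuse each pair
    of blocks with `fuse_fn`, and reassemble the fused image."""
    h, w = img_x.shape
    bh, bw = block
    assert h % bh == 0 and w % bw == 0, "image must tile exactly"
    out = np.empty_like(img_x, dtype=float)
    for i in range(0, h, bh):
        for j in range(0, w, bw):
            out[i:i + bh, j:j + bw] = fuse_fn(img_x[i:i + bh, j:j + bw],
                                              img_y[i:i + bh, j:j + bw])
    return out

def pick_sharper(bx, by):
    """Illustrative stand-in rule: keep the block with larger variance."""
    return bx if bx.var() >= by.var() else by

x = np.zeros((8, 8))
x[:4] = 1.0                                   # x: bright, uniform top half
y = np.zeros((8, 8))
y[4:, 4:] = np.arange(16).reshape(4, 4)       # y: textured bottom-right block
fused = fuse_blocks(x, y, (4, 4), pick_sharper)
```

With this stand-in rule the textured bottom-right block comes from y while the remaining blocks come from x, showing how per-block selection assembles the output.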
In the proposed technique, fuzzy intensification is applied on the basis of optimizing the directional contrast using the fuzzy transformation. A Gaussian membership function transforms the saturation and intensity histograms of the HSV colour model. The fuzzifier and intensification parameters are evaluated automatically for the input colour image by optimizing the contrast in the fuzzy domain, and it is observed that the index of fuzziness decreases with enhancement. The RGB colour model has been found unsuitable for enhancement because its colour components are not decoupled. In the HSV colour model, on the other hand, the hue (H), i.e. the colour content, is separated from the saturation (S), which can be used to dilute the colour content, and from the value (V), the intensity of the colour content. By preserving H and changing only S and V, it is possible to enhance a colour image (Ezzati and Mokhtari 2012; Kumar 2013); therefore, we convert RGB into HSV for this purpose. A Gaussian-type membership function is used to model the S and V properties of the image. This membership function uses only one fuzzifier, which is evaluated by maximizing the fuzzy contrast, defined as the cumulative fuzzy variance about the crossover point. The colour image is first converted from RGB to HSV to preserve its hue. We consider the Room image set for demonstration; a clear improvement is seen in both the details and the restoration of colours.
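The Gaussian membership idea for the S and V channels can be sketched as below. The membership model (measured from the channel maximum, after the fuzzy intensification literature) and the fixed fuzzifier value are illustrative assumptions; the paper evaluates the fuzzifier automatically by maximizing fuzzy contrast, which is not reproduced here.

```python
import numpy as np

def gaussian_membership(channel, fuzzifier):
    """Gaussian membership of a channel in [0, 1], measured from the
    channel maximum; the fuzzifier controls the spread of the grades."""
    x_max = channel.max()
    return np.exp(-((x_max - channel) ** 2) / (2.0 * fuzzifier ** 2))

def enhance_channel(channel, fuzzifier=0.4):
    """Map a saturation or value channel through its Gaussian membership;
    hue is left untouched, as the text prescribes."""
    return gaussian_membership(channel, fuzzifier)

s = np.array([0.1, 0.4, 0.8, 1.0])   # a toy saturation channel
s_enh = enhance_channel(s)            # dark values suppressed, bright kept
```

The mapping is monotone, keeps the channel within [0, 1] and leaves the maximum value at 1, so only the relative distribution of S (or V) changes.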
Fuzzy transformation algorithm

Step 1: Calculate the normalized contrast value of each input pixel as x_{norm} = (x − x_{min})/(x_{max} − x_{min}), where x_{max} and x_{min} are the maximum and minimum pixel contrast in each block.

Step 2: Calculate the crossover membership value of each block using

Step 3: Fuzzify the image using the following steps

Step 4: The pixel intensity of defuzzified image (df _{ img }(i, j)) is obtained using,
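The equations for Steps 1 to 4 are not reproduced above, so the sketch below is only one plausible reading built from standard operators: min-max normalisation for Step 1, a crossover at 0.5 of the normalised range for Step 2, the classical INT intensification for Step 3, and denormalisation back to the input range for Step 4. Every operator choice here is an assumption, not the paper's definition.

```python
import numpy as np

def int_operator(mu):
    """Classic fuzzy intensification (INT) operator: push membership
    grades away from the crossover point 0.5."""
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

def fuzzy_enhance_block(block):
    """Hedged reading of Steps 1-4 with stand-in operators: min-max
    normalise, take the normalised values as membership grades with
    crossover 0.5, intensify, then map back to the intensity range."""
    x_min, x_max = float(block.min()), float(block.max())
    mu = (block - x_min) / (x_max - x_min + 1e-12)   # Step 1: normalise
    mu_int = int_operator(mu)                        # Steps 2-3: fuzzify and intensify
    return x_min + mu_int * (x_max - x_min)          # Step 4: defuzzify

block = np.array([[10.0, 60.0], [130.0, 200.0]])
out = fuzzy_enhance_block(block)   # contrast is stretched about the midpoint
```

Values below the crossover are pushed darker and values above it brighter, which is the contrast-stretching behaviour the section describes.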
Select maximum fusion rule

Step 1: Obtain the sub-block decomposition of both images.

Step 2: Apply fusion rule as
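The select-maximum rule of Step 2 can be written directly: at each position, keep the coefficient of larger magnitude. Since the equation itself is not shown above, the assumption that the comparison is on absolute values follows the select-maxima convention cited in the literature review.

```python
import numpy as np

def select_max_fuse(block_x, block_y):
    """Select-maximum rule on a pair of inverse-FTR blocks: at each
    position keep the coefficient with the larger absolute value, i.e.
    the one assumed to carry the stronger feature."""
    return np.where(np.abs(block_x) >= np.abs(block_y), block_x, block_y)

bx = np.array([[0.9, -0.1], [0.2, -0.8]])
by = np.array([[0.3, 0.5], [-0.6, 0.1]])
fused = select_max_fuse(bx, by)
```

Each output entry comes verbatim from whichever input block dominates at that position, so no new coefficient values are introduced.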
Inverse FTR
The inverse FTR is calculated using the DWT coefficients, which extract various features of the images at different levels, by choosing I_{Xcoef} and I_{Ycoef}, the DWT coefficients of Image X and Image Y respectively.
Results and discussions
Performance measures
Performance measures such as edge strength (Q_{XY}^{F}) proposed by Petrovic and Xydeas (2005), fusion loss (FL_{XY}^{F}) (Kumar 2013), fusion artifacts (FA_{XY}^{F}) (Kotwal and Chaudhuri 2013), entropy (H_{XY}^{F}) (Haghighat et al. 2011), standard deviation (SD_{XY}^{F}) (Haghighat et al. 2011), feature mutual information (FMI_{XY}^{F}) (Arathi and Soman 2009), fusion factor (FF_{XY}^{F}) (Wang et al. 2004), fusion symmetry (FS_{XY}^{F}) (Wang et al. 2004), the structural similarity index measure (SSIM_{XY}) (Zhang et al. 2011) and the feature similarity index measure (FSIM_{XY}^{F}) (Liu and Laganiere 2007) are widely used in evaluating the performance of fusion methods. However, none of these measures has been accepted as a standard; the main reason no definitive quality measure has been defined for image fusion is the difficulty of defining an ideal fused image.
Edge strength
Fusion loss
Fusion artifacts
Entropy
Standard deviation
Feature mutual information
Fusion factor
Fusion symmetry
Structural similarity index measure
Feature similarity index measure
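Two of the simpler measures listed above, entropy and standard deviation, can be computed as below. The definitions used (histogram-based Shannon entropy in bits and gray-level standard deviation) are the standard ones, since the section lists the measures without formulas.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit image's gray-level histogram (bits);
    higher values indicate more information content in the fused image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    """Standard deviation of gray levels; a rough proxy for contrast."""
    return float(np.std(img))

flat = np.full((8, 8), 128, dtype=np.uint8)          # constant: no information
varied = np.arange(64, dtype=np.uint8).reshape(8, 8)  # 64 distinct gray levels
```

A constant image scores zero on both measures, while an image with many distinct gray levels scores high, matching the intuition that a good fused image transfers more information.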
Conclusion
This paper proposes an image fusion method based on a contrast based fusion rule in the FTR domain. The capability of the FTR to preserve the monotonicity and Lipschitz continuity of a function helps in the efficient reconstruction of the fused image. The choice of a directional contrast rule to fuse the FTR components and a select-maximum rule to fuse the inverse-FTR components extracts all the prominent information present in the input images and provides a more informative fused image. The results obtained with the proposed algorithm are compared, both visually and quantitatively, with those obtained using other standard and recent methods. The fused image obtained using the proposed method contains richer features and more detailed information than the other fused images. Both the visual and the quantitative results demonstrate the superiority of the proposed algorithm.
Declarations
Authors’ contributions
AN conducted the analysis and calculations, analysed the data and wrote the manuscript. HGR supervised the experimental work and the manuscript preparation. Both authors read and approved the final manuscript.
Acknowledgements
Dr Nandal is no longer working at National Institute of Technology, Hamirpur, but was working there at the time that the research was carried out.
Competing interests
Both authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Arathi T, Soman K (2009) Performance evaluation of information theoretic image fusion metrics over quantitative metrics. In: International conference on advances in recent technologies in communication and computing (ARTCom'09), Oct 27–28, pp 225–227. doi:10.1109/ARTCom.2009.192
Bhatnagar G, Wu Q, Liu Z (2013) Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans Multimedia 15(5):1014–1024
Bhattacharya M, Das A (2011) Multimodality medical image registration and fusion techniques using mutual information and genetic algorithm based approaches. In: Software tools and algorithms for biological systems. Adv Exp Med Biol 696:441–449
Dankova M, Valăsek R (2006) Full fuzzy transform and the problem of image fusion. J Elect Eng 7(12):82–84
Ezzati R, Mokhtari F (2012) Numerical solution of Fredholm integral equations of the second kind by using fuzzy transforms. Int J Phys Sci 7(10):1578–1583
Ghantous M, Bayoumi M (2013) MIRF: a multimodal image registration and fusion module based on DT-CWT. J Sign Process Syst 71(1):41–55
Gonzalez RC, Woods RE (2002) Digital image processing. Prentice Hall, Upper Saddle River
Qu G, Zhang D, Yan P (2001) Medical image fusion by wavelet transform modulus maxima. Opt Express 9(4):184–190
Haghighat MBA, Aghagolzadeh A, Seyedarabi H (2011) A non-reference image fusion metric based on mutual information of image features. Comput Electr Eng 37(5):744–756
Hanmandlu M, Jha D, Sharma R (2003) Color image enhancement by fuzzy intensification. Pattern Recogn Lett 24:81–87
He C, Liu Q, Li H, Wang H (2010) Multimodal medical image fusion based on IHS and PCA. Procedia Eng 7(1):280–285
James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:4–19
Kayani B, Mirza AM, Bangash A, Iftikhar H (2007) Pixel and feature level multiresolution image fusion based on fuzzy logic. In: Sobh T (ed) Innovations and advanced techniques in computer and information sciences and engineering. Springer, Berlin, pp 129–132
Kotwal K, Chaudhuri S (2013) A novel approach to quantitative evaluation of hyperspectral image fusion techniques. Inf Fusion 14(1):5–18
Kumar BS (2013) Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. SIViP 7(6):1125–1143
Liu Z, Laganiere R (2007) Phase congruence measurement for image similarity assessment. Pattern Recogn Lett 28(1):166–172
Liu Y, Yang J, Sun J (2010) PET/CT medical image fusion algorithm based on multiwavelet transform. In: 2nd international conference on advanced computer control (ICACC), 27–29 March, vol 2, pp 264–268
Martino F Di, Loia V, Perfilieva I, Sessa S (2008) An image coding/decoding method based on direct and inverse fuzzy transforms. Int J Approx Reason 48(1):110–131
Patanè G (2011) Fuzzy transform and least-squares approximation: analogies, differences, and generalizations. Fuzzy Sets Syst 180(1):41–54
Peli E (1990) Contrast in complex images. J Opt Soc Am A 7(10):2030–2040
Perfilieva I (2005) Fuzzy transforms. In: Transactions on rough sets II. Lecture notes in computer science, vol 3135, pp 63–81
Perfilieva I (2006) Fuzzy transforms: theory and applications. Fuzzy Sets Syst 157(8):993–1023
Perfilieva I, Baets B De (2010) Fuzzy transforms of monotone functions with application to image compression. Inf Sci 180(17):3304–3315
Perfilieva I, Dankova M (2008) Image fusion on the basis of fuzzy transforms. In: Proceedings of the 8th international FLINS conference, Madrid, Spain, 21–24 Sep, vol 1, pp 471–476
Petrovic V, Xydeas C (2005) Objective image fusion performance characterisation. In: Tenth IEEE international conference on computer vision (ICCV 2005), 17–21 Oct, vol 2, pp 1866–1871
Piella G (2003) A general framework for multiresolution image fusion: from pixels to regions. Inf Fusion 4(4):259–280
Seng CH, Bouzerdoum A, Tivive FH, Amin MG (2010) Fuzzy logic-based image fusion for multi-view through-the-wall radar. In: International conference on digital image computing: techniques and applications (DICTA), Sydney, NSW, 1–3 Dec, pp 423–428. doi:10.1109/DICTA.2010.78
Singh R, Khare A (2014) Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach. Inf Fusion 19:49–60
Tongzhou Z, Yanli W, Haihui W, Hongxian S, Shen G (2009) Approach of medical image fusion based on multiwavelet transform. In: Chinese control and decision conference (CCDC'09), Guilin, 17–19 June, pp 3411–3416. doi:10.1109/CCDC.2009.5191886
Vajgl M, Perfilieva I, Hod'akova P (2012) Advanced F-transform-based image fusion. Adv Fuzzy Syst 2012, Article 4
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
Wang J, Peng J, Feng X, He G, Wu J, Yan K (2013) Image fusion with nonsubsampled contourlet transform and sparse representation. J Electron Imag 22(4):043019
Wang L, Li B, Tian LF (2014) Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Inf Fusion 19:20–28
Zhang L, Zhang D, Mou X (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20(8):2378–2386
Zoran LF (2009) Quality evaluation of multiresolution remote sensing images fusion. UPB Sci Bull Series C 71:38–52