
Color demosaicking via fully directional estimation

Abstract

Given a natural image captured by a single sensor, the key task is to reconstruct the full-color image. This paper presents an effective demosaicking algorithm based on fully directional estimation for the Bayer color filter array pattern. The proposed method remains compatible with current reconstruction pipelines and outperforms approaches that estimate only along the horizontal and vertical directions in terms of perceptual quality. In contrast to existing methods, the proposed algorithm uses multiscale gradients in the green channel as diagonal information for auxiliary interpolation. Furthermore, two groups of weights are built: one from the horizontal and vertical directions, the other from the diagonal and anti-diagonal directions. The combined weights are better suited to representing neighborhood information. Another contribution is a better use of prior results: when computing a given type of color difference, we divide the color difference values into two interleaved parts, and the values estimated in the first part guide the subsequent estimation in the second part. This reduces the artifacts introduced by the interpolation procedure. Experimental results show that this adaptive algorithm performs well on both objective and subjective quality measures.

Background

In many consumer electronics systems, such as pocket devices and mobile phones, single-sensor imaging devices built around a color filter array are widely used to lower cost. Each pixel of the sensor captures only one of the color components, and the missing colors are interpolated from locally or nonlocally similar regions. This process is called color demosaicking. The typical arrangement is the Bayer pattern (Bayer 1976) shown in Fig. 1, in which the number of green pixels is twice that of the red or blue pixels. Because the sensor records the true value in only one color channel per pixel, the missing color values have to be reconstructed by exploiting the high correlation between the primary color channels. The most common demosaicking methods are derived from the color difference correlation property.

The high correlation between pairs of color channels measured over benchmark images is a commonly exploited property in designing interpolation methods. Adams and Hamilton (1996) introduced the well-known second-order differential method, ACPI, which interpolates the missing color values along the direction of the smoother edge. Motivated by this directional interpolation scheme, later methods extended ACPI by fully employing directionally weighted estimators. For example, Zhang and Wu (2005) derived horizontal and vertical direction weights via linear minimum mean square-error estimation; in this demosaicking method, a larger directional variance corresponds to a smaller weight. Another class of early methods is nonheuristic: the high frequencies of the green channel primarily guide the interpolation of the red and blue channels (Gunturk et al. 2002). Later, the fusion of multiple methods was formulated as an optimization problem; by analyzing local color properties, linear minimum mean-square estimation and support vector regression were grouped into a unified scheme (Zhang et al. 2009). Based on high-frequency preservation, an effective luminance for the three color channels was designed using the Fourier transforms of down-sampled signals (Lian et al. 2007). Two detailed comparisons in well-known review papers provide an early assessment of demosaicking performance (Li et al. 2008; Menon and Calvagno 2011).

Fig. 1

The Bayer pattern arrangement

In earlier methods, the direction estimate is usually computed from consecutive narrow line edges, so many unstable weights can seriously mislead the estimation of the color difference. To address this problem, a new edge-sensing measure called the integrated gradient effectively extracts the gradient variation at borders, and this edge indicator can serve as a stand-alone guide for many subsequent demosaicking methods (Chung and Chan 2010). When all images from digital devices are considered, prior knowledge from natural images helps to exploit their intrinsic correlations (Menon and Calvagno 2009). Recent work showed that gradients are the decisive factor for extracting directional information from digital images (Pekkucuksen and Altunbasak 2013); the multiscale gradients (MG) method estimates the horizontal and vertical color differences by incorporating additional scales into the difference equation. Properly exploiting both intra- and inter-color correlations balances the color difference estimates (Jaiswal et al. 2014). Interpolation errors based on geometric duality in a low-resolution image partially compensate for the missing information in demosaicking (Kim et al. 2014). After the initial interpolation is completed, refinement in the individual channels can significantly improve the final performance (Menon and Calvagno 2011). In fact, the principle of the smoothness of the color difference leads to weight fusion within the local image (Zhou et al. 2012). Recently, a family of algorithms based on residual interpolation (RI) was proposed (Kiku et al. 2013). It offers a different interpolation structure and uses color residuals to interpolate all the missing points, and its succeeding versions (Kiku et al. 2014; Monno et al. 2015; Ye and Ma 2015) adopt different definitions of the residuals. RI is particularly effective for images with weaker correlation between color channels (Kiku et al. 2016).

Traditional demosaicking methods rely on a balance between the horizontal and vertical directions (Menon and Calvagno 2011; Pekkucuksen and Altunbasak 2013). In this paper, we instead estimate fully directional weights from the color difference and design two groups of weights: one from the horizontal and vertical directions, the other from the diagonal and anti-diagonal directions. Such a scheme is seldom seen in the aforementioned methods.

Proposed color demosaicking method

Green channel estimation

Since green pixels are the most numerous in the Bayer pattern, many demosaicking methods interpolate the green channel first. A difference-gradient-based interpolation in various directions at each pixel guides the interpolation along the smoother edge. One way to avoid interpolating across an edge is to adopt the second-order Laplacian interpolation filter (Zhang and Wu 2005). For a red/green row, the horizontal estimates at red and green pixels are given by

$$\begin{aligned} R^{-}_{i,j}=\frac{R_{i,j-1}+R_{i,j+1}}{2}+\frac{2G_{i,j}-G_{i,j-2}-G_{i,j+2}}{4} \end{aligned}$$
(1)
$$\begin{aligned} G^{-}_{i,j}= \frac{G_{i,j-1}+G_{i,j+1}}{2}+\frac{2R_{i,j}-R_{i,j-2}-R_{i,j+2}}{4} \end{aligned}$$
(2)

where the superscript − denotes the horizontal estimate. The vertical estimates \(R^{|}_{i,j}\) and \(G^{|}_{i,j}\) at coordinate (i, j) are computed similarly. The directional estimates yield the directional color difference

$$\begin{aligned} d^{-_{gr}}_{i,j}=\left\{ \begin{array}{ll} G^{-}_{i,j}-R_{i,j},& \text {missing}\quad G_{i,j} \\ G_{i,j}-R^{-}_{i,j},& \text {missing}\quad R_{i,j} \end{array} \right. \end{aligned}$$
(3)

and the second-order color differential (Pekkucuksen and Altunbasak 2013) is given by

$$\begin{aligned} D^{-_{gr}}_{i,j} &= \left| \dfrac{R_{i,j-1}-R_{i,j+1}}{2}-\dfrac{G_{i,j-2}-G_{i,j+2}}{4}\right. \nonumber \\& \quad \left. + \, \dfrac{R_{i,j-3}-R_{i,j+3}}{8}-\dfrac{G_{i,j-4}-G_{i,j+4}}{16}\right| \end{aligned}$$
(4)
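
As an illustration of Eqs. (1), (2) and (4), the following NumPy sketch computes the horizontal green estimate and the multiscale horizontal gradient at a single pixel of the mosaicked image. The array name `cfa`, the coordinates and the function names are our own illustrative choices, not part of any reference implementation, and border handling is ignored.

```python
import numpy as np

def horizontal_green_estimate(cfa, i, j):
    """Eq. (2): horizontal estimate of the missing G at a red pixel (i, j).
    On the Bayer mosaic, cfa[i, j +/- 1] are green samples and cfa[i, j +/- 2] are red."""
    return (cfa[i, j - 1] + cfa[i, j + 1]) / 2.0 \
        + (2 * cfa[i, j] - cfa[i, j - 2] - cfa[i, j + 2]) / 4.0

def horizontal_multiscale_gradient(cfa, i, j):
    """Eq. (4): multiscale horizontal gradient D^- evaluated on the mosaicked
    samples of a red/green row; neighbours alternate between the two colors."""
    return abs((cfa[i, j - 1] - cfa[i, j + 1]) / 2.0
               - (cfa[i, j - 2] - cfa[i, j + 2]) / 4.0
               + (cfa[i, j - 3] - cfa[i, j + 3]) / 8.0
               - (cfa[i, j - 4] - cfa[i, j + 4]) / 16.0)

# toy usage on a random mosaic, away from the border (illustration only)
cfa = np.random.rand(16, 16)
print(horizontal_green_estimate(cfa, 8, 8), horizontal_multiscale_gradient(cfa, 8, 8))
```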

Computing multiscale color gradients over a narrow window is equivalent to averaging the color difference with a lowpass filter. We further define the second-order differential in the main diagonal direction as

$$\begin{aligned} D^{\backslash _{gr}}_{i,j} &= \left| \dfrac{G_{i+1,j+1}-G_{i-1,j-1}}{2}-\dfrac{G_{i+2,j+2}-G_{i-2,j-2}}{4}\right. \nonumber \\&\quad \left. + \, \dfrac{G_{i+3,j+3}-G_{i-3,j-3}}{8}-\dfrac{G_{i+4,j+4}-G_{i-4,j-4}}{16}\right| \end{aligned}$$
(5)

The second-order differential \(D^{/_{gr}}_{i,j}\) in the anti-diagonal direction is defined analogously. At a green position, only green values are available along the diagonal directions for computing the differential information. The color differences between green and blue are obtained in the same way as in Eqs. (1)–(5). The combined green-red estimate of the first step is then directionally filtered as

$$\begin{aligned} GR_{i,j}=\left( \omega ^{|}\,{\mathbf{f}}\cdot {\mathbf{D}}^{|_{gr}}_{i-2:i+2,j}+\omega ^{-}\,{\mathbf{D}}^{-_{gr}}_{i,j-2:j+2}\cdot {\mathbf{f}}^T\right) /M_T \end{aligned}$$
(6)

where \({\mathbf{f}}=[1/4, 1/2, 1/4]\) and the operator \(\cdot\) denotes the inner product of vectors. \({\mathbf{D}}^{|_{gr}}_{i-2:i+2,j}\) and \({\mathbf{D}}^{-_{gr}}_{i,j-2:j+2}\) are the column and row vectors consisting of \(D^{|_{gr}}\) and \(D^{-_{gr}}\) values, respectively. The weights for the two directions \((\omega ^{-},\omega ^{|})\) are calculated from the color difference gradients in the horizontal and vertical directions as:

$$\begin{aligned} \omega ^{-} &= 1/\left[ \left( \mathop {\sum }\limits _{k=i-2}^{i+2}\mathop {\sum }\limits _{l=j-2}^{j+2}D^{-_{gr}}_{k,l}\right) ^4+\varepsilon \right] ,\quad \omega ^{|}=1/\left[ \left( \mathop {\sum }\limits _{k=i-2}^{i+2}\mathop {\sum }\limits _{l=j-2}^{j+2}D^{|_{gr}}_{k,l}\right) ^4+\varepsilon \right] ,\\ M_T &= \omega ^{-}+\omega ^{|} \end{aligned}$$

where \(\varepsilon\) is a small positive number that avoids a zero denominator and \(M_T\) normalizes the total weight. Because the horizontal and vertical weights decompose the edge into only two directions, they are not sufficient to represent the edge shape. To address this, we first introduce four finer directional weights:

$$\begin{aligned} \omega ^{\uparrow } &= 1/\left[ \mathop {\sum }\limits _{k=i-2}^{i}\mathop {\sum }\limits _{l=j-1}^{j+1}\left(D^{|_{gr}}_{k,l}\right)^2+\varepsilon \right] \nonumber \\ \omega ^{\downarrow } &= 1/\left[ \mathop {\sum }\limits _{k=i}^{i+2}\mathop {\sum }\limits _{l=j-1}^{j+1}\left(D^{|_{gr}}_{k,l}\right)^2+\varepsilon \right] \nonumber \\ \omega ^{\leftarrow } &= 1/\left[ \mathop {\sum }\limits _{k=i-1}^{i+1}\mathop {\sum }\limits _{l=j-2}^{j}\left(D^{-_{gr}}_{k,l}\right)^2+\varepsilon \right] \nonumber \\ \omega ^{\rightarrow } &= 1/\left[ \mathop {\sum }\limits _{k=i-1}^{i+1}\mathop {\sum }\limits _{l=j}^{j+2}\left(D^{-_{gr}}_{k,l}\right)^2+\varepsilon \right] \nonumber \\ \omega &= \omega ^{\uparrow }+\omega ^{\downarrow }+\omega ^{\leftarrow }+\omega ^{\rightarrow } \end{aligned}$$
(7)

Here, all weights are normalized to the [0, 1] interval by dividing by the sum \(\omega\); in the remainder of the paper, all calculated weights are normalized in the same way. In addition to these weights, supplementary information from the main diagonal and anti-diagonal directions is used to make better decisions in textured regions. Adding four new directional weights makes it possible to improve the green channel by updating the initial color difference estimates. The other four directional weights are

$$\begin{aligned} \omega ^{\nwarrow } &= 1/\left[ \left( \mathop {\sum }\limits _{k=0}^{2}\mathop {\sum }\limits _{l=0}^{2}\left(D^{\backslash _{gr}}_{i-k,j+l-k-1}\right)^2\right) +\varepsilon \right] \nonumber \\ \omega ^{\searrow } &= 1/\left[ \left( \mathop {\sum }\limits _{k=0}^{2}\mathop {\sum }\limits _{l=0}^{2}\left(D^{\backslash _{gr}}_{i+k,j+l+k-1}\right)^2\right) +\varepsilon \right] \nonumber \\ \omega ^{\nearrow } &= 1/\left[ \left( \mathop {\sum }\limits _{k=0}^{2}\mathop {\sum }\limits _{l=0}^{2}\left(D^{/_{gr}}_{i-k,j+l+k-1}\right)^2\right) +\varepsilon \right] \nonumber \\ \omega ^{\swarrow } &= 1/\left[ \left( \mathop {\sum }\limits _{k=0}^{2}\mathop {\sum }\limits _{l=0}^{2}\left(D^{/_{gr}}_{i+k,j+l-k-1}\right)^2\right) +\varepsilon \right] \end{aligned}$$
(8)
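
Before continuing, here is a minimal NumPy sketch of how the two weight groups of Eqs. (7) and (8) can be evaluated at one pixel. The gradient maps `D_h`, `D_v`, `D_d`, `D_a` are assumed to be precomputed 2-D arrays of \(D^{-_{gr}}\), \(D^{|_{gr}}\), \(D^{\backslash _{gr}}\) and \(D^{/_{gr}}\); all names are illustrative, and the per-group normalization is our reading of the statement that every weight group is normalized in the same way.

```python
import numpy as np

EPS = 1e-10  # stands in for the small constant epsilon in Eqs. (7)-(8)

def hv_weights(D_h, D_v, i, j):
    """Eq. (7): four horizontal/vertical weights from the gradient maps
    D_h (horizontal, D^-) and D_v (vertical, D^|), normalized to sum to 1."""
    w_up    = 1.0 / (np.sum(D_v[i-2:i+1, j-1:j+2] ** 2) + EPS)
    w_down  = 1.0 / (np.sum(D_v[i:i+3,   j-1:j+2] ** 2) + EPS)
    w_left  = 1.0 / (np.sum(D_h[i-1:i+2, j-2:j+1] ** 2) + EPS)
    w_right = 1.0 / (np.sum(D_h[i-1:i+2, j:j+3]   ** 2) + EPS)
    w = np.array([w_up, w_down, w_left, w_right])
    return w / w.sum()

def diag_weights(D_d, D_a, i, j):
    """Eq. (8): four diagonal/anti-diagonal weights from the maps
    D_d (main diagonal) and D_a (anti-diagonal), normalized to sum to 1."""
    w_nw = 1.0 / (sum(D_d[i - k, j + l - k - 1] ** 2 for k in range(3) for l in range(3)) + EPS)
    w_se = 1.0 / (sum(D_d[i + k, j + l + k - 1] ** 2 for k in range(3) for l in range(3)) + EPS)
    w_ne = 1.0 / (sum(D_a[i - k, j + l + k - 1] ** 2 for k in range(3) for l in range(3)) + EPS)
    w_sw = 1.0 / (sum(D_a[i + k, j + l - k - 1] ** 2 for k in range(3) for l in range(3)) + EPS)
    w = np.array([w_nw, w_se, w_ne, w_sw])
    return w / w.sum()

# toy usage on random gradient maps, away from the border (illustration only)
D_h, D_v, D_d, D_a = (np.random.rand(16, 16) for _ in range(4))
print(hv_weights(D_h, D_v, 8, 8), diag_weights(D_d, D_a, 8, 8))
```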

Combining these weights, we update the green-red color difference used to estimate the missing green values:

$$\begin{aligned} GR_{i,j}=(1-w_1)\,GR_{i,j}+{\mathbf{M}}_{1}\otimes {\mathbf{GR}}_{i-2:i+2,j-2:j+2} \end{aligned}$$
(9)

where

$$\begin{aligned} {\mathbf{M}}_{1}=\left[ \begin{array}{ccccc} (1-w_2)\times w_1\times \omega ^{\nwarrow }&\quad{}0&\quad{}w_2\times w_1\times \omega ^{\uparrow }&\quad{}0&\quad{}(1-w_2)\times w_1\times \omega ^{\nearrow }\\ 0&\quad{}0&\quad{}0&\quad{}0&\quad{}0\\ w_2\times w_1\times \omega ^{\leftarrow }&\quad{}0&\quad{}0&\quad{}0&\quad{}w_2\times w_1\times \omega ^{\rightarrow }\\ 0&\quad{}0&\quad{}0&\quad{}0&\quad{}0\\ (1-w_2)\times w_1\times \omega ^{\swarrow }&\quad{}0&\quad{}w_2\times w_1\times \omega ^{\downarrow }&\quad{}0&\quad{}(1-w_2)\times w_1\times \omega ^{\searrow }\\ \end{array}\right] \end{aligned}$$
(10)

In the experiments, we set \(w_1=0.6\) and \(w_2=0.8\). In our method, two groups of weights, one from the horizontal and vertical directions and the other from the diagonal directions, are built to better represent the neighborhood information. The final estimate of the green value at a red pixel is

$$\begin{aligned} G_{i,j}=R_{i,j}+GR_{i,j} \end{aligned}$$
(11)

For the green/blue rows and columns, the same procedure is performed. At this point, all the green pixels have been interpolated.
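
For concreteness, here is a minimal sketch of the update in Eqs. (9)–(11) at a single red pixel, where \(\otimes\) denotes element-wise multiplication followed by summation (formally introduced after Eq. (13)). The arrays `gr` (the combined difference of Eq. (6)) and `cfa`, and the weight vectors `w_hv` and `w_diag`, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

W1, W2 = 0.6, 0.8  # blending parameters used in the paper

def build_m1(w_hv, w_diag):
    """Eq. (10): 5x5 update kernel M1 from the normalized weight groups
    w_hv = (up, down, left, right) and w_diag = (nw, se, ne, sw)."""
    up, down, left, right = w_hv
    nw, se, ne, sw = w_diag
    m1 = np.zeros((5, 5))
    m1[0, 0], m1[0, 2], m1[0, 4] = (1 - W2) * W1 * nw, W2 * W1 * up,   (1 - W2) * W1 * ne
    m1[2, 0], m1[2, 4]           = W2 * W1 * left,                     W2 * W1 * right
    m1[4, 0], m1[4, 2], m1[4, 4] = (1 - W2) * W1 * sw, W2 * W1 * down, (1 - W2) * W1 * se
    return m1

def refine_green_at_red(gr, cfa, i, j, w_hv, w_diag):
    """Eqs. (9) and (11): update the G-R difference at red pixel (i, j),
    then recover the green value as G = R + GR."""
    m1 = build_m1(w_hv, w_diag)
    gr_new = (1 - W1) * gr[i, j] + np.sum(m1 * gr[i - 2:i + 3, j - 2:j + 3])
    return cfa[i, j] + gr_new  # Eq. (11): G = R + GR at a red location

# toy usage with random data and uniform weights (illustration only)
gr = np.random.rand(16, 16); cfa = np.random.rand(16, 16)
print(refine_green_at_red(gr, cfa, 8, 8, np.full(4, 0.25), np.full(4, 0.25)))
```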

Red/blue channel estimation at blue/red position

After the green channel has been completed, we reconstruct the red values at blue pixels and the blue values at red pixels. Because the interpolations of the red and blue channels are analogous, we discuss only the red channel without loss of generality. These red values are reconstructed over \(7\times 7\) windows. A similar weight matrix was also proposed in Pekkucuksen and Altunbasak (2013):

$$\begin{aligned} {\mathbf{M}}_2=\frac{1}{20}\times \left[ \begin{array}{ccccccc} 0&{}\quad 0&{}\quad -1&{}\quad 0&{}\quad -1&{}\quad 0&{}\quad 0\\ 0&{}\quad 0 &{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ -1&{}\quad 0&{}\quad 7&{}\quad 0&{}\quad 7&{}\quad 0&{}\quad -1\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ -1&{}\quad 0&{}\quad 7&{}\quad 0&{}\quad 7&{}\quad 0&{}\quad -1\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad -1&{}\quad 0&{}\quad -1&{}\quad 0&{}\quad 0\\ \end{array}\right] \end{aligned}$$
(12)

The green-red color difference is taken from the local window centered at pixel (i, j); in essence, this matrix resembles a negative Laplacian filter that strengthens the central region. The red value is then estimated as

$$\begin{aligned} R_{i,j}=G_{i,j}-{\mathbf{M}}_2\otimes {\mathbf{GR}}_{i-3:i+3,j-3:j+3} \end{aligned}$$
(13)

where \({\mathbf{GR}}_{i-3:i+3,j-3:j+3}\) is the G/R color difference window and the operator \(\otimes\) denotes element-wise multiplication of two equally sized matrices followed by summation. Each point is then refined using Eq. (14):

$$\begin{aligned} R_{i,j}=G_{i,j}-{\mathbf{M}}_3\otimes {\mathbf{GR}}_{i-1:i+1,j-1:j+1} \end{aligned}$$
(14)

where

$$\begin{aligned} {\mathbf{M}}_3=\left[ \begin{array}{ccc} 0&{}0.5\omega ^{\uparrow }&{}0\\ 0.5\omega ^{\leftarrow }&{}0.5&{}0.5\omega ^{\rightarrow }\\ 0&{}0.5\omega ^{\downarrow }&{}0\\ \end{array}\right] \end{aligned}$$
(15)
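
The step of Eqs. (12)–(15) can be sketched as follows; `green` and `gr` stand for the reconstructed green channel and the G-R difference map, and the helper `apply_kernel` implements the \(\otimes\) operator. The names and toy data are ours.

```python
import numpy as np

# Eq. (12): fixed 7x7 kernel M2 used to estimate the G-R difference at a blue pixel
M2 = np.zeros((7, 7))
M2[[0, 0, 6, 6, 2, 4, 2, 4], [2, 4, 2, 4, 0, 0, 6, 6]] = -1
M2[[2, 2, 4, 4], [2, 4, 2, 4]] = 7
M2 /= 20.0

def apply_kernel(kernel, patch):
    """The operator of Eqs. (13)-(18): element-wise product followed by summation."""
    return np.sum(kernel * patch)

def red_at_blue(green, gr, i, j):
    """Eq. (13): initial red estimate at a blue pixel from the G-R difference map."""
    return green[i, j] - apply_kernel(M2, gr[i - 3:i + 4, j - 3:j + 4])

def refine_red(green, gr, i, j, w_hv):
    """Eqs. (14)-(15): refinement with the 3x3 kernel M3 built from the H/V weights."""
    up, down, left, right = w_hv
    m3 = np.array([[0, 0.5 * up, 0], [0.5 * left, 0.5, 0.5 * right], [0, 0.5 * down, 0]])
    return green[i, j] - apply_kernel(m3, gr[i - 1:i + 2, j - 1:j + 2])

# toy usage with random data (illustration only)
green = np.random.rand(16, 16); gr = np.random.rand(16, 16)
print(red_at_blue(green, gr, 8, 8), refine_red(green, gr, 8, 8, np.full(4, 0.25)))
```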

Red and blue channel at green component position

Next, we interpolate the red and blue values at the green positions, i.e., at the pixels whose coordinates are both even or both odd. This procedure includes two phases. In the first phase, we estimate the red values at the green pixels with even coordinates using the horizontal and vertical color differences; this avoids overly sensitive estimation because the contribution of any single row or column is attenuated by the inverse-gradient weighting. Since the red/green color differences then become available along the interlaced diagonal directions, in the second phase we reconstruct the red/green color differences at the remaining green positions (those with odd coordinates) using the fully directional weights. The first-phase interpolation is given by Eq. (16):

$$\begin{aligned} R_{i,j}=G_{i,j}-{\mathbf{M}}_4\otimes {\mathbf{GR}}_{i-1:i+1,j-1:j+1} \end{aligned}$$
(16)

where

$$\begin{aligned} {\mathbf{M}}_4=\left[ \begin{array}{ccc} 0&{}\quad \omega ^{\uparrow }&{}\quad 0\\ \omega ^{\leftarrow }&{}\quad 0&{}\quad \omega ^{\rightarrow }\\ 0&{}\quad \omega ^{\downarrow }&{}\quad 0\\ \end{array}\right] \end{aligned}$$
(17)

The second-phase interpolation relies on the values already recovered in the same color channel: after the green pixels with even coordinates are interpolated, the recovered values support the interpolation at the remaining positions, and these prior results further improve the performance. Eq. (18), which elaborates the color difference on the basis of the previous estimates, is then applied at the corresponding locations:

$$\begin{aligned} R_{i,j}=G_{i,j}-{\mathbf{M}}_5\otimes {\mathbf{GR}}_{i-1:i+1,j-1:j+1} \end{aligned}$$
(18)

where

$$\begin{aligned} {\mathbf{M}}_5=\left[ \begin{array}{ccc} -0.25w_1&{}2w_2\times \omega ^{\uparrow }&{}-0.25w_1\\ 2w_2\times \omega ^{\leftarrow }&{}0&{}2w_2\times \omega ^{\rightarrow }\\ -0.25w_1&{}2w_2\times \omega ^{\downarrow }&{}-0.25w_1\\ \end{array}\right] \end{aligned}$$
(19)
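
A sketch of the two-phase estimation of Eqs. (16)–(19) at a single green pixel is given below; as in the previous sketches, the array and function names are illustrative and border handling is omitted.

```python
import numpy as np

W1, W2 = 0.6, 0.8  # same blending parameters as in the green-channel step

def apply_kernel(kernel, patch):
    """Element-wise product and summation, as in Eqs. (16) and (18)."""
    return np.sum(kernel * patch)

def red_at_green_phase1(green, gr, i, j, w_hv):
    """Eqs. (16)-(17): first-phase red estimate at a green pixel, using only
    the horizontal/vertical weights and the surrounding G-R differences."""
    up, down, left, right = w_hv
    m4 = np.array([[0, up, 0], [left, 0, right], [0, down, 0]])
    return green[i, j] - apply_kernel(m4, gr[i - 1:i + 2, j - 1:j + 2])

def red_at_green_phase2(green, gr, i, j, w_hv):
    """Eqs. (18)-(19): second-phase estimate that also draws on the diagonal
    neighbours made available by the first phase."""
    up, down, left, right = w_hv
    m5 = np.array([[-0.25 * W1,    2 * W2 * up,   -0.25 * W1],
                   [2 * W2 * left, 0,              2 * W2 * right],
                   [-0.25 * W1,    2 * W2 * down, -0.25 * W1]])
    return green[i, j] - apply_kernel(m5, gr[i - 1:i + 2, j - 1:j + 2])

# toy usage with random data and uniform weights (illustration only)
green = np.random.rand(16, 16); gr = np.random.rand(16, 16)
print(red_at_green_phase1(green, gr, 8, 8, np.full(4, 0.25)),
      red_at_green_phase2(green, gr, 8, 8, np.full(4, 0.25)))
```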

The missing blue values at the green positions are computed in the same way. The whole demosaicking processing chain is shown in Fig. 2; only the processing between the green and red channels is depicted, as the G-B estimation is identical. The matrices \(M_i, i=1,2,3,4,5\) correspond to the steps of this flowchart.

Fig. 2

Demosaicking processing chain. Two directions mean horizontal and vertical directions, and four directions mean horizontal, vertical and two diagonal directions

Experimental verification

In this section, we investigate the performance of the proposed demosaicking method on two well-known benchmark image sets. The first is the Kodak database, containing 24 film images digitized at a resolution of \(512\times 768\); we number these images from 1 to 24, and they are illustrated in Fig. 3. The second is the McMaster set, containing 18 color images of size \(512\times 512\), shown in Fig. 4. We sample each image according to the Bayer pattern to obtain a single-channel mosaic, reconstruct it with the different demosaicking techniques, and compare the interpolated images with the originals. Menon and Calvagno (2011) systematically investigated the performance of more than ten methods on the Kodak set. Here, we select representative algorithms and compare them with the proposed algorithm: directional linear minimum mean square-error estimation (DL) (Zhang and Wu 2005), alternating projections (AP) (Gunturk et al. 2002), adaptive filtering (AF) (Lian et al. 2007), integrated gradients (IGD) (Chung and Chan 2010), regularization approaches to demosaicking (RAD) (Menon and Calvagno 2009), and the state-of-the-art multiscale gradients (MG) (Pekkucuksen and Altunbasak 2013) and residual interpolation (RI) (Kiku et al. 2016) algorithms. Note that our implementation of the MG method shows slight per-image differences from the results reported in Pekkucuksen and Altunbasak (2013), while the average PSNR values coincide exactly. Since this implementation does not process pixels at the border, we exclude pixels whose distance to the border is fewer than 10 pixels.
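
For reproducibility, the Bayer sampling step can be sketched as below. The exact channel offsets of the pattern in Fig. 1 are not stated in the text, so the layout used here is an assumption.

```python
import numpy as np

def bayer_sample(rgb):
    """Sample a full RGB image (H x W x 3) to a single-channel Bayer mosaic.
    Layout assumed here: G at (even, even) and (odd, odd), R at (even, odd),
    B at (odd, even); these offsets are an assumption, not taken from the paper."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]   # green
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]   # green
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]   # red
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]   # blue
    return cfa

# toy usage (illustration only)
rgb = np.random.rand(512, 768, 3)
print(bayer_sample(rgb).shape)
```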

Fig. 3

Testing images in the Kodak dataset (referred to as image 1 to image 24, from left to right and top to bottom)

Fig. 4

Testing images in the McMaster dataset (referred to as image 1 to image 18, from left to right and top to bottom)

We evaluate these algorithms using two objective quality metrics: the color peak signal-to-noise ratio (CPSNR) and the structural similarity index (SSIM) (Wang et al. 2004). CPSNR is calculated as \(\text {CPSNR}=10\log _{10}(255^2/\text {CMSE})\), where \(\text {CMSE}\) is obtained by

$$\begin{aligned} \text {CMSE}=\frac{1}{3MN}\mathop {\sum }\limits _{i=r,g,b}\mathop {\sum }\limits _{x=1}^{M}\mathop {\sum }\limits _{y=1}^{N}(f(x,y,i)-f_d(x,y,i))^2 \end{aligned}$$
(20)

where f and \(f_d\) denote the original and demosaicked images, each of size \(M\times N\). The quantitative comparison (CPSNR) for the eight algorithms is summarized in Tables 1 and 2. The average CPSNR of the proposed method exceeds that of the closest competitor (MG) by 0.13 dB on the Kodak set and 0.54 dB on the McMaster set. The MG results on the Kodak set are quoted directly from Pekkucuksen and Altunbasak (2013). On the McMaster dataset, RI performs best overall, while the proposed method achieves the best performance among the color-difference-based methods.
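
A minimal sketch of the CPSNR computation of Eq. (20), including the 10-pixel border exclusion mentioned above, is given below; the function and variable names are ours.

```python
import numpy as np

def cpsnr(original, demosaicked, border=10):
    """Eq. (20): CMSE averaged over the three channels, then
    CPSNR = 10 * log10(255^2 / CMSE). A `border` margin is excluded,
    matching the border exclusion described in the text."""
    f = original[border:-border, border:-border].astype(np.float64)
    fd = demosaicked[border:-border, border:-border].astype(np.float64)
    cmse = np.mean((f - fd) ** 2)   # averages over rows, columns and channels
    return 10.0 * np.log10(255.0 ** 2 / cmse)

# toy usage with random 8-bit images (illustration only)
a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
b = np.clip(a.astype(int) + np.random.randint(-3, 4, a.shape), 0, 255).astype(np.uint8)
print(cpsnr(a, b))
```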

Table 1 CPSNR values for various algorithms in Kodak set

Compared with PSNR, which is a statistical average quality measure, SSIM is designed around the characteristics of the human visual system and correlates more strongly with human perception of image quality. To compute SSIM, we use the code provided by the original authors with default parameters and average the values over the three color channels. Tables 3 and 4 show that the average SSIM values of the proposed algorithm exceed those of the comparison methods.
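
The authors use the original authors' Matlab SSIM code; purely as an illustrative substitute, the per-channel averaging can be sketched with scikit-image as follows (assuming scikit-image is available).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # scikit-image assumed installed

def mean_color_ssim(original, demosaicked):
    """Average the single-channel SSIM over the three color channels,
    as described above (illustrative substitute for the authors' Matlab code)."""
    return np.mean([ssim(original[..., c], demosaicked[..., c], data_range=255)
                    for c in range(3)])

# toy usage with random 8-bit images (illustration only)
a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
b = np.clip(a.astype(int) + np.random.randint(-3, 4, a.shape), 0, 255).astype(np.uint8)
print(mean_color_ssim(a, b))
```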

Table 2 CPSNR values for various algorithms in McMaster set
Table 3 SSIM values for various algorithms in Kodak set
Table 4 SSIM values for various algorithms in McMaster set
Fig. 5

Close-up of image No. 19 in the Kodak set reconstructed by different methods, referred to as images 1 to 9 from left to right and top to bottom: (1) RI, (2) DL, (3) AP, (4) AF, (5) IGD, (6) RAD, (7) MG, (8) proposed, (9) original image

Figure 5 compares the visual quality of a roof region in image No. 19 of the Kodak set reconstructed by the various interpolation methods. Obvious color artifacts are visible in the results of the other methods, and the result of the proposed method is the least blurred. On the whole, the proposed method produces the most pleasing visual quality.

Table 5 compares the computational complexity of the algorithms. The simulations were conducted in Matlab on a desktop PC (Intel i7-2600 CPU). The proposed method is slower than MG because it builds on MG and retains most of MG's architecture; AF is the fastest of all the algorithms.

Table 5 Computation time (seconds) for various algorithms

Conclusion

In this paper, an efficient demosaicking method based on fully directional estimation is developed. The weighting parameters adopted here draw on information from eight directions. Unlike standard weight allocation algorithms, the new approach adaptively adjusts the weights to the local interpolation target. The proposed method integrates this weight allocation into the interpolation to perform complete demosaicking. At the same time, the images produced by the proposed approach are perceptually better than those produced without the priority estimation. Experimental results show that the proposed method performs better than methods such as DL, AP, AF, IGD and RAD, as well as the state-of-the-art MG and RI algorithms. The PSNR and SSIM results confirm that the proposed method is valid and achieves high accuracy in this application.

References

  • Adams JE, Hamilton JF Jr (1996) Adaptive color plane interpolation in single color electronic camera. U.S. Patent 5506619

  • Bayer BE (1976) Color imaging array. U.S. Patent 3971065

  • Chung K-H, Chan Y-H (2010) A low complexity color demosaicing algorithm based on integrated gradient. J Electron Imaging 19(2):021104

  • Gunturk BK, Altunbasak Y, Mersereau RM (2002) Color plane interpolation using alternating projections. IEEE Trans Image Process 11(9):997–1013

  • Jaiswal S, Au O-C, Jakhetiya V, Yuan Y, Yang H (2014) Exploitation of inter-color correlation for color image demosaicking. In: Proceedings of the IEEE international conference on image processing, pp 1812–1816

  • Kiku D, Monno Y, Tanaka M, Okutomi M (2013) Residual interpolation for color image demosaicking. In: Proceedings of the IEEE international conference on image processing, pp 2304–2308

  • Kiku D, Monno Y, Tanaka M, Okutomi M (2014) Minimized-Laplacian residual interpolation for color image demosaicking. Proc IS&T/SPIE Electron Imaging 9023:90230L-1

  • Kiku D, Monno Y, Tanaka M, Okutomi M (2016) Beyond color difference: residual interpolation for color image demosaicking. IEEE Trans Image Process 25(3):1288–1300

  • Kim J, Jeon G, Jeong J (2014) Demosaicking using geometric duality and dilated directional differentiation. Opt Commun 324:194–201

  • Li X, Gunturk B, Zhang L (2008) Image demosaicking: a systematic survey. Proc SPIE Vis Commun Image Process 6822:68221L

  • Lian N-X, Chang LL, Tan Y-P, Zagorodnov V (2007) Adaptive filtering for color filter array demosaicking. IEEE Trans Image Process 16(10):2515–2525

  • Menon D, Calvagno G (2009) Regularization approaches to demosaicking. IEEE Trans Image Process 18(10):2209–2220

  • Menon D, Calvagno G (2011) Color image demosaicking: an overview. Signal Process Image Commun 26(8–9):518–533

  • Monno Y, Kiku D, Tanaka M, Okutomi M (2015) Adaptive residual interpolation for color image demosaicking. In: Proceedings of the IEEE international conference on image processing, pp 3861–3865

  • Pekkucuksen I, Altunbasak Y (2013) Multiscale gradients-based color filter array interpolation. IEEE Trans Image Process 22(1):157–165

  • Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  • Ye W, Ma K-K (2015) Color image demosaicing using iterative residual interpolation. IEEE Trans Image Process 24(12):5879–5891

  • Zhang L, Wu X (2005) Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans Image Process 14(12):2167–2178

  • Zhang F, Wu X, Yang X, Zhang W, Zhang L (2009) Robust color demosaicking with adaptation to varying spectral correlations. IEEE Trans Image Process 18(12):2706–2717

  • Zhou D, Shen X, Dong W (2012) Colour demosaicking with directional filtering and weighting. IET Image Process 6(8):1084–1092


Authors' contributions

One key task in single-sensor color imaging is to properly reconstruct the full-color image. This manuscript presents a demosaicking method based on multiple edge decisions. Conventional methods consider only horizontal and vertical estimation; because the color difference is absent along the diagonal and anti-diagonal directions, it is difficult to estimate directional factors there. In this paper, we use multiscale color gradients to build fully directional parameters, so the proposed method introduces fewer interpolation artifacts. In addition, the proposed algorithm provides a new refinement process based on the corresponding real neighbor samples to minimize visual distortion. Experimental results show that this adaptive algorithm is efficient in both objective and subjective output quality. All authors read and approved the final manuscript.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants (61373151, U1536109, 61572309), Zhejiang Provincial Science & Technology Innovation Team focused fund (2013TD03), Key Laboratory of Solid State Disk and Data Security of Zhejiang Province (2015E10003), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) and Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET).

Competing interests

The authors declare that they have no competing interests.

Author information


Corresponding author

Correspondence to Guorui Feng.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Fan, L., Feng, G., Ren, Y. et al. Color demosaicking via fully directional estimation. SpringerPlus 5, 1736 (2016). https://doi.org/10.1186/s40064-016-3380-1
