
Hybrid regularizers-based adaptive anisotropic diffusion for image denoising

Abstract

To eliminate the staircase effect of the total variation filter and simultaneously avoid the edge blurring of the fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the \(H^{-1}\)-norm is used as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters are selected adaptively according to the diffusion function: when a pixel lies on an edge, the total variation filter is applied, which preserves the edges; when a pixel belongs to a flat region, the fourth-order filter is adopted to smooth the image, which eliminates the staircase artifacts. In addition, the split Bregman and relaxation approaches are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both qualitative and quantitative evaluations.

Introduction

With the popularity of image sensors, digital images play a key role in people’s daily life. Unfortunately, images are inevitably contaminated by noise during acquisition, transmission, and storage. Therefore, image denoising remains an open and complex problem in image processing and computer vision (Chatterjee and Milanfar 2010). Image denoising aims to recover the original image u from the observed noisy image \(u_0\), where \(u_0=u+n\) and n is zero-mean Gaussian white noise with standard deviation \(\sigma\).

During the past three decades, many approaches for removing noise have been developed, ranging from linear models to nonlinear models. Linear models perform well in smooth areas; however, they do not preserve edges and corners. To overcome these disadvantages, nonlinear denoising models have been developed which achieve a good balance between noise removal and edge preservation. Nonlinear models based on variational methods (Rudin et al. 1992) and partial differential equations (PDEs) (Perona and Malik 1990) have been widely used for image denoising. The best-known variational denoising model is the total variation (TV) model proposed by Rudin et al. (1992), which minimizes the following energy,

$$\min \limits _{u}\left\{ \int _\Omega \left( |\nabla u|+\frac{\lambda }{2}(u-u_0)^2\right) d\Omega \right\}$$
(1)

where \(\Omega \subseteq R^2\) is a bounded open domain with Lipschitz boundary, \(\nabla\) denotes the gradient operator, \(|\nabla u|\) is the TV regularization term, \((u-u_0)^2\) is the fidelity term, and \(\lambda >0\) is the regularization parameter, which controls the trade-off between the regularization term and the fidelity term.
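
For concreteness, the following minimal sketch evaluates a discretized version of the ROF energy (1). It is only an illustration: the use of Python/NumPy, the forward differences with replicated boundaries, and the isotropic gradient magnitude are our own discretization choices, not details prescribed by the model.

```python
import numpy as np

def rof_energy(u, u0, lam):
    """Discrete ROF energy (1): total variation term plus L2 fidelity."""
    # Forward differences with replicated last row/column (an illustrative choice).
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    tv = np.sum(np.sqrt(ux ** 2 + uy ** 2))          # isotropic |∇u| summed over pixels
    fidelity = 0.5 * lam * np.sum((u - u0) ** 2)     # (λ/2) ∫ (u − u0)² dΩ
    return tv + fidelity
```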

The classical TV model is efficient for removing noise and preserving edges. However, under some circumstances it produces undesirable artifacts in the recovered image, such as the staircase effect. To overcome this deficiency of the original TV model, Strong (1997) developed an adaptive TV regularization-based variational model,

$$\min \limits _u\int _\Omega \left( g(x)|\nabla u|+\frac{\lambda }{2}(u-u_0)^2\right) d\Omega$$
(2)

where g(x) is an adaptive edge-stopping function, which is defined in Strong (1997) as follows,

$$g(x)=\frac{1}{1+{\mathcal {K}}|\nabla G_\rho (x)*u_0|^2},$$
(3)

where \({\mathcal {K}}>0\) is a threshold parameter balancing noise removal and edge preservation, and \(G_\rho (x)\) is the Gaussian filter with standard deviation \(\rho\). As seen from (3), g(x) is smaller near edges and larger away from them, so model (2) is able to preserve edges while removing noise, because diffusion is suppressed across edges.

In addition, Nikolova (2002) replaced the \(\ell ^2\)-norm with the \(\ell ^1\)-norm in the fidelity term of the TV model. Osher et al. (2005) proposed an iterative regularization method for the TV model. Chen et al. (2010) presented an adaptive total variation method based on the difference curvature. Wang et al. (2011) put forward a modified TV model.

Numerical experiments demonstrate that the models mentioned above perform well in terms of the trade-off between removing noise and preserving edges. Unfortunately, the staircase effect appears in the recovered image owing to the use of the TV-norm as the regularization term. To overcome this shortcoming, high-order PDE filters have been proposed and successfully applied to image denoising (Lysaker et al. 2003; Liu et al. 2011). One of the most classical fourth-order models (LLT) was introduced by Lysaker et al. (2003), which minimizes

$$\min \limits _u\int _\Omega \left( |\nabla ^2 u|+\frac{\lambda }{2}(u-u_0)^2\right) d\Omega$$
(4)

where \(\nabla ^2 u\) denotes the second-order derivative of u (see Definition 2 below). However, a major challenge is that higher-order PDEs tend to blur edges during image denoising.

To exploit the advantages of both the TV filter and high-order PDE filters, some hybrid regularization models have recently been proposed, which combine second-order and fourth-order partial differential equations (Oh et al. 2013). Li et al. (2007) proposed an adaptive image denoising model based on hybrid regularizers, combining the advantages of the TV model and the LLT model as follows,

$$\min \limits _u\int _\Omega \left( (1-g)|\nabla u|+g|\nabla ^2 u|+\frac{\lambda }{2}(u-u_0)^2\right) d\Omega$$
(5)

where g(x) again denotes the edge-stopping function defined in (3). Experimental results indicate that model (5) performs better than the pure second-order or high-order models.

In recent years, efficient computational algorithms for solving denoising models have emerged in large numbers, for instance fixed-point iteration, gradient descent methods, primal–dual methods, relaxation methods, Bregman iteration, and the split Bregman method. These methods are efficient for image denoising while preserving edges.

Inspired by Li et al. (2007) and Liu (2015), we propose a novel adaptive anisotropic diffusion model that incorporates the advantages of the total variation filter and the fourth-order filter, and develop an efficient computational algorithm. The main contributions of this paper can be summarized as follows. First, the hybrid regularization term of the new model is composed of a total variation regularization and a fourth-order filter, and the fidelity term uses the \(H^{-1}\)-norm instead of the more commonly used \(\ell ^1\)-norm or \(\ell ^2\)-norm. The two filters are selected adaptively according to the diffusion function: when a pixel lies on an edge, the total variation filter is applied, which preserves the edges; when a pixel belongs to a flat region, the fourth-order filter is adopted to smooth the image, which eliminates the staircase artifacts. The second main contribution is that the split Bregman and relaxation approaches are successively employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model achieves higher quality, both qualitatively and quantitatively, than the state-of-the-art models cited in the paper.

The remainder of this paper is organized as follows. In “Preliminaries” section, we give some definitions. In “The new model and algorithms” section, we present the proposed model and its numerical implementation in detail. The experimental results are given in “Experiments” section. Finally, conclusions are drawn in the last section.

Preliminaries

In this section, we give a brief overview of some necessary notations and definitions for the proposed model, which will be used in the subsequent sections.

Definition 1

(Chen and Wunderli 2002). Let \(\Omega\) be an open bounded subset of \({\mathbb {R}}^n(n\ge 2)\) with Lipschitz boundary. Given a function \(u\in L^1(\Omega )\). Then the total variation of u in \(\Omega\) is defined as,

$$\int _\Omega |\nabla u|:=\sup \left\{ \int _\Omega u div\phi d\Omega |\phi \in C^1_c(\Omega ,{\mathbb {R}}^n),\Vert \phi \Vert _{L^\infty (\Omega )}\le 1\right\},$$
(6)

where div is the divergence operator, \(C_c^1(\Omega ,{\mathbb {R}}^n)\) is the subset of continuously differentiable vector functions of compact support contained in \(\Omega\), and \(L^\infty (\Omega )\) is the essential supremum norm.

Remark 1

Let the Sobolev space be \(W ^{1,1}(\Omega ):=\{u\in L^1(\Omega )|\nabla u\in L^1(\Omega )\}\). Equipped with the norm \(\Vert u\Vert _{BV^2(\Omega )}=\int _\Omega |\nabla ^2 u|+\Vert u\Vert _{ W ^{1,1}(\Omega )}\), the space \(BV^2(\Omega )\) is a Banach space.

Definition 2

(Liu et al. 2007). Let \(\Omega\) be an open bounded subset of \({\mathbb {R}}^n(n\ge 2)\) with Lipschitz boundary. Given a function \(u\in L^1(\Omega )\). Then the \(BV^2\) seminorm of u is defined as,

$$\int _\Omega |\nabla ^2 u|:=\sup \left\{ \int _\Omega \langle \nabla u, div(\varphi )\rangle _{{\mathbb {R}}^n}d\Omega \,|\,\varphi \in C^2_c(\Omega ,{\mathbb {R}}^{n\times n} ),\Vert \varphi \Vert _{L^\infty (\Omega )}\le 1\right\} ,$$
(7)

where

$$div(\varphi ):=(div(\varphi _1),div(\varphi _2),\cdots ,div(\varphi _n)),$$
(8)

with \(\varphi _i=(\varphi ^1_i,\cdots ,\varphi ^n_i)\) for each i, \(div(\varphi _i)=\sum _{j=1}^{n}\frac{\partial \varphi ^j_i}{\partial x_j}\), and \(\Vert \varphi \Vert =\sqrt{\sum _{i,j=1}^{n}(\varphi _i^j)^2}\).

Definition 3

(Liu 2015). Let \(\Omega\) be an open subset of \({\mathbb {R}}^n(n\ge 2)\) with Lipschitz boundary. Given a function \(u\in L^1(\Omega )\), and let \(\alpha (x)\ge 0\) be a continuous real function. Then the \(\alpha\)-total variation of u in \(\Omega\) is defined by,

$$\int _\Omega \alpha |\nabla u|:=\sup \left\{ \int _\Omega udiv\phi d\Omega |\phi \in C^1_c(\Omega ,{\mathbb {R}}^n),\Vert \phi _i\Vert _{L^\infty (\Omega )}\le \alpha ,1\le i\le n\right\} ,$$
(9)

where the vector valued function \(\phi =(\phi _1,\phi _2,\ldots ,\phi _n)\). Moreover, the \(\alpha -BV\) seminorm is characterized by \(\Vert u\Vert _{\alpha -BV}=\int _\Omega \alpha |\nabla u|+\Vert u\Vert _{L^1(\Omega )}\).

Definition 4

(Liu 2015). Let \(\Omega\) be an open subset of \({\mathbb {R}}^n(n\ge 2)\) with Lipschitz boundary. Given a function \(u\in L^1(\Omega )\), and let \(\beta (x)\ge 0\) be a continuous real function. Then the weighted \(BV^2\) seminorm of u in \(\Omega\) is defined as,

$$\int _\Omega \beta |\nabla ^2 u|:=\sup \left\{ \int _\Omega \langle \nabla u, div(\varphi )\rangle _{{\mathbb {R}}^n}d\Omega \,|\,\varphi \in C^2_c(\Omega ,{\mathbb {R}}^{n\times n} ),\Vert \varphi \Vert _{L^\infty (\Omega )}\le \beta \right\} ,$$
(10)

and \(\Vert u\Vert _{\beta -BV^2(\Omega )}=\int _\Omega \beta |\nabla ^2 u|+\Vert u\Vert _{ W ^{1,1}(\Omega )}\).

Definition 5

(Jia et al. 2011). For \(\lambda >0\) and \(c\in {\mathbb {R}}\), the cut (clipping) operator \(cut(c,\frac{1}{\lambda })\) is defined as,

$$\begin{aligned} cut\left( c,\frac{1}{\lambda }\right) = \left\{ \begin{array}{ll} \frac{1}{\lambda }&\quad if\quad c>\frac{1}{\lambda },\\ c&\quad if\quad -\frac{1}{\lambda }\le c\le \frac{1}{\lambda }, \\ -\frac{1}{\lambda }&\quad if \quad c<-\frac{1}{\lambda }. \end{array}\right. \end{aligned}$$
(11)
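
Operationally, (11) is simply an elementwise projection of c onto the interval \([-1/\lambda ,1/\lambda ]\). A minimal NumPy sketch (the function name and vectorized interface are our own choices):

```python
import numpy as np

def cut(c, inv_lam):
    """Clipping operator of Eq. (11): project c onto [-inv_lam, inv_lam].

    `inv_lam` plays the role of 1/λ; works elementwise on scalars or arrays.
    """
    return np.clip(c, -inv_lam, inv_lam)
```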

The new model and algorithms

The proposed model

Meyer (2001) showed that oscillatory functions are not well represented in the space \(L^2(\Omega )\), and that the weaker \(H^{-1}\)-norm is more appropriate for modeling textured or oscillatory patterns, so we replace the \(\ell ^2\)-norm of the fidelity term \((u_0-u)\) with the \(H^{-1}\)-norm. The novel adaptive image denoising model is therefore proposed as,

$$\min \limits _u\left\{ E(u)=\int _\Omega ((1-g(x))|\nabla u|+g(x)|\nabla ^2 u|)d\Omega +\frac{\lambda }{2}\Vert u_0-u\Vert ^2_{H^{-1}}\right\} ,$$
(12)

where u and \(u_0\) are the recovered image and the noisy image, respectively. As seen from Eq. (12), the \(H^{-1}\)-norm serves as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. Here \(\Vert u_0-u\Vert ^2_{H^{-1}}=\int _\Omega |\nabla (\Delta ^{-1}(u_0-u))|^2d\Omega\), and \(\Delta ^{-1}\) is the inverse Laplace operator. The diffusivity function g(x) is defined as,

$$g(x)=\exp (-{\mathcal {K}}|\nabla G_\rho *u_0|^2),$$
(13)

where the Gaussian filter \(G_\rho (x)\) pre-smooths the noisy image. The larger the standard deviation \(\sigma\) of the noise, the larger the standard deviation \(\rho\) of the Gaussian filter; we set \(\rho =C\sigma\), where C lies between 0 and 1. When \(g(x)\rightarrow 0\), the pixel lies on an edge, and the total variation filter is selected, which preserves the edges. When \(g(x)\rightarrow 1\), the pixel belongs to a flat region, and the fourth-order filter is adopted to smooth the image, which eliminates the staircase artifacts. In the remainder of this article we write g for g(x). Figure 1 shows the denoising results of our proposed model and the model from Li et al. (2007), which demonstrates that the model whose fidelity term uses the \(H^{-1}\)-norm yields better results, since the \(H^{-1}\)-norm is appropriate for representing textured or oscillatory patterns.
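
To make (13) concrete, a possible computation of the diffusivity map is sketched below. The use of `scipy.ndimage.gaussian_filter` for \(G_\rho *u_0\), central differences (`numpy.gradient`) for the gradient, and the default values (taken from our experimental setting \({\mathcal {K}}=0.005\), \(\rho =1\)) are implementation choices, not requirements of the model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def diffusivity(u0, K=0.005, rho=1.0):
    """Diffusivity g(x) = exp(-K |∇(G_ρ * u0)|²) of Eq. (13).

    g close to 0 -> pixel near an edge   -> TV (second-order) term dominates.
    g close to 1 -> pixel in a flat area -> fourth-order term dominates.
    """
    smoothed = gaussian_filter(u0.astype(float), sigma=rho)  # G_ρ * u0
    gy, gx = np.gradient(smoothed)                           # ∇(G_ρ * u0)
    return np.exp(-K * (gx ** 2 + gy ** 2))
```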

Fig. 1

Results of image denoising by our model and the model from Li et al. (2007). a Original image, b noisy image with \(\sigma =25\), c result by our model, d noise detected by our model, e result by the model from Li et al. (2007), f noise detected by the model from Li et al. (2007)

The numerical algorithm for the proposed model

We apply the split Bregman method (Cai et al. 2009) to solve Eq. (12). The idea of the split Bregman method is to combine operator splitting with Bregman iteration to solve various inverse problems (Goldstein and Osher 2009).

We turn Eq. (12) into the following constrained minimization problem by introducing an auxiliary variable z,

$$\min \limits _u\left\{ \int _\Omega \left( (1-g)|\nabla u|+g|\nabla ^2 u|+\frac{\lambda }{2}|\nabla (\Delta ^{-1}(u_0-z))|^2\right) d\Omega \right\} ,s.t. z=u.$$
(14)

The constrained problem (14) can be converted into the following unconstrained problem by introducing a Bregman variable b and a penalty parameter \(\mu >0\),

$$\min \limits _{u,z,b}\left\{ \int _\Omega \left( (1-g)|\nabla u|+g|\nabla ^2 u|+\frac{\lambda }{2}|\nabla (\Delta ^{-1}(u_0-z))|^2\right) d\Omega +\frac{\mu }{2}\Vert u-z+b\Vert ^2_2\right\} .$$
(15)

Using the split Bregman method, Eq. (15) can be solved iteratively according to the following equations,

$$\begin{aligned} \left\{ \begin{array}{rcl} u^{k+1}&=&\min \nolimits _u \left\{ \int _\Omega ((1-g)|\nabla u|+g|\nabla ^2 u|) d\Omega +\frac{\mu }{2}\Vert u-z^k+b^k\Vert ^2_2\right\} , \\ z^{k+1}&=&\min \nolimits _z\left\{ \frac{\lambda }{2}\int _\Omega |\nabla ( \Delta ^{-1}(u_0-z))|^2d\Omega +\frac{\mu }{2}\Vert u^{k+1}-z+b^k\Vert ^2_2\right\} ,\\ b^{k+1}&=&b^k+u^{k+1}-z^{k+1}, \end{array}\right. \end{aligned}$$
(16)

where k is the number of iterations.
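
Structurally, (16) is a standard split Bregman alternation. The following skeleton (a sketch only; the helper interfaces `solve_u` and `solve_z` are our own abstraction for the subproblem solvers derived below) shows how the three updates and the stopping rule (38) fit together:

```python
import numpy as np

def split_bregman_outer(u0, solve_u, solve_z, n_iter=100, tol=1e-3):
    """Outer alternation of Eq. (16).

    solve_u(z, b): minimizer of the first subproblem (hybrid TV / fourth-order
                   regularization, Eq. (17)).
    solve_z(u, b): minimizer of the second subproblem, i.e. the solution of
                   the linear equation (30).
    """
    u = u0.astype(float).copy()
    z = u.copy()
    b = np.zeros_like(u)
    for _ in range(n_iter):
        u_prev = u
        u = solve_u(z, b)                  # u-subproblem
        z = solve_z(u, b)                  # z-subproblem
        b = b + u - z                      # Bregman variable update
        if np.sum((u - u_prev) ** 2) / np.sum(u ** 2) <= tol:  # rule (38)
            break
    return u
```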

Solve the first subproblem in Eq. (16)

At present, the Euler–Lagrange equation method is usually used to solve problems similar to the first subproblem in Eq. (16). However, it converges slowly. To accelerate the computation, the split Bregman algorithm and a relaxation algorithm are adopted to solve the first subproblem in Eq. (16).

First, we define \(|\nabla u|=\Vert \nabla _x u\Vert _1+\Vert \nabla _y u\Vert _1\), and \(|\nabla ^2 u|=\Vert \Delta _x u\Vert _1+\Vert \Delta _y u\Vert _1\), and then the first subproblem in Eq. (16) can be rewritten as follows,

$$u^{k+1}=\min \limits _u\left\{ (1-g)(\Vert \nabla _x u\Vert _1+\Vert \nabla _y u\Vert _1)+g(\Vert \Delta _x u\Vert _1+\Vert \Delta _y u\Vert _1)+\frac{\mu }{2}\Vert u-z^k+b^k\Vert ^2_2\right\} ,$$
(17)

where \(\nabla _x\), \(\nabla _y\), \(\Delta _x\) and \(\Delta _y\) are the first-order and second-order difference operators, respectively. All the difference operators are approximated using the following formulas:

$$\nabla _x u_{i,j}=\left\{ \begin{array}{ll} 0&\quad if\quad i=1, \\ u_{i,j}-u_{i-1,j}&\quad if\quad 1<i\le M, \end{array}\right.$$
(18)
$$\nabla _y u_{i,j}=\left\{ \begin{array}{ll} 0&\quad if\quad j=1,\\ u_{i,j}-u_{i,j-1}&\quad if\quad 1<j\le N, \end{array}\right.$$
(19)
$$\Delta _x u_{i,j}=\left\{ \begin{array}{ll} u_{1,j}-u_{2,j}&\quad if\quad i=1, \\ 2u_{i,j}-u_{i-1,j}-u_{i+1,j}&\quad if\quad 1<i< M, \\ u_{M-1,j}-u_{M,j}&\quad if\quad i=M, \end{array}\right.$$
(20)
$$\Delta _y u_{i,j}=\left\{ \begin{array}{ll} u_{i,1}-u_{i,2}&\quad if\quad j=1,\\ 2u_{i,j}-u_{i,j-1}-u_{i,j+1}&\quad if\quad 1<j<N, \\ u_{i,N-1}-u_{i,N}&\quad if\quad j=N, \end{array}\right.$$
(21)

where \(M\times N\) represents the image size.
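
The four operators (18)–(21) translate directly into array slicing. In the sketch below we assume that the first array axis corresponds to the index i (rows, \(1\le i\le M\)) and the second to j (columns, \(1\le j\le N\)); this axis convention is our own assumption.

```python
import numpy as np

def grad_x(u):
    """Backward difference ∇_x of Eq. (18); zero on the first row."""
    d = np.zeros_like(u, dtype=float)
    d[1:, :] = u[1:, :] - u[:-1, :]
    return d

def grad_y(u):
    """Backward difference ∇_y of Eq. (19); zero on the first column."""
    d = np.zeros_like(u, dtype=float)
    d[:, 1:] = u[:, 1:] - u[:, :-1]
    return d

def lap_x(u):
    """Second difference Δ_x of Eq. (20), including its boundary rows."""
    d = np.empty_like(u, dtype=float)
    d[0, :] = u[0, :] - u[1, :]
    d[1:-1, :] = 2 * u[1:-1, :] - u[:-2, :] - u[2:, :]
    d[-1, :] = u[-2, :] - u[-1, :]
    return d

def lap_y(u):
    """Second difference Δ_y of Eq. (21), including its boundary columns."""
    d = np.empty_like(u, dtype=float)
    d[:, 0] = u[:, 0] - u[:, 1]
    d[:, 1:-1] = 2 * u[:, 1:-1] - u[:, :-2] - u[:, 2:]
    d[:, -1] = u[:, -2] - u[:, -1]
    return d
```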

Second, we introduce four auxiliary variables \(\upsilon _x, \upsilon _y, \omega _x,\) and \(\omega _y\), and then Eq. (17) can be transformed into the following constrained optimization problem,

$$u^{k+1}=\min \limits _u\left\{ (1-g)(\Vert \upsilon _x \Vert _1+\Vert \upsilon _y\Vert _1)+ g(\Vert \omega _x\Vert _1+\Vert \omega _y\Vert _1)+ \frac{\mu }{2}\Vert u-z^k+b^k\Vert ^2_2\right\} ,$$
(22)

with \(\upsilon _x=\nabla _x u, \upsilon _y=\nabla _y u, \omega _x=\Delta _x u,\) and \(\omega _y=\Delta _y u\).

The constrained problem (22) is transformed into the following unconstrained minimization problem,

$$\begin{aligned} u^{k+1}=&\min \limits _{\upsilon _x,\upsilon _y,\omega _x,\omega _y,u}\left\{ (1-g)(\Vert \upsilon _x \Vert _1+\Vert \upsilon _y\Vert _1)+ g(\Vert \omega _x\Vert _1+\Vert \omega _y\Vert _1)+ \frac{\mu }{2}\Vert u-z^k+b^k\Vert ^2_2\right. \nonumber \\&\quad +\frac{\alpha }{2}\Vert \upsilon _x-\nabla _x u-f^k_x\Vert ^2_2 + \frac{\alpha }{2}\Vert \upsilon _y-\nabla _y u-f^k_y\Vert ^2_2\nonumber \\&\left. \quad +\frac{\beta }{2}\Vert \omega _x-\Delta _x u-c^k_x\Vert ^2_2+ \frac{\beta }{2}\Vert \omega _y-\Delta _y u-c^k_y\Vert ^2_2\right\} , \end{aligned}$$
(23)

where \(\alpha >0\) and \(\beta >0\) are penalty parameters. Let \(F(\upsilon _x, \upsilon _y,\nabla _x u, \nabla _y u)= \frac{\alpha }{2}\Vert \upsilon _x-\nabla _x u-f^k_x\Vert ^2_2 + \frac{\alpha }{2}\Vert \upsilon _y-\nabla _y u-f^k_y\Vert ^2_2\) and \(E(\omega _x, \omega _y,\Delta _x u,\Delta _y u)=\frac{\beta }{2}\Vert \omega _x-\Delta _x u-c^k_x\Vert ^2_2+ \frac{\beta }{2}\Vert \omega _y-\Delta _y u-c^k_y\Vert ^2_2\). Applying the split Bregman method, Eq. (23) can be solved by the following equations,

$$\begin{aligned} \left\{ \begin{array}{rcl} u^{k+1}&=&\min \nolimits _u\left\{ \frac{\mu }{2}\Vert u-z^k+b^k\Vert ^2_2+ F(\upsilon _x^k, \upsilon _y^k,\nabla _x u, \nabla _y u)+E(\omega _x^k, \omega _y^k,\Delta _x u,\Delta _y u)\right\} ,\\ (\upsilon _x^{k+1},\upsilon _y^{k+1})&=&\min \nolimits _{\upsilon _x,\upsilon _y}\{(1-g)(\Vert \upsilon _x \Vert _1+\Vert \upsilon _y\Vert _1)+F(\upsilon _x,\upsilon _y,\nabla _x u^{k+1}, \nabla _y u^{k+1})\},\\ (\omega _x^{k+1},\omega _y^{k+1})&=&\min \nolimits _{\omega _x,\omega _y}\{g(\Vert \omega _x\Vert _1+\Vert \omega _y\Vert _1)+ E(\omega _x, \omega _y,\Delta _x u^{k+1},\Delta _y u^{k+1})\}, \end{array}\right. \end{aligned}$$
(24)

with the update equations,

$$\left\{ \begin{array}{rcl} f_x^{k+1}&=&f^k_x-(\upsilon ^{k+1}_x-\nabla _x u^{k+1}),\quad f_y^{k+1}=f^k_y-(\upsilon ^{k+1}_y-\nabla _y u^{k+1}), \\ c_x^{k+1}&=&c^k_x-(\omega ^{k+1}_x-\Delta _x u^{k+1}),\quad c_y^{k+1}=c^k_y-(\omega ^{k+1}_y-\Delta _y u^{k+1}), \end{array}\right.$$
(25)

where \(k>0\). For \(k=0\), choose \(f_x^0=f_y^0=c_x^0=c_y^0=0\) and \(\upsilon _x^0=\upsilon _y^0=\omega _x^0=\omega _y^0=0\).

According to the relaxation algorithm (Jia et al. 2011), we may define,

$$\begin{aligned}\left\{ \begin{array}{rcl} f_x^k&=&cut(\nabla _x u^k+f^{k-1}_x,1/\alpha ),\quad f_y^k=cut(\nabla _y u^k+f^{k-1}_y,1/\alpha ), \\ c_x^k&=&cut(\Delta _x u^k+c^{k-1}_x,1/\beta ), \quad c_y^k=cut(\Delta _y u^k+c^{k-1}_y,1/\beta ). \end{array}\right. \end{aligned}$$
(26)

So, we have

$$u^{k+1}=(1-t)u^k+t\left[ z^k-b^k-\frac{\alpha }{\mu }(1-g)(\nabla ^T_x f^k_x+\nabla ^T_y f^k_y)-\frac{\beta }{\mu }g(\Delta ^T_x c^k_x+\Delta ^T_y c^k_y)\right] ,$$
(27)

where t is a relaxation parameter (set to 0.2 in our experiments), and \(\nabla _x^T\), \(\nabla _y^T\), \(\Delta _x^T\) and \(\Delta _y^T\) are the adjoint operators of \(\nabla _x\), \(\nabla _y\), \(\Delta _x\) and \(\Delta _y\), respectively. \(\nabla _x^T\) and \(\nabla _y^T\) have the following discrete forms,

$$\nabla _x^T u_{i,j}=\left\{ \begin{array}{ll} -u_{2,j} &\quad if\quad i=1, \\ u_{i,j}-u_{i+1,j} &\quad if\quad 1<i<M, \\ u_{M,j} &\quad if\quad i=M, \end{array}\right.$$
(28)
$$\nabla _y^T u_{i,j}=\left\{ \begin{array}{ll} -u_{i,2}&\quad if\quad j=1, \\ u_{i,j}-u_{i,j+1} &\quad if\quad 1<j<N, \\ u_{i,N} &\quad if\quad j=N. \end{array}\right.$$
(29)

Clearly, \(\Delta _x^T=\Delta _x\) and \(\Delta _y^T=\Delta _y\).
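
The adjoints (28)–(29) and the relaxation step (27) can be sketched as follows (same axis convention as above; the function names, and the passing of the self-adjoint second-difference operators `lap_x`, `lap_y` as arguments, are our own choices):

```python
import numpy as np

def grad_x_T(v):
    """Adjoint ∇_x^T of Eq. (28)."""
    d = np.empty_like(v, dtype=float)
    d[0, :] = -v[1, :]
    d[1:-1, :] = v[1:-1, :] - v[2:, :]
    d[-1, :] = v[-1, :]
    return d

def grad_y_T(v):
    """Adjoint ∇_y^T of Eq. (29)."""
    d = np.empty_like(v, dtype=float)
    d[:, 0] = -v[:, 1]
    d[:, 1:-1] = v[:, 1:-1] - v[:, 2:]
    d[:, -1] = v[:, -1]
    return d

def relax_update_u(u, z, b, g, fx, fy, cx, cy, t, mu, alpha, beta, lap_x, lap_y):
    """Relaxation step (27) for the u-subproblem.

    fx, fy, cx, cy are the clipped variables of Eq. (26); lap_x, lap_y are the
    second-difference operators (self-adjoint: Δ_x^T = Δ_x and Δ_y^T = Δ_y).
    """
    rhs = (z - b
           - (alpha / mu) * (1 - g) * (grad_x_T(fx) + grad_y_T(fy))
           - (beta / mu) * g * (lap_x(cx) + lap_y(cy)))
    return (1 - t) * u + t * rhs
```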

Solve the second subproblem in Eq. (16)

For the second subproblem in Eq. (16), we derive the Euler–Lagrange equation with respect to z, which is as follows,

$$(\lambda -\mu \Delta )z=\lambda u_0-\mu \Delta (u^k+b^k).$$
(30)

This is a linear equation, so additive operator splitting (AOS) iteration or Gauss–Seidel (GS) iteration can be used to solve Eq. (30). We use AOS iteration to solve this equation.
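
The paper uses AOS for (30). As an illustrative alternative (not the authors' solver), the same linear system can be solved directly in the Fourier domain if one is willing to assume periodic boundary conditions, since the discrete Laplacian is then diagonalized by the FFT:

```python
import numpy as np

def solve_z_fft(u, b, u0, lam, mu):
    """Solve (λ − μΔ) z = λ u0 − μ Δ(u + b), Eq. (30), via FFT.

    Assumes periodic boundary conditions and the 5-point Laplacian; this is an
    illustrative substitute for the AOS iteration used in the paper.
    """
    M, N = u0.shape
    ky = 2 * np.pi * np.fft.fftfreq(M)[:, None]
    kx = 2 * np.pi * np.fft.fftfreq(N)[None, :]
    lap_eig = 2 * (np.cos(ky) - 1) + 2 * (np.cos(kx) - 1)   # eigenvalues of Δ, all ≤ 0

    rhs_hat = lam * np.fft.fft2(u0) - mu * lap_eig * np.fft.fft2(u + b)
    z_hat = rhs_hat / (lam - mu * lap_eig)                   # λ − μ·(≤ 0) ≥ λ > 0
    return np.real(np.fft.ifft2(z_hat))
```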

In summary, the proposed algorithm for image denoising can be described as follows,

Algorithm 1 (the pseudocode is presented as a figure in the original article)

Experiments

In this section, we experimentally compare our proposed model with the state-of-the-art models. All experiments are performed under Matlab R2009a on a PC with an Intel CPU of 1.7 GHz and 4 GB memory. Six grayscale images, viz. “Manmade”, “Lena”, “Peppers”, “Barbara”, “Cameraman”, and “House”, are selected as test examples for both qualitative and quantitative evaluations. The original test images are shown in Fig. 2. The performance of all methods is compared quantitatively using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM) (Wang et al. 2004), the multi-scale structural similarity index (MS-SSIM) (Wang et al. 2003), and the feature-similarity index (FSIM) (Zhang et al. 2011). In addition, we also compare the computing time and the number of iterations of the six models. PSNR is defined as follows,

$${\textit{PSNR}}=10\times \log _{10}\left( \frac{255^2}{\textit{MSE}}\right) \ (\mathrm{dB}),$$
(31)

with

$${\textit{MSE}}(u,{\bar{u}})=\frac{1}{M\times N}\sum \limits _i\sum \limits _j (u_{i,j}-{\bar{u}}_{i,j})^2,$$
(32)

where u and \({\bar{u}}\) are the recovered image and the original image, respectively. Generally, the larger the value of the PSNR, the better the performance. However, PSNR is not always consistent with human visual judgment. SSIM, MS-SSIM, and FSIM are closer to the human visual system, so we also use them to assess the noise removal quality. SSIM is defined by,

$${\textit{SSIM}}(u,{\bar{u}})=\frac{(2\mu _u\mu _{{\bar{u}}}+c_1)(2\sigma _{u{\bar{u}}}+c_2)}{(\mu ^2_u+\mu ^2_{{\bar{u}}}+c_1)(\sigma ^2_u+\sigma ^2_{{\bar{u}}}+c_2)},$$
(33)

where \(\mu _u\) and \(\sigma _u^2\) are the mean and variance of u, respectively, \(\sigma _{u{\bar{u}}}\) is the covariance of u and \({\bar{u}}\), and \(c_1\) and \(c_2\) are two constants to avoid instability. MS-SSIM is defined by,

$${\textit{MS-SSIM}}(u,{\bar{u}})=[l_M(u,{\bar{u}})]^{\alpha _M}\prod _{i=1}^M[c_i(u,{\bar{u}})]^{\beta _i}[s_i(u,{\bar{u}})]^{\gamma _i},$$
(34)

where the luminance distortion \(l_i(u,{\bar{u}})\), the contrast distortion \(c_i(u,{\bar{u}})\) and the structure distortion \(s_i(u,{\bar{u}})\) at scale i between images u and \({\bar{u}}\) are defined as follows,

$$\begin{aligned} \left\{ \begin{array}{lll} l_i(u,{\bar{u}})&=&\frac{2\mu _u\mu _{{\bar{u}}}+c_1}{\mu _u^2+\mu _{{\bar{u}}}^2+c_1};\\ c_i(u,{\bar{u}})&=&\frac{2\sigma _u\sigma _{{\bar{u}}}+c_2}{\sigma _u^2+\sigma _{{\bar{u}}}^2+c_2};\\ s_i(u,{\bar{u}})&=&\frac{\sigma _{u{\bar{u}}}+c_3}{\sigma _u\sigma _{{\bar{u}}}+c_3}, \end{array}\right. \end{aligned}$$
(35)

where \(\mu _u\) and \(\mu _{{\bar{u}}}\) represent the mean intensities of u and \({\bar{u}}\) at scale i; \(\sigma _u\) (resp. \(\sigma _{{\bar{u}}}\)) is the standard deviation of u (resp. \({\bar{u}}\)) at scale i, and \(\sigma _{u{\bar{u}}}\) is the covariance between u and \({\bar{u}}\) at scale i. \(c_1\), \(c_2\) and \(c_3\) are three small constants to avoid instability. In this paper, the values of the exponents \(\alpha _M\), \(\beta _i\) and \(\gamma _i\) are set the same as those in Wang et al. (2003). FSIM is defined by,

$${\textit{FSIM}}=\frac{\sum _{x\in \Omega }S_L(x)PC_m(x)}{\sum _{x\in \Omega }PC_m(x)}$$
(36)

where \(S_L(x)\) is the similarity measure at each location x, defined as the product of the similarity function \(S_{PC}(x)\) on phase congruency (PC) and the similarity function \(S_{G}(x)\) on gradient magnitude (GM). \(S_{PC}(x)\) and \(S_{G}(x)\) are defined as follows,

$$\begin{aligned} \left\{ \begin{array}{lll} S_{PC}(x)&=&\frac{2PC_u(x)\cdot PC_{{{\bar{u}}}}(x)+T_1}{PC_u^2(x)+PC_{{{\bar{u}}}}^2(x)+T_1} \\ S_{G}(x)&=&\frac{2G_u(x)\cdot G_{{{\bar{u}}}}(x)+T_2}{G_u^2(x)+G_{{{\bar{u}}}}^2(x)+T_2} \end{array}\right. \end{aligned}$$
(37)

where \(PC_u\) and \(PC_{{{\bar{u}}}}\) denote the PC maps extracted from u and \({{\bar{u}}}\), respectively, and \(G_u\) and \(G_{{{\bar{u}}}}\) denote the GM maps extracted from u and \({{\bar{u}}}\), respectively; \(T_1\) and \(T_2\) are two small positive constants to avoid instability.

The termination condition for all experiments is defined as follows,

$$\frac{\Vert u^{n+1}-u^n\Vert ^2_2}{\Vert u^{n+1}\Vert ^2_2}\le \varepsilon,$$
(38)

where \(u^n\) and \(u^{n+1}\) are the denoising results at the nth and \((n+1)\)th iterations, respectively, and \(\varepsilon\) is a given positive number. We set \(\varepsilon =10^{-3}\) in the experiments.
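
For reference, the PSNR of (31)–(32) and the termination test (38) amount to a few lines (a sketch; the peak value 255 assumes 8-bit images, as in our experiments):

```python
import numpy as np

def psnr(u, u_ref):
    """PSNR of Eqs. (31)-(32) for 8-bit images (peak value 255), in dB."""
    mse = np.mean((u.astype(float) - u_ref.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def converged(u_new, u_old, eps=1e-3):
    """Termination test of Eq. (38): relative squared change of the iterates."""
    return np.sum((u_new - u_old) ** 2) / np.sum(u_new ** 2) <= eps
```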

Fig. 2

Original test images. a Manmade image, b Lena image, c Peppers image, d Barbara image, e Cameraman image, f House image

Figure 3 shows the results for an 8-bit gray-scale synthetic image of size \(320\times 320\) pixels, corrupted by zero-mean Gaussian white noise with \(\sigma =30\). In our experiments, we use a trial-and-error method to determine the optimal parameters. We set \(\mu =0.3\), \(\alpha =\beta =0.15\), \(t=0.2\), \({\mathcal {K}}=0.005\), and \(\rho =1\) in our algorithm, while all parameter values in the TV model (Rudin et al. 1992), the LLT model (Lysaker et al. 2003), the non-local means (NLM) model (Buades et al. 2005), BLS-GSM (Portilla et al. 2003), and the hybrid model (Liu 2015) are chosen manually by trial and error to ensure the best results. Figure 3a, b are the original and noisy images, respectively. Figure 3c–h show the denoising results of the TV model, the LLT model, the NLM model, BLS-GSM, the hybrid model, and our proposed model, respectively. Figure 3c indicates that the staircase effect appears in the image recovered by the TV model. Although the LLT model relieves the staircase effect, the edges are blurred and there are serious speckles in the recovered image. The computational efficiency of the NLM model is very low. Edge blurring is visible in the BLS-GSM result in Fig. 3f. A slight staircase effect also remains in the hybrid-model result in Fig. 3g. Our proposed model relieves the staircase effect while avoiding edge blurring. Table 1 shows the PSNR, SSIM, MS-SSIM, and FSIM values corresponding to Fig. 3. From Table 1 and Fig. 3, it can be seen that our proposed model produces the best result. At the same time, the proposed method takes less computational time than the LLT, NLM, and hybrid models, although its computational time is slightly longer than that of the TV model and BLS-GSM.

Fig. 3

Results of the synthetic image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Table 1 Performance comparison of the recovered results with different methods in Fig. 3

To demonstrate that our model also works well for natural images, the next experiments are conducted on different images corrupted by zero-mean Gaussian white noise with \(\sigma =30\). The experimental results are shown in Figs. 4, 5, 6, 7 and 8, where the results of the TV model, the LLT model, the NLM model, BLS-GSM, the hybrid model, and our proposed model are illustrated, respectively. The figures show that our model produces the visually most appealing results among the six models. The quantitative PSNR, SSIM, MS-SSIM, and FSIM values are presented in Fig. 9, which shows that the performance of our proposed model is better than that of the TV model, the LLT model, the NLM model, BLS-GSM, and the hybrid model. To further verify the better performance of our proposed method, Fig. 10 shows enlarged regions cropped from Fig. 4.

Fig. 4

Results of Lena image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Fig. 5

Results of Peppers image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Fig. 6

Results of Barbara image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Fig. 7

Results of Cameraman image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Fig. 8

Results of House image by the six models. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

Fig. 9

Comparison results in a PSNR, b SSIM, c MS-SSIM and d FSIM for Gaussian noise with \(\sigma =30\)

Fig. 10

The enlarged detail regions cropped from Fig. 3. a Original image, b noisy image, c result by TV model, d result by LLT model, e result by NLM model, f result by BLS-GSM model, g result by hybrid model, h result by our model

We also use the six images corrupted with different levels of Gaussian noise to examine the performance of the proposed model and the alternative models. Tables 2, 3, 4 and 5 give the PSNR, SSIM, MS-SSIM, and FSIM values obtained by the proposed model and the alternative models, respectively. From Tables 2, 3, 4 and 5, it can be observed that our model achieves values greater than or equal to those of the other five models in PSNR, SSIM, MS-SSIM, and FSIM for the same standard deviation, which demonstrates that our model provides the best noise removal performance at different noise levels.

Table 2 Comparison results in PSNR(dB) for different levels of Gaussian noise
Table 3 Comparison results in SSIM for different levels of Gaussian noise
Table 4 Comparison results in MS-SSIM for different levels of Gaussian noise
Table 5 Comparison results in FSIM for different levels of Gaussian noise

Conclusions

To eliminate the so-called staircase effect of the total variation filter and avoid the edge blurring of the fourth-order PDE filter, we propose an adaptive anisotropic diffusion model for image denoising, which is composed of a hybrid regularization term combining a total variation filter and a fourth-order filter and a fidelity term using the \(H^{-1}\)-norm. We also develop an efficient algorithm to solve the proposed model. Numerical experiments show that our proposed model attains the highest PSNR, SSIM, MS-SSIM, and FSIM values among the six methods, and can preserve important structures such as edges and corners.

References

  • Buades A, Coll B, Morel JM (2005) A review of image denoising algorithms, with a new one. Multiscale Model Simul 4(2):490–530

  • Cai J, Osher S, Shen Z (2009) Split Bregman methods and frame based image restoration. Multiscale Model Simul 8(2):337–369

  • Chatterjee P, Milanfar P (2010) Is denoising dead? IEEE Trans Image Process 19(4):895–911

  • Chen Q, Montesinos P, Sun QS et al (2010) Adaptive total variation denoising based on difference curvature. Image Vis Comput 28(3):298–306

  • Chen YM, Wunderli T (2002) Adaptive total variation for image restoration in BV space. J Math Anal Appl 272(1):117–137

  • Goldstein T, Osher S (2009) The split Bregman method for L1-regularized problems. SIAM J Imaging Sci 2(2):323–343

  • Jia RQ, Zhao H, Zhao W (2011) Relaxation methods for image denoising based on difference schemes. Multiscale Model Simul 9(1):355–372

  • Li F, Shen CM, Fan JS, Shen CL (2007) Image restoration combining a total variational filter and a fourth-order filter. J Vis Commun Image Represent 18(4):322–330

  • Liu XW (2015) Efficient algorithms for hybrid regularizers based image denoising and deblurring. Comput Math Appl 69(7):675–687

  • Liu XW, Huang LH, Guo ZY (2011) Adaptive fourth-order partial differential equation filter for image denoising. Appl Math Lett 24(8):1282–1288

  • Liu Q, Yao Z, Ke Y (2007) Entropy solutions for a fourth-order nonlinear degenerate problem for noise removal. Nonlinear Anal Theory Methods Appl 67(6):1908–1918

  • Lysaker M, Lundervold A, Tai XC (2003) Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans Image Process 12(12):1579–1590

  • Meyer Y (2001) Oscillating patterns in image processing and nonlinear evolution equations. University Lecture Series, vol 22. American Mathematical Society, Providence, RI

  • Nikolova M (2002) Minimizers of cost-functions involving nonsmooth data-fidelity terms. SIAM J Numer Anal 40(3):965–994

  • Oh S, Woo H, Yun S et al (2013) Non-convex hybrid total variation for image denoising. J Vis Commun Image Represent 24(3):332–344

  • Osher S, Burger M, Goldfarb D et al (2005) An iterative regularization method for total variation-based image restoration. Multiscale Model Simul 4(2):460–489

  • Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 12(7):629–639

  • Portilla J, Strela V, Wainwright MJ, Simoncelli EP (2003) Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans Image Process 12(11):1338–1351

  • Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Phys D 60(1):259–268

  • Strong DM (1997) Adaptive total variation minimizing image restoration. Ph.D. thesis, UCLA

  • Wang Z, Bovik AC, Sheikh HR et al (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  • Wang Y, Chen W, Zhou S et al (2011) MTV: modified total variation model for image noise removal. Electron Lett 47(10):592–594

  • Wang Z, Simoncelli EP, Bovik AC (2003) Multi-scale structural similarity for image quality assessment. In: Proceedings of the international conference on signals, systems and computers, Pacific Grove, USA

  • Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20(9):2378–2386


Authors’ contributions

Dr. KL carried out the study design and drafted the manuscript. Prof. JT analyzed the theory and revised the manuscript. Dr. KL and Dr. LA participated in software programming and analysis. All authors read and approved the final manuscript.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61472466) and the NASF-Guangdong Joint Foundation (Key Project) (No. U1135003).

Competing interests

The authors declare that they have no competing interests.

Author information

Correspondence to Kui Liu.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Liu, K., Tan, J. & Ai, L. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising. SpringerPlus 5, 404 (2016). https://doi.org/10.1186/s40064-016-1999-6

