
# Multiplicative noise removal using primal–dual and reweighted alternating minimization

*SpringerPlus*
**volume 5**, Article number: 277 (2016)

## Abstract

Multiplicative noise removal is an important research topic in the image processing field. In our preliminary work, an algorithm using reweighted alternating minimization was proposed to remove this kind of noise. While it achieves good results, a small parameter is needed to avoid the denominator vanishing. We find that this parameter has an important influence on the numerical results and has to be chosen carefully. In this paper a primal–dual algorithm is designed without the artificial parameter. Numerical experiments show that the new algorithm achieves good visual quality, overcomes staircase effects and preserves edges, while maintaining a high signal-to-noise ratio.

Multiplicative noise appears in many image processing applications, such as synthetic aperture radar (SAR), ultrasound imaging, single photon emission computed tomography, and positron emission tomography. It seriously degrades image quality and affects subsequent processing. Traditional denoising models based on Gaussian noise (Rudin et al. 1992; Wu and Tai 2010) are not suitable for removing this sort of noise. Hence the construction of multiplicative noise models and corresponding efficient algorithms has recently become an important research topic.

The Gamma distribution is commonly used to simulate multiplicative noise, and many models have been established on this assumption. Aubert and Aujol (2008) put forward the AA model using maximum a posteriori (MAP) estimation. Based on a logarithmic transform, Shi and Osher (2008) proposed the SO model. Huang et al. (2009) presented the linearized alternating direction method HNW, and Chen and Zhou (2014) proposed a linearized alternating direction method using a discrepancy function constraint (Huang et al. 2013). In order to better protect the edges of the denoised image, Wang et al. (2012) suggested an iteratively reweighted total variation model (IR-TV), which combines expectation maximization (EM) and total variation (TV) with the classical iteratively reweighted algorithm. To avoid a zero denominator in the iterative process, an artificial parameter is needed. It is well known that this parameter has an important influence on the numerical results and has to be chosen carefully. In this paper, an improvement of the iteratively reweighted algorithm without the artificial parameter is introduced.

The rest of the paper is organized as follows: “Iteratively reweighted model with TV” section briefly introduces the IR-TV model as well as the classical iteratively reweighted algorithm. “Solution to the model” section presents the new algorithm of IR-TV model, which is based on the primal–dual optimization and without the artificial parameter. In “Numerical experiment” section the effectiveness of the proposed algorithm is verified through numerical experiments. Finally we conclude in “Conclusion” section.

## Iteratively reweighted model with TV

Suppose the degraded image \( f({\mathbf{x}}) = u({\mathbf{x}})v({\mathbf{x}}) \), \( {\mathbf{x}} \in \Omega \), where the original image \( u({\mathbf{x}}) \) is a real piecewise smooth function defined on a bounded domain \( \Omega \subset R^{2} \), and the multiplicative noise \( v({\mathbf{x}}) \) is assumed to obey a Gamma distribution with mean 1:

In Eq. (1), \( \varGamma ( \cdot ) \) is the Gamma function, and the distribution has variance 1/*L*.
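As an illustration, Gamma noise with mean 1 and variance 1/*L* can be sampled with shape parameter *L* and scale 1/*L*. The following is a minimal sketch; the flat test image and the value L = 10 are arbitrary choices for demonstration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(u, L):
    """Return f = u * v, with v ~ Gamma(shape=L, scale=1/L): mean 1, variance 1/L."""
    v = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * v

u = np.full((128, 128), 100.0)   # flat test image of intensity 100
f = degrade(u, L=10)
print(f.mean())                  # close to 100, since E[v] = 1
print(f.std() / u.mean())        # close to sqrt(1/10) ≈ 0.316
```

Note that the noise is signal-dependent: brighter regions receive proportionally stronger perturbations, which is why Gaussian-based models fail here.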

The iteratively reweighted \( l_{1} \) regularization minimization problem attempts to find a local minimum of a concave penalty function that more closely resembles the \( l_{0} \) regularization problem (Simon and Lai 2009; Candes et al. 2008). In our previous work (Wang et al. 2012), we put forward an iteratively reweighted model

where \( z({\mathbf{x}}) = \log u({\mathbf{x}}) \) and \( \phi (z) = |\nabla z| \). The regularization parameter *μ* is a constant connected with the intensity of the noise, and \( g({\mathbf{x}}) \) is a nonnegative weight function which controls the strength of the smoothing. According to the classical iteratively reweighted algorithm, we choose

where *n* is the number of the outer iteration. Obviously, the larger \( \left| {\nabla z} \right| \) is, the weaker the smoothing strength is; thus the noise is removed while the edges are preserved.

The classical algorithm for Eq. (2) attempts to find a local minimum of a concave function, whereas in each iteration it only requires solving a convex optimization problem. In order to prevent a zero denominator, Eq. (3) is usually revised to

The parameter \( \varepsilon^{\left( n \right)} \) provides stability for the iterations. Its choice has a significant effect on the denoising result and therefore needs to be adjusted carefully; an inappropriate \( \varepsilon^{\left( n \right)} \) leads to poor denoising results (Wang et al. 2012; Simon and Lai 2009).
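To make this sensitivity concrete, here is a small sketch of the revised weight \( g = 1/(|\nabla z| + \varepsilon^{(n)}) \) on a toy image; the forward-difference discretization and the test image are our own illustrative choices. In flat regions, where \( |\nabla z| = 0 \) exactly, the weight equals \( 1/\varepsilon^{(n)} \), so the smoothing strength there changes by orders of magnitude with \( \varepsilon^{(n)} \):

```python
import numpy as np

def grad_mag(z):
    """Forward-difference gradient magnitude |∇z| (zero on the last row/column)."""
    gx = np.zeros_like(z)
    gy = np.zeros_like(z)
    gx[:-1, :] = z[1:, :] - z[:-1, :]
    gy[:, :-1] = z[:, 1:] - z[:, :-1]
    return np.hypot(gx, gy)

def reweight(z, eps):
    """Classical reweighting with the stabilizing parameter eps."""
    return 1.0 / (grad_mag(z) + eps)

# A flat region glued to a ramp: the flat half has |∇z| = 0 exactly.
z = np.hstack([np.zeros((8, 4)), np.tile(np.arange(4.0), (8, 1))])
for eps in (1e-1, 1e-4):
    g = reweight(z, eps)
    print(eps, g.max())   # g.max() = 1/eps on the flat half
```

Shrinking ε from 1e-1 to 1e-4 inflates the weight on flat pixels from 10 to 10000, which is exactly the instability the primal–dual algorithm of the next section removes.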

In the next section, we propose a novel algorithm to solve Eq. (2). First the splitting method is used to transform the original equation into two corresponding subproblems. Then the primal–dual algorithm and the Euler–Lagrange method are applied to solve these two subproblems respectively.

## Solution to the model

As Huang et al. (2009) has mentioned, let us consider the splitting form of Eq. (2)

where *w* is an auxiliary function. The parameter *γ* measures the amount of regularization of the denoised image and is chosen large enough to make *w* close to *z*; in our experiments, *γ* = 19 is chosen. The main advantage of the proposed method is that the TV norm can be used in the noise removal process in an efficient manner. Equation (5) can be split into two equations

This is an alternating minimization algorithm. The first step of the method is to apply a weighted TV denoising scheme to the image generated by the previous multiplicative noise removal step. The second step of the method is to solve a part of the optimization problem.

In this paper, a primal–dual algorithm (Bertsekas et al. 2006; Bertsekas 2011) is applied to the iteratively reweighted model (6a). The closed convex set *K* is defined by

where \( \overline{{\{ \cdot \} }} \) denotes the closed convex hull of \( \{ \cdot \} \).

Let \( X,Y \) be two finite-dimensional real vector spaces with norm \( \left\| \cdot \right\| = \left\langle { \cdot , \cdot } \right\rangle^{1/2} \), where \( \left\langle { \cdot , \cdot } \right\rangle \) is the inner product. The gradient operator \( \nabla :X \to Y \) is a continuous linear operator, with operator norm defined as

We introduce a divergence operator \( div: Y \to X \); the adjoint of the gradient operator is \( \nabla^{ * } = - div \). Then we introduce the dual variable \( p = \left( {p_{1} ,\,p_{2} } \right) \), whose divergence is \( div\,p = \partial p_{1} /\partial x_{1} + \partial p_{2} /\partial x_{2} \), and we have

The regularizer of Eq. (6a) is

and Eq. (6a) can be transformed into

For every \( w \in X \) and \( \lambda > 0 \), \( J\left( {\lambda w} \right) = \lambda J\left( w \right) \) holds, so *J* is one-homogeneous. By the Legendre–Fenchel transform, we obtain

where \( J^{ * } \left( v \right) \) is the characteristic function of the closed convex set *K*:

Since \( J^{ * * } = J \), we recover

The Euler equation for (7) is

where \( \partial J \) is the “sub-differential” of *J*. Writing this as

we get that \( q = 2\gamma \left( z^{(n-1)} - w \right)/\mu \) is the minimizer of \( \left\| q - 2\gamma z^{(n-1)}/\mu \right\|^{2} + \frac{2\gamma }{\mu }J^{ * } \left( q \right) \). Since \( J^{ * } \) is given by (3), the solution of problem (6) is simply given by

Therefore the problem of computing \( w^{\left( n \right)} \) becomes the problem of computing the nonlinear projection \( q = \pi_{\mu K/2\gamma }\left( z^{(n-1)} \right) \). Consider the following problem:

Following the standard arguments in convex analysis (Chambolle 2004; Chambolle and Pock 2011), the Karush–Kuhn–Tucker conditions yield the existence of a Lagrange multiplier \( \alpha_{i,j} \left( {\mathbf{x}} \right) \ge 0 \) such that the constrained problem (9) becomes

Notice the constraint \( \left| p \right| \le g \) in Eq. (10). For any \( {\mathbf{x}} \), \( \alpha \left( {\mathbf{x}} \right) \ge 0 \); if \( \left| p \right|^{2} < g^{2} \), then \( \alpha \left( {\mathbf{x}} \right) = 0 \). If \( \left| p \right|^{2} = g^{2} \), we see that in any case

Then

Substituting (11) into (10) gives,

We thus propose the following semi-implicit gradient descent (or fixed point) algorithm. We choose \( \tau > 0 \), let \( p_{0} = 0 \), and for any \( m \ge 0 \),

Combining with Eq. (3), \( g\left( {\mathbf{x}} \right) = \frac{1}{{\left| {\nabla z^{{\left( {n - 1} \right)}} } \right|}} \), we calculate \( p_{m + 1} \) \( \left( {m \ge 0} \right) \) by

The denominator of Eq. (13) is strictly greater than zero, which avoids the need for the artificial parameter, so no parameter adjustment is required. The method can be seen as a new method for solving the nonconvex problem. It remains to bound the norm \( \left\| {div} \right\| \).
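The fixed-point iteration can be sketched as follows. This is our own NumPy illustration of a Chambolle-type weighted projection, not the authors' code; the discretization of \( \nabla \) and \( div \) and all parameter values are assumptions. Writing \( 1/g = |\nabla z^{(n-1)}| \) turns the denominator of Eq. (13) into \( 1 + \tau\,|\nabla(\cdot)|\,|\nabla z^{(n-1)}| \), so no division by a possibly vanishing gradient ever occurs:

```python
import numpy as np

def grad(z):
    """Forward differences with Neumann-type boundary (last row/column zero)."""
    gx = np.zeros_like(z)
    gy = np.zeros_like(z)
    gx[:-1, :] = z[1:, :] - z[:-1, :]
    gy[:, :-1] = z[:, 1:] - z[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def weighted_projection(z_prev, mu, gamma, tau=0.125, iters=100):
    """Sketch of the dual iteration: returns (mu/2gamma)*div(p_m) ~ pi_{muK/2gamma}(z_prev)."""
    inv_g = np.hypot(*grad(z_prev))   # 1/g = |grad z^(n-1)|; no division by it needed
    px = np.zeros_like(z_prev)
    py = np.zeros_like(z_prev)
    c = 2.0 * gamma / mu
    for _ in range(iters):
        gx, gy = grad(div(px, py) - c * z_prev)
        denom = 1.0 + tau * np.hypot(gx, gy) * inv_g   # strictly positive
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return (mu / (2.0 * gamma)) * div(px, py)

# The w-step of (6a) would then read: w = z_prev - weighted_projection(z_prev, mu, gamma)
```

With `inv_g` replaced by the constant 1, this reduces to Chambolle's (2004) projection algorithm for the ROF model with parameter \( \mu/2\gamma \); the default `tau=0.125` matches the bound \( \delta t \le 1/8 \) of Theorem 2 below.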

### Theorem 1

(Chambolle 2004) *If* \( \kappa = \left\| \nabla \right\| = \left\| {div} \right\| \), *then* \( \kappa^{2} \le 8 \).
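The bound can be checked numerically by power iteration on \( \nabla^{*}\nabla = -div\,\nabla \) (the discrete Laplacian), whose largest eigenvalue is \( \kappa^{2} \). The forward/backward-difference discretization below is a standard choice and an assumption about the one used in the paper:

```python
import numpy as np

def grad(z):
    # forward differences, zero on the last row/column
    gx = np.zeros_like(z)
    gy = np.zeros_like(z)
    gx[:-1, :] = z[1:, :] - z[:-1, :]
    gy[:, :-1] = z[:, 1:] - z[:, :-1]
    return gx, gy

def div(px, py):
    # backward differences: the negative adjoint of grad above
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

# Power iteration on -div(grad(.)): its largest eigenvalue is kappa^2 = ||div||^2.
rng = np.random.default_rng(1)
z = rng.standard_normal((32, 32))
for _ in range(1000):
    z = -div(*grad(z))
    z /= np.linalg.norm(z)
kappa_sq = np.linalg.norm(-div(*grad(z)))
print(kappa_sq)   # just below 8, consistent with kappa^2 <= 8
```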

Similar to Chambolle (2004), Chambolle and Pock (2011) and Bresson et al. (2007), we can now show the following result about the dual algorithm for the iteratively reweighted TV model.

### Theorem 2

*Let* \( \delta t \le 1/8 \). *Then* \( \frac{\mu }{2\gamma }div\,p_{m} \) *converges to* \( \pi_{\mu K/2\gamma }\left( z^{(n-1)} \right) \) *as* \( m \to \infty \).

### Proof

By the algorithm we easily see that for every \( m \ge 0 \), \( \left| {\left( {p_{m} } \right)_{i,j} } \right| \le \left( {g\left( {\mathbf{x}} \right)} \right)_{i,j} \). Letting \( \eta = \left( p_{m + 1} - p_{m} \right)/\tau \), we obtain

Then we have

Now, consider the following equation

where \( \left( {i,j} \right) \) is any point of image region \( \Omega \) (2-dimensional matrices).

For every point \( \left( {i,j} \right) \), we get

By \( \left| {p_{i,j}^{n + 1} } \right| \le g_{i,j} \left( {\mathbf{x}} \right) \), we know

\( \left| {\frac{{\left| {\left( {\nabla \left( {z^{{\left( {n - 1} \right)}} - \frac{\mu }{2\gamma }\left( {{\text{div}}p_{m} } \right)} \right)} \right)_{i,j} } \right|}}{{g_{i,j} }}\left( {p_{m + 1} } \right)_{i,j} } \right| \le \left| {\left( {\nabla \left( {z^{{\left( {n - 1} \right)}} - \frac{\mu }{2\gamma }\left( {{\text{div}}p_{m} } \right)} \right)} \right)_{i,j} } \right| \).

So, if \( \delta t \le 1/\kappa^{2} \), \( \left\| div\,p_{m} - 2\gamma z^{(n-1)}/\mu \right\| \) is decreasing in *m*.

When \( \eta = 0 \), it holds that \( p_{m + 1} = p_{m} \).

In fact, if \( \delta t < 1/\kappa^{2} \), it is obvious that \( \eta = 0 \) is equivalent to \( p_{m + 1} = p_{m} \); if \( \delta t = 1/\kappa^{2} \), by Eq. (14), for any \( (i,j) \) in \( \Omega \) it holds that

We deduce that \( \left| \left( \nabla \left( div\,p_{m} - 2\gamma z^{(n-1)}/\mu \right) \right)_{i,j} \right| = 0 \) or \( \left| \left( p_{m + 1} \right)_{i,j} /g_{i,j} \right| = 1 \). In both cases, (12) yields \( p_{m + 1} = p_{m} \).

In the following, we prove the convergence of \( \frac{\mu }{2\gamma }div\,p_{m} \). Let \( s = \lim_{m \to \infty} \left\| div\,p_{m} - 2\gamma z^{(n-1)}/\mu \right\| \), and let \( \bar{p} \) be the limit of a converging subsequence \( \left\{ p_{m_{k}} \right\} \) of \( \left\{ p_{m} \right\} \). Letting \( \bar{p}' \) be the limit of \( \left\{ p_{m_{k} + 1} \right\} \), we have

and repeating the previous calculations we see

It holds that \( \bar{\eta }_{i,j} = \left( \bar{p}'_{i,j} - \bar{p}_{i,j} \right)/\delta t = 0 \) for any \( (i,j) \), i.e., \( \bar{p}' = \bar{p} \).

So we can deduce

which is the Euler equation for a solution of (8). One can deduce that \( \bar{p} \) solves (8) and that \( \frac{\mu}{2\gamma} div\,\bar{p} \) is the projection \( \pi_{\mu K/2\gamma }\left( z^{(n-1)} \right) \). Since this projection is unique, the whole sequence \( \frac{\mu}{2\gamma} div\,p_{m} \) converges to \( \pi_{\mu K/2\gamma }\left( z^{(n-1)} \right) \), since \( \kappa^{2} \le 8 \).

Equation (6b) is equivalent to solving the nonlinear system

## Numerical experiment

We compare our algorithm with the SO model, the HNW model and the classical iteratively reweighted total variation method (CWTV) on eliminating the staircase effect and preserving detail. The signal-to-noise ratio (SNR) of the denoised image with respect to the true image is defined as

where \( \bar{X} \) is the denoised image and \( X \) is the true image. The algorithm is stopped when the maximum SNR is attained. The test images are "Shape1", "Shape2", "Barbara", "Lena256", "Cameraman" and "Phantom". Multiplicative noise with standard variance (NSV) of 1/30 and 1/10 is considered in our experiments. Table 1 shows the effect of the artificial parameter \( \varepsilon^{(n)} \) on the denoising results of the classical iteratively reweighted isotropic total variation method, and Table 2 compares the denoising results in terms of SNR. From Table 1 we can see explicitly that a suitable artificial parameter \( \varepsilon^{(n)} \) yields better denoising results than some other models (such as the SO and HNW models), while an unsuitable \( \varepsilon^{(n)} \) yields a lower SNR than the other models. The new algorithm obtains the highest SNR among the SO model, the HNW model and the classical iteratively reweighted method; moreover, it is not affected by this parameter.
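Since the SNR formula itself did not survive extraction, the sketch below uses one common definition, \( \mathrm{SNR} = 10\log_{10}\left( \left\| X \right\|^{2} / \left\| \bar{X} - X \right\|^{2} \right) \); whether the paper's Eq. subtracts the image mean first is not recoverable from the text, so this is an assumption:

```python
import numpy as np

def snr_db(x_true, x_denoised):
    """SNR (dB) of a denoised image against the true image; assumed definition."""
    err = np.linalg.norm(x_denoised - x_true) ** 2
    return 10.0 * np.log10(np.linalg.norm(x_true) ** 2 / err)

x = np.full((8, 8), 10.0)
noisy = x + 1.0                 # uniform error of magnitude 1
print(snr_db(x, noisy))         # 10*log10(100) = 20 dB
```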

### Experiment 1: Comparison on eliminating staircase effect

“Shape1” is used as the test image in this experiment; the multiplicative noise intensity is standard variance 1/10. In our algorithm, \( \mu = 0.013 \) and the number of inner iterations is set to 30; the SNR of the denoised result reaches 12.3856 dB. Figure 1 shows the denoising results. Comparing Fig. 1c–f, we can see that the staircase effect is restrained by the alternating splitting minimization algorithms (the HNW model and our algorithm), and that the transitions in smooth regions produced by the new model have a good visual effect. Moreover, we can clearly see that the new model preserves edges and details better than the SO and HNW models; the edges and details of the restored images are preserved through the action of the weight function. In Fig. 1, more of the short horizontal lines are restored by our method than by the SO and HNW models.

### Experiment 2: Detail preserving

The “Shape2” and “Lena256” images are contaminated by multiplicative noise with standard variance 1/10. Figures 2 and 3 show the denoising results. For “Shape2”, \( \mu = 0.015 \) and the number of inner iterations is set to 30; the SNR of the denoised result reaches 16.0540 dB, which is better than the SO and HNW models. For “Lena256”, \( \mu = 0.0025 \) and the number of inner iterations is the same as in Experiment 1; the SNR of the denoised result reaches 13.9022 dB. Our algorithm preserves detail better than the SO and HNW models, especially the feather on the cap.

At the edges of the image the derivative is large, so the weight function becomes small and the smoothing of the edges is weakened; thus the edges are preserved. On the other hand, the derivative in smooth regions is very small, so the weight function is large, which strengthens the smoothing of relatively smooth regions; thus the noise is removed. Comparing Figs. 2 and 3c–f, it is obvious that the denoising results of the proposed algorithm keep details better.

## Conclusion

We have studied a new iteratively reweighted algorithm for removing multiplicative noise. An alternating minimization method is employed to solve the proposed model, and a Chambolle-type projection algorithm for the iteratively reweighted model is proposed. Our experimental results show that the quality of images restored by the proposed method is quite good, especially in preserving detail and restraining the staircase effect. Moreover, the proposed algorithm provides an approach to solving the non-convex problem.

## References

Aubert G, Aujol J (2008) A variational approach to removing multiplicative noise. SIAM J Appl Math 68(4):925–946

Bertsekas DP (2011) Convex optimization theory. Tsinghua University Press, Beijing

Bertsekas DP, Nedic A, Ozdaglar AE (2006) Convex analysis and optimization. Athena Scientific, Belmont, MA

Bresson X, Esedoglu S, Vandergheynst P, Thiran JP, Osher S (2007) Fast global minimization of the active contour/snake model. J Math Imaging Vis 28:151–167

Candes EJ, Wakin M, Boyd SP (2008) Enhancing sparsity by reweighted l1 minimization. J Fourier Anal Appl 14(5–6):877–905

Chambolle A (2004) An algorithm for total variation minimization and applications. J Math Imaging Vis 20:89–97 (special issue on mathematics and image analysis)

Chambolle A, Pock T (2011) A first-order primal–dual algorithm for convex problems with applications to imaging. J Math Imaging Vis 40:120–145

Chen D-Q, Zhou Y (2014) Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint. J Sci Comput 60:483–504

Huang Y-M, Ng MK, Wen Y-W (2009) A new total variation method for multiplicative noise removal. SIAM J Imaging Sci 2(1):20–40

Huang Y-M, Lu D-Y, Zeng T-Y (2013) Two-step approach for the restoration of images corrupted by multiplicative noise. SIAM J Sci Comput 35(6):2856–2873

Rudin L, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Phys D 60:259–268

Shi J, Osher S (2008) A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J Imaging Sci 1(3):294–321

Simon F, Lai MJ (2009) Sparsest solutions of underdetermined linear systems via \( l_q \)-minimization for \( 0 < q \le 1 \). Appl Comput Harmon Anal 26(3):395–407

Wang X-D, Feng X-C, Huo L-G (2012) Iteratively reweighted anisotropic TV based multiplicative noise removal model. Acta Autom Sin 38:444–451 (in Chinese)

Wu C, Tai X-C (2010) Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J Imaging Sci 3(3):300–339

## Authors’ contributions

In the reweighted alternating minimization algorithm for multiplicative noise removal, a small parameter is needed to avoid the denominator vanishing, and this parameter has an important influence on the numerical results and has to be chosen carefully. In this paper a primal–dual algorithm is designed without the artificial parameter. All authors read and approved the final manuscript.

### Acknowledgements

This research was supported by National Science Foundation of China (No. 61363037, 61271294) and Guangxi Natural Science Foundation (No. 2015GXNSFAA139309).

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Wang, X., Bi, Y., Feng, X. *et al.* Multiplicative noise removal using primal–dual and reweighted alternating minimization.
*SpringerPlus* **5**, 277 (2016). https://doi.org/10.1186/s40064-016-1807-3
