 Research
 Open Access
New algorithms to compute the nearness symmetric solution of the matrix equation
SpringerPlus volume 5, Article number: 1005 (2016)
Abstract
In this paper we consider the nearness symmetric solution of the matrix equation AXB = C to a given matrix \(\tilde{X}\) in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive some necessary and sufficient conditions for the matrix \(X^{*}\) to be a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem, and analyze the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763–777, 2005) and Peng (Int J Comput Math 87:1820–1830, 2010).
Background
The well-known linear matrix equation
$$AXB = C$$
(1)
has been considered by many authors. For instance, the generalized singular value decomposition method for computing the symmetric solutions, the reflexive and anti-reflexive solutions, the generalized reflexive solutions and the least-squares symmetric positive semidefinite solutions was studied by Chu (1989) (see also Dai 1990), Peng et al. (2007), Yuan et al. (2008) and Liao et al. (2003), respectively. The quotient singular value decomposition method for computing the least-squares symmetric, skew-symmetric and positive semidefinite solutions was studied by Deng et al. (2003). The generalized inverse method for computing the reflexive solutions, the positive solutions and the Hermitian part nonnegative definite solutions was considered by Cvetković-Ilić (2006), Arias et al. (2010) and Dragana et al. (2008), respectively. The matrix-form CGNE (Bjorck 2006) iteration method for computing the symmetric solutions, the skew-symmetric solutions and the least-squares symmetric solutions was given by Peng et al. (2005), Huang et al. (2008) and Lei et al. (2007) (see also Peng 2005), respectively. The matrix-form LSQR iteration method for computing the least-squares symmetric and anti-symmetric solutions was given by Qiu et al. (2007). The matrix-form BiCOR, CORS and GPBiCG iteration methods and the matrix-form CGNE iteration method for solving extended forms of the matrix Eq. (1) were studied by Hajarian (2015a, b) and Dehghan et al. (2010), respectively.
The problem of finding a nearest matrix in the symmetric solution set of the matrix Eq. (1) to a given matrix in the sense of the Frobenius norm, that is, finding \(X^{*} \in SR^{n \times n}\) with \(AX^{*} B = C\) such that
$$\left\| {X^{*} - \tilde{X}} \right\| = \mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} ,\,AXB = C}} \left\| {X - \tilde{X}} \right\|,$$
(2)
is called the matrix nearness problem. The matrix nearness problem initially arose in the testing or recovery of linear systems with incomplete or modified data. A preliminary estimate \(\tilde{X}\) of the unknown matrix \(X\) can be obtained from experimental observations and information on the static distribution. The matrix nearness problem (2) with the unknown matrix \(X\) being symmetric, skew-symmetric and generalized reflexive was considered by Liao et al. (2007) (see also Peng et al. 2005), Huang et al. (2008) and Yuan et al. (2008), respectively. The approaches taken in these papers include the generalized singular value decomposition method and the matrix-form CGNE iteration method. In addition, there are many important results on the matrix nearness problem associated with other matrix equations; we refer the reader to (Chu and Golub 2005; Deng and Hu 2005; Higham 1988; Jin and Wei 2004; Konstaintinov et al. 2003; Penrose 1956) and the references therein.
In this paper, we continue to consider the matrix nearness problem (2). By discussing equivalent forms of the matrix nearness problem (2), we derive some necessary and sufficient conditions for the matrix \(X^{*}\) to be a solution of the matrix nearness problem (2). Based on the idea of the alternating variable minimization with multiplier (AVMM) method (Bai and Tao 2015), we propose two iterative methods to compute the solution of the matrix nearness problem (2), and analyze the global convergence of the proposed algorithms. Numerical comparisons with some existing methods are also given.
Throughout this paper the following notation is used. \(R^{m \times n}\) and \(SR^{n \times n}\) denote the set of \(m \times n\) real matrices and the set of \(n \times n\) real symmetric matrices, respectively. \(I\) denotes the identity matrix with size implied by context. \(A^{ + }\) denotes the Moore–Penrose generalized inverse of the matrix \(A\). Define the inner product in the space \(R^{m \times n}\) by \(\left\langle {A,B} \right\rangle = tr(A^{T} B)\) for all \(A,B \in R^{m \times n}\); then the associated norm is the Frobenius norm, denoted by \(\left\| A \right\|\).
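For instance, this inner product and the induced Frobenius norm can be checked numerically in MATLAB (a minimal sketch):

```matlab
% The trace inner product <A,B> = tr(A'*B) and its induced (Frobenius) norm.
A = randn(4,3);  B = randn(4,3);
ip  = trace(A'*B);                 % <A,B>
nrm = sqrt(trace(A'*A));           % induced norm of A
disp(abs(ip  - sum(sum(A.*B))));   % ~0: <A,B> equals the entrywise sum
disp(abs(nrm - norm(A,'fro')));    % ~0: induced norm equals the Frobenius norm
```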
Iteration methods to solve the matrix nearness problem (2)
In this section we first give the equivalent constrained optimization problems of the matrix nearness problem (2), and discuss the properties of the solutions of these constrained optimization problems. Then we propose iteration methods to compute the solution of the equivalent constrained optimization problems, and hence to compute the solution of the matrix nearness problem (2). Finally, we prove some convergence theorems of the proposed algorithms.
Obviously, the matrix nearness problem (2) is equivalent to the following constrained optimization problem
$$\mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} ,\,Y \in R^{m \times n} }} F(X,Y) = \frac{1}{2}\left\| {X - \tilde{X}} \right\|^{2} \quad {\text{subject}}\;{\text{to}}\;AX - Y = 0,\;YB - C = 0,$$
(3)
or
$$\mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} ,\,Y \in R^{n \times p} }} \frac{1}{2}\left\| {X - \tilde{X}} \right\|^{2} \quad {\text{subject}}\;{\text{to}}\;XB - Y = 0,\;AY - C = 0.$$
(4)
Theorem 1 The matrix pair \([X^{*} \vdots Y^{*} ]\) is a solution of the constrained optimization problem (3) if and only if there exist matrices \(M^{*} \in R^{m \times n}\) and \(N^{*} \in R^{m \times p}\) such that the following equalities (5–8) hold.
Proof Assume that there exist matrices \(M^{*}\) and \(N^{*}\) such that the equalities (5–8) hold. Let
Then, for any matrices \(U \in SR^{n \times n}\) and \(V \in R^{m \times n}\), we have
This implies that the matrix pair \([X^{*} \vdots Y^{*} ]\) is a global minimizer of the matrix function \(\bar{F}(X,Y)\). Since \(AX^{*} - Y^{*} = 0\), \(Y^{*} B - C = 0\) and \(\bar{F}(X,Y) \ge \bar{F}(X^{*} ,Y^{*} )\) hold for all \(X \in SR^{n \times n}\) and \(Y \in R^{m \times n}\), we have
Hence, \(F(X,Y) \ge F(X^{*} ,Y^{*} )\) holds for all \(X \in SR^{n \times n} ,\;Y \in R^{m \times n}\) with \(AX - Y = 0\) and \(YB - C = 0\). That is, the matrix pair \([X^{*} \vdots Y^{*} ]\) is a solution of the constrained optimization problem (3).
Conversely, if the matrix pair \([X^{*} \vdots Y^{*} ]\) is a global solution of the constrained optimization problem (3), then, since the constraints of (3) are linear, the matrix pair \([X^{*} \vdots Y^{*} ]\) satisfies the Karush–Kuhn–Tucker conditions of the constrained optimization problem (3). That is, there exist matrices \(M^{*} \in R^{m \times n}\) and \(N^{*} \in R^{m \times p}\) that satisfy conditions (5–8). □
Theorem 2 The matrix pair \([X^{*} \vdots Y^{*} ]\) is a solution of the constrained optimization problem (4) if and only if there exist matrices \(M^{*} \in R^{n \times p}\) and \(N^{*} \in R^{m \times p}\) such that the following equalities (9–12) hold.
Proof The proof is similar to that of Theorem 1 and is omitted here. □
Let
$$L_{\alpha ,\beta } (X,Y,M,N) = \frac{1}{2}\left\| {X - \tilde{X}} \right\|^{2} - \left\langle {M,AX - Y} \right\rangle - \left\langle {N,YB - C} \right\rangle + \frac{\alpha }{2}\left\| {AX - Y} \right\|^{2} + \frac{\beta }{2}\left\| {YB - C} \right\|^{2} .$$
(13)
We propose an iteration method to solve the constrained optimization problem (3), and hence the matrix nearness problem (2), as follows.
Algorithm 1

Step 1.
Input the matrices \(A,B,C,\tilde{X}\) and the tolerance \(\varepsilon > 0\). Choose the initial matrices \(Y_{0} ,M_{0} ,N_{0}\) and the parameters \(\alpha ,\beta > 0\). Set \(k \leftarrow 0\).

Step 2.
Exit if a stopping criterion has been met.

Step 3.
Compute
$$X_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} }} L_{\alpha ,\beta } (X,Y_{k} ,M_{k} ,N_{k} ),$$
(14)
$$Y_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{Y \in R^{m \times n} }} L_{\alpha ,\beta } (X_{k + 1} ,Y,M_{k} ,N_{k} ),$$
(15)
$$M_{k + 1} = M_{k} - \alpha (AX_{k + 1} - Y_{k + 1} ),$$
(16)
$$N_{k + 1} = N_{k} - \beta (Y_{k + 1} B - C).$$
(17)

Step 4.
Set \(k \leftarrow k + 1\) and go to Step 2.

Let
$$\tilde{L}_{\alpha ,\beta } (X,Y,M,N) = \frac{1}{2}\left\| {X - \tilde{X}} \right\|^{2} - \left\langle {M,XB - Y} \right\rangle - \left\langle {N,AY - C} \right\rangle + \frac{\alpha }{2}\left\| {XB - Y} \right\|^{2} + \frac{\beta }{2}\left\| {AY - C} \right\|^{2} .$$
(18)
We similarly propose an iteration method to solve the constrained optimization problem (4), and hence the matrix nearness problem (2), as follows.
Algorithm 2

Step 1.
Input the matrices \(A,B,C,\tilde{X}\) and the tolerance \(\varepsilon > 0\). Choose the initial matrices \(Y_{0} ,M_{0} ,N_{0}\) and the parameters \(\alpha ,\beta > 0\). Set \(k \leftarrow 0\).

Step 2.
Exit if a stopping criterion has been met.

Step 3.
Compute
$$X_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} }} \tilde{L}_{\alpha ,\beta } (X,Y_{k} ,M_{k} ,N_{k} ),$$
(19)
$$Y_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{Y \in R^{n \times p} }} \tilde{L}_{\alpha ,\beta } (X_{k + 1} ,Y,M_{k} ,N_{k} ),$$
(20)
$$M_{k + 1} = M_{k} - \alpha (X_{k + 1} B - Y_{k + 1} ),$$
(21)
$$N_{k + 1} = N_{k} - \beta (AY_{k + 1} - C).$$
(22)
Step 4.
Set \(k \leftarrow k + 1\) and go to Step 2.
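For later reference, the augmented Lagrangians (13) and (18) can be evaluated directly in MATLAB, e.g. as the following anonymous functions (a sketch; `Xt` stands for \(\tilde{X}\)):

```matlab
% Augmented Lagrangian (13) used in Algorithm 1.
L_ab  = @(X,Y,M,N,A,B,C,Xt,alpha,beta) ...
        0.5*norm(X-Xt,'fro')^2 - trace(M'*(A*X-Y)) - trace(N'*(Y*B-C)) ...
      + 0.5*alpha*norm(A*X-Y,'fro')^2 + 0.5*beta*norm(Y*B-C,'fro')^2;

% Augmented Lagrangian (18) used in Algorithm 2.
Lt_ab = @(X,Y,M,N,A,B,C,Xt,alpha,beta) ...
        0.5*norm(X-Xt,'fro')^2 - trace(M'*(X*B-Y)) - trace(N'*(A*Y-C)) ...
      + 0.5*alpha*norm(X*B-Y,'fro')^2 + 0.5*beta*norm(A*Y-C,'fro')^2;
```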
For Algorithms 1 and 2, most of the computational cost lies in computing \(X_{k + 1}\) and \(Y_{k + 1}\). Below, we discuss how to compute them. First, \(X_{k + 1}\) in (14) can be expressed as
$$X_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} }} \left\| {SX - T} \right\|^{2} ,$$
(23)
where \(S = \left[ {\begin{array}{*{20}c} {\sqrt \alpha A} \\ I \\ \end{array} } \right] \in R^{(m + n) \times n}\), \(T = \left[ {\begin{array}{*{20}c} {\sqrt \alpha Y_{k} + M_{k} /\sqrt \alpha } \\ {\tilde{X}} \\ \end{array} } \right] \in R^{(m + n) \times n}\). Analogously, \(X_{k + 1}\) in (19) can be expressed as
$$X_{k + 1} = \arg \mathop {\hbox{min} }\limits_{{X \in SR^{n \times n} }} \left\| {XS - T} \right\|^{2} ,$$
(24)
where \(S = \left[ {I,\sqrt \alpha B} \right] \in R^{n \times (n + p)}\), \(T = \left[ {\tilde{X},\sqrt \alpha Y_{k} + M_{k} /\sqrt \alpha } \right] \in R^{n \times (n + p)}\).
To solve the problems (23) and (24), we give the following Lemma 1.
Lemma 1 (Sun 1988) Given matrices \(B \in R^{n \times n}\) and \(\Upsigma = diag(\sigma_{1} ,\sigma_{2} , \ldots ,\sigma_{n} )\) with \(\sigma_{i} > 0\;(i = 1, \ldots ,n)\), the problem \(\left\| {X\Upsigma - B} \right\|^{2} = \hbox{min}\) has a unique least-squares symmetric solution with the following expression
$$X = \Upphi \circ (B\Upsigma + \Upsigma B^{T} ),$$
where \(\Upphi_{ij} = \frac{1}{{\sigma_{i}^{2} + \sigma_{j}^{2} }}\), \(\Upphi = (\Upphi_{ij} ) \in R^{n \times n},\) and \(A \circ B = (a_{ij} b_{ij} )\) denotes the Hadamard product.
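A small MATLAB check of Lemma 1 (a sketch; optimality is verified through the symmetric part of the gradient of \(\left\| {X\Upsigma - B} \right\|^{2}\)):

```matlab
% Lemma 1: X = Phi .* (B*Sigma + Sigma*B') solves min ||X*Sigma - B|| over X = X'.
n     = 5;
B     = randn(n);
s     = rand(n,1) + 0.5;                       % positive sigma_i
Sigma = diag(s);
s2    = s.^2;
Phi   = 1 ./ (s2*ones(1,n) + ones(n,1)*s2');   % Phi_ij = 1/(sigma_i^2 + sigma_j^2)
X     = Phi .* (B*Sigma + Sigma*B');
G     = (X*Sigma - B)*Sigma;                   % gradient of 0.5*||X*Sigma - B||^2
disp(norm(X - X','fro'));                      % ~0: X is symmetric
disp(norm(G + G','fro'));                      % ~0: first-order optimality over SR^{n x n}
```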
Noting that the matrix \(S\) in (23) has full column rank, the singular value decomposition (SVD) of \(S\) can be expressed as
$$S = U\left[ {\begin{array}{*{20}c} \Upsigma \\ 0 \\ \end{array} } \right]V^{T} = U_{1} \Upsigma V^{T} ,$$
where \(\Upsigma = diag(\sigma_{1} , \ldots ,\sigma_{n} )\), \(\sigma_{i} > 0\), \(U = (U_{1} ,U_{2} ) \in R^{(m + n) \times (m + n)}\) and \(V \in R^{n \times n}\) are orthogonal matrices, and \(U_{1} \in R^{(m + n) \times n}\). Let \(\tilde{T} = U_{1}^{T} TV\). Then we have by Lemma 1 that \(X_{k + 1}\) in (23) can be expressed as
$$X_{k + 1} = V\left[ {\Upphi \circ (\Upsigma \tilde{T} + \tilde{T}^{T} \Upsigma )} \right]V^{T} .$$
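In MATLAB, the X-update of Algorithm 1 can therefore be sketched as follows (here `Yk`, `Mk` and `Xt` denote \(Y_{k}\), \(M_{k}\) and \(\tilde{X}\); the economy-size SVD returns \(U_{1}\), \(\Upsigma\) and \(V\) directly):

```matlab
% X-update (14): X_{k+1} = argmin_{X = X'} ||S*X - T||, cf. (23) and Lemma 1.
S          = [sqrt(alpha)*A; eye(n)];              % full column rank, (m+n) x n
T          = [sqrt(alpha)*Yk + Mk/sqrt(alpha); Xt];
[U1,Sig,V] = svd(S,'econ');                        % S = U1*Sig*V'
s2         = diag(Sig).^2;
Phi        = 1 ./ (s2*ones(1,n) + ones(n,1)*s2');
Ttil       = U1'*T*V;                              % T~ = U1'*T*V
Xk1        = V*( Phi .* (Sig*Ttil + Ttil'*Sig) )*V';
```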
Analogously, the matrix \(S\) in (24) has full row rank, and the SVD of \(S\) can be expressed as
$$S = P\left[ {\Upsigma ,0} \right]Q^{T} = P\Upsigma Q_{1}^{T} ,$$
where \(\Upsigma = diag(\sigma_{1} , \ldots ,\sigma_{n} )\), \(\sigma_{i} > 0\), \(P \in R^{n \times n}\) and \(Q = (Q_{1} ,Q_{2} ) \in R^{(n + p) \times (n + p)}\) are orthogonal matrices, and \(Q_{1} \in R^{(n + p) \times n}\). Let \(\tilde{T} = P^{T} TQ_{1}\). Then \(X_{k + 1}\) in (24) can be expressed as
$$X_{k + 1} = P\left[ {\Upphi \circ (\tilde{T}\Upsigma + \Upsigma \tilde{T}^{T} )} \right]P^{T} .$$
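The X-update of Algorithm 2 can be sketched analogously (again `Yk`, `Mk`, `Xt` denote the current iterates and \(\tilde{X}\)):

```matlab
% X-update (19): X_{k+1} = argmin_{X = X'} ||X*S - T||, cf. (24) and Lemma 1.
S          = [eye(n), sqrt(alpha)*B];              % full row rank, n x (n+p)
T          = [Xt, sqrt(alpha)*Yk + Mk/sqrt(alpha)];
[P,Sig,Q1] = svd(S,'econ');                        % S = P*Sig*Q1'
s2         = diag(Sig).^2;
Phi        = 1 ./ (s2*ones(1,n) + ones(n,1)*s2');
Ttil       = P'*T*Q1;                              % T~ = P'*T*Q1
Xk1        = P*( Phi .* (Ttil*Sig + Sig*Ttil') )*P';
```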
Then, we turn our attention to computing \(Y_{k + 1}\). By a direct calculation (setting the gradient with respect to \(Y\) to zero), \(Y_{k + 1}\) in (15) can be expressed as
$$Y_{k + 1} = \left( {\alpha AX_{k + 1} - M_{k} + N_{k} B^{T} + \beta CB^{T} } \right)\left( {\alpha I + \beta BB^{T} } \right)^{ - 1} ,$$
and \(Y_{k + 1}\) in (20) can be expressed as
$$Y_{k + 1} = \left( {\alpha I + \beta A^{T} A} \right)^{ - 1} \left( {\alpha X_{k + 1} B - M_{k} + A^{T} N_{k} + \beta A^{T} C} \right).$$
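In MATLAB these two updates amount to one linear solve each (a sketch; `Xk1`, `Mk`, `Nk` denote \(X_{k+1}\), \(M_{k}\), \(N_{k}\) of the corresponding algorithm):

```matlab
% Y-update (15) of Algorithm 1: coefficient matrix alpha*I + beta*B*B'.
Yk1_alg1 = (alpha*A*Xk1 - Mk + Nk*B' + beta*C*B') / (alpha*eye(n) + beta*(B*B'));

% Y-update (20) of Algorithm 2: coefficient matrix alpha*I + beta*A'*A.
Yk1_alg2 = (alpha*eye(n) + beta*(A'*A)) \ (alpha*Xk1*B - Mk + A'*Nk + beta*A'*C);
```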
Next, we discuss the global convergence of Algorithms 1 and 2. Since Algorithm 2 is similar to Algorithm 1, we only discuss the convergence of Algorithm 1.
Theorem 3 Let \((X^{*} ,Y^{*} ,M^{*} ,N^{*} )\) be a saddle point for the Lagrange function
of the constrained optimization problem (3), that is, the matrices \(X^{*} ,Y^{*} ,M^{*} ,N^{*}\) satisfy conditions (5–8). Define
then, the following inequality holds
Proof Since \((X^{*} ,Y^{*} ,M^{*} ,N^{*} )\) is a saddle point, we have by the saddle point theorem (Bjorck 2006) that
for all \(X,Y,M,N\), where \(L(X,Y,M,N) = \frac{1}{2}\left\| {X - \tilde{X}} \right\|^{2} - \left\langle {M,AX - Y} \right\rangle - \left\langle {N,YB - C} \right\rangle\) is called the Lagrange function of the constrained optimization problem (3). Hence we have
Noting that \(AX^{*} - Y^{*} = 0\), \(Y^{*} B - C = 0\), \(S_{k + 1} = AX_{k + 1} - Y_{k + 1}\) and \(T_{k + 1} = Y_{k + 1} B - C\), we know that the following inequality holds
Since \(X_{k + 1}\) minimizes the matrix function \(L_{\alpha ,\beta } (X,Y_{k} ,M_{k} ,N_{k} )\), we have
where the first equality is the first-order optimality condition of the problem (14), and the second equality follows from Algorithm 1. This implies that
Hence, we have
Since \(Y_{k + 1}\) minimizes \(L_{\alpha ,\beta } (X_{k + 1} ,Y,M_{k} ,N_{k} )\), we have
where the first equality is the first-order optimality condition of the problem (15), and the second equality follows from Algorithm 1. This implies that
Hence, we have
Adding the inequalities (28) and (30), and using \(AX^{*} - Y^{*} = 0\), \(Y^{*} B - C = 0\), we know that the following inequality holds
Adding the inequalities (26) and (31), we have
Noting that
and
we have by inequality (32) and the definition of \(\mu_{k}\) that
which means that the inequality (25) holds. The proof is completed. □
Theorem 3 implies that the sequence \(\left\{ {\mu_{k} } \right\}\) is nonnegative, monotonically decreasing and bounded below. Hence the limit of the sequence \(\left\{ {\mu_{k} } \right\}\) exists, which implies that the limits of the sequences \(\left\{ {Y_{k} } \right\}\), \(\left\{ {M_{k} } \right\}\), \(\left\{ {N_{k} } \right\}\) exist, and that \(S_{k + 1} + Y_{k + 1} - Y_{k} = AX_{k + 1} - Y_{k} \to 0\) and \(T_{k + 1} = Y_{k + 1} B - C \to 0\) as \(k \to \infty\). Furthermore, \(S_{k + 1} + Y_{k + 1} - Y_{k} = AX_{k + 1} - Y_{k} \to 0\) as \(k \to \infty\) implies that the limit of the sequence \(\left\{ {X_{k} } \right\}\) exists. Assume that \(X_{k} \to X^{*}\), \(Y_{k} \to Y^{*}\), \(M_{k} \to M^{*}\), \(N_{k} \to N^{*}\) as \(k \to \infty\). Then (5) and (6) hold by taking limits in Eqs. (27) and (29), respectively, and (7) and (8) hold since \(S_{k + 1} + Y_{k + 1} - Y_{k} = AX_{k + 1} - Y_{k} \to 0\) and \(T_{k + 1} = Y_{k + 1} B - C \to 0\) as \(k \to \infty\). Hence, we have by Theorem 1 that the matrix pair \([X^{*} \vdots Y^{*} ]\) is a solution of the constrained optimization problem (3), and hence \(X^{*}\) is a solution of the matrix nearness problem (2). In addition, noting that the objective function of the constrained optimization problem (3) is strictly convex and the constraint set \(\Upomega = \left\{ {\left[ {X \vdots Y} \right]\left| {AX - Y = 0,\;YB - C = 0,\;X \in SR^{n \times n} } \right.} \right\}\) is closed and convex, we know that the matrix pair \([X^{*} \vdots Y^{*} ]\) is the unique solution of problem (3). Hence the sequence \(\left\{ {X_{k} } \right\}\) generated by Algorithm 1 converges to the unique solution of the matrix nearness problem (2). These results are summarized in the following Theorem 4.
Theorem 4 Assume that \(\left\{ {X_{k} } \right\}\) is a sequence generated by Algorithm 1 with any initial matrices \(Y_{0} ,M_{0} ,N_{0}\) and any parameters \(\alpha ,\beta > 0\). Then the sequence \(\left\{ {X_{k} } \right\}\) converges to a solution of the matrix nearness problem (2).
Numerical experiments
In this section, we compare Algorithms 1 and 2 with the two existing methods proposed in Peng et al. (2005) and Peng (2010), denoted by CG and LSQR, respectively. Our computational experiments were performed on an IBM ThinkPad T410 with a 2.5 GHz CPU and 3.0 GB RAM. All tests were performed in MATLAB 7.1 under a 64-bit Windows 7 operating system.
In the implementation of Algorithms 1 and 2, we take the parameters \(\alpha = \beta = 10\). The initial matrices \(Y_{0} ,M_{0} ,N_{0}\) in Algorithms 1 and 2, and \(X_{0}\) in Algorithms CG and LSQR, are chosen as zero matrices. For all algorithms, the tolerance is \(\varepsilon = 10^{ - 8}\) and the termination criterion is \(\left\| {AX_{k} B - C} \right\| \le \varepsilon\). In addition, the maximum number of iterations of all methods is limited to 20,000.
For the matrix nearness problem (2), the matrices \(A,B,\tilde{X}\) and \(C\) are given as follows (in MATLAB style): \(A = randn(m,n)\), \(B = randn(n,p)\), \(\tilde{X} = randn(n,n)\), \(C = AX_{0} B\) with \(X_{0} = H + H^{T}\) and \(H = randn(n,n)\). Here the matrix C is chosen in this way to guarantee that the matrix nearness problem (2) is solvable.
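To make the comparison concrete, a complete MATLAB sketch of Algorithm 1 under this setting is given below (the function name `avmm_alg1` and its interface are ours; the X-update uses Lemma 1 and the Y-update the closed form given above):

```matlab
function [X, iter] = avmm_alg1(A, B, C, Xt, alpha, beta, tol, maxit)
% Sketch of Algorithm 1 (AVMM) for min ||X - Xt||_F s.t. A*X*B = C, X = X'.
[m, n] = size(A);
p = size(B, 2);
Y = zeros(m, n);  M = zeros(m, n);  N = zeros(m, p);
% The SVD of S = [sqrt(alpha)*A; I] does not change, so compute it once.
[U1, Sig, V] = svd([sqrt(alpha)*A; eye(n)], 'econ');
s2  = diag(Sig).^2;
Phi = 1 ./ (s2*ones(1,n) + ones(n,1)*s2');
Wy  = alpha*eye(n) + beta*(B*B');            % coefficient matrix of the Y-update
for iter = 1:maxit
    % X-update (14): symmetric least squares via Lemma 1.
    T    = [sqrt(alpha)*Y + M/sqrt(alpha); Xt];
    Ttil = U1'*T*V;
    X    = V*( Phi .* (Sig*Ttil + Ttil'*Sig) )*V';
    % Y-update (15): closed form from the first-order optimality condition.
    Y = (alpha*A*X - M + N*B' + beta*C*B') / Wy;
    % Multiplier updates (16)-(17).
    M = M - alpha*(A*X - Y);
    N = N - beta*(Y*B - C);
    % Stopping criterion used in the experiments: ||A*X*B - C||_F <= tol.
    if norm(A*X*B - C, 'fro') <= tol, break; end
end
end
```

With the data above, a typical call is `[X, it] = avmm_alg1(A, B, C, Xt, 10, 10, 1e-8, 20000);`, where `Xt` plays the role of \(\tilde{X}\).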
In Table 1, we report the CPU time (‘CPU’) and the number of iterations (‘IT’), based on average values over 10 repeated tests with randomly generated matrices A, B and C for each problem size.
Based on the tests reported in Table 1 and many other unreported tests showing similar patterns, we draw the following conclusions: when \(m,p \gg n\), Algorithms 1 and 2 are more effective than Algorithms CG and LSQR. When \(m \approx p \approx n\), Algorithms CG and LSQR are relatively more effective than Algorithms 1 and 2. When \(p < m\) and \(m \gg n\), Algorithm 2 is the most effective, and Algorithm 1 is the most effective when \(m < p\) and \(p \gg n\).
Conclusions
In this paper, we have considered the matrix nearness problem (2), i.e., finding the nearest symmetric solution \(X^{*}\) of the matrix equation \(AXB = C\) to a given matrix \(\tilde{X}\). By discussing equivalent forms of the considered problem, we have derived some necessary and sufficient conditions for the matrix \(X^{*}\) to be a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we have proposed two iterative methods to compute the solution of the considered problem, and have analyzed the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (2005) and Peng (2010).
References
Arias ML, Gonzalez MC (2010) Positive solutions to operator equations AXB = C. Linear Algebra Appl 433:1194–1202
Bai ZZ, Tao M (2015) Rigorous convergence analysis of alternating variable minimization with multiplier methods for quadratic programming problems with equality constraints. BIT Numer Math. doi:10.1007/s10543-015-0563-z
Bjorck A (2006) Numerical methods in scientific computing, vol. I–II. SIAM, Philadelphia. http://www.mai.liu.se/akbjo/
Chu KE (1989) Symmetric solution of linear matrix equations by matrix decomposition. Linear Algebra Appl 119:35–50
Chu M, Golub G (2005) Inverse Eigenvalue Problems, Theory Algorithms and Applications. Oxford University Press, Oxford
Cvetković-Ilić DS (2006) The reflexive solutions of the matrix equation AXB = C. Comput Math Appl 51:897–902
Dai H (1990) On the symmetric solutions of linear matrix equations. Linear Algebra Appl 131:1–7
Dehghan M, Hajarian M (2010) The general coupled matrix equations over generalized bisymmetric matrices. Linear Algebra Appl 432:1531–1552
Deng YB, Hu X (2005) On solutions of the matrix equation AXA^{T} + BYB^{T} = C. J Comput Math 23:17–26
Deng YB, Hu XY, Zhang L (2003) Least squares solution of BXA^{T} = T over symmetric, skew-symmetric, and positive semidefinite matrices X. SIAM J Matrix Anal Appl 25:486–494
Dragana S, Cvetković-Ilić DS (2008) Re-nnd solutions of the matrix equation AXB = C. J Aust Math Soc 84:63–72
Hajarian M (2015a) Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations. Appl Math Model 39:6073–6084
Hajarian M (2015b) Matrix GPBiCG algorithms for solving the general coupled matrix equations. IET Control Theory Appl 9:74–81
Higham NJ (1988) Computing a nearest symmetric positive semidefinite matrix. Linear Algebra Appl 103:103–118
Huang GX, Yin F, Guo K (2008) An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C. J Comput Appl Math 212:231–244
Jin X, Wei Y (2004) Numerical Linear Algebra and its Applications. Science Press, Beijing/New York
Konstaintinov M, Gu D, Mehrmann V, Petkov P (2003) Perturbation theory for matrix equations. Elsevier, Amsterdam
Lei Y, Liao AP (2007) A minimal residual algorithm for the inconsistent matrix equation AXB = C over symmetric matrices. Appl Math Comput 188:499–513
Liao AP, Bai ZZ (2003) Least-squares solution of AXB = D over symmetric positive semidefinite matrices X. J Comput Math 21:175–182
Liao AP, Lei Y (2007) Optimal approximate solution of the matrix equation AXB = C over symmetric matrices. J Comput Math 25:543–552
Peng ZY (2005) An iterative method for the least squares symmetric solution of the linear matrix equation AXB = C. Appl Math Comput 170:711–723
Peng ZY (2010) A matrix LSQR iterative method to solve matrix equation AXB = C. Int J Comput Math 87:1820–1830
Peng XY, Hu XY, Zhang L (2005) An iteration method for the symmetric solutions and the optimal approximation solution of matrix equation AXB = C. Appl Math Comput 160:763–777
Peng XY, Hu XY, Zhang L (2007) The reflexive and anti-reflexive solutions of the matrix equation A^{H}XB = C. J Comput Appl Math 200:749–760
Penrose R (1956) On best approximate solutions of linear matrix equations. Proc Cambridge Philos Soc 52:17–19
Qiu YY, Zhang ZY, Lu JF (2007) Matrix iterative solutions to the least squares problem of BXA^{T} = F with some linear constraints. Appl Math Comput 185:284–300
Sun JG (1988) Two kinds of inverse eigenvalue problems for real symmetric matrices. J Comput Math 3:283–290
Yuan YX, Dai H (2008) Generalized reflexive solutions of the matrix equation AXB = D and an associated optimal approximation problem. Computers Math Appl 56:1643–1649
Authors’ contributions
ZP conceived the study, carried out the analysis of the sequences and drafted the manuscript. YF, XX and DD participated in its design and coordination and helped to finalise the manuscript. All authors read and approved the final manuscript.
Acknowledgements
Research is supported by National Natural Science Foundation of China (11261014, 11271117, 11301107).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Peng, Z.-y., Fang, Y.-z., Xiao, X.-w. et al. New algorithms to compute the nearness symmetric solution of the matrix equation. SpringerPlus 5, 1005 (2016). https://doi.org/10.1186/s40064-016-2416-x
Keywords
 Symmetric matrix
 Matrix equation
 Iteration method
 AVMM method
 Matrix nearness problem
Mathematics Subject Classification
 15A24
 15A39
 65F30