On A Matrix Decomposition and Its Applications
Fuzhen Zhang
Farquhar College of Arts and Sciences
Nova Southeastern University
Fort Lauderdale, FL 33314, USA
zhang@nova.edu
Abstract
We show the uniqueness and construction (of the Z matrix in Theorem 1,
to be exact) of a matrix decomposition and give an affirmative answer to a
question proposed in [J. Math. Anal. Appl. 407 (2013) 436-442].
AMS Classification: 15A45, 15A60, 47A30
Keywords: accretive-dissipative matrix, Cartesian decomposition, majorization, matrix decomposition, numerical range, sectoral decomposition, unitarily invariant norm
1 Introduction
Several recent papers [2, 3, 4, 5, 10] are devoted to the study of matrices with
numerical range in a sector of the complex plane. In particular, this includes the
study of accretive-dissipative matrices and positive definite matrices as special
cases. A matrix decomposition plays a fundamental role in these works. The
aim of this paper is twofold: show the uniqueness along with other properties
of the key matrix in the decomposition and give an affirmative answer to a
question raised in [12].
As usual, the set of n × n complex matrices is denoted by Mn . For A ∈ Mn ,
the singular values and eigenvalues of A are denoted by σi (A) and λi (A), respec-
tively, i = 1, . . . , n. The singular values are always arranged in nonincreasing
order: σ1 (A) ≥ · · · ≥ σn (A). If A is Hermitian, then all eigenvalues of A are
real and ordered as λ1 (A) ≥ · · · ≥ λn (A). Note that σj (A) = λj (|A|), where |A|
is the modulus of A, i.e., |A| = (A∗ A)1/2 with A∗ for the conjugate transpose
of A. We denote σ(A) = (σ1 (A), . . . , σn (A)) and λ(A) = (λ1 (A), . . . , λn (A)).
For a square complex matrix A, recall the Cartesian (or Toeplitz) decomposition (see, e.g., [1, p. 6] and [7, p. 7]) A = ℜA + iℑA, where

    ℜA = (A + A*)/2,    ℑA = (A − A*)/(2i).
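As a quick numerical illustration (a sketch only, assuming numpy is available; it is not part of the argument), one may form ℜA and ℑA for a random matrix and confirm that both are Hermitian and that A = ℜA + iℑA:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    re_A = (A + A.conj().T) / 2      # Hermitian part, ℜA
    im_A = (A - A.conj().T) / 2j     # "imaginary" Hermitian part, ℑA

    assert np.allclose(re_A, re_A.conj().T)   # ℜA is Hermitian
    assert np.allclose(im_A, im_A.conj().T)   # ℑA is Hermitian
    assert np.allclose(A, re_A + 1j * im_A)   # A = ℜA + iℑA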
There are many interesting properties for such a decomposition. For instance, ℜ(R*AR) = R*(ℜA)R for any A ∈ Mn and any n × m matrix R. A celebrated result due to Fan and Hoffman (see, e.g., [1, p. 73]) states that

    λj(ℜA) ≤ σj(A),   j = 1, . . . , n.    (1)
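Inequality (1) is easy to test numerically; a sketch (again assuming numpy) compares the decreasingly ordered eigenvalues of ℜA with the singular values of A for a random matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

    re_A = (A + A.conj().T) / 2
    eig_re = np.sort(np.linalg.eigvalsh(re_A))[::-1]   # λ1(ℜA) ≥ ... ≥ λn(ℜA)
    sv = np.linalg.svd(A, compute_uv=False)            # σ1(A) ≥ ... ≥ σn(A)

    assert np.all(eig_re <= sv + 1e-12)                # λj(ℜA) ≤ σj(A)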
Proof. Existence. Write A = M + iN, where M = ℜA and N = ℑA are
Hermitian. Since W (A) ⊆ Sα , A is invertible and M is positive definite. By [7,
Theorem 7.6.4] or [16, Theorem 7.6], M and N are simultaneously diagonalizable by *-congruence; that is, P*MP and P*NP are diagonal for some invertible
matrix P . It follows that we can write A = QDQ∗ for some diagonal matrix D
and invertible matrix Q. Since W (A) ⊆ Sα , we have W (D) ⊆ Sα . Thus we can
write D = diag(d1 e^{iθ1}, . . . , dn e^{iθn}), where dj > 0 and |θj| ≤ α, j = 1, . . . , n. Set X = Q diag(√d1, . . . , √dn) and Z = diag(e^{iθ1}, . . . , e^{iθn}). Then A = XZX*, as desired.
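The existence argument is constructive and can be carried out numerically. The sketch below (assuming numpy; the name sector_decomposition is ad hoc) reduces the pair (M, N) by *-congruence through a Cholesky factor of M and assembles X and Z for a sample A with ℜA > 0:

    import numpy as np

    def sector_decomposition(A):
        # Return (X, Z) with A = X Z X*, Z diagonal unitary, assuming ℜA > 0.
        M = (A + A.conj().T) / 2              # ℜA, positive definite by assumption
        N = (A - A.conj().T) / 2j             # ℑA, Hermitian
        L = np.linalg.cholesky(M)             # M = L L*
        Linv = np.linalg.inv(L)
        # Diagonalize the Hermitian matrix L^{-1} N L^{-*} by a unitary U.
        mu, U = np.linalg.eigh(Linv @ N @ Linv.conj().T)
        P = Linv.conj().T @ U                 # P*MP = I and P*NP = diag(mu)
        d = 1 + 1j * mu                       # P*AP = diag(1 + i*mu_j)
        Q = np.linalg.inv(P).conj().T         # A = Q diag(d) Q*
        X = Q @ np.diag(np.sqrt(np.abs(d)))
        Z = np.diag(d / np.abs(d))            # diagonal unitary, angles in (-π/2, π/2)
        return X, Z

    rng = np.random.default_rng(2)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = B @ B.conj().T + 0.3j * (B + B.conj().T)    # ℜA = BB* > 0 for nonsingular B
    X, Z = sector_decomposition(A)
    assert np.allclose(X @ Z @ X.conj().T, A)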
Uniqueness. Suppose that A = XZ1 X ∗ = Y Z2 Y ∗ are two decompositions
of A, where X and Y are nonsingular, Z1 and Z2 are unitary and diagonal.
We may assume Y = I (otherwise replace X with Y −1 X). We show that Z1
and Z2 have the same main diagonal entries (regardless of order). For this, we
show that β ∈ C is a diagonal entry of Z1 with multiplicity k if and only
if β is a diagonal entry of Z2 with the same multiplicity. Without loss of
generality, we may assume β = 1 (or multiply both sides by β̄ and continue
the discussion on X(β̄Z1 )X ∗ = β̄Z2 ). Let Z1 = C1 + iS1 and Z2 = C2 + iS2
be the Cartesian decompositions of Z1 and Z2, respectively. Then C1 and C2
are positive definite. Since β = 1 is a diagonal entry of Z1 with multiplicity k, 1 appears on the diagonal of C1 exactly k times, so S1 has exactly k zeros on its diagonal (for every other diagonal entry e^{iθ} of Z1 we have cos θ > 0 and θ ≠ 0, hence sin θ ≠ 0). Thus rank(XS1X*) = n − k. As XS1X* = S2, we have rank(S2) = n − k. This implies that S2 has exactly k zeros on its diagonal and, since C2 is positive definite, that 1 appears on the diagonal of Z2 with multiplicity k. We conclude that Z2 is permutation similar to Z1.
Since cos α is decreasing in α on [0, π/2), the following are immediate.
(iii). σj²(R) ≤ sec α · λj(R(ℜZ)R*) ≤ sec α · σj(RZR*) for any R and j.
(iv). σj²(X) ≤ sec α · λj(ℜA) ≤ sec α · σj(A) for all j = 1, . . . , n.
Recall that if X and Y are both n × n matrices, then XY and Y X have the same eigenvalues. We have λj(P*NP) = λj(PP*N) = λj(M⁻¹N). It follows that P*AP = I + iD, where D is the diagonal matrix of the eigenvalues µj of M⁻¹N. Let 1 + iµj = |1 + iµj| e^{iγj}, |γj| < π/2, j = 1, . . . , n. Then Z = diag(e^{iγ1}, . . . , e^{iγn}). With γ(A) = maxj |γj|, we see that W(Z), W(I + iD), and W(A) are all contained in Sγ(A).
W (A + B) ⊆ Sα .
A = R1 + iS1 , B = R2 + iS2 .
Since W(A) and W(B) are contained in Sα, we have R1 + R2 > 0. Note that for a, b, c, d > 0, (a + b)/(c + d) ≤ max{a/c, b/d} (indeed, with m = max{a/c, b/d}, we have a ≤ mc and b ≤ md, so a + b ≤ m(c + d)). We compute, for any x ≠ 0,
    |x*(S1 + S2)x| / x*(R1 + R2)x
        ≤ (|x*S1x| + |x*S2x|) / x*(R1 + R2)x
        ≤ (x*|S1|x + x*|S2|x) / (x*R1x + x*R2x)
        ≤ max{ x*|S1|x / x*R1x , x*|S2|x / x*R2x }
        ≤ tan α.
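The first three estimates in this chain hold unconditionally and are easy to check numerically; below is a sketch (assuming numpy; the helpers herm and herm_abs are ad hoc) with random Hermitian S1, S2 and random positive definite R1, R2:

    import numpy as np

    def herm(rng, n):
        # Random Hermitian n x n matrix.
        T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (T + T.conj().T) / 2

    def herm_abs(S):
        # |S| = (S^2)^{1/2} for Hermitian S, via the spectral decomposition.
        w, U = np.linalg.eigh(S)
        return (U * np.abs(w)) @ U.conj().T

    rng = np.random.default_rng(3)
    n = 5
    S1, S2 = herm(rng, n), herm(rng, n)
    R1 = herm(rng, n); R1 = R1 @ R1 + np.eye(n)   # Hermitian positive definite
    R2 = herm(rng, n); R2 = R2 @ R2 + np.eye(n)
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    q = lambda M: (x.conj() @ M @ x).real         # x*Mx (real for Hermitian M)

    lhs  = abs(x.conj() @ (S1 + S2) @ x) / q(R1 + R2)
    mid1 = (abs(x.conj() @ S1 @ x) + abs(x.conj() @ S2 @ x)) / q(R1 + R2)
    mid2 = (q(herm_abs(S1)) + q(herm_abs(S2))) / (q(R1) + q(R2))
    bound = max(q(herm_abs(S1)) / q(R1), q(herm_abs(S2)) / q(R2))

    assert lhs <= mid1 + 1e-12 and mid1 <= mid2 + 1e-12 and mid2 <= bound + 1e-12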
3 Norm inequalities for partitioned matrices
Recall that a norm ‖·‖ on Mn is unitarily invariant if ‖UAV‖ = ‖A‖ for any A ∈ Mn and all unitary U, V ∈ Mn. The unitarily invariant norms of matrices are determined by the nonzero singular values of the matrices via symmetric gauge functions (see, e.g., [16, Theorems 10.37 and 10.38]). If B is a submatrix of A ∈ Mn, then ‖B‖ is understood as the norm of the n × n matrix obtained by augmenting B with 0's, and conventionally B has n singular values with the trailing ones equal to 0; that is, σ(B) = (σ1(B), . . . , σr(B), 0, . . . , 0) ∈ Rⁿ, where r is the rank of B. Thus σ(A) and σ(B) are both in Rⁿ.
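This convention is harmless because augmenting with zero rows and columns only appends zero singular values; a small numerical check (assuming numpy):

    import numpy as np

    rng = np.random.default_rng(5)
    B = rng.standard_normal((2, 3))        # a submatrix of some 5 x 5 matrix

    padded = np.zeros((5, 5))
    padded[:2, :3] = B                     # augment B with 0's to size n x n

    sv_B = np.linalg.svd(B, compute_uv=False)
    sv_pad = np.linalg.svd(padded, compute_uv=False)

    # The padded matrix has the singular values of B followed by trailing zeros.
    assert np.allclose(sv_pad, np.concatenate([sv_B, np.zeros(3)]))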
Let A be an n-square complex matrix partitioned in the form
    A = [A11 A12; A21 A22],   where A11 and A22 are square.    (2)
In [12], the following norm inequalities are proved (in Hilbert space).

LZ1 [12, Theorem 3.3]: Let A ∈ Mn be accretive-dissipative and partitioned as in (2). Then for any unitarily invariant norm ‖·‖ on Mn,

    ‖A12‖² ≤ 4 ‖A11‖ ‖A22‖    (3)

and

    ‖A‖ ≤ √2 (‖A11‖ + ‖A22‖).    (4)
It is asked in [12] as an open problem whether the factor 4 in (3) and the factor √2 in (4) can be improved. Indeed, the factor √2 in (4) is optimal.
To construct such an accretive-dissipative matrix, we can find a matrix whose numerical range is contained in the sector Sπ/4 and then rotate it by π/4. The normal matrix

    B = [1 0; 0 1] + [0 1; 1 0] i = [1 i; i 1]

has eigenvalues 1 + i and 1 − i. So the matrix A = e^{iπ/4} B is accretive-dissipative. A and B have the same repeated singular value √2. Thus, for the trace norm (the sum of all singular values),

    2√2 = ‖A‖ = √2 (‖A11‖ + ‖A22‖) = √2 (1 + 1).
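A direct numerical check of this example (assuming numpy; tracenorm is an ad hoc helper for the trace norm):

    import numpy as np

    B = np.array([[1, 1j],
                  [1j, 1]])
    A = np.exp(1j * np.pi / 4) * B

    tracenorm = lambda M: np.linalg.svd(np.atleast_2d(M), compute_uv=False).sum()

    # A and B share the repeated singular value sqrt(2).
    assert np.allclose(np.linalg.svd(A, compute_uv=False), np.sqrt(2))

    # 2*sqrt(2) = ||A|| = sqrt(2) * (||A11|| + ||A22||) for the trace norm.
    lhs = tracenorm(A)
    rhs = np.sqrt(2) * (tracenorm(A[0, 0]) + tracenorm(A[1, 1]))
    assert np.isclose(lhs, 2 * np.sqrt(2)) and np.isclose(lhs, rhs)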
However, the factor 4 in (3) can be improved to 2 (see Corollary 3). In this section, we establish results more general than (3) and (4).
We adopt the following standard notations. Let x = (x1 , . . . , xn ), y =
(y1, . . . , yn) ∈ Rⁿ. We denote the componentwise product of x and y by x ◦ y, i.e., x ◦ y = (x1y1, . . . , xnyn). We write x ≤ y to mean xj ≤ yj for j = 1, . . . , n.
We say that x is weakly majorized by y, written x ≺w y, if for each k = 1, . . . , n the sum of the k largest components of x is less than or equal to the corresponding sum for y. We write x ≺ y if x ≺w y and the sum of all
components of x is equal to that of y. (See, e.g., [14, p. 12] or [16, p. 326].)
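For reference, the two order relations can be phrased as short routines; a sketch assuming numpy, with ad hoc names weakly_majorized and majorized:

    import numpy as np

    def weakly_majorized(x, y):
        # x ≺w y: every partial sum of the k largest entries of x is ≤ that of y.
        xs = np.sort(x)[::-1]
        ys = np.sort(y)[::-1]
        return np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12)

    def majorized(x, y):
        # x ≺ y: x ≺w y and the total sums agree.
        return weakly_majorized(x, y) and np.isclose(np.sum(x), np.sum(y))

    # Example: (1, 1, 1) ≺ (2, 1, 0), but (2, 1, 0) is not weakly majorized by (1, 1, 1).
    assert majorized([1, 1, 1], [2, 1, 0])
    assert not weakly_majorized([2, 1, 0], [1, 1, 1])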
It is well known (see, e.g., [14, p. 368] or [16, p. 375]) that, for A, B ∈ Mn, ‖A‖ ≤ ‖B‖ for all unitarily invariant norms ‖·‖ on Mn if and only if σ(A) ≺w σ(B). So, to some extent, the norm inequalities are essentially the same as the singular value majorization inequalities. The Fan-Hoffman inequalities (1) immediately yield ‖ℜA‖ ≤ ‖A‖ for any A ∈ Mn and any unitarily invariant norm ‖·‖ on Mn. The following is a reversal. Two useful facts are the singular value majorization of a product, σ(AB) ≺w σ(A) ◦ σ(B) (see, e.g., [16, p. 363]), and its companion norm inequality ‖AB‖² ≤ ‖AA*‖ ‖B*B‖ (see, e.g., [6, p. 212]).
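Both facts are easy to probe numerically on random matrices; a sketch (assuming numpy), using the trace norm for the second fact:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    sv = lambda M: np.linalg.svd(M, compute_uv=False)   # singular values, nonincreasing

    # σ(AB) ≺w σ(A) ∘ σ(B): compare partial sums of the decreasing rearrangements.
    assert np.all(np.cumsum(sv(A @ B)) <= np.cumsum(sv(A) * sv(B)) + 1e-9)

    # Companion inequality for the trace norm: ||AB||^2 ≤ ||AA*|| ||B*B||.
    tn = lambda M: sv(M).sum()
    assert tn(A @ B) ** 2 <= tn(A @ A.conj().T) * tn(B.conj().T @ B) + 1e-9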
The last “≤” is by Corollary 1 (iv). The norm inequality follows at once.
So (5) is true for A12 . The inequality for A21 is similarly proven.
If A is a positive definite matrix, then α = 0 and sec α = 1 in (5).
Proof. Set α = π/4 in the theorem. Then sec²α = 2.
Inequality (6) is stronger than (3). Moreover, the constant factor 2 is best possible over all accretive-dissipative matrices and unitarily invariant norms. Let

    B = [1 1−i; 0 1].

One may check that ℜB > 0 and ℜB ≥ ±ℑB, which yield x*(ℜB)x ≥ |x*(ℑB)x| for all x ∈ C². (Note that ℜB ≱ |ℑB|.) So W(B) ⊆ Sπ/4 and A = e^{iπ/4}B is accretive-dissipative. For the trace norm, apparently, ‖A12‖² = 2 = 2(1 · 1) = 2‖A11‖ ‖A22‖. This answers a question raised in [12, p. 442].
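A numerical verification of the claims about B and A (assuming numpy; re, im, and is_psd are ad hoc helpers):

    import numpy as np

    B = np.array([[1, 1 - 1j],
                  [0, 1]])
    re = lambda M: (M + M.conj().T) / 2
    im = lambda M: (M - M.conj().T) / 2j
    is_psd = lambda H: np.all(np.linalg.eigvalsh(H) >= -1e-12)

    # ℜB > 0 and ℜB ≥ ±ℑB, hence x*(ℜB)x ≥ |x*(ℑB)x| and W(B) ⊆ S_{π/4}.
    assert np.all(np.linalg.eigvalsh(re(B)) > 0)
    assert is_psd(re(B) - im(B)) and is_psd(re(B) + im(B))

    # For A = e^{iπ/4} B, both ℜA and ℑA are positive semidefinite.
    A = np.exp(1j * np.pi / 4) * B
    assert is_psd(re(A)) and is_psd(im(A))

    # Trace norm of the 1 x 1 blocks: ||A12||^2 = 2 = 2 ||A11|| ||A22||.
    assert np.isclose(abs(A[0, 1]) ** 2, 2) and np.isclose(abs(A[0, 0]) * abs(A[1, 1]), 1)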
To present the next theorem, we need a lemma that is interesting in its own right.
Lemma 2 Let H = [H11 ∗; ∗ H22] be an n × n positive semidefinite matrix, where H11 and H22 are square submatrices (possibly of different sizes). Then
We must also point out that (8) has appeared in [6, p. 217] and that a more general result is available in [8, Theorem 2.1]. We include our proof here as it is short and elementary, and, in the author's opinion, the most elegant one.
Proof. By Lemma 1 and noticing that ℜA = [ℜA11 ∗; ∗ ℜA22] > 0, we have
The desired inequality follows at once since ‖ℜX‖ ≤ ‖X‖ for any X.
If A is positive definite, then α = 0 and Theorem 4 reduces to (8). If A is
accretive-dissipative, then (4) is immediate by setting α = π/4 in (9).
Acknowledgement
The author is thankful to S.W. Drury and M. Lin for reference [5], which initiated this work; he is also indebted to M. Lin for his valuable input and discussions.
References
[1] R. Bhatia, Matrix Analysis, GTM 169, Springer-Verlag, New York, 1997.
[2] S.W. Drury, Fischer determinantal inequalities and Higham’s Conjecture,
Linear Algebra Appl. (2013), http://dx.doi.org/10.1016/j.laa.2013.08.031.
[3] S.W. Drury, A Fischer type determinantal inequality, Linear Multilinear
Algebra (2013), http://dx.doi.org/10.1080/03081087.2013.832244.
[4] S.W. Drury, Principal powers of matrices with positive definite real part,
preprint.
[5] S.W. Drury and M. Lin, Singular value inequalities for matrices with nu-
merical ranges in a sector, preprint.
[6] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge Uni-
versity Press, New York, 1991.
[7] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University
Press, New York, 2nd edition, 2013.
[8] E.-Y. Lee, Extension of Rotfel’d theorem, Linear Algebra Appl. 435 (2011)
735-741.
[9] C.-K. Li, L. Rodman, and I. Spitkovsky, On numerical ranges and roots, J.
Math. Anal. Appl. (2003) 329-340.
[10] C.-K. Li and N. Sze, Determinantal and eigenvalue inequalities for matrices
with numerical ranges in a sector, J. Math. Anal. Appl. 410 (2014) 487-491.
[11] M. Lin and H. Wolkowicz, An eigenvalue majorization inequality for posi-
tive semidefinite block matrices, Linear Multilinear Algebra 60 (2012) 1365-
1368.
[12] M. Lin and D. Zhou, Norm inequalities for accretive-dissipative operator
matrices, J. Math. Anal. Appl. 407 (2013) 436-442.
[13] D. London, A note on matrices with positive definite real part, Proc. Amer.
Math. Soc. 82 (1981) 322-324.
[14] A.W. Marshall, I. Olkin, and B. Arnold, Inequalities: Theory of Majoriza-
tion and Its Applications, Springer, New York, 2nd edition, 2011.
[15] R. Turkmen, V. Paksoy, and F. Zhang, Some inequalities of majorization
type, Linear Algebra Appl. 437 (2012) 1305-1316.
[16] F. Zhang, Matrix Theory: Basic Results and Techniques, Springer, New
York, 2nd edition, 2011.