Talk:Symmetric matrix

Wouldn't it be better to create a distinct entry for 'skew-symmetric matrix'?

Inverse Matrix

Does the inverse of a square symmetrical matrix have any special properties? Does being symmetrical provide any shortcut to finding an inverse? 58.107.136.85 (talk) 03:56, 11 April 2008 (UTC)Reply


If the inverse of a symmetrical matrix is also a symmetrical matrix it should be stated under properties. —Preceding unsigned comment added by 77.13.24.86 (talk) 18:25, 25 January 2011 (UTC)Reply

Yes! Of course the inverse of a symmetric matrix is symmetric; it's very easy to show, too.

Proof:

Suppose A = A^T and A is non-singular; then there exists A^{-1} such that A·A^{-1} = I. Applying the transpose to each side of the equation, we get:

(A·A^{-1})^T = I^T, so (A^{-1})^T·A^T = I. However, we have that A = A^T, so it follows that (A^{-1})^T·A = I. But the inverse is unique, therefore (A^{-1})^T = A^{-1}. This proves that the inverse is symmetric. QED — Preceding unsigned comment added by Brydustin (talkcontribs) 00:36, 1 January 2012 (UTC)Reply
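
A quick numerical illustration of the argument above (a sketch using NumPy; the matrix is just an arbitrary example):

    import numpy as np

    # an arbitrary real symmetric, non-singular matrix
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    A_inv = np.linalg.inv(A)

    # its inverse is again symmetric (up to round-off)
    print(np.allclose(A_inv, A_inv.T))  # True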

Basis, Eigenvectors

It's easy to identify a symmetric matrix when it's written in terms of an orthogonal basis, but what about when it's not? Is a real-valued matrix symmetric iff its eigenvectors are orthogonal? —Ben FrantzDale 00:31, 11 September 2006 (UTC)Reply

Reading more carefully answers my question: "Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix." So apparently the answer is yes. —Ben FrantzDale 15:27, 11 September 2006 (UTC)Reply

I believe you're confusing a couple of concepts here. A matrix is a rectangular array of numbers, and it's symmetric if it's, well, symmetric. Of course, a linear map can be represented as a matrix when a choice of basis has been fixed. On the other hand, the concept of symmetry for a linear operator is basis independent. Greg Woodhouse 01:34, 30 November 2006 (UTC)Reply

Being symmetric with real entries implies unitarily diagonalizable; the converse need not be true. Anti-symmetric matrices with real entries are normal and therefore unitarily diagonalizable, but the eigenvalues are no longer real, so one must speak of unitary matrices rather than orthogonal ones. Mct mht 04:07, 12 September 2006 (UTC)Reply

It's been a while since I followed up on this. I still feel like there is something missing in this article. For me back in 2006, I was confused about the importance of symmetry of a matrix because they are "just" rectangular arrays of numbers. As such, symmetry seems like a superficial property that can be undone by simple things like swapping rows. Furthermore, we could have a matrix that is symmetric but meaninglessly so. For example, a data matrix of participants with age and weight as columns. If Alice is 80 and weighs 90 pounds and bob is 90 and weighs 80 pounds, then you get a symmetric table, but that symmetry doesn't mean anything (for starters, the units don't match, but we could construct something for which they did). That left me wondering "when does symmetry mean something?" I now think I understand. Consider the moment matrix of a bunch of points in R3. That is a symmetric 3×3 matrix. As I've come to understand things, that matrix is contravariant (in the tensor sense) in its rows and columns.

I think matched variance of rows and columns is a necessary (but not sufficient) condition for a matrix to be symmetric in any meaningful sense. That implies that a meaningfully symmetric matrix is strictly-speaking the matrix representation of a tensor. Does that sound right? (I don't mean to say that [80 90; 90 80] isn't symmetric, I am just saying that for that symmetry to be anything other than coincidence, the matrix has to have matched variance in rows and columns.) —Ben FrantzDale (talk) 13:46, 14 December 2010 (UTC)Reply

"More precisely, a matrix is symmetric if and only if it has an orthonormal basis of eigenvectors" This statement is just wrong. See 'Normal Matrix'. Normal matrices need not be symmetric (in fact they can be anti-symmetric), but does have an orthonormal basis of eigenvectors. However, it IS true that if a matrix is symmetric, then it has an orthonormal basis (in fact this is trivially true, since all 'symmetric matrices' are 'normal matrices', and normal matrices have an orthonormal basis of eigenvectors) Please correct. —Preceding unsigned comment added by 128.122.20.210 (talk) 03:17, 30 December 2010 (UTC)Reply

It is correct if we assume that eigenvectors are real. Then A=O^TDO, and A^T=O^TD^TO=A. This is a bad username (talk) 22:16, 8 February 2016 (UTC)Reply
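
A small numerical illustration of the distinction discussed in this thread (a sketch; the matrices are just examples): a real skew-symmetric matrix is normal, so it has an orthonormal basis of eigenvectors, but the eigenvalues and eigenvectors are complex and the matrix is not symmetric; by contrast, a real orthogonal diagonalization A = O^T D O forces symmetry.

    import numpy as np

    # real skew-symmetric (hence normal, but not symmetric)
    S = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    w, V = np.linalg.eig(S)
    print(w)                                       # purely imaginary eigenvalues +1j, -1j
    print(np.allclose(V.conj().T @ V, np.eye(2)))  # True: eigenvectors are orthonormal

    # a real orthogonal diagonalization always yields a symmetric matrix
    rng = np.random.default_rng(0)
    O, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random real orthogonal matrix
    D = np.diag([1.0, 2.0, 3.0])
    A = O.T @ D @ O
    print(np.allclose(A, A.T))                     # True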

Symmetric matrices are usually considered to be real valued

I've made several changes to indicate that symmetric matrices are generally assumed to be real valued. With this, the real spectral theorem can be stated properly. VectorPosse 05:03, 12 September 2006 (UTC)Reply

Would it be better to have a little more detailed discussion of Hermitian? --TedPavlic 16:21, 19 February 2007 (UTC)Reply

It may be worthwhile to add a section on complex symmetric matrices, or matrices that are (complex) symmetric with respect to an orthonormal basis. They are not as useful as self-adjoint operators, but the category includes Toeplitz matrices, Hankel matrices and any normal matrix. 140.247.23.104 04:43, 12 January 2007 (UTC)Reply

I agree. We just need to make sure it's in a different section so that it doesn't get mixed up with the stuff about the spectral theorem. VectorPosse 19:28, 19 February 2007 (UTC)Reply

Products of Symmetric Matrices: Eigenspaces Closed Under Transformation

As the article states, products of symmetric matrices are symmetric if and only if the matrices commute. However, it also says, "Two real symmetric matrices commute if and only if they have the same eigenspaces." This makes no sense. Consider arbitrary matrix A and the identity matrix I. Certainly, AI=IA, so these matrices commute. However, in general A and I will not have the same eigenspaces! I think this statement was supposed to be, "Two real symmetric matrices commute if and only if they are simultaneously diagonalizable," or, "Two real symmetric matrices commute if and only if the eigenspace for one matrix is closed under the other matrix." Both of these statements sound complicated compared to the original statement. I'm not sure if it's worthwhile to even mention it. However, I'm going to make a change. I'm okay with someone removing the statement entirely. --TedPavlic 17:34, 19 February 2007 (UTC)Reply

the previous version was correct. two real symmetric matrices commute iff they can be simultaneously diagonalized iff they have the same eigenspaces. please undo your change. Mct mht 10:24, 21 February 2007 (UTC)Reply
As far as I can see, Ted's counterexample (identity matrix and arbitrary symmetric matrix) shows that two symmetric matrices can commute without having the same eigenspaces. Please tell me where we go wrong. -- Jitse Niesen (talk) 11:25, 21 February 2007 (UTC)Reply
hm, that depends on what's meant by "having the same eigenspaces", no? if that means "the collection of eigenspaces coincide", then you would be right. (however, seems to me the wording of the comment, which i removed, about the "closure" of eigenspaces can be improved.) perhaps it's more precise to say two real symmetric matrices commute iff there exists a basis consisting of common eigenvectors. Mct mht 12:17, 21 February 2007 (UTC)Reply
also, the identity matrix is really a degenerate case, since it and its multiples are the only matrices that are diagonal irrespective of the basis chosen. excluding such cases (if A restricted to a subspace V is a · I, remove V), seems to me that the general claim is true: real symmetric matrices {A_i} commute pairwise iff the family of eigenspaces of A_i and the family of eigenspaces of A_j are the same for all i and j. Mct mht 15:42, 21 February 2007 (UTC)Reply
I agree with "two real symmetric matrices commute iff there exists a basis consisting of common eigenvectors". I think the more common formulation is "two real symmetric matrices commute iff they are simultaneously diagonalizable", so I'd prefer that. I agree that the formulation "the eigenspace for one matrix is closed under the other matrix" is rather unfortunate as I had to read that sentence a couple of times before I understood what is meant.
I don't understand what you mean with "if A restricted to a subspace V is a · I, remove V". Every matrix is a multiple of the identity when restricted to an eigenspace, and after removing the eigenspaces of a symmetric matrix there's nothing left. -- Jitse Niesen (talk) 04:04, 22 February 2007 (UTC)Reply
shoot, you're right. well, remove V if dimension V is > 1. that better? Mct mht 04:10, 22 February 2007 (UTC)Reply
hm, forget it Jitse, that did not make it better. you're right there. Mct mht 12:20, 22 February 2007 (UTC)Reply
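
A small numerical illustration of the point settled above (a sketch; the matrices are arbitrary examples, and the helper is_diagonal is just for this check): the identity commutes with every symmetric matrix without sharing its eigenspace decomposition, while two commuting symmetric matrices can be diagonalized by one common orthogonal eigenbasis.

    import numpy as np

    def is_diagonal(M, tol=1e-10):
        return np.allclose(M, np.diag(np.diag(M)), atol=tol)

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    I = np.eye(2)
    print(np.allclose(A @ I, I @ A))     # True: they commute, although their eigenspaces differ

    # a second symmetric matrix that commutes with A (here a polynomial in A)
    B = A @ A + 3.0 * A
    print(np.allclose(A @ B, B @ A))     # True

    # one orthogonal eigenbasis of A diagonalizes both: simultaneous diagonalization
    w, O = np.linalg.eigh(A)
    print(is_diagonal(O.T @ A @ O), is_diagonal(O.T @ B @ O))   # True True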

Hey, the definition of symmetrizable matrices is not complete. A symmetrizable matrix is a product of a symmetric matrix and a positive definite matrix. The positive definite matrix need not be an invertible diagonal matrix as in the section. Please check... Naik.a.s —Preceding unsigned comment added by Naik.a.s (talkcontribs) 10:08, 27 July 2009 (UTC)Reply

eigenvalues

Are the eigenvalues of an n×n matrix A with A = A^T always {0, …, 0, tr(A)}?
This applies for the matrix B^T B with B = [1, 2, 3, 4].
--Saippuakauppias 10:48, 31 December 2007 (UTC)Reply

No. For instance, the identity matrix is symmetric, but has eigenvalues {1, 1, …, 1}. However, every matrix of the form A = B^T B does have {0, …, 0, tr(A)} as its eigenvalues. Such matrices are called rank-one matrices, because their rank is one. -- Jitse Niesen (talk) 15:24, 31 December 2007 (UTC)Reply


In the article the statement "Two real symmetric matrices commute if and only if they have the same eigenspaces." is wrong. For a counterexample consider the identity matrix and any diagonal matrix with more than one eigenvalue. The statement should read: "If two real symmetric matrices of dimension n commute then a basis for R^n can be chosen so that every element of the basis is an eigenvector for both matrices."

Incidentally the answer above is assuming that B is itself a rank one matrix (as in the example given with B=[1,2,3,4]). It's not true for B an arbitrary matrix.

137.222.137.107 (talk) 15:16, 15 June 2012 (UTC)Nick GillReply
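
A quick numerical check of the two cases discussed above (a sketch; the matrices are just examples):

    import numpy as np

    # B a single row: B^T B has rank one, so its eigenvalues are {0, ..., 0, tr(A)}
    B = np.array([[1.0, 2.0, 3.0, 4.0]])
    A = B.T @ B
    print(np.round(np.linalg.eigvalsh(A), 10))   # [ 0.  0.  0. 30.]
    print(np.trace(A))                           # 30.0

    # B an arbitrary 2x4 matrix: B^T B is still symmetric, but no longer rank one
    B2 = np.array([[1.0, 2.0, 3.0, 4.0],
                   [0.0, 1.0, 0.0, 2.0]])
    print(np.round(np.linalg.eigvalsh(B2.T @ B2), 10))   # two nonzero eigenvalues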

The spectral theorem...

...is conspicuous by the absence of any mention of it in this article!

Maybe I'll be back. Michael Hardy (talk) 02:18, 10 August 2008 (UTC)Reply

It's at the start of the "Properties" section. -- Jitse Niesen (talk) 10:57, 10 August 2008 (UTC)Reply

trace of the product of three matrices

Hi,

there's a mistake in the article. It's claimed that the trace of the product of three symmetric (or Hermitian) matrices is invariant under arbitrary permutations. To prove this, it's used that (CBA)^T = CBA, which is simply not true, because the product of symmetric (Hermitian) matrices is symmetric (Hermitian) if and only if they commute. —Preceding unsigned comment added by 192.33.103.47 (talk) 09:27, 28 June 2010 (UTC)Reply
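
A small numerical illustration of the fact invoked above (a sketch; the matrices are arbitrary examples): the product of symmetric matrices is symmetric if and only if the factors commute.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 0.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 3.0]])
    # A and B are symmetric but do not commute, so their product is not symmetric
    P = A @ B
    print(np.allclose(A @ B, B @ A))         # False
    print(np.allclose(P, P.T))               # False

    # with a commuting partner (a polynomial in A) the product is symmetric again
    C = A @ A + A
    print(np.allclose(A @ C, (A @ C).T))     # True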

Complex symmetric matrix eigenvalues

Hello, the page currently states that for each complex symmetric matrix, there exists a unitary transformation such that the resulting diagonal matrix has real entries. The eigenvalues of complex symmetric matrices are generally themselves complex, and not all real. Somebody tell me if I'm reading this wrong, otherwise I'm going to change the wording "is a real diagonal matrix" to "is a complex diagonal matrix". — Preceding unsigned comment added by Jeparie (talkcontribs) 18:36, 25 September 2015 (UTC)Reply

The entries of the diagonal matrix are the singular values not the eigenvalues of the original matrix. Best wishes, --Quartl (talk) 19:47, 25 September 2015 (UTC)Reply

The article is still (or again) wrong. A complex symmetric matrix doesn't necessarily have real eigenvalues, as the article currently states in the Decomposition section. Either we need to change complex symmetric matrix to complex Hermitian matrix, or elaborate that the diagonal matrix doesn't contain eigenvalues. — Preceding unsigned comment added by 213.52.196.70 (talk) 18:07, 6 November 2017 (UTC)Reply
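
A small numerical illustration of the point above (a sketch; the matrix is just an example): a complex symmetric matrix can have non-real eigenvalues, while the diagonal factor in its Takagi/Autonne decomposition contains the (real, non-negative) singular values.

    import numpy as np

    M = np.array([[1.0, 1.0j],
                  [1.0j, 1.0]])                  # complex symmetric (M = M^T), not Hermitian
    print(np.linalg.eigvals(M))                  # 1+1j and 1-1j: the eigenvalues are not real
    print(np.linalg.svd(M, compute_uv=False))    # sqrt(2), sqrt(2): these appear on the diagonal instead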

Math notation

I changed all the math from inline math to tex. This was reverted by 87.254.93.231 several times. The article as it is now contains an ugly mixture of tex and inline math. I propose changing all again to tex. Reasons for Tex:

  • You can display almost everything with Tex
  • Tex is easy to distinguish from surrounding non-math text, e.g. a Tex-rendered "for a matrix A" vs. the same phrase written as inline text.

Reasons for inline math

  • Less work to write article

What do you think? 11:36, 15 January 2019 (UTC)

Proposing adding new proof of Takagi

Note: This proof yields a short algorithm for computing the Takagi decomposition using software packages like SciPy and MATLAB. The Schur decomposition feature should be used for this.

Let uncomplexify : M_n(ℂ) → M_{2n}(ℝ) denote the injective ring homomorphism A + Bi ↦ \begin{pmatrix} A & -B \\ B & A \end{pmatrix}, where we use block matrix notation. Let M = A + Bi be an arbitrary non-singular complex-symmetric matrix. (The non-singularity restriction will later be lifted.) Observe that while the matrix uncomplexify(iM) = \begin{pmatrix} -B & -A \\ A & -B \end{pmatrix} is not ℝ-symmetric, we may define J = \begin{pmatrix} 0 & -I \\ -I & 0 \end{pmatrix} and get that uncomplexify(iM)J = \begin{pmatrix} A & B \\ B & -A \end{pmatrix} is ℝ-symmetric. By the spectral theorem it follows that uncomplexify(iM)J has an orthonormal eigenbasis v_1, v_2, …, v_{2n} with real eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_{2n}. Let K = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix} and observe that for every eigenvector v_i in our eigenbasis, the vector Kv_i is an eigenvector of eigenvalue −λ_i. We therefore improve our orthonormal eigenbasis of uncomplexify(iM)J to the new orthonormal eigenbasis (v_1, v_2, …, v_n, Kv_1, Kv_2, …, Kv_n), which we interpret as a block matrix P; here v_1, …, v_n are the eigenvectors belonging to the n positive eigenvalues. Verify that each v_i is indeed orthogonal to each Kv_j because they have distinct eigenvalues, namely λ_i > 0 > −λ_j (which might fail if M were singular). Given our definition of P, we have that uncomplexify(iM)J = P \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} P^T (where Λ = diag(λ_1, λ_2, …, λ_n)). Observe that P is equal to uncomplexify(U) for some matrix U; this follows because if we write each v_i as the block matrix \begin{pmatrix} w_i \\ z_i \end{pmatrix}, we have that P = \begin{pmatrix} W & -Z \\ Z & W \end{pmatrix} = uncomplexify(W + iZ). Observe also that U is unitary; this follows because uncomplexify(I) = I = P^T P = uncomplexify(U)^T uncomplexify(U) = uncomplexify(U^* U), and by cancelling uncomplexify due to injectivity, we get U^* U = I. Finally, note the identities uncomplexify(X)^T = uncomplexify(X^*) and J uncomplexify(X^*) = uncomplexify(X^T) J, both checked by direct computation. We therefore have that

uncomplexify(iM)J = P \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} P^T
= uncomplexify(U) \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} uncomplexify(U)^T
= uncomplexify(U) uncomplexify(iΛ) J uncomplexify(U^*)
= uncomplexify(U) uncomplexify(iΛ) uncomplexify(U^T) J
= uncomplexify(U iΛ U^T) J
= uncomplexify(iUΛU^T) J

Cancelling J (as it's invertible) and then cancelling uncomplexify (because it's injective) yields iM = iUΛU^T, i.e. M = UΛU^T.

The result can be extended to any singular matrix M by approximating M by a sequence of invertible complex-symmetric matrices M_n, and then forming the pair of sequences (U_n, Λ_n) such that M_n = U_n Λ_n U_n^T. Since the entries of U_n and Λ_n (for each n) are bounded, we may appeal to the Bolzano–Weierstrass theorem to get a pair of subsequences (U_{n_k}, Λ_{n_k}) that both converge. We therefore get M = lim_k M_{n_k} = (lim_k U_{n_k}) (lim_k Λ_{n_k}) (lim_k U_{n_k})^T, and the limits are again unitary and non-negative diagonal respectively. We are done. --Svennik (talk) 14:54, 19 May 2022 (UTC)Reply
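
For what it's worth, a minimal numerical sketch of the algorithm this proof suggests (assumptions: NumPy only, a non-singular input M so that the positive and negative eigenvalues separate cleanly, and numpy.linalg.eigh in place of the Schur decomposition mentioned in the note above, which for a real symmetric matrix amounts to the same factorization; the function name is just for illustration):

    import numpy as np

    def takagi_real_embedding(M):
        # M: complex symmetric (M == M.T), assumed non-singular.
        n = M.shape[0]
        A, B = M.real, M.imag
        # uncomplexify(iM) J is the real symmetric matrix [[A, B], [B, -A]]
        S = np.block([[A, B], [B, -A]])
        w, V = np.linalg.eigh(S)               # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:n]          # indices of the n positive eigenvalues
        lam = w[idx]
        W, Z = V[:n, idx], V[n:, idx]          # top and bottom blocks of the chosen eigenvectors
        U = W + 1j * Z                         # P = uncomplexify(U)
        return U, lam                          # M = U @ np.diag(lam) @ U.T

    # self-check on a random complex symmetric matrix
    rng = np.random.default_rng(0)
    C = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    M = C + C.T
    U, lam = takagi_real_embedding(M)
    print(np.allclose(U @ np.diag(lam) @ U.T, M))    # True
    print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: U is unitary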

Proof 3 of Takagi

Note: This proof does not treat singular matrices as a special case.

Let uncomplexify : M_n(ℂ) → M_{2n}(ℝ) denote the injective ring homomorphism A + Bi ↦ \begin{pmatrix} A & -B \\ B & A \end{pmatrix}, where we use block matrix notation. Let M = A + Bi be an arbitrary complex-symmetric matrix. Observe that while the matrix uncomplexify(iM) = \begin{pmatrix} -B & -A \\ A & -B \end{pmatrix} is not ℝ-symmetric, we may define J = \begin{pmatrix} 0 & -I \\ -I & 0 \end{pmatrix} and get that uncomplexify(iM)J = \begin{pmatrix} A & B \\ B & -A \end{pmatrix} is indeed ℝ-symmetric. Let K = uncomplexify(iI) = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix} and observe that for every eigenvector v of eigenvalue λ of uncomplexify(iM)J, the vector Kv is an eigenvector of eigenvalue −λ. We therefore build an orthonormal eigenbasis of uncomplexify(iM)J in the following way:

We start with the empty basis B and the linear map L : span(B)^⊥ → span(B)^⊥, v ↦ uncomplexify(iM)Jv.
While span(B)^⊥ is not 0-dimensional, we let v be a unit eigenvector of L (replacing v by Kv if necessary, we may assume its eigenvalue is non-negative). Observe that as well as v being orthogonal to span(B), so is Kv, and that Kv is orthogonal to v itself because K is antisymmetric. We replace B with B ∪ {v, Kv}, and replace L with its restriction to span(B)^⊥.

We arrange our resulting eigenbasis into the block matrix P = (v_1, v_2, …, v_n, Kv_1, Kv_2, …, Kv_n). Given our definition of P, we have that uncomplexify(iM)J = P \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} P^T (where Λ consists of the eigenvalues corresponding to the eigenvectors v_1 to v_n). Observe that P is equal to uncomplexify(U) for some matrix U; this follows because if we write each v_i as the block matrix \begin{pmatrix} w_i \\ z_i \end{pmatrix}, we have that P = \begin{pmatrix} W & -Z \\ Z & W \end{pmatrix} = uncomplexify(W + iZ), so that U = W + iZ. Observe also that U is unitary; this follows because uncomplexify(I) = I = P^T P = uncomplexify(U)^T uncomplexify(U) = uncomplexify(U^* U), and by cancelling uncomplexify (due to injectivity) we get I = U^* U. As before, note the identities uncomplexify(X)^T = uncomplexify(X^*) and J uncomplexify(X^*) = uncomplexify(X^T) J. We therefore have that

uncomplexify(iM)J = P \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} P^T
= uncomplexify(U) \begin{pmatrix} Λ & 0 \\ 0 & -Λ \end{pmatrix} uncomplexify(U)^T
= uncomplexify(U) uncomplexify(iΛ) J uncomplexify(U^*)
= uncomplexify(U) uncomplexify(iΛ) uncomplexify(U^T) J
= uncomplexify(U iΛ U^T) J
= uncomplexify(iUΛU^T) J

Cancelling J (as it's invertible) and then cancelling uncomplexify (because it's injective) yields iM = iUΛU^T, i.e. M = UΛU^T. --Svennik (talk) 19:46, 19 May 2022 (UTC)Reply

Symmetric Hessian

In the section on the Hessian it might be worth mentioning that there is a quite crucial assumption made there, namely twice differentiability. The Hessian of a function can be well defined even if the function is not twice differentiable (see for example https://calculus.subwiki.org/wiki/Failure_of_Clairaut's_theorem_where_both_mixed_partials_are_defined_but_not_equal#Example), and then the partial derivatives do not commute and the Hessian is not symmetric. It is stated there that the result applies to twice differentiable functions, but it might be worth explicitly writing that it does not apply to all functions whose Hessian is well defined. Tom-Lukas Lübbeke (talk) 14:52, 6 July 2024 (UTC)Reply
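
A quick numerical illustration of the standard counterexample behind that link (a sketch; the function f(x, y) = xy(x² − y²)/(x² + y²), with f(0, 0) = 0, has both mixed partials defined at the origin, but they equal −1 and +1):

    def f(x, y):
        if x == 0.0 and y == 0.0:
            return 0.0
        return x * y * (x**2 - y**2) / (x**2 + y**2)

    eps, delta = 1e-8, 1e-4          # inner and outer finite-difference steps

    def fx(y):                        # approximates df/dx at (0, y)
        return (f(eps, y) - f(-eps, y)) / (2 * eps)

    def fy(x):                        # approximates df/dy at (x, 0)
        return (f(x, eps) - f(x, -eps)) / (2 * eps)

    fxy = (fx(delta) - fx(-delta)) / (2 * delta)   # d/dy of df/dx at the origin, about -1
    fyx = (fy(delta) - fy(-delta)) / (2 * delta)   # d/dx of df/dy at the origin, about +1
    print(round(fxy, 6), round(fyx, 6))            # -1.0 1.0: the mixed partials disagree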

Proof 4 of Takagi

A further, simpler proof of the Takagi decomposition is as follows (cf. https://cdnsciencepub.com/doi/full/10.1139/cjp-2024-0070).

Start with the SVD of the input symmetric matrix A = UΛV^†. This is not the sought-after decomposition. However, consider the unitary matrix W = U√(U^†V^*), where V^* denotes the entrywise complex conjugate of V (here it is understood that we are taking a matrix square root; since unitary matrices are normal, this can easily be obtained from a Schur decomposition). Then it holds that A = WΛW^T. The proof of this statement is given in Sec. 4 of the paper cited above. This is also the implementation given in thewalrus, and it is heavily tested there for edge cases. Nicolas.quesada (talk) 14:22, 18 January 2025 (UTC)Reply
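
A minimal numerical sketch of this recipe (an illustration of the formula as written above, not the reference implementation in thewalrus; assumes NumPy and SciPy, that V^* is the entrywise conjugate of V, and that the eigenvalues of the unitary U^†V^* stay away from the branch cut of the square root; the function name is just for illustration):

    import numpy as np
    from scipy.linalg import schur

    def takagi_svd(A):
        # A: complex symmetric (A == A.T). Returns (W, s) with A = W @ np.diag(s) @ W.T.
        u, s, vh = np.linalg.svd(A)
        D = u.conj().T @ vh.T        # the unitary U^dagger V^*, which satisfies D diag(s) = diag(s) D^T
        T, Z = schur(D, output='complex')
        sqrtD = Z @ np.diag(np.sqrt(np.diag(T))) @ Z.conj().T   # principal square root via Schur
        return u @ sqrtD, s

    # self-check on a random complex symmetric matrix
    rng = np.random.default_rng(1)
    C = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
    A = C + C.T
    W, s = takagi_svd(A)
    print(np.allclose(W @ np.diag(s) @ W.T, A))      # True
    print(np.allclose(W.conj().T @ W, np.eye(5)))    # True: W is unitary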