Eigenvalue Of A Matrix

I was wondering if there is a Python package, numpy or otherwise, that has a function that computes the first eigenvalue and eigenvector of a small matrix, say 2x2. I could use the linalg package in numpy as follows.
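(The code block that originally followed is missing here; presumably it was a call to numpy.linalg.eig along these lines, with the matrix A below standing in as a placeholder.)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # placeholder 2x2 matrix

vals, vecs = np.linalg.eig(A)       # computes *all* eigenvalues and eigenvectors
i = np.argmax(vals.real)            # pick out the largest eigenvalue
first_val = vals[i]
first_vec = vecs[:, i]              # eigenvectors are the columns of vecs
```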
But this takes a really long time. I suspect that it's because numpy computes eigenvectors through some sort of iterative process. So I was wondering if there is a much faster algorithm that returns only the first (largest) eigenvalue and eigenvector, since that is all I need.
For 2x2 matrices I could of course write a function myself that computes the eigenvalue and eigenvector analytically, but then there are problems with floating-point computations: for example, when I divide a very big number by a very small number, I get infinity or NaN. Does anyone know anything about this? Please help! Thank you in advance!
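For the 2x2 case specifically, the division problem can be sidestepped by never dividing by a small pivot at all: take the larger root of the characteristic quadratic, then read an eigenvector off the rows of (A - lam*I) and keep whichever candidate has the larger norm. The sketch below (the helper name eig2x2_largest is made up for illustration, and real eigenvalues are assumed) is one way to do that:

```python
import math

def eig2x2_largest(a, b, c, d):
    """Largest real eigenvalue and a unit eigenvector of [[a, b], [c, d]].
    Avoids dividing by tiny pivots: instead of solving for one component by
    division, it picks the larger of two candidate null-space vectors of
    (A - lam*I). Assumes the eigenvalues are real (disc >= 0)."""
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc < 0.0:
        raise ValueError("complex eigenvalues; handle that case separately")
    lam = 0.5 * (tr + math.sqrt(disc))           # the (algebraically) larger eigenvalue
    # Two candidate eigenvectors, read off the rows of (A - lam*I):
    #   row 1: (a - lam) x + b y = 0  ->  (b, lam - a)
    #   row 2: c x + (d - lam) y = 0  ->  (lam - d, c)
    cand1 = (b, lam - a)
    cand2 = (lam - d, c)
    x, y = max(cand1, cand2, key=lambda v: math.hypot(*v))
    norm = math.hypot(x, y)
    if norm == 0.0:                               # A is lam*I: every vector is an eigenvector
        return lam, (1.0, 0.0)
    return lam, (x / norm, y / norm)
```

Either candidate is a valid eigenvector whenever it is nonzero, so taking the larger one avoids the catastrophic cancellation that a naive "divide one row entry by the other" approach runs into.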
One other thought: for 2x2 matrices, I don't think eigs(A,B,1) would help anyway. The effort involved in computing the first eigenpair leaves the matrix transformed to the point where the second emerges directly. There is only a benefit for 3x3 and larger.
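For those larger matrices, the classic "only the dominant eigenpair" method is power iteration. A minimal sketch (the helper name dominant_eigpair is made up here; it assumes the largest-magnitude eigenvalue is real, simple, and well separated from the rest, otherwise convergence can be slow or fail):

```python
import numpy as np

def dominant_eigpair(A, tol=1e-12, max_iter=10_000):
    """Power iteration: eigenvalue of largest magnitude and a unit eigenvector.
    Assumes A @ v0 != 0 for the start vector and a well-separated dominant eigenvalue."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)        # arbitrary nonzero start vector
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        lam_new = v @ w                # Rayleigh-quotient estimate of the eigenvalue
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = dominant_eigpair(A)
```

Each step costs one matrix-vector product, which is where the savings over a full eigendecomposition come from once the matrix is reasonably large.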
I was using the LinearAlgebra package, and computing the leading eigenvalue as maximum(real(eigvals(matrix))). When profiling the code, I noticed that this computation takes a noticeable amount of time, especially for larger matrices, so I am trying to speed it up.
I cannot use eigvals! because I need the matrix later and do not want to overwrite it. Furthermore, I only need one eigenvalue. I found that Arpack.jl can compute the leading eigenvalue directly. It is way faster, but it fails to diagonalize some matrices, with the error
About the use of sigma in Arpack.jl, @rveltz, as far as I understand I need an initial guess of the eigenvalue beforehand, which I might not have. Also, does this mean that the XYAUPD error is the expected behaviour of the package?
What's the smallest absolute value possible of a non-zero eigenvalue of an $n$ by $n$ square matrix whose entries are either $0$ or $1$ (all operations are over $\mathbb{R}$)? I would be interested in estimates or bounds as I imagine an exact answer is tricky.
Then (much as in this answer) the inverse matrix $M_n^{-1}$ is anti-triangular with constant antidiagonals; thus it is determined by its bottom row, and this bottom row is $1, -1, 1, -2, 3, -4, 6, -9, 13, -19, 28, -41, \ldots$, with alternating signs and absolute values satisfying the recurrence $t_m = t_{m-1} + t_{m-3}$. Thus $t_m$ grows like a multiple of $C^m$ where $C = 1.46557\ldots$ is the real root of $C^3 = C^2 + 1$, and the main diagonal of $M_n^{-1}$ has constant sign. Here is $M_{13}^{-1}$:
(Numerical computation suggests that in fact there's only one really small eigenvalue, which is thus $O(C^{-n})$; for example, $M_{13}$ has an eigenvalue $0.008902\ldots$, and the next-smallest eigenvalues are about $-.78$ and $.82$.)
There is a pretty crude lower bound, namely $1/n^{n-1}$. This is obtained by observing that the product of the nonzero eigenvalues is (up to sign) one of the elementary symmetric functions of the eigenvalues, hence an integer, so here it must have absolute value at least one. The largest possible absolute eigenvalue of a size $n$ $0$-$1$ matrix is $n$, so we have $s\cdot n^{n-1} \geq 1$, where $s$ is the smallest absolute value of a nonzero eigenvalue.
This is really crude, since if one of the eigenvalues is $n$, then the matrix is rank one, so it won't yield anything interesting, and along these lines, I suspect that if the largest eigenvalue is close to the maximum ($n$), then the other eigenvalues will be much much smaller (possibly less than one in absolute value), so the product argument will not give anything close ...
Since for small values of $n$, there are really not that many $0$-$1$ matrices (and many, e.g., determinant zero, can probably be discarded anyway), it is possible to calculate the minimal absolute eigenvalue. A table of these would be helpful. One I can do by hand; if $n=2$, then $s = 1/\gamma$ (reciprocal of the golden ratio).
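A brute-force sketch of such a table computation (the helper name smallest_nonzero_abs_eigenvalue is made up; the tolerance that decides what counts as a "zero" eigenvalue is itself a judgment call, and the enumeration is only feasible for very small $n$):

```python
import itertools
import numpy as np

def smallest_nonzero_abs_eigenvalue(n, tol=1e-9):
    """Enumerate all 2**(n*n) 0-1 matrices (only sensible for n <= 3 or so) and
    return the smallest absolute value of a nonzero eigenvalue, plus a matrix attaining it."""
    best, best_M = None, None
    for bits in itertools.product((0, 1), repeat=n * n):
        M = np.array(bits, dtype=float).reshape(n, n)
        nonzero = [abs(e) for e in np.linalg.eigvals(M) if abs(e) > tol]
        if not nonzero:
            continue
        s = min(nonzero)
        if best is None or s < best:
            best, best_M = s, M
    return best, best_M

# n = 2 should recover 1/golden-ratio = 0.618..., attained e.g. by [[1, 1], [1, 0]]
print(smallest_nonzero_abs_eigenvalue(2))
```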
The circulant matrix with first row $c_0,c_{n-1},\dots,c_2,c_1$ has eigenvalues $$\lambda_j=c_0+c_{n-1}\omega_j+\cdots+c_2\omega_j^{n-2}+c_1\omega_j^{n-1}$$ where $\omega_j=e^{2\pi ij/n}$, so if you can find a small sum of $n$th roots of unity, you can find a 0-1 matrix with a small eigenvalue. There is some discussion of the question of small sums of roots of unity here.
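A quick numerical check of this, assuming scipy.linalg.circulant (whose first-row convention matches the one above) and an arbitrary 0-1 first column c chosen only for illustration; the spectrum is just the DFT of c, so searching for small sums of roots of unity amounts to searching over subsets of DFT exponents:

```python
import numpy as np
from scipy.linalg import circulant

# Hypothetical 0-1 choice of c (exponents {0, 1, 3} of the 7th roots of unity).
c = np.array([1, 1, 0, 1, 0, 0, 0], dtype=float)
C = circulant(c)            # first row is c0, c6, c5, ..., c1, as in the formula above

w = np.fft.fft(c)           # the DFT values are exactly the eigenvalues in the formula
ev = np.linalg.eigvals(C)
print(all(np.isclose(ev, val).any() for val in w))   # each DFT value matches an eigenvalue
print(min(abs(w)))          # the smallest |eigenvalue| this particular c achieves
```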
This is not an answer, but some remarks that the OP might find interesting. For symmetric matrices, a related question has been studied previously, namely that of bounding the largest and smallest eigenvalues, for more general matrices.
If you are getting zero pivots, then you know that the matrix is singular, and therefore the eigenvalue of smallest magnitude is 0. Perhaps you could work around this with a try/catch block? (I am unfamiliar with Arpack.)
I would say use a shift to calculate the nonzero eigenvalues. Then, when you undo the shift, you can decide whether or not you are interested in the eigenvalues that would be zero or close to zero. It is possible that your matrix has one or more zero eigenvalues, depending on the application.
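In Python's ARPACK wrapper (scipy, not Arpack.jl, but the sigma mechanism is the same shift-invert idea), the shift and the un-shifting are handled internally: the solver factorizes (A - sigma*I) rather than A, so a singular matrix no longer produces zero pivots as long as sigma is not itself an eigenvalue. A sketch with a hypothetical singular test matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical singular test matrix: one eigenvalue is exactly 0, the kind of matrix
# where a "smallest magnitude" ARPACK run can hit zero pivots.
d = np.array([0.0, 0.3, 1.0, 2.0, 5.0, 7.0, 9.0, 11.0, 13.0, 17.0])
A = sp.diags(d).tocsc()

# Shift-invert: ARPACK factorizes (A - sigma*I) instead of A.
sigma = 0.1
vals, vecs = spla.eigs(A, k=1, sigma=sigma, which="LM")
print(vals.real)   # the eigenvalue of A closest to sigma (here 0.0); no manual un-shifting needed
```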
Instead of computing them from scratch, I wonder if there exists an analytical way to find the eigenvectors and eigenvalues iteratively using $E$ and $S$. In other words, is there a link between $E$, $S$ and $\tilde{E}$, $\tilde{S}$?
P.S.(1) I've read the answers to the question "How does eigenvalues and eigenvectors change if the original matrix changes slightly", but I couldn't find a connection with this question. Sorry if I missed the point and created a duplicate question.
P.S.(2) If you wonder why I'm asking this question, here it is: I'm computing the eigenvalues and eigenvectors of the covariance matrix of some samples. Let each sample have 3 dimensions and let $D \in \mathbb{R}^{k\times n}$ be the data matrix where each row is a sample and we have $k$ samples. Then the covariance matrix is $C = D^T D$. If we have 3 dimensions and the columns of $D$ are $d_1$, $d_2$ and $d_3$, then we have
I wondered whether there's a way to compute the eigenvectors and eigenvalues of the missing data's covariance matrix using the ones we computed from the full data, in an iterative manner, instead of computing them from scratch.
Let $A$ be a symmetric matrix, with eigenvalues $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$. Let $B$ be the matrix obtained by deleting the $k$-th row and column from $A$, with eigenvalues $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_{n-1}$. Then $$\lambda_1 \leq \mu_1 \leq \lambda_2 \leq \mu_2 \leq \lambda_3 \leq \cdots \leq \lambda_{n-1} \leq \mu_{n-1} \leq \lambda_n.$$ This is a special case of Cauchy's interlacing theorem. The operator $P$ in the Wikipedia article should be taken to be the projection on the coordinate vectors other than the $k$-th one.
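A quick numerical spot-check of the interlacing inequalities on a random symmetric matrix (the size and deleted index below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                              # hypothetical size and deleted row/column index
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # random symmetric matrix
B = np.delete(np.delete(A, k, axis=0), k, axis=1)

lam = np.sort(np.linalg.eigvalsh(A))     # lambda_1 <= ... <= lambda_n
mu = np.sort(np.linalg.eigvalsh(B))      # mu_1 <= ... <= mu_{n-1}
print(all(lam[i] <= mu[i] <= lam[i + 1] for i in range(n - 1)))  # interlacing holds
```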
The word you're looking for is downdating, and I cannot do better than to point out these two survey papers, and this article. I should also remind you that it makes better numerical sense to compute the singular values of $\mathbf{D}$ rather than the eigenvalues of $\mathbf{D}^T \mathbf{D}$.
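The relationship being relied on here is that the eigenvalues of $\mathbf{D}^T \mathbf{D}$ are the squared singular values of $\mathbf{D}$; working with the singular values avoids squaring the condition number. A small check, with a hypothetical random data matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((100, 3))        # hypothetical data matrix: 100 samples, 3 dimensions

sing = np.linalg.svd(D, compute_uv=False)        # singular values of D
eigvals = np.linalg.eigvalsh(D.T @ D)            # eigenvalues of D^T D
print(np.allclose(np.sort(sing**2), np.sort(eigvals)))   # squared singular values match
```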
Removing a paired row/column from a symmetric matrix is kind of like setting the corresponding entries to zero. This reformulation would allow you to consider a perturbed matrix of the same dimensions.
[___] = eig(A,B,algorithm), where algorithm is "chol", uses the Cholesky factorization of B to compute the generalized eigenvalues. The default for algorithm depends on the properties of A and B, but is "qz", which uses the QZ algorithm, when A or B is not symmetric.
[___] = eig(___,outputForm) returns the eigenvalues in the form specified by outputForm using any of the input or output arguments in previous syntaxes. Specify outputForm as "vector" to return the eigenvalues in a column vector or as "matrix" to return the eigenvalues in a diagonal matrix.
Ideally, the eigenvalue decomposition satisfies the relationship A*V = V*D. Since eig performs the decomposition using floating-point computations, A*V can, at best, approach V*D. In other words, A*V - V*D is close to, but not exactly, 0.
Ideally, the eigenvalue decomposition satisfies the relationship W'*A = D*W'. Since eig performs the decomposition using floating-point computations, W'*A can, at best, approach D*W'. In other words, W'*A - D*W' is close to, but not exactly, 0.
It is better to pass both matrices separately, and let eig choose the best algorithm to solve the problem. In this case, eig(A,B) returns a set of eigenvectors and at least one real eigenvalue, even though B is not invertible.
Ideally, the eigenvalue decomposition satisfies the relationship A*eigvec = eigval*B*eigvec. Since the decomposition is performed using floating-point computations, A*eigvec can, at best, approach eigval*B*eigvec, as it does in this case.
Output format of eigenvalues, specified as "vector" or "matrix". This option allows you to specify whether the eigenvalues are returned in a column vector or a diagonal matrix. The default behavior varies according to the number of outputs specified:
Eigenvalues, returned as a column vector containing the eigenvalues (or generalized eigenvalues of a pair) with multiplicity. Each eigenvalue e(k) corresponds with the right eigenvector V(:,k) and the left eigenvector W(:,k).
Right eigenvectors, returned as a square matrix whose columns are the right eigenvectors of A or generalized right eigenvectors of the pair, (A,B). The form and normalization of V depend on the combination of input arguments:
[V,D] = eig(A,B) and [V,D] = eig(A,B,algorithm) return V as a matrix whose columns are the generalized right eigenvectors that satisfy A*V = B*V*D. The 2-norm of each eigenvector is not necessarily 1. In this case, D contains the generalized eigenvalues of the pair, (A,B), along the main diagonal.
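Since the thread above started from Python, a rough analogue of eig(A,B) there is scipy.linalg.eig with a second matrix argument; the sketch below (with arbitrary example matrices) just verifies the same A*V = B*V*D relation the MATLAB documentation describes:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical pair (A, B) for the generalized eigenvalue problem A*v = lambda*B*v.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

w, V = eig(A, B)                                 # generalized eigenvalues and right eigenvectors
print(np.allclose(A @ V, B @ V @ np.diag(w)))    # A*V is (numerically) B*V*D
```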