Eigenvectors and the covariance matrix
http://math.stackexchange.com/questions/23596/why-is-the-eigenvector-of-a-covariance-matrix-equal-to-a-principal-component
Short answer: The eigenvector with the largest eigenvalue is the direction along which the data set has the maximum variance. Meditate upon this.
Long answer: Let's say you want to reduce the dimensionality of your data set, say down to just one dimension. In general, this means picking a unit vector u and replacing each data point x_i with its projection along this vector, u^T x_i. Of course, you should choose u so that you retain as much of the variation of the data points as possible: if your data points lay along a line and you picked u orthogonal to that line, all the data points would project onto the same value, and you would lose almost all the information in the data set! So you would like to maximize the variance of the new data values u^T x_i. It's not hard to show that if the covariance matrix of the original data points x_i was Σ, the variance of the new data points is just the quadratic form u^T Σ u. Since Σ is symmetric, the spectral theorem tells us that the unit vector u maximizing u^T Σ u is exactly the eigenvector of Σ with the largest eigenvalue, and the maximum variance attained is that eigenvalue.
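To make this concrete, here is a minimal NumPy sketch (synthetic 2-D Gaussian data and hypothetical variable names, not part of the original answer) checking that the variance of the projections u^T x_i equals u^T Σ u, and that the top eigenvector of Σ gives the largest such variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data, centered so the sample covariance captures all the spread.
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 1.5], [1.5, 1.0]],
                            size=1000)
X = X - X.mean(axis=0)

Sigma = np.cov(X, rowvar=False)          # sample covariance matrix Σ

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(Sigma)
u_top = eigvecs[:, -1]                   # eigenvector with the largest eigenvalue

# Variance of the projected data u^T x_i ...
proj_var = np.var(X @ u_top, ddof=1)
# ... agrees with the quadratic form u^T Σ u and with the largest eigenvalue.
print(proj_var, u_top @ Sigma @ u_top, eigvals[-1])

# Any other unit vector gives a smaller (or equal) projected variance.
theta = rng.uniform(0, 2 * np.pi, size=100)
others = np.stack([np.cos(theta), np.sin(theta)], axis=1)
assert all(u @ Sigma @ u <= eigvals[-1] + 1e-9 for u in others)
```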
If you want to retain more than one dimension of your data set, in principle what you can do is first find the largest principal component, call it u_1, then subtract that out from all the data points to get a "flattened" data set that has no variance along u_1. Find the largest principal component of this flattened data set and call it u_2. If you stopped here, u_1 and u_2 would be a basis of the two-dimensional subspace which retains the most variance of the original data; or you can repeat the process and get as many dimensions as you want. As it turns out, all the vectors u_1, u_2, … you get from this process are just the eigenvectors of Σ in decreasing order of eigenvalue. That's why these are the principal components of the data set.
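The repeated "flattening" procedure can also be sketched directly. The following NumPy snippet (again an illustration under the same assumptions as above, with hypothetical helper names) deflates the data one direction at a time and checks that the recovered directions match the eigenvectors of Σ sorted by decreasing eigenvalue.

```python
import numpy as np

def top_direction(X):
    """Unit vector maximizing the variance of the projections X @ u."""
    Sigma = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    return eigvecs[:, -1]

def principal_components(X, k):
    """First k principal directions via repeated flattening (deflation)."""
    X = X - X.mean(axis=0)
    components = []
    for _ in range(k):
        u = top_direction(X)
        components.append(u)
        # "Flatten" the data: remove all variance along u.
        X = X - np.outer(X @ u, u)
    return np.array(components)

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=np.zeros(3),
                            cov=[[4.0, 1.0, 0.5],
                                 [1.0, 2.0, 0.3],
                                 [0.5, 0.3, 1.0]],
                            size=2000)

U = principal_components(X, k=3)

# Compare with the eigenvectors of Σ in decreasing order of eigenvalue
# (eigenvector signs are arbitrary, so compare up to a sign flip).
Sigma = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(Sigma)
reference = eigvecs[:, ::-1].T
for u, v in zip(U, reference):
    assert min(np.linalg.norm(u - v), np.linalg.norm(u + v)) < 1e-6
```

The reason this works: subtracting the projection onto u_1 replaces Σ with (I - u_1 u_1^T) Σ (I - u_1 u_1^T), which has the same eigenvectors as Σ but with the top eigenvalue zeroed out, so the next pass picks up the second eigenvector, and so on.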