mathtres(2009)
Mon 04/27/09
.. since that's the highest you can pull out
then dim Eλ ≤ m
so dim Eλ is between 1 and m.
if you don't have enough eigenvectors in your eigenspace .. if you can only get k linearly independent eigenvectors assoc with λ, then you're not going to get enough linearly independent eigenvectors to form a basis
tomorrow: 26, 27
Friday: 28
Wednesday .. A.199-207
-------
Wed 04/29/09
for monday of next week -- homework on page 110
for friday of that week -- b.194 and b.195
thm 28.5
suppose we have λ1, λ2, λ3
in Eλ1, take {x1, ..., xk} // linearly independent
in Eλ2, take {y1, ..., yl}
in Eλ3, take {z1, ..., zr}
each is lin indep .. want to form the union and conclude that it's linearly independent
std proof of linear combo
Assume
a1x1 + ... + akxk + b1y1 + ... + blyl + c1z1 + ... + crzr = 0
prove that all a's, b's and c's are zero
a1, ..., ak
give us the vector x = a1x1 + ... + akxk in Eλ1
the vector y = b1y1 + ... + blyl in Eλ2
and similarly, z = c1z1 + ... + crzr in Eλ3
so now,
x + y + z = 0
therefore, x=0, y=0 and z=0.
by my theorem!!
therefore, a1x1 + ... + akxk = 0
therefore, all the ai's = 0
similarly for the bi's and ci's
therefore, when we merge these 3 sets, we get a linearly independent set!
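(not from class .. a quick numpy sanity check of thm 28.5; the matrix A is my own made-up example:)

```python
import numpy as np

# thm 28.5, numerically: eigenvectors grouped by eigenvalue, each group
# linearly independent on its own, stay independent when the groups merge.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])   # made-up example with eigenvalues 2, 3, 3

vals, vecs = np.linalg.eig(A)

# group the eigenvector columns by (rounded) eigenvalue
groups = [vecs[:, np.isclose(vals, lam)] for lam in np.unique(np.round(vals, 8))]

S = np.hstack(groups)                            # S = S1 U S2 U ...
print(np.linalg.matrix_rank(S) == S.shape[1])    # True: merged set is lin indep
```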
we want all these eigenvectors to add up to n .. that would give us a basis of eigenvectors
p(t) = (-1)^n (t-λ1)^m1 (t-λ2)^m2 ... (t-λk)^mk
therefore, n = m1 + m2 + ... + mk
so assuming the characteristic poly does split, a necessary and sufficient condition to diagonalize is that the multiplicity equal the dimension (?)
A.200 .. Cor 28.6
suppose p(t) splits and dim Eλ = mλ, for each eigen value ofλ of T, where mλ is the multiplicity of the eigennvalue λ .. then t is diagonalizable.
Let V be n dimensional
Then the characteristic polynomial is of degree n:
p(t) = (-1)^n (t-λ1)^m1 (t-λ2)^m2 ... (t-λk)^mk
where n = m1 + ... + mk
Since dim(Eλ) = mλ by hypothesis,
dim(Eλi) = mi for
i: 1 -> k
Since the eigenspace Eλi contains all eigenvectors x corresponding to λi,
therefore
Si = {x1, ..., xmi}, a basis of Eλi, is lin indep
Since each Si is lin independent, we can write
S = S1 U S2 U ... U Sk
and S is lin indep.
(thm 28.5)
Since S is lin. ind. and contains n elements, S is a basis of eigenvectors of T (corollary 5.8)
therefore [T]S is diagonal (thm 26.1)
so the lin trans is diagonalizable.
.. first hurdle was to see if it splits
then make sure that the dimension is equal to the multiplicity for each of the eigenvalues .. then you would know that it's diagonalizable
Thm 28.7 - for any eigenvalue λ, dim Eλ = n - rank(A-λI)
proof: We know Eλ = {x st (T-λI)(x) = 0}
Eλ = N(T-λI)
dim(Eλ) = nullity(T-λI)
= n - rank(T-λI) Thm 8.5
= n - rank([T-λI]β) // β is the original basis (Not eigen vectors)
= n- rank([T]β - λ[I]β)
= n- rank(A-λI) (thm 17.3)
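(my own numpy sketch of the two-hurdle test, using thm 28.7 to compute dim Eλ .. the matrix is made up:)

```python
import numpy as np

# thm 28.7 makes dim Eλ computable: dim Eλ = n - rank(A - λI).
# Then cor 28.6's test: diagonalizable iff dim Eλ = multiplicity for every λ.
A = np.array([[5.0, 4.0, 2.0],
              [4.0, 5.0, 2.0],
              [2.0, 2.0, 2.0]])   # made-up example; eigenvalues 10, 1, 1
n = A.shape[0]

vals = np.linalg.eigvals(A)
eigs, mults = np.unique(np.round(vals, 8), return_counts=True)

diagonalizable = True
for lam, m in zip(eigs, mults):
    dim = n - np.linalg.matrix_rank(A - lam * np.eye(n))   # dim Eλ, thm 28.7
    print(f"λ = {lam}: multiplicity {m}, dim Eλ = {dim}")
    diagonalizable = diagonalizable and dim == m

print("diagonalizable:", diagonalizable)
```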
29. Invariant Subspaces.
T(W) contained in W
examples
T(0) = 0, an element of {0}
(2) V → V
T(V) = R(T), a subset of V
next two . not in class.
R(T) is T-invariant
range of T will map into the range of T
N(T) is also T-invariant
N(T) maps to zero
N(T) has to map into N(T), and it always contains zero!
example 5:
eigenspaces are T-invariant
Eλ
Case 1
0 in T(Eλ) .. 0 in Eλ by def of Eλ
case 2
Let T(x) in T(Eλ), with x in Eλ
Let w = T(x)
T(w) = T(T(x))
= T(λx)
= λT(x) = λw
therefore T(x) in Eλ
def 29.2.
T-cyclic subspace generated by x
..
example 6:
then W = span({x, T(x)}) = span({[1,0,0], [0,1,0]}) = the xy-plane
T([x,0,0]) = [x+0, 0+0, 0]
= [x, 0, 0]
T(x-axis) ⊆ x-axis
therefore the x-axis is a T-invariant subspace -- actually T is the identity mapping on it!
Show the xy-plane is T-invariant:
T([x,y,0]) = [x+y, y+0, 0]
= [x+y, y, 0]
the key thing is that there's a 0 in the z component .. that means you're still in the xy-plane
T([x,y,0]) in xy-plane
therefore the xy-plane is T-invariant
Theorem 29.1
show W in def 29.2 is T-inv
W = span({x, T(x), T^2(x), ...})
proof: Let y in W
y = a0x + a1T(x) + a2T^2(x) + ... + anT^n(x)
So T(y) = T(a0x) + T(a1T(x)) + ... + T(anT^n(x)) // applies additivity property
= a0T(x) + a1T^2(x) + ... + anT^(n+1)(x)
T(y) = a finite linear combination of vectors in {x, T(x), ...}
therefore W is T-invariant
example on page 107.
let T: R3 → R3 be defined by T([a,b,c]) = [-b+c, a+c, 3c]. Let x = [1,0,0]; then
W = span({x, T(x)}) = span({[1,0,0], [0,1,0]}) = the xy-plane
By def 29.2, W = span({x, T(x), ..., T^n(x), ...})
T(x) = T([1,0,0]) = [0,1,0]
T^2(x) = T(T(x)) = T([0,1,0]) = [-1,0,0] .. already in the xy-plane
[-1,0,0] = -1·x
T^3(x) = T([-1,0,0]) = [0,-1,0]
T^4(x) = T([0,-1,0]) = [1,0,0] = x
so it just keeps repeating
W = span({x, T(x)}) = span({[1,0,0], [0,1,0]}) = the xy-plane
the first time you get a linear combo, you can say that T of it will also be a linear combo
so when generating a cyclic subspace, you'd stop when you obtain a vector that's a linear combo of what you've generated.
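(a numpy sketch of that stopping rule, using the page-107 map T([a,b,c]) = [-b+c, a+c, 3c] .. my own code, not from class:)

```python
import numpy as np

# generate a T-cyclic subspace: keep applying T until the new vector is a
# linear combination of the ones already collected (rank stops growing).
T = np.array([[0.0, -1.0, 1.0],    # matrix of T([a,b,c]) = [-b+c, a+c, 3c]
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 3.0]])
x = np.array([1.0, 0.0, 0.0])

basis = [x]
while True:
    nxt = T @ basis[-1]
    if np.linalg.matrix_rank(np.column_stack(basis + [nxt])) == len(basis):
        break                      # nxt is a linear combo of the others: stop
    basis.append(nxt)

print(np.column_stack(basis))      # columns [1,0,0], [0,1,0] .. the xy-plane
```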
for friday .. finish this section and then page 114.
-------
Fri 05/01/09
29.2 - W is T-invariant .. everything in W maps into W
T_W: W → W
every linear operator has a characteristic polynomial
Let γ be a basis for W -- if γ has k elements, then k ≤ n
take basis γ and augment it with n-k vectors to form a basis β for V
(by the replacement theorem)
γ = {x1, ..., xk}
β = {x1, ..., xk, xk+1, ..., xn}
[T]β in block form (in each of the first k columns, the entries below the kth row are 0's):
[T]β = [ A  B ]
       [ 0  C ]
where A = [T_W]γ is k×k, B is k×(n-k), the 0 block is (n-k)×k, and C is (n-k)×(n-k)
q(t) = det(A-tI)
characteristic poly for T_W
for T:
p(t) = det([T]β - tI)
[T]β - tI = [ A-tI    B  ]
            [  0    C-tI ]
p(t) = det(A-tI) det(C-tI) // by homework problem
= q(t)h(t)
therefore q(t) | p(t) (remember, that | stands for divides in this case)
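(a numpy check of that fact, on made-up blocks .. the char poly of the block-triangular matrix is the product of the blocks' char polys:)

```python
import numpy as np

# q(t) | p(t), numerically: char poly of [[A, B], [0, C]] equals
# (char poly of A) * (char poly of C).  Blocks below are made up.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # plays the role of [T_W]γ, k x k
B = np.array([[5.0],
              [6.0]])               # k x (n-k)
C = np.array([[7.0]])               # (n-k) x (n-k)

M = np.block([[A, B],
              [np.zeros((1, 2)), C]])

p = np.poly(M)                              # coefficients of det(tI - M)
qh = np.polymul(np.poly(A), np.poly(C))     # q(t) * h(t)
print(np.allclose(p, qh))                   # True: p(t) = q(t)h(t)
```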
theorem 29.3
a) .. when you generate these vectors, it looks like an infinite list, but technically what you can do is stop generating the vectors once you get a linear combo of the previous vectors
.. because then all the vectors after it will be linear combos
{x, T(x), T^2(x), ..., T^(j-1)(x)}
where T^j(x) is the first vector that is a linear combination of the previous ones
show
T^(j+m)(x) in span({x, T(x), ..., T^(j-1)(x)}) for all m ≥ 0
(then the span of {x, T(x), T^2(x), ...} will be a subset of the span of the first j vectors that we generated
actually, the spans are equal)
proof is by induction on m.
for m = 0, T^(j+0)(x) in span({x, ..., T^(j-1)(x)}) .. because that's when we said we'd stop!
Suppose that T^(j+m)(x) in span({x, ..., T^(j-1)(x)})
then T^(j+m)(x) = a0x + a1T(x) + ... + a_(j-1)T^(j-1)(x)
T^(j+m+1)(x) = a0T(x) + a1T^2(x) + ... + a_(j-1)T^j(x)
the first terms are all in the set
the last one, T^j(x), can be written that way too
therefore
T^(j+m+1)(x) in the span
therefore, for all m, T^(j+m)(x) in span({x, ..., T^(j-1)(x)})
.. therefore span({x, T(x), ...}) is a subset of span({x, ..., T^(j-1)(x)})
and the dimension is the number of vectors in that set .. j!
so when you want to figure out a basis for W ..
.. keep writing until you get a linear combo of the previous ones
Cayley-Hamilton Theorem:
A =
[1 2]
[0 4]
p(t) = t^2 - 5t + 4
A^2 - 5A + 4I (put A in for t)
[1 10] - [5 10] + [4 0] = [0 0]
[0 16]   [0 20]   [0 4]   [0 0]
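(the same computation in numpy .. note you have to evaluate p at A with matrix products, not elementwise; Horner's rule works:)

```python
import numpy as np

# Cayley-Hamilton: p(A) should be the zero matrix.
A = np.array([[1.0, 2.0],
              [0.0, 4.0]])

coeffs = np.poly(A)        # [1, -5, 4], i.e. the monic t^2 - 5t + 4
n = A.shape[0]

pA = np.zeros((n, n))
for c in coeffs:           # Horner's rule, with matrix multiplication
    pA = pA @ A + c * np.eye(n)

print(pA)                  # the zero matrix, up to rounding
```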
thm 29.5 -
T is a lin trans on V
p(t) is the characteristic polynomial of T
Prove p(T) is the zero trans (show p(T)(x) = 0→ for every x in V)
case 1. x = 0
Since p(T) is linear, p(T)(0) = 0
Case 2. x ≠ 0
Let W be the T-cyclic subspace generated by x, with restriction T_W
By thm 29.3, p(t) is divisible by the char polynomial g(t) of T_W
=> g(t)|p(t) => p(t) = g(t)h(t)
By thm 29.4, g(T)(x) = 0
substitute T in for t:
p(T) = g(T)h(T) = h(T)g(T) .. since poly composition is commutative
Since g(t) comes from the cyclic subspace generated by x,
we can write g_x(T)(x) = 0
(technically every x gives you a different polynomial g_x .. since it's cyclic)
p(T)(x) = h(T)g_x(T)(x)
= h(T)(0)
= 0
Thm 29.6
matrix form
show p(A) = the n×n zero matrix
p(A) = (-1)^n [A^n + a_(n-1)A^(n-1) + ... + a_1 A + a_0 I] .. by definition of the characteristic polynomial
= (-1)^n [([L_A]β)^n + a_(n-1)([L_A]β)^(n-1) + ... + a_1 [L_A]β + a_0 I], where β is the std basis
= (-1)^n [L_A^n + a_(n-1)L_A^(n-1) + ... + a_1 L_A + a_0 I]β (Thm 9.4)
= (-1)^n [zero transformation]β (by 29.5)
therefore, p(A) is the zero matrix
generalized eigenvectors
an eigenvector is actually a generalized eigenvector with the power fixed to 1
we'll be able to create a basis of generalized eigenvectors every time the polynomial splits
and these can be organized in cycles of generalized eigenvectors in such a way that the matrix representation comes very close to looking like a diagonal matrix
a matrix in Jordan canonical form is constructed from Jordan blocks
each Jordan block is a square matrix with the same number on the diagonal and 1's above the main diagonal
a matrix that's in Jordan form can be separated into Jordan blocks
when a matrix can be diagonalized, it separates into 1x1 blocks
when it can't, you can still always get a matrix in Jordan form (as long as p(t) splits)
for Tuesday - 29.
-------
Mon 05/04/09
thm 30.1
Let p be the smallest integer possible st (T-λIv)^p(x) = 0
Let y = (T-λIv)^(p-1)(x)
then this vector, by the definition of p, is non-zero .. otherwise p wouldn't be the smallest
p ≥ 1
(T-λIv)(y) = (T-λIv)^p(x) = 0
Then y is an eigenvector
works both ways, because if there is a generalized eigenvector .. look, I don't know!
thm 30.2.
a) Let y be in the cycle γ
y = (T-λI)^(p-k)(x) for some k ≥ 1
(T-λI)^k(y) = (T-λI)^k (T-λI)^(p-k)(x)
= (T-λIv)^p(x)
= 0
therefore y is a generalized eigenvector corresponding to eigenvalue λ
b) The initial vector of γ is an eigenvector of T corresponding to eigenvalue λ, and no other vector in γ is an eigenvector
(T-λI)(T-λI)^(p-1)(x) = (T-λI)^p(x)
= 0
show no other vector is an eigenvector:
(T-λIv)(T-λI)^(p-k)(x) = (T-λI)^(p-k+1)(x)
.. ≠ 0 when k ≥ 2
generalized eigenvector --> apply T-λI ..
all the results will be generalized eigenvectors
.. ultimately you get 0 -- when you get 0, the previous one is an eigenvector, because you get 0 by applying (T-λI)
thm 30.3: the vectors in a cycle γ are linearly independent
{x}
.. is that linearly independent? yes!
since x is non-zero, ax = 0 means that a = 0
cycle of length p+1:
{(T-λI)^p(x), ..., (T-λI)^2(x), (T-λI)(x), x}
Suppose a_p(T-λI)^p(x) + ... + a_1(T-λI)(x) + a_0 x = 0
show that all coefficients are 0
apply T-λI to both sides:
a_p(T-λI)^(p+1)(x) + a_(p-1)(T-λI)^p(x) + ... + a_1(T-λI)^2(x) + a_0(T-λI)(x) = 0
a_p(T-λI)^(p+1)(x) is zero
so we have
a_(p-1)(T-λI)^p(x) + ... + a_1(T-λI)^2(x) + a_0(T-λI)(x) = 0
these vectors are in the cycle generated by (T-λI)(x):
{(T-λI)^p(x), ..., (T-λI)^2(x), (T-λI)(x)}
that's a cycle of length p ..
our IH says that that's linearly independent, so a_0 = ... = a_(p-1) = 0
back in the original equation, that leaves a_p(T-λI)^p(x) = 0
and (T-λI)^p(x) is non-zero .. it took one more application to get the zero vector
so a_p = 0
so any cycle consists of linearly independent vectors
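(a quick numpy illustration of thm 30.3, with a sample nondiagonalizable matrix .. build a cycle and check its rank:)

```python
import numpy as np

# thm 30.3: the cycle {(A-λI)^2(x), (A-λI)(x), x} should be linearly independent.
A = np.array([[3.0, 2.0, 1.0],     # sample matrix, single eigenvalue 3
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 3.0]])
N = A - 3.0 * np.eye(3)

x = np.array([0.0, 0.0, 1.0])      # generator: N@N@x != 0 but N@N@N@x == 0
cycle = np.column_stack([N @ N @ x, N @ x, x])

print(np.linalg.matrix_rank(cycle))   # 3 .. the cycle is linearly independent
```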
Jordan Blocks ..
Jordan canonical form .. not a diagonal matrix, but close
diagonal form .. JCF where all the blocks are 1x1
30.4
If γ = {x_(i+1), x_(i+2), ..., x_(i+k)} is a portion of an ordered basis β corresponding to a Jordan block associated with eigenvalue λ, then γ is a cycle of generalized eigenvectors corresponding to λ
the block sits in columns i+1, i+2, ..., i+k:
[ λ 1 0 ... 0 ]
[ 0 λ 1 ... 0 ]
[       ... 1 ]
[ 0 0 0 ... λ ]
above and below the block, the entries are all 0's
T(x_(i+k)) = x_(i+k-1) + λx_(i+k)
x_(i+k-1) = T(x_(i+k)) - λx_(i+k)
so x_(i+k-1) = (T-λIv)(x_(i+k))
Look at x_(i+k-2):
x_(i+k-2) = T(x_(i+k-1)) - λx_(i+k-1)
= (T-λIv)^2(x_(i+k))
The pattern then continues down to x_(i+1):
T(x_(i+1)) = λx_(i+1)
therefore, (T-λI)(x_(i+1)) = 0
γ = {(T-λI)^(k-1)(x_(i+k)), ..., (T-λI)(x_(i+k)), x_(i+k)}
therefore by 30.2, γ is a cycle of generalized eigenvectors
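(thm 30.4 in miniature with numpy .. for a single Jordan block, the standard basis vectors form exactly such a cycle:)

```python
import numpy as np

# one k x k Jordan block: λ on the diagonal, 1's just above it.
lam, k = 3.0, 4
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

N = J - lam * np.eye(k)
E = np.eye(k)                                     # columns e_1 ... e_k

for j in range(1, k):
    assert np.allclose(N @ E[:, j], E[:, j - 1])  # (J-λI) shifts e_(j+1) to e_j
assert np.allclose(N @ E[:, 0], 0)                # e_1 is an actual eigenvector
print("standard basis = a cycle of generalized eigenvectors")
```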
thm 30.5
γ = {x_(i+1), ..., x_(i+k)}
= {(T-λI)^(k-1)(x_(i+k)), ..., (T-λI)(x_(i+k)), x_(i+k)}
by def 30.2,
x_(i+k-r) = (T-λIv)^r(x_(i+k)) = (T-λIv)(x_(i+k-r+1))
so
T(x_(i+k-r)) = x_(i+k-r-1) + λx_(i+k-r) .. which gives the 1's above the diagonal and the λ's on it
for the left-most column:
(T-λI)(x_(i+1)) = 0
.. so the left-most column of the block will have λ at the top and 0's everywhere else
T(x_(i+1)) = λI(x_(i+1))
= λx_(i+1)
so this is a reversal of what Tyler was doing
Eλ .. eigenspace
Kλ .. generalized eigenspace
the set of all generalized eigenvectors associated with λ .. so bigger than Eλ
dim Kλ = mλ
we should be able to get Jordan canonical form
but it's not gonna be useful unless we have cycles .. they let us create Jordan blocks
that's the wrinkle
thm 31.1
Kλ .. the generalized eigenspace
show Kλ is closed under scalar mult:
x in Kλ
show cx in Kλ
x in Kλ, then (T-λI)^p(x) = 0 for some p
then (T-λI)^p(cx) = c(T-λI)^p(x) = 0
so cx in Kλ
closed under addition:
suppose x, y in Kλ
(T-λI)^p(x) = 0
(T-λI)^q(y) = 0
is there a power of T-λI that wipes out x+y?
sure!
(T-λI)^(p+q)(x+y) = (T-λI)^(p+q)(x) + (T-λI)^(p+q)(y)
= (T-λI)^q (T-λI)^p(x) + (T-λI)^p (T-λI)^q(y)
= 0 + 0
= 0
-------
Fri 05/08/09
jordan canonical form
the only way you can get a matrix in that form is if your basis consists of cycles of generalized eigenvectors
if you're in an eigenspace, you can just pick linearly independent vectors
if Kλ represents the generalized eigenspace,
dim(Kλ) = mλ
so the dimension of the generalized eigenspace is surely large enough
but we can't just take any linearly independent set ..
if we just randomly pick a set out of Kλ, they won't be organized in cycles
inside Kλ is Eλ, since every eigenvector is a generalized eigenvector
Eλ is the nullspace of T-λI
as you apply more powers of T-λI, the nullspace expands
suppose mλ = 5 but dim of Eλ is only two
what's the dimension of the nullspace of (T-λI)^2?
when we do that, suppose we find that it's 4 .. added two more dimensions
finally, say that on the 3rd power of T-λI, we get Kλ .. then we expanded by one more dimension
we want a cycle of length 3 and a cycle of length 2
if the two initial vectors are linearly independent, then the whole set will be
row 1 of the dot diagram: n - rank(A-λI) dots
row 2: rank(A-λI) - rank((A-λI)^2) dots
row 3: rank((A-λI)^2) - rank((A-λI)^3) dots
so we do these computations to create a dot diagram
. .
. .
.
arrange the five dots into this dot diagram
that's not the only one you could possibly get with the five dots
. . . . . <-- means that part is diagonalizable
or
. .
.
.
.
always have the longest columns to the left and you create the longest cycle before the shortest cycle
construct it row by row, but then you use it column by column to find your cycles
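(my own numpy helper for those rank computations .. row i of the dot diagram has rank((A-λI)^(i-1)) - rank((A-λI)^i) dots, taking rank((A-λI)^0) = n; the demo uses the 3x3 example worked next:)

```python
import numpy as np

def dot_diagram(A, lam, m):
    """Row lengths of the dot diagram for eigenvalue lam of multiplicity m."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    rows, power, prev_rank = [], N, n
    while sum(rows) < m:                  # stop once all m dots are placed
        r = np.linalg.matrix_rank(power)
        rows.append(prev_rank - r)
        prev_rank, power = r, power @ N
    return rows

A = np.array([[3.0, 2.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 3.0]])
print(dot_diagram(A, 3.0, 3))   # [1, 1, 1] .. one column: a single cycle of length 3
```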
A =
3 2 1
0 3 5
0 0 3
p(t) = -(t-3)^3
A - 3I =
0 2 1
0 0 5
0 0 0
3 - rank(A-3I) = 3 - 2 = 1
so there's 1 dot in the first row
.. so we're done ... the other dots have to go below, in one column
(if the first row instead had two dots ->
. .
.
done; and three dots in the first row, also done)
so you don't have to worry about actually computing the 2nd row unless m ≥ 4
so we have to find a vector to represent the lowest dot .. and that has to be a generalized eigenvector that doesn't get wiped out until you apply A-λI to it 3 times
need x st
(A-λI)^3(x) = 0
but
(A-λI)^2(x) ≠ 0
(A-λI)^2 =
0 0 10
0 0 0
0 0 0
(A-λI)^3 =
0 0 0
0 0 0
0 0 0
so we want a solution to (A-λI)^3 x = 0 that is not a solution to (A-λI)^2 x = 0
so we don't want to choose a vector where c3 = 0
let's pick [0,0,1]
so that's our lowest dot
[0 2 1] [0]   [1]
[0 0 5] [0] = [5]
[0 0 0] [1]   [0]
so that's our second dot
[0 2 1] [1]   [10]
[0 0 5] [5] = [0]
[0 0 0] [0]   [0]
so basis =
[10,0,0], [1,5,0], [0,0,1]
jcf matrix:
3 1 0
0 3 1
0 0 3
you can compute
Q^-1 A Q (which should equal J) to check your work
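(that check in numpy .. Q's columns are the cycle, initial vector first:)

```python
import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 3.0]])
N = A - 3.0 * np.eye(3)

x = np.array([0.0, 0.0, 1.0])                  # the lowest dot
Q = np.column_stack([N @ N @ x, N @ x, x])     # [[10,1,0],[0,5,0],[0,0,1]]

print(np.linalg.inv(Q) @ A @ Q)                # [[3,1,0],[0,3,1],[0,0,3]] = J
```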
second example:
[2 -1 0 1]
[0 3 -1 0]
[0 1 1 0]
[0 -1 0 3]
p(t) .. subtract t from the diagonal
expand on rows / columns with a lot of 0! and don't multiply out!
(2-t)(3-t)((1-t)(3-t)+1)
(2-t)(3-t)(t-2)^2
p(t) = (t-2)^3(t-3)
we're not worried about 3 .. we know we can get an eigenvector for that one ..
but what about the 2, with multiplicity 3?
let's do the diagonalization test first
either it comes out OK or it becomes the first step in the jcf process
A - 2I =
0 -1 0 1
0 1 -1 0
0 1 -1 0
0 -1 0 1
4 - rank(A-2I) = 4 - 2 = 2 .. not big enough!
so the dot diagram is
. .
.
to get the cycle of length 2
we need x→ that satisfies the following:
(A-λI)^2 x = 0
but
(A-λI)x ≠ 0
(A-2I)x = 0 means
-c2 + c4 = 0
and c2 - c3 = 0
so we want to choose our vector so that these are NOT both satisfied
(A-2I)^2 =
0 -2 1 1
0 0 0 0
0 0 0 0
0 -2 1 1
so (A-2I)^2 x = 0 means -2c2 + c3 + c4 = 0
one choice -> [col: 0, 1, 2, 0]
multiply:
(A-2I)[0,1,2,0] = [col: -1, -1, -1, -1]
so now we need to find the 3rd dot .. and it's gotta be an eigenvector that's not a multiple of [-1,-1,-1,-1]
[col: 1, 0, 0, 0]
and we still need one more vector, because we had an eigenvalue of 3:
[col: 1, 0, 0, 1]
[-1,-1,-1,-1], [0,1,2,0], [1,0,0,0], [1,0,0,1]
now we can predict jcf
2 1 0 0
0 2 0 0
0 0 2 0
0 0 0 3
and that's the jcf
you can check this by taking the original matrix A and computing that Q^-1 A Q thing
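(same numpy check for this 4x4, with the basis ordered as listed above:)

```python
import numpy as np

A = np.array([[2.0, -1.0,  0.0, 1.0],
              [0.0,  3.0, -1.0, 0.0],
              [0.0,  1.0,  1.0, 0.0],
              [0.0, -1.0,  0.0, 3.0]])

Q = np.column_stack([[-1.0, -1.0, -1.0, -1.0],   # cycle for λ=2: initial vector,
                     [ 0.0,  1.0,  2.0,  0.0],   # then its generator
                     [ 1.0,  0.0,  0.0,  0.0],   # extra eigenvector for λ=2
                     [ 1.0,  0.0,  0.0,  1.0]])  # eigenvector for λ=3

print(np.round(np.linalg.inv(Q) @ A @ Q))        # the predicted JCF
```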
this is not foolproof .. what if we have two long cycles?
.. the best we can do is make it linearly independent of the piece in the other cycle --- but we can't be sure.
so we may have a false start
for the final:
200 points
will emphasize (20 pts) .. JCF
.. the rest of the test is among all the material in the first four exams
generally at the same level as the other exams
like problems from the previous tests
so look at test problems and try similar problems
test starts at 2 .. can go as long as you want