math(2009)
Mon 01/16/08
field .. we will almost always use the reals
others are .. the rational numbers
you can have a scalar field of complex numbers .. also an algebraic field
field .. basically anything that looks sorta like the real numbers in that:
can add, *, add commutatively, * either way
have associativity
got an additive identity -- ex: 0
* identity .. 1 in the real numbers
additive inverses .. ex 5 + -5 = 0
* inverses except for the 0 element ..
ex: 2 and 1/2
distributive property
* over +
a(b+c)
= ab + ac
any set that has two operations and satisfies those various axioms is a field
exotic fields:
Z[n]
{0, 1, 2, ... n-1} .. start with that as your set
suppose
{0,1,2,3} .. that's Z[4]
suppose we want to make a field out of that set
have to have some kind of + and * in such a way that when I add I stay within the set and when I * .. I stay in the set
.. use modular arithmetic to add
that way we stay within the set
can do the same with *
2*3 = 6 .. really a 2 .. so same idea
is this really a field?
.. no .. one of the axioms is that every element that's not the zero element has to have a multiplicative inverse
zero divisors?!
3 has a * inverse .. 3 * 3 = 9 = 1
so 3 has a multiplicative inverse .. but 2 doesn't
how can we prove that .. try everything!
nothing works!
2 is a zero divisor (2*2 = 4 = 0 in Z[4]) .. zero divisors don't have multiplicative inverses
.. so this is a ring, rather than a field
.. but if n is a prime number, then it's a field
so Z[5] is a field
2 is the inverse of 3 and vice versa
4 is the inverse of 4
therefore all non-zero elements have multiplicative inverses
Z[p] .. a finite field
where p is a prime
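a quick sketch of that "try everything" check in Python (my own illustration, not from class):

    # which nonzero elements of Z[n] have multiplicative inverses?
    def inverses_mod(n):
        table = {}
        for a in range(1, n):
            for b in range(1, n):
                if (a * b) % n == 1:
                    table[a] = b
                    break
        return table

    print(inverses_mod(4))  # {1: 1, 3: 3} .. 2 has no inverse, so Z[4] is not a field
    print(inverses_mod(5))  # {1: 1, 2: 3, 3: 2, 4: 4} .. every nonzero element has one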
in theory .. nothing in this definition would prevent the scalars from being Z[p] .. so you could have some kind of vector space where the scalars were just a finite set of elements from Z[p] .. but almost all applications use either R or C
the 8 axioms on the handout are the minimum for a vector space
can't prove 1x = x
but we can prove 0x = 0
wide variety of vector spaces
ex 1 .. is the classic example
. vectors from calc 3
"parallelogram law"
scalar multiplication
[x1, y1, z1]
+ [x2, y2, z2]
= [x1+x2, y1+y2, z1+z2]
c[x1, y1, z1]
= [cx1, cy1, cz1]
.. algebraic definitions turn out to be better for proving stuff
#3 .. is not much more of a generalization
R^n ..
why stop with real numbers! .. why not just use complex numbers
why not numbers from Z[5]
.. F^n where F is any field
.. just ordered n-tuples but they come from any field
4 .. matrices
M[2x3](R)
.. two rows and 3 columns
R means that the entries have to be real numbers
[[2, π, e][-3, 0, 1.1516]]
have to be able to add these together .. and can just do that component by component with two identically dimensioned matrices
.. scalar .. constant multiplied by the entries
the vectors are the matrices . the scalars are the real numbers
we can do that with any set of matrices of the same size
a.1
prove that R^3 satisfies axiom 1
axiom 1 :
distinguish between vectors and scalars!
x-> + y-> = y-> + x->
has to be true for all x and all y in the vector space
has to be universally true
R^3
have to transform this axiom into appropriate notation for the vector space we're dealing with
let x and y be arbitrary but write them in a way that reflects that they come from R^3
x : [x1, y1, z1]
y : [x2, y2, z2]
Then
x-> + y-> = [x1, y1, z1] + [x2, y2, z2]
a lot of steps don't have to be justified .. somewhat a matter of judgment .. justify the steps that are really critical!
others, use your judgment on
= [x1+x2, y1+y2, z1+z2]
critical step!
= [x2+x1, y2+y1, z2+z1]
.. addition of real numbers .. that's all that's going on in that step
. and we know that that addition is commutative
so justification: commutativity of addition in the ordinary reals
.. and then unwinds in reverse
= [x2, y2, z2] + [x1, y1, z1]
.. = y-> + x->
.. we can practice on axiom 2
a2. take the general vector space F^n
we'll do axiom 3
axiom 3
there exists something .. the zero vector 0-> st x-> + 0-> = x-> for all x->
.. constructive proof
.. start out by carefully creating the object that will work
F^n
Let 0-> = [0, 0, 0, .. 0]  (n zeros)
and it's not really 0 .. it's whatever is acting as F's zero
.. we do know that in every field there is something in there that we'll call 0
Let x-> = [x1, x2, x3 .. xn]
xi in F
Then x + 0 =
[x1, x2, x3 .. xn] + [0, 0, 0, .. 0]
.. we'll add them together componentwise
= [x1 + 0, x2 + 0, x3 + 0 ... ]
= [x1, x2, x3 .. xn] .. by the definition of 0 in F
= x
produce the object that will have that property and then show that it works
.. constructive proof
a.3
axiom 8 .. for every a, b in F
and every x in V
(a+b) x-> = ax-> + bx->
a+b .. addition in the field of scalars
.. ax-> .. scalar multiplication
but the plus on the right is not the same
.. it's the plus of adding two vectors
Let C in M[mxn](R)
show that (a+b)C = aC + bC
to show that two matrices are equal .. you have to show they're equal in each component
Let 1 ≤ i ≤ m, 1 ≤ j ≤ n be arbitrary
focus our attention on entry i, j
as long as i and j are arbitrary, we're done, that's the important thing
don't have to actually look at each entry individually
Then show
[(a+b)C]ij
.. that's a real number now .. we were assuming that the entries were real numbers
..
= [aC + bC]ij
[(a+b)C]ij = (a+b)Cij
= aCij + bCij .. 4th grade arithmetic because it's real numbers
really a*Cij .. etc
= (aC)ij + (bC)ij
= [aC + bC]ij
.. since i and j are arbitrary .. I can now say that
(a+b)C = aC + bC
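a numeric spot-check of that entrywise argument in Python (an illustration of my own, not a proof):

    import numpy as np

    # check (a+b)C == aC + bC entry by entry for one sample matrix
    rng = np.random.default_rng(0)
    C = rng.standard_normal((2, 3))
    a, b = 1.5, -0.25
    print(np.allclose((a + b) * C, a * C + b * C))  # True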
for next time:
we'll begin with A4
first homework friday
so prep A.10
friday: b1. .. b7
-------
Wed 01/28/08
P[3](R)
so a typical vector would be
c3x^3 + c2x^2 + c1x + c0
.. and any of the c's could be zeros
if this is going to be a vector space, there has to be a way to add two objects like this together
.. and it's just like ordinary addition of polynomials
scalar multiplication .. we'll use the ordinary variety .. take the scalar and multiply it onto each coefficient.
and you can generalize your scalars .. from Z[5] for example
item 8 .. if you leave the subscript off
just P(R)
then you still can add any two together .. ok if they have different degrees
so you can take all polynomials, put them in a big set .. and that will form a vector space
and you can change R to any field
#10 ..
F(R,R)
.. objects in this set are functions
f: R -> R
.. you could have a vector space of functions
but you have to have some notion of what it means to create the object
f + g .. has to be another object in the set
.. borrow from calc 1
(f + g)(x)
= f(x) + g(x)
af .. what's that function gonna be?
gotta be something that maps the reals to the reals
can define it:
(af)(x) = a * f(x)
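those pointwise definitions, written out in Python (a sketch of my own; the names add/scale are not from class):

    import math

    def add(f, g):
        return lambda x: f(x) + g(x)        # (f + g)(x) = f(x) + g(x)

    def scale(a, f):
        return lambda x: a * f(x)           # (af)(x) = a * f(x)

    h = add(math.sin, math.cos)             # sin + cos is again a function R -> R
    print(h(0.0))                           # 1.0
    print(scale(3, math.sin)(math.pi / 2))  # 3.0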
a.4
prove axiom 6
for every a, b in F and x-> in V
(ab)x-> = a(bx->)
let x-> = c3x^3 + c2x^2 + c1x + c0
(ab)x-> = (ab)c3 x^3 + (ab)c2 x^2 + ...
= a(bc3) x^3 + a(bc2) x^2 + ...
by associativity
= a(bc3 x^3 + bc2 x^2 + ...) .. by definition of scalar multiplication
= a(b(c3 x^3 + c2 x^2 + ...))
= a(b x→)
....
vector space of functions
a.5, axiom 7
F(R,R) satisfies a(x→ + y→) = ax→ + ay→
Let x→ = f and y→ = g
a(f+g) = af + ag
Show
[a(f+g)](x) = [af + ag](x)
[a(f+g)](x) = a[(f+g)(x)]
by definition of the scalar multiple of a function
= a[f(x) + g(x)]
= a * f(x) + a * g(x)
= (af)(x) + (ag)(x) .. by definition of scalar multiple again
= (af + ag)(x)
therefore, a(f+g) = af + ag
remaining examples of vector spaces
example 11
.. is a continuation of example 10
in example 10, we assumed that the domain was the real numbers
but you can let the domain be any arbitrary set .. and you can do that because in order to create these definitions of f+g or af .. all you need to be able to do is addition and multiplication in the range
as long as R is your range ..
X = {a, b, c}
f: a -> -1, b -> -2, c -> π
.. you could still define adding two functions together and scalar multiples of functions
12 is a still further generalization .
.. could look at all functions into some arbitrary field F .. and define sums of two functions and create a vector space out of it
"last example I want to look at"
imagine a field and inside of it is a subfield
F could be the rationals .. inside the field of reals
you can have a field inside a field . a so-called subfield
V = E
F = scalars
so any time you take two elements out of E, you can add them together because there's an addition on the field E
and if you take an element out of F and one out of E .. both still belong to E, so you have a scalar multiplication
a little ambiguous .. anything in F .. can be considered a scalar and a vector
so if you pick two from F .. you have to consider one a scalar and one a vector
have to ignore the multiplication E * E .. because no scalar is involved .. even though it's there
interesting thing about field theory .. eventually field and extension theory and the use of vector space ideas such as dimension lead to a proof that you cannot, in general, trisect an angle
elementary theorems for vector spaces
theorem 1.1
Axiom 3 .. there exists a 0 vector such that
x→ + 0→ = x→ for all x in V
The vector 0 described in Axiom 3 is unique.
Proof
(proof of uniqueness .. assume two things with that property)
suppose 01 and 02 are 0 vectors
Show that 01 = 02
01 = 01 + 02, since 02 is an additive identity
.. treating 02 as 0
can think of 01 that way too
so
= 02
since 01 is an additive identity
therefore 01 = 02 .. and the 0 of axiom 3 is unique
proof of a much more general theorem .. in an algebraic structure, the identity element for a binary operation is unique, period .. and the proof in each case is like what we just did
Theorem 1.2
Thorsen's rule .. if an element acts like the inverse of another one, then it is
if they add together to get 0
the vector y mentioned in Axiom 4 is unique, ie there is only one y that would be called -x
x→ + y→ = 0
Prove y→ is unique
Assume y→1 and y→2 satisfy axiom 4
So x + y1 = 0
and x + y2 = 0
So
x + y1 = x + y2
y1 + (x + y1) = y1 + (x + y2)
...
0 + y1 = 0 + y2
y1 = y2
Theorem 1.3
for any x→ in V, 0x→ = 0→
Show 0x→ = 0→
Hint: 0 = 0+0
then
0x = (0+0)x
= 0x + 0x
Axiom 4 says
0x + (-(0x)) = (0x + 0x) + (-(0x))
0→ = 0x + (0x + (-(0x)))
= 0x + 0→
by axiom 3 .. 0→ = 0x
Theorem 1.4
for any scalar a in F, a0→ = 0→
a0 = a(0 + 0)
since 0 = 0+0
= a0 + a0 by axiom 7
a0 + (-(a0)) = (a0 + a0) + (-(a0))
0→ = a0 + (a0 + (-(a0)))  .. reassociated
= a0 + 0→
therefore a0 = 0→
cancellation law -- for any x, y, and z in V, if x+z = y+z, then x=y
add the additive inverse of z (axiom 4): (x+z) + (-z) = (y+z) + (-z)
x + (z + (-z)) = y + (z + (-z))
x + 0→ = y + 0→
x = y
fill in a little parenthesis
for friday: end of section 2.
-------
Fri 01/29/08
Prove Thm 1.7
if a in F and x→ in V and ax→ = 0→, then a=0 or x→ = 0→
Assume a in F, x in V, ax=0, a≠0
ax→ = 0→
a^-1(ax→) = a^-1 0→
by axiom 6, we can reassociate
...
(a^-1 a)x→ = a^-1 0→
1 x→ = a^-1 0→
x→ = a^-1 0→ by axiom 5
x→ = 0→ by theorem 1.4
logic lesson!
p => q \/ r
prove the logically equivalent
p and ~q ==> r
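that equivalence can be spot-checked with a truth table (a little Python aside of my own):

    from itertools import product

    # p => (q or r) is the same as (p and not q) => r, on all 8 rows
    implies = lambda a, b: (not a) or b
    for p, q, r in product([False, True], repeat=3):
        assert implies(p, q or r) == implies(p and not q, r)
    print("equivalent on all 8 rows")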
Section 2.
Subspaces .. vector space inside another vector space
has to be using the same addition and scalar multiplication
if the smaller set forms a vector space with the same operations, then we call it a subspace
in theorem 2.1 a .. really trivial ..
if you start with a vector space, V is a subset of itself .. V is a subspace of itself .. that's one of the trivial subspaces
if you just take the 0 vector, you also get a subspace.
what do you have to prove to show that a subset forms a subspace
V .. vector space W .. subset of V .. is it a vector space
there are certain things we cannot be sure of
nothing in the info he gave us that would prevent the 0 vector from being outside of W
so, that's one way that the subset could get screwed up.
Suppose I take w1 and w2
pull those out of the set W .. I know I can add them together .. they're still in V .. but all I know about the new vector is that it's somewhere in V . it might be outside of W!
and that would be bad ..
take any two out of the subset and you get a new vector in the space .. don't want to land outside of W ... so is it closed under vector addition .. do I land in W?
a similar thing can happen with scalar multiplication.
so a number of things are not automatic .. they have to be verified when we look at a particular situation
another thing that could go wrong: I know that there's supposed to be an additive inverse axiom .. anytime I take a vector, I must get a negative vector ... we know we can find it somewhere in the big space, but it is possible that -w1 might be outside of the subset
the other axioms are inherited.
so that's theorem 2.2
but we can do a little better .. we actually don't have to verify 4 things .. just 3
because item d technically will be true if the other properties are true
Show that d is redundant
let W be a subset of V
with a, b, and c of thm 2.2 satisfied
Let x in W
-1*x .. that has to belong to W because of closure under scalar multiplication
show that it's actually -x
.. that's theorem 1.5
-1x = -(1x)
by axiom 5 = -(x) .. -x
therefore -x in W
A.14 Prove {0→} is a subspace of V
a) The vector 0→ is in W
b) W is closed under addition
since our only choice is
0→ + 0→ = 0→
c) W is closed under scalar multiplication
by thm 1.4
a0→ = 0→
... so again we stay inside the set
a.15 .. example #1
Let V = R2 and let k in R be a fixed number. Let W = {(x, kx) : x in R}. Then W is a subspace of V.
Let V = R2 and let k in R be a fixed number
Show W is a subspace of V
Let W = {(x, kx) : x in R}
a) show the vector 0→ in W. Let x=0, since 0 in R
= (0, k0), which is in W
.. really (0,0)
b) Let (x1, kx1) and (x2, kx2) in W
x1, x2 in R
(x1, kx1) + (x2, kx2) =
(x1 + x2, kx1 + kx2)
= (x1 + x2, k(x1 + x2))
we know x1 + x2 in R
therefore (x1 + x2, k(x1 + x2)) in W
therefore, W is closed under addition
c)
Let (x, kx) in W
a in R
a(x, kx) = (ax, akx)
= (ax, k(ax))
(ax, k(ax)) in W
therefore, W is closed under scalar multiplication
therefore, W is a subspace of V. QED
a.16
(a) Let V = R^2 and
Let W = {(x, 1) : x in R}
Since 1 must be in the second position, you can't get (0,0)
b) (x1, 1) + (x2, 1) =
(x1+x2,2)
.. now there's a 2 in the second position!
so no again!
c) 2(3,1) = (6,2) .. so that also doesn't work!
epic. fail.
a.17 M 3x3 (R)
square matrices
diagonal ... if you add the numbers on that diagonal together, that's the trace
so if A is n x n,
(subscripts are row, column)
Trace(A) = A11 + A22 + ... + Ann
V = M nxn (R)
W = {A in M nxn (R) : trace(A) = 0}
(a) Consider the 0 matrix
0ij = 0 for all i and j
Trace(0) = 011 + 022 + .. + 0nn
= 0 + 0 + 0 ... + 0
therefore, the zero matrix is in W
(b) Suppose A, B in W
Then Trace(A+B) = (A+B)11 + (A+B)22 + .. + (A+B)nn
= A11 + B11 + A22 + B22 + ... + Ann + Bnn
and that's just real numbers
we can rearrange so that all the A quantities are first
followed by the B quantities
= Trace(A) + Trace(B) = 0 + 0 = 0
therefore, A + B in W
finally, c) closure under scalar multiplication
(c) Let A in W and c in R
Show that cA in W
Then Trace(cA)
= (cA)11 + ... + (cA)nn
= c*A11 + c*A22 + .. + c*Ann
= c(A11 + ... + Ann)
since A in W, = c(0) = 0
therefore, cA in W
therefore closure under scalar multiplication
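a quick numeric illustration with numpy (mine, not from class): build two trace-zero matrices and watch the closure computations come out 0.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3)); A -= (np.trace(A) / 3) * np.eye(3)  # force trace 0
    B = rng.standard_normal((3, 3)); B -= (np.trace(B) / 3) * np.eye(3)
    print(np.isclose(np.trace(A + B), 0))     # closed under addition
    print(np.isclose(np.trace(-2.5 * A), 0))  # closed under scalar multiplication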
a.18
example 5 (matrices with all entries ≥ 0)
does W contain the 0 matrix? yes!
Suppose A and B in W
(A + B)ij = Aij + Bij ≥ 0
(closure under addition)
so the third property is the suspect one!
Let A be such that A11 = 1 and Aij = 0 otherwise
Then A in W ..
Let c = -1
(cA)11 = cA11 = -1(1) = -1 < 0 .. so it has an entry that's less than 0
therefore cA is not in W
.. therefore W is not closed under scalar multiplication
suppose we take two vectors
x1→ and x2→
Span(S) .. all vectors that I could form by using vector addition, scalar multiplication, and those two vectors
c1x1→ + c2x2→
.. you can get any vector in the plane with those two vectors (if they're not on the same line)
.. so the span would be R2
if we have two vectors on the same line ...
x1 and x2 .
then you can get any vector on the line
next homework:
end of chapter 2.
-------
Mon 02/02/08
D(R) = {f in F(R,R) | f is differentiable at every x in R}
(1) the zero function is constant, so it's differentiable with f'(x) = 0; therefore, 0→ exists in D(R)
(2) Let f, g in D(R)
f'(x) + g'(x) = (f+g)'(x) for every x in R
therefore, we have closure under addition (f+g is in D(R))
(3) Let c in R, f in D(R): c[f'(x)] = (cf)'(x)
therefore cf in D(R)
D(R) subset of C(R) ..
proved that it was a subspace of C(R)
which I proved is a subspace of F(R,R)
..
theorem 2.4
Proof .. (show W1 ∩ W2 is a subspace of V)
(a) since W1 is a subspace, 0→ in W1
since W2 is a subspace, 0→ in W2
therefore 0→ in W1 ∩ W2
(b) suppose x→ and y→ in W1 ∩ W2
then x→ and y→ are in W1 and x→ and y→ are in W2
x→ + y→ in W1 and x→ + y→ in W2
therefore, x→ + y→ in W1 ∩ W2
(c) Let x→ in W1 ∩ W2 and c in R
show that cx→ in W1 ∩ W2
x→ in W1 and x→ in W2
therefore cx→ in W1 and cx→ in W2
therefore cx→ in W1 ∩ W2
therefore W1 ∩ W2 is a subspace of V
chapter 3.
span of a set of vectors
take the set of vectors that you're given and make any vector from those vectors
.. said to be in the span
if you take just a subset and you can get everything in the vector space .. then we say it's a spanning set or a generating set
A.22 (Span(S) is a subspace)
(1) S = empty
Span(S) = {0}
.. thm 2.1 b
(2) S ≠ empty
(a) ax→ + bx→ where
x→ in S
a, b scalars
let b = -a
ax→ + (-a)x→ = 0
alt. 0x→ = 0 .. and that's in the span
(b) closure under addition: Let v1→, v2→ in Span(S)
v1 = a1x1 + a2x2 + ... + anxn, xi in S
v2 = b1z1 + b2z2 + ... + bmzm, zi in S
v1 + v2 = a1x1 + a2x2 + ... + anxn + b1z1 + b2z2 + ... + bmzm
.. that's a finite linear combination of vectors from S .. so it's in the span
(c) v1→ in Span(S) and b is an arbitrary scalar
let v1 = a1x1 + a2x2 + .. + anxn, xi in S
bv1→ = ba1x1 + ba2x2 + .. + banxn
. generalization of axiom 7
(b in field)
next . . .
example 1, page 9
is x^2 - x - 1 in the span of {x^2+1, x+2, 2x^2+x+4}?
x^2 - x - 1 = a(x^2+1) + b(x+2) + c(2x^2+x+4), a, b, c in R
= ax^2 + a + bx + 2b + 2cx^2 + cx + 4c
= (a+2c)x^2 + (b+c)x + (a+2b+4c)
so a+2c = 1, b+c = -1, a+2b+4c = -1
[ 1 0 2 | 1 ]
[ 0 1 1 | -1]
[ 1 2 4 | -1]
//gauss-jordan method
r3 - r1 -> r3
[ 1 0 2 | 1 ]
[ 0 1 1 | -1]
[ 0 2 2 | -2]
(1/2)r3 -> r3
[ 1 0 2 | 1 ]
[ 0 1 1 | -1]
[ 0 1 1 | -1]
r3 - r2 -> r3
[ 1 0 2 | 1 ]
[ 0 1 1 | -1]
[ 0 0 0 | 0 ]
a + 2c = 1
b + c = -1
a = 1 - 2c
b = -1 - c
.. so choose any c! (yes, it's in the span)
x^2 + x + 1 is not in the span of {x^2+1, x+2, 2x^2+x+4}
show there does not exist a, b, c in R such that
x^2 + x + 1 = a(x^2+1) + b(x+2) + c(2x^2+x+4)
= ax^2 + a + bx + 2b + 2cx^2 + cx + 4c
= (a+2c)x^2 + (b+c)x + (a+2b+4c)
a + 2c = 1
b + c = 1
a + 2b + 4c = 1
[1 0 2 | 1 ]
[0 1 1 | 1 ]
[1 2 4 | 1 ]
r3 - r1 -> r3
[1 0 2 | 1 ]
[0 1 1 | 1 ]
[0 2 2 | 0 ]
r3 - 2*r2 -> r3
[1 0 2 | 1 ]
[0 1 1 | 1 ]
[0 0 0 | -2]
0 = -2
therefore, this system has no solution
therefore, x^2 + x + 1 is not in the span
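both checks by machine (a sympy sketch of my own):

    from sympy import Matrix, linsolve, symbols

    a, b, c = symbols('a b c')
    A = Matrix([[1, 0, 2], [0, 1, 1], [1, 2, 4]])       # coefficients of x^2, x, 1
    print(linsolve((A, Matrix([1, -1, -1])), a, b, c))  # x^2 - x - 1: a family of solutions
    print(linsolve((A, Matrix([1, 1, 1])), a, b, c))    # x^2 + x + 1: EmptySet, not in the span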
a.25 .. theorem 3.2
assume S1 subset of S2
let v→ in Span(S1)
let v→ = a1x1 + a2x2 + ... + anxn
xi→ in S1
since S1 subset of S2,
xi→ in S2
because they're all in S2, all the linear combos of them are in the span of S2
a1x1 + a2x2 + .. + anxn in Span(S2)
a.26
example 3 on page 10
generating set .. can produce every vector in your vector space
Suppose S1 = {[1,0,0], [0,1,0], [0,0,1]}
Let [a,b,c] be an arbitrary vector in R3
[a,b,c] = a[1,0,0] + b[0,1,0] + c[0,0,1]
therefore S1 spans R3
s2.
Suppose S2 = {[1,0,0], [1,1,0], [1,1,1]}
let [a,b,c] be an arbitrary vector in R3
Let x1, x2, x3 in R
then
x1[1,0,0] + x2[1,1,0] + x3[1,1,1] = [a,b,c]
x1 + x2 + x3 = a
x2 + x3 = b
x3 = c
then x2 = b - c
x1 = a - b
(a-b)[1,0,0] + (b-c)[1,1,0] + c[1,1,1] = [a,b,c]
linear independence
of a set of vectors
we want a set of vectors that produces other vectors, but such that for any vector produced, there's only one way of doing it
def 4.1 .. page 12 -
a set of vectors is linearly independent if you can produce the 0 vector in only one way
if you take n vectors, there's automatically 1 way to form the 0 vector: 0x1 + 0x2 + ... + 0xn
.. independence means the only way to do this is to choose 0 for every coefficient ..
if there's only one way to get the 0 vector
then there's only one way to get any vector
that's the essence of linear independence
sorta the opposite of spanning
in spanning, if you have a lot of vectors, you can generate more vectors
but with independence, the smaller the set, the more likely it is to achieve linear independence ... because it's more likely there's just one way to get the 0 vector
so if you have for example
x1 + x2 - x3 = 0
.. non-trivial linear combination of x1, x2, x3 ... therefore, not linearly independent
need: 0 has only one way to be represented
.. that implies that all vectors have only one way to be represented
for wed .. get through problems at the end of sec. 4
-------
Wed 02/04/08
test next friday .. last two sets due at that point .. problems 4 and 5
generating a vector space .. if you can take a couple of vectors and generate the whole space
linear independence
dependent: existence of a non-trivial linear combination of vectors that gives the zero vector
A.27 -
prove that
{[1,0,2], [-1,3,0], [0,1,1]}
is linearly independent
assume a[1,0,2] + b[-1,3,0] + c[0,1,1] = [0,0,0]
a - b = 0
3b + c = 0
2a + c = 0
when you look at a homogeneous system, there's always one solution .. make the numbers all 0
[1 -1 0 | 0]
[0 3 1 | 0]
[2 0 1 | 0]
1. get a 1 in the upper left hand corner .. or, if it's 0, switch that row with one below it
2. get 0's in the rest of the first column
-2r1 + r3 -> r3
[1 -1 0 | 0]
[0 3 1 | 0]
[0 2 1 | 0]
3. move to column 2 .. try to get a 1 in 2,2
so, multiply row 2 by 1/3
[1 -1 0 | 0]
[0 1 1/3 | 0]
[0 2 1 | 0]
4. get a zero where the -1 is ..
r2+r1->r1
[1 0 1/3 | 0]
[0 1 1/3 | 0]
[0 2 1 | 0]
get rid of 2 in bottom of column 2
-2r2+r3->r3
[1 0 1/3 | 0]
[0 1 1/3 | 0]
[0 0 1/3 | 0]
next, get a 1 where I need to get it ..
3*r3 -> r3
[1 0 1/3 | 0]
[0 1 1/3 | 0]
[0 0 1 | 0]
get rid of 1/3
constants times the 3rd row to clean up the other rows
-1/3*r3 + r1 -> r1
[1 0 0 | 0]
[0 1 1/3 | 0]
[0 0 1 | 0]
... (same trick clears the 1/3 in row 2)
a = 0
b = 0
c = 0
so, by row reduction, the only solution is a = b = c = 0!
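the same check by machine (a sympy sketch of my own): put the vectors in the columns and row-reduce.

    from sympy import Matrix

    M = Matrix([[1, -1, 0],
                [0,  3, 1],
                [2,  0, 1]])
    print(M.rref())  # reduces to the identity -> a = b = c = 0 only
    print(M.rank())  # 3, so the set is linearly independent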
switching rows -- needed only occasionally .. if you're stuck in a certain row, switch with a row below it
a.29 Prove theorem 4.1 (a set containing 0→ is linearly dependent)
Assume S in R2
S = {(0,0), (x,y)}
Let c1, c2 be scalars
Set c1(0,0) + c2(x,y) = (0,0), ie 0→
If c2 = 0, then 0(x,y) = (0x, 0y) = 0→ by thm 1.3
then c1(0,0) + 0→ = 0→
and c1(0,0) = (0,0) by thm 1.4
c1 = any non-zero scalar works
therefore, we have a non-trivial combination: 0→ + 0→ = 0→
problems with this proof: .. the theorem is true in any vector space
so we can't use number pairs
we assumed that the set had two vectors .. we were only told about 0→
a field has to contain two elements .. 0 and 1 .. and they can't be equal
so use the coefficient 1: 1 * 0→ = 0→
and 1 ≠ 0 because of the axioms for a field .. so that's a non-trivial combination
a.30 - prove theorem 4.2 on page 12
If S = {x→} and x→ ≠ 0→, then S is linearly independent.
Suppose S = {x→} and x→ ≠ 0→
Let ax→ = 0→ // there's only one vector, so this is the general linear combination
Since x→ ≠ 0→, the only way ax→ = 0→ is if a = 0 (Thm 1.7)
Since a = 0 is forced, S is linearly independent
Theorem 4.3
if S = {x1→, x2→, .. xn→}
is linearly dependent, then one of the xi→'s is a linear combination of the other xi→'s
ie, show (say) x1 = b2x2 + ... + bnxn
a1x1 + a2x2 + ... + anxn = 0, not all ai zero
wlog assume a1 ≠ 0
a1x1 = -(a2x2 + .. + anxn)
multiply by a1^-1, which exists because a1 is not 0
x1 = -a1^-1(a2x2 + .. + anxn)
x1 = (-a1^-1 a2)x2 + ....
-> gotta make it look like a linear combo, using a bunch of axioms
theorem 4.4
Let S = {x1, x2, ... xn}
If some xi→ is a linear combination
of the other xi's, then S is linearly dep.
WLOG x1→ = a2x2 + a3x3 + .. + anxn
x1→ + (-x1→) = a2x2 + a3x3 + .. + anxn + (-x1)
0→ = a2x2 + a3x3 + .. + anxn - x1
the coefficient -1 on x1 is not equal to zero!
so we have a non-trivial linear combination on the right side
linear independence is generally a property of smaller sets
Theorem 4.5
(logically eq to 4.6)
Let S1 subset of S2 subset of V
If S2 is indep, then S1 is linearly indep.
proof
Let S1 subset of S2 subset of V
suppose S2 is linearly independent
Assume a1x1 + a2x2 + ... + anxn = 0→
with the xi's in S1
since S1 subset of S2
the xi's are in S2
since S2 is lin indep
all of the ai's in a1x1 + ... + anxn are 0
therefore, S1 is linearly indep.
Theorem 4.6
Let S1 subset of S2 subset of V
If S1 is linearly dep, then S2 is linearly dependent
assume S1 subset of S2 subset of V
and S1 is lin dep.
Let a1x1 + a2x2 + .. + anxn
= 0→
xi's in S1
and at least one of the ai's is non-zero
since S1 subset of S2, the xi's are all in S2
so therefore, a1x1 + .. + anxn = 0
therefore, S2 is linearly dep ..
because there exists a non-trivial linear combination from S2 that equals the zero vector
for friday -- through in-class problems at end of section 5.
section 5, generally speaking a vector space has a basis
have to start with a vector space that can be generated with a finite set of vectors
.. use the set to create a basis for it
one way to take a generating set and make it a basis is to pull out vectors that may be preventing it from being independent
other way: start with nothing and pull in vectors one at a time .. ..
constructed so that it becomes generating and linearly independent
for friday .. homework through section 4!
-------
Fri 02/06/08
Chapter 5 ..
Theorem 5.1
B = {x1, .. xn} is a basis iff every
v→ in V can be represented as a linear comb of vectors from B, and the representation is unique
(=>)
assume B is a basis
Since B generates V, there exist ai's st v→ = a1x1 + .. + anxn where the ai's are in F
since B is linearly independent, the representation is unique:
let
v = a1x1 + ... + anxn
= b1x1 + .. + bnxn
(a1x1 + .. + anxn) - (b1x1 + .. + bnxn) = 0→
(a1-b1)x1 + .. + (an-bn)xn = 0→
therefore ai = bi
because of linear independence .. the coefficients (ai - bi) have to be 0's
v's representation is unique
other direction
assume that for every v in V ....
since every vector v in V can be represented as a linear combination of vectors from B
.. this says that B generates V
Assume a1x1 + .. + anxn = 0→ = 0x1 + .. + 0xn
but we have an assumption that every vector has a unique representation
so the only way you can have those two representations is if they're the same
so all the ai's are equal to zero!
therefore B is a basis
lemma 5.2
- if S1 subset of Span(S2), then Span(S1) subset of Span(S2)
Proof -
-----
subset inclusion
Let v in Span(S1)
Then v = a1x1 + a2x2 + a3x3 + .. + anxn, xi in S1
Since S1 subset of Span(S2)
xi = bi1y1 + .. + bimym
where the yj's are in S2
v→ = a1(b11y1 + .. + b1mym) + a2(b21y1 + .. + b2mym) + .. + an(bn1y1 + .. + bnmym)
= (a1b11 + a2b21 + .. + anbn1)y1 + ... + (a1b1m + .. + anbnm)ym
now v is a linear combination of the y vectors, but the y vectors come from S2
therefore v in Span(S2).
So we've proved it!
{x1 .. xn} generates V
Suppose xn in Span{x1 .. xn-1}
throw it away
{x1 .. x(n-1)}
.. still generates V
repeat this step .. is there a vector in that set that can be written as a linear combination of the others?
that's an algorithm with a limited number of steps . could not possibly take more than n steps .. at each step you either stop or pull out a vector
so some subset of x1 .. xn would then be a basis
alternate approach .. create a basis by starting with nothing and then building it up
{x1 ... xn} linearly indep
assume
x not in Span{x1 .. xn}
Prove .. that if you merge x into that set .. {x, x1, x2, .. xn} .. this set is linearly indep.
Let ax + a1x1 + a2x2 + .. + anxn = 0
Then a = 0:
If a ≠ 0, x = -a^-1(a1x1 + ...), which would say x is in the span
contradicting the idea that x is not in Span{x1 .. xn}
So a = 0
so the equation is really
a1x1 + ... + anxn = 0
now, all those coefficients are 0 because of linear independence
so we accomplished what we wanted to do
starting with nothing .. creating a new set out of nothing
{x1}
{x1, x2}
can I find something that's not in the span of this set? then
I throw it in!
if I stop at that point, I still have a generating set, and it will be linearly indep because of this lemma we just proved
we have to stop sometime .. we can only add in n vectors at most
if you can get your hands on a finite set of vectors that generates V, then you can get a basis
the second half of the chapter .. any two bases have the same number of elements
section 6.
if you can't find a finite set of vectors that can generate the vector space .. then it's an infinite-dimensional vector space
there is a way in modern math to prove the existence of something w/o actually producing it
zorn's lemma .. axiom of choice
axiom of choice .. there exists a function that lets you take an item out of every set of a family of sets
partial order relation
example: subsets of {a,b,c} ordered by inclusion:
{a,b,c}
{a,b} {a,c} {b,c}
{a} {b} {c}
{}
another example: {1, 2, 3, 4, 5, 6, 7, 9, 10} ordered by divisibility .. 1 at the bottom; 2, 3, 5, 7 above it; 4, 6, 9, 10 above those (each number sits above its divisors)
maximal .. stuff at the top
if you follow a path up, at some point you have to stop
if you have finite sets and you ask the question "are there elements at the top", the answer is yes
but what if your collection is infinite?
.. then you might not have a top
what zorn's lemma says -- even if you have an infinite set .. and a partial ordering
if you have one other property, you can guarantee there are things at the top
every inductive partially ordered set has a maximal element
what do we mean by inductive?
follow lines in the diagram
1 2 4 8 .. chains
if you lift them out, they're all strings (totally ordered)
what an inductive ordered set would say is that every chain has an upper bound
1 2 4 8
8 is an upper bound of that chain, and so is anything above 8
so if every chain in the structure has something above it, then there's at least one thing at the top
monday .. will finish in-class problems through chap 6.
-------
Mon 02/09/08
chapter 5.
looked at two different proofs of theorem 5.5
.. guarantees a basis for a finitely generated vector space
two approaches
1. sift out from a generating set
2. build up from nothing
regardless of how many different bases there are, they'll all have the same number of elements .. that common number is what we define to be the dimension.
.. number of vectors in any basis
theorem 5.7 replacement theorem
vector space V that's finitely generated
basis B consists of n vectors
we don't know yet whether another basis might have more than n
we just have one basis with n elements
then we have a set S over here
has vectors y1 .. ym in it
and m ≤ n
S is linearly independent
conclusion .. we can augment S with n-m vectors from the first set
by doing that, we'll be building S up to n vectors
there exists a set S1 subset of B st S1 contains n-m vectors so that S u S1, this new set, will generate V .. doesn't say it's a basis .. just says it will generate
the strategy is induction on m .. the number of elements in S
we want to show that this is true for m = 0, 1, 2, .., n
because m ≤ n
finite induction .. only for m values 0 ... n
usually we try to prove things for all m's .. so not doing that here
sorta borrowing the induction idea but only for finitely many values of m
to borrow the idea of induction, the proof needs two parts
1. Show that the theorem is true for m=0
2. Assume true for m < n and then prove that it's true for m+1
m=0 case
.. S = empty set
Let S1 = B. #elements in S1 = n = n-0
S1 u S = B, which generates V
now take an S that has m+1 vectors
hypothesis .. anytime I have a set of m vectors that is lin indep, I can throw in n-m vectors from B and get a generating set
proof.
peel off the (m+1)st element
look at the set {y1 .. ym} ... that's still linearly independent .. because it's a subset of a linearly indep set
IH .. allows me to take the set of m elements and add on n-m vectors from B
so
{y1 .. ym} u {x1 .. x[n-m]} is a generating set.
we want to have one more y and one less x!
this is a generating set, so it must generate y[m+1] = b1y1 + .. + bmym + a1x1 + .. + a[n-m]x[n-m]
we don't know which coefficients are zero and which are non-zero
want to say that at least one of the a coefficients is not zero
if we assume the contrary, then S is linearly dependent (y[m+1] would be a combination of the other y's)
therefore, at least one of the a coefficients has to be non-zero
let's assume that a1 is non-zero
that means I can solve for x1 in this equation
can write x1 as a linear combination of y1 .. ym+1 and x2 .. x[n-m]
x1 in Span{y1 .. ym+1, x2 .. x[n-m]}
also, y1 .. ym, x2 .. x[n-m] are all in Span{y1 .. ym+1, x2 .. x[n-m]}
so {y1 .. ym, x1 .. x[n-m]} is a subset of Span{y1 .. ym+1, x2 .. x[n-m]}
therefore (lemma 5.2)
Span{y1 .. ym, x1 .. x[n-m]} is a subset of Span{y1 .. ym+1, x2 .. x[n-m]}
Span{y1 .. ym, x1 .. x[n-m]} is V
therefore {y1 .. ym+1, x2 .. x[n-m]} is a generating set for V
.. and that completes the proof of the replacement theorem
this nice replacement theorem gets us at the very heart of what we mean by dimension.
Corollary 5.8
// you can really short-circuit the process of determining linear indep and generating .. if you already know that there's a basis with n elements and your set has n elements, then that set is linearly independent iff it generates.
5.8 Proof. Let B be a basis for V with n vectors
Let S be a linearly independent set of n vectors; show S is a basis for V
(ie, proof that linear independence of a set of n vectors guarantees that it'll generate the vector space)
it follows, from 5.7, that there is a subset S1 of B
containing n-n = 0 elements
st S u S1 generates V
thus, S1 = empty set .. so S itself generates V
since S is also linearly independent, S is a basis for V
Corollary 5.9
if you have more vectors to work with than the dimension, then they're linearly dependant
so assume we have a set S with more than n vectors in it and lopp off a set of n vectors
S has M vectors and m> n
dvidie into n and m-n
and m-n >0
let the set with n vectors become S1
there are two cases that could occur
one possibility is that S1 is linearly dependant
then, since S1 is a subst of S, S is linearly dependant
Suppose S1 is linearly indpendant. by correleary 5.8, s1 is a basis for V
let's take a vector y from S - S1
we know that y exists becaase m-n is creater than 0, so there has to be at least one vector y taht we can pull out of the set
I must be able to generate, becuase this is a generating set
y is a linear combination of vectors in S1, since S1 generates V
then, S1 u {y} is linearly dependant
S1 u {y} is a subst of S
therefore S is linearly dependant
in either case, the set S is linearly dependant
corollary 5.10 If V has a basis consisting of n vectors, then any basis for V contains n vectors.
Let B1 and B2 be bases of V
Let B1 have n vectors
and B2 have m vectors
Treating B1 as the basis:
then n ≥ m, otherwise B2 would be linearly dependent (cor 5.9)
Similarly,
treating B2 as the basis:
then m ≥ n, otherwise B1 would be linearly dependent
therefore, n = m.
corollary 5.11
Suppose that V has a basis consisting of n vectors
and S is a set of n vectors that generates V
Show that S is a basis for V
S is a finite set, since it has n vectors
also, S is a finite set that generates V
apply theorem 5.5
there exists B, a subset of S, st B is a basis
since B is a basis of V, B contains n vectors .. corollary 5.10
therefore B = S
so S must be a basis as well.
"example" proof.
show that the dimiesnion of Pn(R) is n+1 by showing that a basis of P(R)
Show x^n, x^n-1 ..1 .. x1 generates Pn(R)
an x^n + an-1 x^n-1 .. + a0
= an(x^n) + an-1 (x^n-1) .. + an(0)
Show lin ind
Let anx^n + .. ao(1) = 0x^n + 0 x^n-1 + .. + 0
therefore, we have linear ind
therefore x^n, x^n-1, ... 1 is a basis
therefore, the dimension of Pn(R) is n+1
(show that P(R) is NOT finitely generated)
observe that scalar mult does not change the degree and that addition gives you a polynomial no higher than the highest degree of the two
from that we can say that any linear combination has degree ≤ the maximum degree of the polynomials used
so for a linear combination of a set of polynomials, the degree can't be any larger than the degrees used
Let B be composed of finitely many polynomials
suppose B is a generating set
say they have degrees .. at most k
this set of polynomials can only generate polynomials up to degree k
but x^(k+1) is in P(R) .. Therefore, B is not a generating set of the polynomials
therefore, no finite set generates P(R)
therefore, P(R) is infinite-dimensional
wednesday:
in-class problems
theorem 6.1, by proving 6.2.
empty set .. linearly independent
singleton sets {x}, x ≠ 0, are also linearly independent
partial order relation using subset inclusion
maximal linearly independent set .. can't add anything w/o destroying the linear independence
might have time for 4 problems in chapter 7 .. through a51.
tomorrow: homework: chapters 5 and 6.
partial order
reflexive
antisymmetric
and transitive
-------
Wed 02/11/08
V = W1 + W2
dim V = dim W1 + dim W2 - dim(W1 ∩ W2)
often: 3 = 2 + 2 - 1
test:
theory -- stuff that we've done either in homework or in class or in the notes themselves
axioms .. don't need numbers .. just identify them somehow
don't memorize theorems and lemmas in "book"
.. go over homework and in-class
section 5.
A.#?
Let V be finite dimensional. Let W be a subspace of V
Let dim(V) = n
therefore, there's a basis B = {y1, y2 .. yn}
Since W is a subspace of V, W is a subset of V
select vectors {x1, ... xk} from W st
{x1 ... xk} is linearly independent .. lemma 5.6
and keep going until {x1 .. xk} u {x} is linearly dependent for any further x from W
(this has to stop by k = n, since more than n vectors in V are dependent .. cor 5.9)
so W is finite-dimensional
Since {x1 .. xk} u {x} is linearly dependent for every x in W,
{x1 .. xk} is a basis for W
therefore
dim(W) = k ≤ n = dim(V)
you can't have a proper subspace .. that has the same dimension .. as soon as you do, it fills up the whole space.
Suppose dim V = n
dim W = n
Assume that W is a subspace of V
Let B = {x1 .. xn} be a basis for W
B subset of V and B is linearly independent, because that's the def of basis
So B is a linearly independent subset of V containing n vectors
So B is a basis for V (cor 5.8)
so now we have to just make sure that V is a subset of W
so the final part of the proof is to show that V is a subset of W
Let vector v→ in V
Then v→ = a1x1 + .. + anxn
where xi in B
Since xi in W, and W is closed under addition and scalar multiplication, a1x1 + .. + anxn must be in W
therefore, v is in W
showing that even an infinite-dimensional vector space has a basis
L = collection of all linearly independent subsets of V
because V is infinite-dimensional, there are infinitely many elements in the collection (and some of the subsets may be infinite)
partially ordered by subset inclusion
because there are infinitely many sets in the collection, chains could grow without limit
our goal is to show that there are these maximal linearly independent sets
in order to prove this, we have to prove that L has the inductive property .. then we can use Zorn's lemma to say that there are maximal elements at the top.
We need to prove that L is inductive
chain .. could look something like this
A subset ... subset B subset C
so large we can't even tell you what comes after A
.. other than to tell you that for any two things in there -- one is a subset of the other or vice versa
just like talking about the real numbers .. what comes after 1/2?
but the important thing is that this thing forms one chain .. any two elements are related by subset inclusion in one direction
want to prove that this chain has an upper bound
upper bound .. a set that contains everything in the chain and would be in L .. so linearly independent
going to take this chain (script C)
define B = union of all the sets in the chain
we can define B .. but how do I know that B is linearly independent?
if B is linearly independent .. then it belongs to the collection and it will be an upper bound of the chain
if you take an arbitrary union of a family of sets, each set in that class is a subset of the union
so B would be an upper bound if it's part of the collection
Let a1x1 + .. + anxn = 0
assume that the xi's come from B
each xi has to belong to some set Ai in the chain, because it belongs to the union
(wlog, assume the Ai's are increasing .. can always re-subscript)
A1 subset A2 subset A3 subset .. subset Am
therefore, all the xi's belong to Am
we know that Am is in script L .. therefore linearly independent
therefore all ai's = 0
L is an inductive partially ordered set . Zorn's lemma says that it has maximal elements
M .. a maximal linearly independent set .. that's a basis
we get a basis without ever exhibiting its elements .. axiom of choice conundrum
examples
1. "we didn't need zorn's lemma"
P(R) is infinite-dimensional
can be shown {1, x, x^2, .. } is a Hamel basis
an x^n + an-1 x^n-1 + ..
we can separate this ..
an(x^n) + an-1(x^n-1) + ..
therefore, the set generates P(R)
Let an(x^n) + an-1(x^n-1) + .. = 0
= 0(x^n) + 0(x^n-1) + .. + 0(1)
therefore
an = an-1 = an-2 = .. = 0
therefore linearly ind
therefore {1, x, x^2, ...} is a basis
P(R) is infinite dim
therefore the set is a Hamel basis
monday: first 9 problems on page 28
F(R,R) supposedly has a Hamel basis
.. there has to be a collection of functions such that that collection is linearly independent .. and it has to generate every function that you could possibly think of
possible approach
for r in R,
fr(x) = {1 if x=r ; 0 if x≠r}
this isn't gonna hack it .. what kind of functions could I possibly generate with these functions?
.. you can only generate functions with finitely many points "in the air" (nonzero at finitely many points)
.. this set is probably linearly independent though
the basis would probably be "very" uncountable
test! sections 1-6
-------
Mon 02/16/08
definition 7.1 part 1 .. those two pluses are completely different .. V and W could be dramatically different vector spaces with dramatically different definitions of addition
2 .. same thing .. again, the first scalar multiple is scalar multiplication in V but in aT(x) it's scalar multiplication in W
examples 1 and 2
example 1
Let V and W both be R^2
and let T((a,b)) = (2a+b, a+3b). Then T is a linear transformation
1) T((a,b)+(c,d)) = T(a+c, b+d)
= (2(a+c) + (b+d), (a+c) + 3(b+d))
= (2a+b, a+3b) + (2c+d, c+3d)
= T(a,b) + T(c,d)
therefore
T((a,b) + (c,d)) = T(a,b) + T(c,d)
2. T(c(a,b)) = T(ca, cb)
= (2ca+cb, ca+3cb) = c(2a+b, a+3b) = cT(a,b)
.. therefore, T is a linear transformation
Example 2 - let V and W both be R^2
Let T([a,b]) = [a,0]
Then T is a linear transformation
1. T((a,b) + (c,d)) = T(a+c, b+d)
= (a+c, 0)
= (a,0) + (c,0)
= T(a,b) + T(c,d)
...
2. T(c(a,b)) = T(ca, cb)
= (ca, 0)
= c(a,0)
= cT(a,b)
therefore T(c(a,b)) = cT(a,b)
example 3.
T(a,b) = (a^2, 0) .. NOT linear
(1) T(a,b) + T(c,d) = (a^2, 0) + (c^2, 0)
but on the other hand
T(a+c, b+d) = ((a+c)^2, 0) .. not the same in general
you should do a specific counterexample for this one .. just pick something out
2) 3 * T(1,0) = (3,0)
T(3(1,0)) = T(3,0) = (9,0)
.. and that's sufficient to demonstrate that the property is invalid
.. the only way to get a linear transformation from R2 to R2 is to use linear expressions
.. so we used a^2 in this case and it didn't work
example #4: T(f) = ∫ from 0 to 1 of f(x) dx
1) T(f+g) = ∫ from 0 to 1 of (f+g)(x) dx
= ∫ from 0 to 1 of [f(x) + g(x)] dx
= ∫ from 0 to 1 of f(x) dx + ∫ from 0 to 1 of g(x) dx
= T(f) + T(g)
2) T(af) = ∫ from 0 to 1 of (af)(x) dx
= ∫ from 0 to 1 of a f(x) dx
= a ∫ from 0 to 1 of f(x) dx
= aT(f)
.. definite integration is a linear operation
Example 5:
Let V = F(R,R) and let x0 be a fixed real number. Let T(f) = f(x0)
1) Let f, g in V
T(f+g) = (f+g)(x0)
= f(x0) + g(x0)
= T(f) + T(g)
2) Let f in V, a in F
T(af) = (af)(x0) = a(f(x0)) = aT(f)
example 6:
Let V = D(R). Let x0 be a fixed real number. Let T(f) = f'(x0)
1) T(f+g) = (f+g)'(x0)
= f'(x0) + g'(x0)
= T(f) + T(g)
therefore T(f+g) = T(f) + T(g)
2) T(cf) = (cf)'(x0)
= c f'(x0)
= cT(f)
example 7: T(f) = f' (the derivative as a function)
1. T(f+g) = (f+g)' = f' + g'
= T(f) + T(g)
2. T(cf) = (cf)'
= c(f')
= cT(f)
now compose:
T: F(R,R) → R (evaluation at x0, from example 5)
S: D(R) → F(R,R) (differentiation, S(f) = f')
(T o S)(f) = T(S(f)) = T(f') = f'(x0)
T o S : D(R) → R:
(T o S)(f) = f'(x0)
composition of linear transformations is a linear transformation
example 8:
Let V = Mnxn(R) and W = R. Let T(A) = trace(A) = A11 + A22 + .. + Ann. Then T: V→W is a linear transformation
(1) T(A+B) = (A11 + B11) + (A22 + B22) + ...
rearrange
= (A11 + A22 + A33 + ..) + (B11 + B22 + B33 + ..)
= T(A) + T(B)
could have written (A+B)11 + (A+B)22 + ..
but not really necessary
2) scalar multiplication
- T(cA) = cA11 + cA22 + .. + cAnn
= c(A11 + A22 + ..)
= cT(A)
tuesday b41-b46
problems:
end of 7 and first 2 of 8.
anything that maps to the zero vector is considered part of the nullspace
the nullspace is a subspace of V
range of a function .. the set of all vectors in W that are images of vectors in V .. a subspace of W
Dimension Theorem
dim V = dim N(T) + dim R(T)
dim(N(T)) is called the nullity
the dimension of the range is the rank
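a numeric illustration of the dimension theorem (my own example, for the map v ↦ Av that shows up later in chapter 12):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 0],
                [1, 3, 1, 1]])        # row3 = row1 + row2, so rank 2
    rank = A.rank()                   # dim of the range
    nullity = len(A.nullspace())      # dim of the nullspace
    print(rank, nullity, rank + nullity)  # 2 2 4 = dim of the domain R^4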
-------
Wed 02/18/08
a.54 (the zero transformation is linear)
1. add.
T(v1 + v2) = 0[w] .. because it maps everyone to the 0 vector
= 0[w] + 0[w]
= T(v1) + T(v2)
2. T(av→) = 0[w] = a0[w] = aT(v→)
A.56 -
Prove Theorem 7.1
Let a1, a2, .. an be in F and x1, x2, .. xn be in V
Base case, n=2:
T(a1x1 + a2x2) = T(a1x1) + T(a2x2)
= a1T(x1) + a2T(x2)
Induction hypothesis:
suppose
T(a1x1 + .. + anxn) = a1T(x1) + ... + anT(xn)
Show:
T(a1x1 + .. + an+1xn+1) = T(a1x1 + .. + anxn) + T(an+1xn+1) // by additivity
using IH,
= a1T(x1) + a2T(x2) + .. + anT(xn) + an+1T(xn+1) .. homogeneity on the last term
and that completes the proof.
Def 7.2
Nullspace .. set of vectors that map to the zero vector of W
range .. set of all images
theorem 7.2
Let T: V → W be a linear transformation
Then T(0[v]) = 0[w]
T(0[v]) = T(0[v] + 0[v])
= T(0[v]) + T(0[v])
then add -T(0[v]) to both sides
therefore 0[w] = T(0[v])
Theorem 7.3
Let T: V → W be a linear transformation
prove N(T) is a subspace of V
(1) by theorem 7.2, 0[v] in N(T)
(2) Let a, b in N(T)
T(a+b) = T(a) + T(b)
= 0[w] + 0[w] = 0[w]
therefore a+b in N(T)
(3) Suppose a in N(T), c in F
T(ca) = cT(a)
= c0[w] = 0[w]
therefore, ca is still in N(T)
therefore, N(T) is a subspace of V
Theorem 7.4 Let T: V→W be a linear transformation
Then R(T) is a subspace of W
(1) by theorum 7.2,
T(0v) = 0w
therefore 0w in R(T)
(2) Let w1, w2 in R(T)
there exists v1, v2 in V st T(v1) = w1 and T(v2) = w2
T(v1+v2) = T(v1) + T(v2) since T is a linear trans
= w1 + w2, so w1 + w2 in R(T)
3. Let w be in R(T)
and c in F
Let v in V
where T(v) = w
cw = cT(v) = T(cv)
therefore, cw in R(T)
Theorem 7.5
Let T: V→W be a linear transformation
Then T is 1-1 iff
N(T) = {0[v]}
(=>)
Suppose T is 1-1
Show that N(T) = {0[v]}
by Theorem 7.2, {0[v]} subset of N(T)
Let x in N(T)
Then T(x) = 0[w] = T(0[v])
x = 0[v] since T is 1-1
therefore N(T) subset of {0[v]}
therefore N(T) = {0[v]}
(<==)
Suppose N(T) = {0[v]}
show that T is 1-1
Let x in V and a in F where a=-1
therefore, sincee T is a linear transformation,
T(-1x) = -1T(x)
therefore T(-x) = -T(x)
Let x1, x2 in V
assume
T(x1) = T(x2)
show taht x1 = x2
T(x1) - T(x2) = 0
T(x1) + T(-x2) = 0
T(x1-x2) = 0
therefore x1-x2 = 0 .. using the assumptiin that N(t) = {0}
therefore x1 = x2
therefore T is 1-1
Theorem 7.6
Define T by T(xi) = yi for all i = 1 .. n, extended by linearity:
for v = a1x1 + ... + anxn,
T(v) = T(a1x1 + ... + anxn)
= T(a1x1) + ... + T(anxn)
= a1T(x1) + .. + anT(xn)
= a1y1 + ... + anyn
.. is it really linear?
(Show additive prop)
Let v1, v2 in V with
v1 = a1x1 + ... + anxn
v2 = b1x1 + .. + bnxn
T(v1+v2) = T(a1x1 + .. + anxn + b1x1 + ... + bnxn)
= T((a1+b1)x1 + .. + (an+bn)xn)
= (a1+b1)y1 + .. + (an+bn)yn
by definition of T from before
= a1y1 + .. + anyn + b1y1 + ... + bnyn
= T(v1) + T(v2)
.. additive property
(show homog)
T(cv1) = T(ca1x1 + ... + canxn)
= ca1y1 + .. + canyn
= c(a1y1 + .. + anyn)
= cT(v1)
homogeneity
Corollary 7.7
Assume T(xi) = U(xi) for every i = 1, 2, .. n
So T(v) = T(a1x1 + .. + anxn)
= T(a1x1) + .. + T(anxn)
= a1T(x1) + .. + anT(xn)
= a1U(x1) + .. + anU(xn)
= U(a1x1) + .. + U(anxn)
= U(a1x1 + .. + anxn)
= U(v)
therefore,
T(v) = U(v)
T = U
Theorem 8.1
- Let T: V → W be a linear transformation
Let B = {x1, .. xn} be a basis for V
Show T(x1), T(x2) .. T(xn) generates R(T)
Let w in R(T) be arbitrary
therefore there exists v in V such that T(v) = w
v = a1x1 + .. + anxn
take
T(v) = T(a1x1 + .. + anxn)
= a1T(x1) + .. + anT(xn)
Since w = a1T(x1) + .. + anT(xn), and w was arbitrary in R(T)
therefore {T(x1), T(x2), .. T(xn)} generates R(T) .. because any w can be written as a linear combination
Corollary
- rank(T) ≤ dim(V)
..
{T(x1) .. T(xn)} is a generating set of R(T)
by theorem 5.5
.. some subset is a basis, of some size k .. which is ≤ n
therefore dim(R(T)) = k, dim(V) = n
since k ≤ n, rank(T) ≤ dim(V)
going to a70
-------
Fri 02/20/08
V and W
nullspace as a circle inside V .. maps to zero
claim: T(x[k+1]) .. T(xn) generates R(T) .. that's what I want to prove
.. I have to take an arbitrary element from R(T) and show that I can write it as a linear combination
Let w be in R(T)
Let v in V be such that T(v→) = w→
Let v→ = a1x1 + .. + anxn, because {x1 .. xn} is a basis
w→ = T(v→) = T(a1x1 + .. + anxn)
= a1T(x1) + .. + akT(xk) + ak+1T(xk+1) + .. + anT(xn)
x1 .. xk come from the nullspace, so T(x1) = .. = T(xk) = 0
= a1·0 + .. + ak·0 + ak+1T(x[k+1]) + .. + anT(x[n])
= ak+1T(x[k+1]) + .. + a[n]T(x[n])
started with w→, an arbitrary vector in the range, and I wrote it as a linear combination of T(x[k+1]) .. T(xn)
therefore, that set generates R(T)
Dimension Theorem
nullity(T) + rank(T) = dim(V)
dim N(T) + dim R(T) = dim(V)
even though R(T) lives in a different space
N(T) is a subspace of a finite-dimensional space, so it has some dimension
so N(T) has dim k, which is ≤ n, the dimension of V
B = {x1 .. xk} is a basis for N(T)
extend with C = {x[k+1] .. xn} so that
B u C is a basis for V
the question now boils down to this: we already know that the nullspace has dimension k .. we have to be convinced that the dimension of the range is n-k
consider the following set
{T(x[k+1]), .., T(xn)} subset of R(T)
by theorem 8.3,
{T(x[k+1]), .., T(xn)}
generates R(T)
By theorem 8.4, {T(x[k+1]), .., T(xn)}
is linearly independent
there should be n-k vectors in there, but there is the remote possibility that two of the listed vectors are the same
but the vectors automatically have to be distinct if the set is linearly independent
here's why
if you had two equal vectors, you could form the non-trivial linear combination
1·T(xi) - 1·T(xj) = 0
so dim R(T) = n-k, and k + (n-k) = n, the dimension of V
trivial transformation
takes everything in V and maps it to the 0 vector in W
so N(T) = V
so dim(N(T)) = n
the rank(T) = dim(R(T)) = dim({0}) .. the empty set constitutes a basis for {0} because its span is {0}
so the rank is 0
and n + 0 = n
Theorem 8.6 . .
if V and W are finite-dimensional vector spaces with dim(V) < dim(W), and if T: V→W is a linear transformation, then T is not onto.
by the corollary to the dim theorem, rank(T) ≤ dim(V)
rank(T) ≤ dim(V) < dim(W)
by definition of onto, T onto would mean
rank(T) = dim(W) .. everything is in the range
but rank(T) < dim(W), so rank(T) ≠ dim(W)
therefore, not onto
theorem 8.7
if V and W are finite-dimensional vector spaces with dim(V) > dim(W), and if T: V→W is a linear transformation, then T is NOT one-to-one
Assume dim(V) > dim(W) and T is a linear transformation.
proof by contradiction
Suppose T is 1-1
Then N(T) = {0[v]} and nullity = 0
By dim Theorem, dim(V) = nullity(T) + rank(T)
therefore dim(V) = 0 + rank(T)
since R(T) is a subspace of W, rank(T) ≤ dim(W)
dim(V) = rank(T)
...
dim(V) ≤ dim(W)
→←  (contradiction)
therefore, T is not 1-1
theorem 8.8
Let V, W be finite-dimensional
dim(V) = dim(W)
show T is 1-1 iff T is onto
T: V → W
(=>) Assume T is 1-1
N(T) = {0}
so the nullity = 0
by theorem 7.5
therefore 0 + rank(T) = dim(V) = dim(W)
therefore, dim(R(T)) = dim(W)
therefore, R(T) = W. the fact that two vector spaces have the same dimension is not enough to conclude they're the same space, but we know that R(T) is a subspace of W, and we do have a theorem that says you cannot have a subspace of a vector space with the same dimension without it being equal
5.13 on page 19
that gets us halfway!
(<==)
assume onto
therefore R(T) = W
therefore dim(R(T)) = dim(W)
because of our given,
rank(T) = dim(V)
therefore, nullity(T) = 0, by the dimension theorem
therefore, N(T) = {0}
by 7.5, therefore, 1-1.
section 9.
no matter how abstract a vector space you have, you can sorta equate it with matrices
and then you can also take linear transformations and each can be equated with a matrix, and then applying a lin trans to a vector becomes multiplying matrices
and what that will do for us is allow us to take any abstract finite-dimensional problem and translate it into the language of matrices
the way we do this:
ordered basis
{x1, x2 .. xn}
.. order matters here
if we change the subscripting we get a different ordered basis
once that's determined . we have a specific first vector and so on
then
any vector v = a1x1 + .. + anxn
define a column matrix of those numbers:
[a1]
[a2]
[..]
[an]
an n x 1 matrix
= [v→]β
if I change the basis, that changes the linear combination for v and therefore I have a different matrix.
example: [11, 4]
if you use the standard basis .. i and j,
then clearly you get 11[1,0] + 4[0,1]
if β = {[1,0], [0,1]}
then [11,4]β =
[11]
[ 4]
what if we change the basis?
β = {[1,2], [4,-1]}
[11,4]β = ?
how can I produce [11,4] with a linear combo of [1,2], [4,-1]?
the actual combo requires a 3 and a 2: 3[1,2] + 2[4,-1] = [11,4]
so that's what goes in the matrix
[3]
[2]
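finding those coordinates by machine (a numpy sketch of my own): put the basis vectors in the columns of B, and [v]β solves B c = v.

    import numpy as np

    B = np.array([[1, 4],
                  [2, -1]], dtype=float)   # basis vectors as columns
    v = np.array([11, 4], dtype=float)
    print(np.linalg.solve(B, v))           # [3. 2.] -> [v]_beta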
some properties that are true.
theorem 9.1: let V be n-dimensional with
ordered basis β = {x1 .. xn}. Let v1 and v2 be in V
and let c in F
A) [v1 + v2]β = [v1]β + [v2]β
Let v1 = a1x1 + a2x2 + .. + anxn
Let v2 = b1x1 + b2x2 + .. + bnxn
v1 + v2 = (a1+b1)x1 + .. + (an+bn)xn
so [v1 + v2]β is the column of (ai+bi)'s = [v1]β + [v2]β
b) [cv1]β =
[ca1]
[ca2]
[...]
[can]
= c times the column of ai's
= c[v1]β
finish 36
script L .. all linear transformations from V to W ..
it itself is a vector space.
sum of two linear transformations .. is it linear?
scalar mult -- is it linear?
yes ..
and then see if all the axioms for a vector space are true
get through 9.
for Tuesday: problems at end of chapter 8.
-------
Mon 02/23/09
march 11th .. 2nd test
for friday 9 and 10
theorem 9.2a
if T and U are linear, then T+U is linear.
Proof:
Suppose T:V->W and U:V→W are linear
from definition 9.4
Let T + U : V→W be defined by (T+U)(v→) = T(v→) + U(v→)
Let x→, y→ in V
then (T+U)(x→ + y→) = T(x→+y→) + U (x+y)
since T and U are both linear, they have the additive property
= T(x) + T(y) + U(x) + U(y)
= T(x) + U(X) + T(y) + U(y)
= (T+U)(x) + (T+U)(y)
therefore, additive
homogeneity
let x in V
and a in F
(T+U)(ax) = T(ax) + U(ax)
= aT(x) + aU(x)
= a(T(x) + U(x))
= a(T+U)(x)
therefore, homogeneity property satisfied
therefore T + U is linear
Thm 9.2(b)
If T is linear, then cT is linear
Let v1, v2 in V
(cT)(v1+v2) = c(T(v1+v2))
= c(T(v1) + T(v2))
= cT(v1) + cT(v2)
= (cT)(v1) + (cT)(v2)  def 9.4
additive property
a in F, v in V
(cT)(av) = c[T(av)]
= c[aT(v)]
= a[cT(v)]
= a(cT)(v)
homogeneity
now the axioms for L(V,W).
Axiom 1:
T + U = U + T
Let v in V
Then (T+U)(v) = T(v) + U(v) by def 9.4
= U(v) + T(v) by axiom 1 for W
= (U+T)(v)
since v is arbitrary
T+U = U+T
axiom 2
(S+T) + U = S + (T+U)
Let v in V
[(S+T) + U](v) = (S+T)(v) + U(v) by 9.4
= (S(v) + T(v)) + U(v) by 9.4 again
= S(v) + (T(v) + U(v)) because of axiom 2 in W
= S(v) + (T+U)(v) by 9.4
by 9.4 again
= (S + (T+U))(v)
therefore (S+T) + U = S + (T+U)
Axiom 3.
Let T[0] : V→W be defined by T[0](v) = 0w→
By example 9 on page 25, T[0] in L(V,W).
Show for every U in L(V,W) that U + T[0] = U
Let v in V
(U + T[0])(v) = U(v) + T[0](v)
= U(v) + 0w
= U(v) by definition of 0 vector
therefore, U + T[0] = U
axiom 4.
negative vector.
Let T in L(V,W)
Let -T be defined by (-T)(v) = -(T(v))
prove that -T is linear
(-T)(v1 + v2) = -(T(v1) + T(v2))
= -T(v1) + -T(v2)
.. but we didn't prove that
-(x + y) = -x + -y
so prove it: (x+y) + (-x + -y)
= (x + -x) + (y + -y)
= 0 + 0
= 0
boom .. Thorsen's rule: since -x + -y acts like the inverse of x+y, it IS -(x+y)
back to main proof
= (-T)(v1) + (-T)(v2)
.. therefore additive
(-T)(av) = -[T(av)]
= -(aT(v))
= a[-T(v)]
= a(-T)(v)
.. and that is the homogeneity property for -T
therefore, -T in L(V,W)
now show axiom 4:
show that
T + (-T) = T[0]
[T+(-T)](v) = T(v) + (-T)(v)
= T(v) + -[T(v)]
= 0[w]
= T[0](v)
so the two are equal.
axioms 7 and 8 with Nate.
axiom 7.
T, U in L(V,W)
a in F, v in V
show that a(T+U) = aT + aU
ie show [a(T+U)](v) = [aT + aU](v)
[a(T+U)](v)
= a[(T+U)(v)] by 9.4
= a[T(v) + U(v)] by 9.4
= aT(v) + aU(v) by axiom 7 in W
= (aT)(v) + (aU)(v) by 9.4
= (aT + aU)(v) by 9.4
satisfies axiom 7.
Axiom 8
-------
a, b in F, v in V
show (a+b)T = aT + bT
show [(a+b)T](v) = [aT + bT](v)
[(a+b)T](v) = (a+b)[T(v)] by 9.4
= a[T(v)] + b[T(v)] by axiom 8 in W
= (aT)(v) + (bT)(v) by 9.4 again
= [aT + bT](v) by 9.4
satisfies axiom 8.
therefore, L(V,W) is a vector space.
------
T: V → W
β = {x1, ... xn} an ordered basis of V
in W, γ = {y1, .., ym}
[T]β to γ
= m x n matrix
built column by column
here's how you get column 1
take the first element of β and apply T to it
T(x1) ..
in W, any vector is a linear combination of the y vectors
T(x1) = a11y1 + a21y2 + .. + am1ym
first column:
[a11]
[a21]
[...]
[am1]
T(x2) = a12y1 + a22y2 + .. + am2ym
so second column:
[a12]
[a22]
[...]
[am2]
do that for each xi
so when you're all done, you'll have an m x n matrix
T(xn) = a1ny1 + .. + amnym
last column:
[a1n]
[a2n]
[...]
[amn]
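a concrete build of such a matrix, column by column (my own example: T = d/dx from P3(R) to P2(R), β = {1, x, x^2, x^3}, γ = {1, x, x^2}):

    from sympy import Matrix, Poly, diff, symbols

    x = symbols('x')
    beta = [1, x, x**2, x**3]
    cols = []
    for p in beta:                                    # column j = coordinates of T(x_j) in gamma
        c = Poly(diff(p, x), x).all_coeffs()[::-1]    # ascending coefficients
        cols.append(c + [0] * (3 - len(c)))           # pad to the length of gamma
    print(Matrix(cols).T)   # [[0,1,0,0],[0,0,2,0],[0,0,0,3]], a 3x4 matrix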
Theorem 9.4a.
[T+U]β to γ = [T]β to γ + [U]β to γ
suppose [T]β to γ = (aij)
=
[a11 a12 a13 ..]
and [U]β to γ = (bij)
= [b11 b12 ...]
then T(xj) = a1jy1 + a2jy2 + .. + amjym
and
U(xj) = b1jy1 + b2jy2 + .. + bmjym
then (T+U)(xj)
= T(xj) + U(xj)
= (a1jy1 + a2jy2 + ..) + (b1jy1 + b2jy2 + ..)
= (a1j + b1j)y1 + (a2j + b2j)y2 + .. + (amj + bmj)ym
so the jth column of [T+U]β to γ is
[a1j + b1j]
[a2j + b2j]
[.........]
[amj + bmj]
which splits into two columns
.. by definition of matrix addition
= (aij) + (bij)
= [T]β to γ + [U]β to γ
holds for every jth column
for wednesday .. be prepared through end of 11.
-------
Wed 02/25/08
Theorem 9.4b
recall that the jth column of [T]β to γ
=
[a1j]
[a2j]
[...]
[amj]
where these numbers come from the following expression
T(xj) = a1jy1 + a2jy2 + .. + amjym
(cT)(xj) = c * T(xj) = c(a1jy1 + .. + amjym) = ca1jy1 + .. + ca[m][j]y[m]
so the jth column of [cT]β to γ
= [ca[1][j]]
[ca[2][j]]
[...]
[ca[m][j]]
= c times the column a1j .. amj
therefore, the jth column of [cT]β to γ = c * (jth column of [T]β to γ)
but this argument does not specify which column j I'm looking at .. it's true for every j
therefore [cT]β to γ = c[T]β to γ
Chapter 10.
composition of linear transformations
10.1 standard def of composition -
does it turn out that if the two functions you compose are linear .. you get linear?
yes.
proof
If T: V→W is linear and U: W→Z is linear
prove that U o T : V→Z is linear
Additive property:
UT(x+y) = U(T(x) + T(y)) since T is linear
= U(T(x)) + U(T(y)) since U is linear
= UT(x) + UT(y) .. by definition of composition
homogeneity:
- (UT)(ax) =
U(T(ax))
= U(aT(x)) since T is linear
= aU(T(x)) since U is linear
= a(UT)(x)
Thm 10.2
from set theory
a) U(T1T2) = (UT1)T2
b) I[v]T = TI[v] = T
suppose all functions are from V into V
Proof:
Let v in V
[U(T1T2)](v) = U((T1T2)(v))
= U(T1(T2(v)))
= (UT1)(T2(v))
= ((UT1)T2)(v)
.. all four steps use the definition 10.1
therefore, U(T1T2) = (UT1)T2
composition of functions is an associative operation
b) Let v in V
(Iv o T)(v) = Iv(T(v)) = T(v)
(T o Iv)(v) = T(Iv(v)) = T(v)
therefore, IvT = TIv = T
Thm 10.3(a)
U(T1 + T2) = UT1 + UT2 and (U1+U2)T = U1T + U2T
observe that both sides have the same domain.
1) by def 9.4, [U(T1 + T2)](x) = U(T1(x) + T2(x))
= U(T1(x)) + U(T2(x)) since U is linear
= (UT1)(x) + (UT2)(x) by def of composition
= (UT1 + UT2)(x)
U(T1+T2) = UT1 + UT2
2)
((U1+U2)T)(x) = (U1+U2)(T(x)) by 10.1
= U1(T(x)) + U2(T(x)) by 9.4
= (U1T)(x) + (U2T)(x)
= (U1T + U2T)(x)
therefore, (U1+U2)T = U1T + U2T
Let T: V→W and U: W→Z be linear transformations
Show a(UT) = (aU)T = U(aT)
(1) domains agree: dom(U) = W, dom(T) = V
dom[a(UT)] = dom[(aU)T] = dom[U(aT)] = V
(2) Let v in V be arbitrary
[a(UT)](v) = a[(UT)(v)]
= a[U(T(v))]
= (aU)(T(v))
= [(aU)T](v)
now show the 1st and 3rd are equal
[a(UT)](v) = a[U(T(v))]
= U(a T(v)) since U is linear
= U((aT)(v))
= [U(aT)](v)
therefore a(UT) = U(aT)
QED
definition 10.2 .. definition of matrix multiplication .. we learned that.
the reason we define it the way we do is because of theorem 10.4 .. it takes the concept of composing linear transformations and translates it into matrix multiplication
vector spaces V, W, and Z
V ---T---> W ---U---> Z
α .. alpha is the basis for V, x1 through xn
β is the basis for W -- y1 through ym
V is n-dimensional, W is m-dimensional
γ is the basis for Z
Z is p-dimensional
that's the setup of the assumptions
when we look at this picture, there are essentially 3 different matrices
every transformation has a matrix representation
[T]α to β
representation of T relative to α and β
that would be m x n
.. has to have n columns .. one for each of the xi vectors
and the length of each column has to be m .. m numbers
[U]β to γ .. p x m matrix .. each vector in β used to construct the columns, and each column is length p
composition -- goes from V to Z
defined to be UT
that's a transformation from V to Z, so we could talk about the matrix representation of that
[UT]α to γ
have to build it column by column using the elements of alpha .. so there would have to be n columns
p rows
p x n matrix
the theorem says
[U]β to γ [T]α to β =
(refer to those as matrix A and B)
p x m times m x n = something p x n
= [UT]α to γ
figure out what would go in the jth column of the matrix representation of UT
UT(xj) = U(T(xj))
= U(Σ k=1 to m (Bkj yk))
let m = 2
= U(B1jy1 + B2jy2) .. a specific case just so you can see what's going on
the next step says I can slide the U inside:
= Σ k=1 to m Bkj U(yk)
=
B1jU(y1) + B2jU(y2) (specific case)
we're using the fact that U is linear
= Σ k=1 to m Bkj (Σ i=1 to p Aik zi)
suppose p = 3
= B1j(A11z1 + A21z2 + A31z3) + B2j(A12z1 + A22z2 + A32z3)
we're going to slide the Bkj in:
= A11B1j z1 + A21B1j z2 + A31B1j z3
+ A12B2j z1 + A22B2j z2 + A32B2j z3
next step lets us rearrange ..
commutativity of addition
so the next expression:
= Σ i=1 to p (Σ k=1 to m Aik Bkj) zi
ex:
(A11B1j + A12B2j)z1 + .. ..
so it's just rearranging the terms
then factor out the z's
(A11B1j + A12B2j)z1 .. and do the same thing with z2 and z3
.. and Σ k Aik Bkj is exactly the (i,j) entry of the matrix product AB
every step is justified .. and usually the justifications are relatively routine
the only thing that makes it a little scary is the compact sigma notation
so that's the main theorem of the section
you can obtain the matrix representation of a composition by taking the matrices of the individual transformations and multiplying them together.
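checking theorem 10.4 on a small example (mine; the two matrices are the derivative matrices from the earlier sketch, T: P3→P2 and U: P2→P1 in the power bases):

    import numpy as np

    T = np.array([[0, 1, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 3]])   # [T] for d/dx : P3 -> P2
    U = np.array([[0, 1, 0],
                  [0, 0, 2]])      # [U] for d/dx : P2 -> P1
    print(U @ T)  # [[0,0,2,0],[0,0,0,6]] .. the matrix of the second derivative UT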
prepare in class problems up through 12.4 -- A.90
-------
Fri 02/27/09
Theorem 10.6
look at a typical entry of A(B+C)
[A(B+C)][i][j] = Σ k=1 to n A[i][k](B+C)[k][j]
by definition of matrix multiplication
= Σ A[i][k](Bkj + Ckj) by definition of matrix addition
= Σ (AikBkj + AikCkj) by the distributive law
= Σ AikBkj + Σ AikCkj by rearranging the (ordinary) sums
= (AB)ij + (AC)ij by definition of matrix multiplication
= [AB + AC]ij by definition of matrix addition
therefore A(B+C) = AB + AC.
Lemma 11.1
Let A be an mxn matrix and I^j be the jth column of the nxn identity, an nx1 matrix
show that AI^j = A^j (the jth column of A)
A is mxn and I^j is nx1
therefore AI^j is mx1 by def of matrix multiplication.
We know A^j is mx1 too
therefore both AI^j and A^j are mx1
so we've got a shot at it.
Let 1 ≤ i ≤ m
(A^j)[i][1] = A[i][j]
by matrix multiplication - (AI^j)[i][1] = Σ k=1 to n A[i][k] I^j[k][1]
we know that I^j[k][1] = 1 if k=j, and 0 otherwise
therefore,
Σ k=1 to n A[i][k] I^j[k][1] = A[i][j] * 1 = A[i][j]
therefore (AI^j)[i][1] = A[i][j]
so (A^j)[i][1] = A[i][j] = (AI^j)[i][1]
Since 1 ≤ i ≤ m is arbitrary
therefore A^j = AI^j
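the lemma in one line of numpy (my own illustration):

    import numpy as np

    A = np.arange(12).reshape(3, 4)
    j = 2
    e_j = np.eye(4)[:, j]                   # I^j, the jth standard basis column
    print(np.allclose(A @ e_j, A[:, j]))    # True .. A I^j picks out column j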
Lemma 11.2
Let V be a vector space
and let W be an m-dimensional vector space with ordered basis γ. Suppose T: V→W is a linear transformation
Let φ : V→M[mx1](R) be defined by
φ(v) = [T(v)]γ
Proof
1 - additive
Let v1, v2 in V
φ(v1 + v2) = [T(v1 + v2)]γ by the given definition
= [T(v1) + T(v2)]γ since T is linear
= [T(v1)]γ + [T(v2)]γ by Thm 9.1a
= φ(v1) + φ(v2) by the given definition
therefore additive
Show homogeneity -
Let v in V and c in F
φ(cv) = [T(cv)]γ by the given definition
= [cT(v)]γ since
T is linear
= c[T(v)]γ by Thm 9.1b
= cφ(v) by the given definition
therefore homogeneity
therefore linear.
we've simply composed two linear transformations
Theorem 11.3
V n-dimensional with ordered basis β
suppose A is an mxn matrix
ψ: V→M[mx1](R)
ψ(v) = A[v]β
then ψ is a lin trans
show additive
suppose v, u in V
ψ(v) + ψ(u) = A[v]β + A[u]β
= A([v]β + [u]β) by 10.6
= A([v+u]β) by 9.1
= ψ(v+u)
2. homogeneity -
cψ(v) = c(A[v]β)
= A(c[v]β) by 10.7
= A([cv]β) by 9.1
= ψ(cv)
Theorem 11.4
Let V have ordered basis β and W have ordered basis γ. Let T: V→W be a linear transformation
Then [T(v)]γ = [T]β to γ [v]β
Let β = {x1, x2 .. xn}
By def 9.3 the jth column of [T]β to γ = [T(xj)]γ
[x1]β =
[1]
[0]
[0]
[.]
[.]
[0]
so [xj]β = I^j
Since v ↦ [T(v)]γ is a linear transformation by Lem 11.2
and v ↦ [T]β to γ [v]β is a linear transformation by Lemma 11.3
we use cor 7.7 to prove [T(v)]γ = [T]β to γ [v]β .. just check the two maps agree on the basis:
[T]β to γ [xj]β = [T]β to γ I^j
= ([T]β to γ)^j by Lem 11.1
= [T(xj)]γ by def 9.3
chapter 12.
L[A] : R^n → R^m
(A is an mxn matrix)
L[A](v) = Av→ in R^m
v in R^n .. v is an n-dimensional column matrix
L[A] is linear
suppose basis β for R^n and basis γ for R^m
we should be able to form
[L[A]]β to γ
any connection between these two matrices?
Show that L[A] is linear
additive -
Let x, y in R^n
L[A](x+y) = A(x+y) .. by def of L[A] // that's matrix multiplication .. not "A of"
by 10.6, = Ax + Ay
= L[A](x) + L[A](y)
Let x in R^n and c in F
L[A](cx) = A(cx) (def of L[A])
= c(Ax) by Theorem 10.7
= cL[A](x)
.. therefore homogeneity
therefore linear.
for tuesday .. problems at end of 11.
friday .. end of 13.
for in class on monday -- through A96.
-------
Mon 03/02/09
Thm 12.2
--------
assume β and γ are the standard bases for R^n and R^m
[x1]β = (column: 1 0 0 . . . 0)
each basis element has [xj]β = I^j, because β is the standard basis
So L[A](xj) = Axj = AI^j = A^j, in R^m
A^j = [A^j]γ since γ is the standard basis for R^m
Then the jth column [L[A]]^j = [A^j]γ = A^j
is true for all j
therefore
[L[A]]β to γ = A
example: [1,2,3] in R^3
then (1,2,3) has a matrix representation equal to itself:
[v]β = v .. if β happens to be the standard basis
theorem 12.3
L[A] = L[B] is saying that these transformations are equal as functions
and the right hand side says that they're equal as matrices
L[A] = L[B] iff A = B
(=>) Let L[A] = L[B] .. Show A = B
Let β and γ be the standard bases for R^n and R^m
therefore, [L[A]]β to γ = A, t