Linear Transformations on vector spaces Help-VBForums

# Thread: Linear Transformations on vector spaces Help

1. ## Linear Transformations on vector spaces Help

Hi, I have a resit exam coming up pretty soon, so I'm going through some past papers for revision, and I've hit a brick wall with one of them, as I missed the lecture where he went through some examples on the subject. I'll post the question up as an image; it would be awesome if any of you could go through the questions step by step to help me understand how to do it. Thanks in advance!

2. ## Re: Linear Transformations on vector spaces Help

a. A vector space is in particular a set, where vector addition and scaling "make sense". A subspace of a vector space is a subset, where vector addition and scaling continue to "make sense". Subsets (strictly speaking, nonempty subsets) inherit almost all of the necessary properties by default from the underlying vector space; only two additional checks are needed to ensure a subset is indeed a subspace: scaling an element of the subset gives you another element in the subset (closure under scaling), and adding two elements of the subset gives you another element in the subset (closure under addition). If these two are satisfied the subset is in fact a vector space in its own right, using the same addition and scaling operations as the original vector space used.

That said, (i) if f(x) is such that f(3) = f(0), scaling f(x) to g(x) = r*f(x) for some real number r gives g(3) = r*f(3) = r*f(0) = g(0), so g is in the subspace. Closure under scaling is satisfied. If p(x) and q(x) have p(3) = p(0), q(3) = q(0), then r(x) = p(x) + q(x) has r(3) = p(3) + q(3) = p(0) + q(0) = r(0), so closure under addition is satisfied. The set is a subspace (being obviously non-empty). (ii) I will not discuss.
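To make the two closure checks for (i) concrete, here is a small numerical sanity check (not part of the exam question; the sample polynomials `p` and `q` are my own illustrative choices):

```python
# Subspace test for W = {f : f(3) = f(0)} inside the space of polynomials.
# Polynomials are represented as ordinary Python callables.

def in_W(f):
    """Membership test: f belongs to W iff f(3) == f(0)."""
    return f(3) == f(0)

# Two sample members of W (both satisfy f(3) = f(0)).
p = lambda t: t**2 - 3*t        # p(3) = 0 = p(0)
q = lambda t: 2*t**2 - 6*t + 5  # q(3) = 5 = q(0)

assert in_W(p) and in_W(q)

# Closure under scaling: r*p stays in W.
r = 7
assert in_W(lambda t: r * p(t))

# Closure under addition: p + q stays in W.
assert in_W(lambda t: p(t) + q(t))
```

Of course, a numerical check on two sample polynomials is not a proof; the algebraic argument above is what establishes closure in general.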

b. ker T is by definition the set of elements (in U) mapped to 0 by the linear transformation T.
Closure under scaling: if x is in ker T, then T(x) = 0, so T(r*x) = r*T(x) = r*0 = 0, so r*x is also in ker T.
Closure under addition: if x and y are in ker T, then T(x + y) = T(x) + T(y) = 0 + 0 = 0, so x+y is in ker T.
Non-empty: the zero vector is in ker T since T(0 + 0) = T(0) + T(0) = T(0) => T(0) = 0.

Given a basis (ordered set) {b1, b2, ..., bn}, and given a vector v, the coordinate vector [v] relative to that basis lists the coefficients of the (unique) linear combination of the basis elements which gives v. That is, there is a unique representation v = c1*b1 + ... + cn*bn for scalars c1, ..., cn, and [v] = [c1, c2, ..., cn]. By some trial and error (there are "exact" methods I won't discuss), one finds

v = 2t^2 - 3t + 4 = 2*(t-1)^2 + 1*(t-1) + 3*1, so that
[v] = [2, 1, 3]
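You can double-check the coordinate vector by expanding c1*(t-1)^2 + c2*(t-1) + c3 back into standard form and comparing coefficients with v (a quick sketch, just to verify the arithmetic above):

```python
# Verify the coordinate vector of v = 2t^2 - 3t + 4 relative to the
# basis {(t-1)^2, (t-1), 1}.

c1, c2, c3 = 2, 1, 3  # candidate coordinates [c1, c2, c3]

# Expand c1*(t-1)^2 + c2*(t-1) + c3 into standard-form coefficients
# [a, b, c], meaning a*t^2 + b*t + c:
#   c1*(t^2 - 2t + 1) + c2*(t - 1) + c3
expanded = [c1, -2*c1 + c2, c1 - c2 + c3]

assert expanded == [2, -3, 4]  # matches v = 2t^2 - 3t + 4
```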

3. ## Re: Linear Transformations on vector spaces Help

Thanks a lot mate, really do appreciate it, your knowledge is amazing to be fair! I have one more question that I'm having real trouble with as well. I have nothing in my notes about M2(R) or ker T, and it's a struggle to find any useful examples online. It'd be great if you could go through it like you did the previous one. Here it is:

4. ## Re: Linear Transformations on vector spaces Help

M2(R) is the vector space of 2x2 matrices with real entries where addition and scaling are done componentwise. There are several similar notations for similar notions. As I mentioned ker T (read "the kernel of T") is the set of elements mapped to 0 by T. As your previous question showed, ker T is a vector space itself (being a subspace of T's domain), so "taking the kernel" of a linear transformation is one nice way to generate new vector (sub)spaces. I showed that ker T always contains 0, but perhaps ker T is trivial and contains nothing more. A key question along these lines, then, is whether or not the kernel of a given linear transformation T contains more than just the zero vector. (The same type of reasoning pervades abstract algebra in more general contexts.)

(a) T(1,2,3,4) is the matrix {{1+2*2-3, 2}, {-2, 4-1}} = {{2, 2}, {-2, 3}}. The zero element of M2(R) is of course just the 0 matrix, {{0, 0}, {0, 0}}. Note that T is indeed a linear transformation (I will not verify it). To get T(a, b, c, d) to be {{0, 0}, {0, 0}}, we need...

a + 2b - c = 0
b = 0
-b = 0
d-a = 0

while not all of a, b, c, and d are 0. The system above is satisfied if and only if
b = 0, a = c = d
so (1, 0, 1, 1) qualifies. Indeed (r, 0, r, r) is the form of all solutions--and the kernel is the one-dimensional vector space {(r, 0, r, r) where r is real}.
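A quick arithmetic check that the line (r, 0, r, r) really lands on the zero matrix (this just encodes the formula for T from the problem and evaluates it):

```python
# T from V4 to M2(R): T(a, b, c, d) = [[a + 2b - c, b], [-b, d - a]].
def T(a, b, c, d):
    return [[a + 2*b - c, b], [-b, d - a]]

zero = [[0, 0], [0, 0]]

# (1, 0, 1, 1) is in ker T...
assert T(1, 0, 1, 1) == zero

# ...and so is every vector of the form (r, 0, r, r).
for r in [-3, 0, 2, 10]:
    assert T(r, 0, r, r) == zero

# For comparison, the sample input from the problem is not in the kernel.
assert T(1, 2, 3, 4) == [[2, 2], [-2, 3]]
```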

(b) im T is the image of T, i.e. the set of all elements in T's codomain that are actually hit by applying T to some element of its domain. It's very similar to ker T in that this operation generates another subspace (proof omitted) and has general uses in algebra. Determining a basis for the image implicitly is extremely simple, since...
T(w, x, y, z) = w T(1, 0, 0, 0) + x T(0, 1, 0, 0) + y T(0, 0, 1, 0) + z T(0, 0, 0, 1)

That is, every element T(w, x, y, z) in im T is a linear combination of {T(e1), T(e2), T(e3), T(e4)} where the ei are V4's standard basis vectors. However, if for instance, T(e1) = T(e2) = 0, we could remove T(e1) and T(e2) from this list, so in general we must remove such redundancy--we must remove enough elements so that the resulting set is linearly independent (in general removing 0 or more vectors; which ones we remove turns out to be relevant). Here, the above (multi)set is

{(1, 1, 1), (-1, 0, 1), (1, 2, 3), (1, -1, -3)}

Removing linear dependence is in general somewhat difficult, though in this case it is easy: after our kernel computation below, we will find that a basis for the image has length 2, so we must remove 2 of the above elements, and which two is irrelevant in this particular case, since any two of them are obviously linearly independent. Answers include {T(e1), T(e2)}, {T(e1), T(e3)}, etc.
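The images of the standard basis vectors, and the claim that any two of them are linearly independent, can be checked mechanically (two vectors in V3 are dependent iff they are proportional, i.e. all three 2x2 minors vanish):

```python
from itertools import combinations

# The transformation from this problem: T(w, x, y, z) in V3.
def T(w, x, y, z):
    return (w - x + y + z, w + 2*y - z, w + x + 3*y - 3*z)

# Images of the standard basis vectors of V4; these span im T.
images = [T(1, 0, 0, 0), T(0, 1, 0, 0), T(0, 0, 1, 0), T(0, 0, 0, 1)]
assert images == [(1, 1, 1), (-1, 0, 1), (1, 2, 3), (1, -1, -3)]

# Two vectors in V3 are proportional iff all three 2x2 minors vanish.
def proportional(u, v):
    return (u[0]*v[1] - u[1]*v[0] == 0 and
            u[0]*v[2] - u[2]*v[0] == 0 and
            u[1]*v[2] - u[2]*v[1] == 0)

# No pair is proportional, so any two of the four images are independent.
assert not any(proportional(u, v) for u, v in combinations(images, 2))
```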

Computing a basis for ker T:
T(w,x,y,z) = (0,0,0)

iff

w-x+y+z = 0
w+2y-z = 0
w+x+3y-3z = 0

iff

z = w+2y
x = w+y+(w+2y) = 2w + 3y
w + (2w+3y) + 3y - 3(w+2y) = 0, i.e. 0=0

iff

z = w+2y
x = 2w+3y

Vectors in the kernel are then of the form (w, 2w+3y, y, w+2y) = w(1, 2, 0, 1) + y(0, 3, 1, 2) for w and y arbitrary, so the basis is just {(1, 2, 0, 1), (0, 3, 1, 2)} (note these are linearly independent).

The rank-nullity theorem can be phrased as: for a linear transformation T, the dimension of the kernel and the dimension of the image sum to the dimension of the domain, i.e. dim(ker T) + dim(im T) = dim(dom T). (dim(ker T) is termed the "nullity", perhaps since ker T is termed "the null space"; dim(im T) is termed the "rank" of T, hence the name of the theorem.) Since dim(ker T) = 2 and dim(V4) = 4, we must have dim(im T) = 4 - 2 = 2. This justifies my comments at the end of the previous half of this problem.
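As a sanity check on the kernel basis and the rank-nullity bookkeeping (again just evaluating T directly):

```python
# The transformation from part (b): T(w, x, y, z) in V3.
def T(w, x, y, z):
    return (w - x + y + z, w + 2*y - z, w + x + 3*y - 3*z)

# Both basis vectors of ker T map to zero...
assert T(1, 2, 0, 1) == (0, 0, 0)
assert T(0, 3, 1, 2) == (0, 0, 0)

# ...and so does every combination w*(1, 2, 0, 1) + y*(0, 3, 1, 2),
# i.e. every vector of the form (w, 2w+3y, y, w+2y).
for w in range(-2, 3):
    for y in range(-2, 3):
        assert T(w, 2*w + 3*y, y, w + 2*y) == (0, 0, 0)

# Rank-nullity: dim(ker T) + dim(im T) = dim(V4).
dim_ker, dim_im, dim_domain = 2, 2, 4
assert dim_ker + dim_im == dim_domain
```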

(c) Given a matrix A, its characteristic polynomial is det(A - tI) where t is an indeterminate and I is the appropriately sized identity matrix. A has eigenvalues precisely at the roots of its characteristic polynomial. Usually characteristic polynomials are considered to be polynomials in a real variable (as in the previous sentence), but it is also possible to substitute in matrices in place of "t" (after computing the determinant in the usual way); the constant term is interpreted as multiplied by the appropriate identity matrix. Here...

det(A - tI) = det({{1-t, 3}, {4, 5-t}})
= (1-t)(5-t) - 3*4
= t^2 - 6t - 7

Plugging in t=A gives...
A^2 - 6A - 7(I)
= {{1, 3}, {4, 5}}{{1, 3}, {4, 5}} - 6{{1, 3}, {4, 5}} - {{7, 0}, {0, 7}}
= {{1+12, 3+15}, {4+20, 12+25}} - {{6+7, 18}, {24, 30+7}}
= {{13-13, 18-18}, {24-24, 37-37}}
= {{0, 0}, {0, 0}}

so A is indeed a "root" in this sense. This is a special case of the Cayley-Hamilton theorem.
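The same computation can be replayed in a few lines of plain Python, which both confirms A^2 and checks that the characteristic polynomial annihilates A:

```python
# Plug A = [[1, 3], [4, 5]] into its characteristic polynomial
# t^2 - 6t - 7 and check that the result is the zero matrix.

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 3], [4, 5]]
I = [[1, 0], [0, 1]]

A2 = matmul(A, A)
assert A2 == [[13, 18], [24, 37]]

# A^2 - 6A - 7I, entry by entry.
result = [[A2[i][j] - 6*A[i][j] - 7*I[i][j] for j in range(2)]
          for i in range(2)]

assert result == [[0, 0], [0, 0]]  # Cayley-Hamilton holds for this A
```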

5. ## Re: Linear Transformations on vector spaces Help

you're the man jemidiah, thanks again, big help!
