Linear Transformations

Linear algebra is not so much about vector spaces as about the maps between vector spaces, which we call linear transformations. This is part of a general pattern in mathematics as we move up the ladder of abstraction: we pass from talking about objects to talking about structure-preserving maps between objects, then reify those maps as objects in their own right, consider maps among them, and so on. As mentioned in the lecture, it is sometimes useful to think of the simpler category of finite sets and relations between them as a simplified analogue of the category of vector spaces and linear transformations.

We've defined vector spaces as sets with additional structure. Hence a linear transformation, as a structure-preserving map between vector spaces, will be a set function satisfying some additional conditions.

A vector space is an additive abelian group equipped with a scalar multiplication. Thus a linear transformation should be a set function that preserves both the additive abelian group structure and the scalar multiplication.

Let \( V, W\) be vector spaces over a field \(\mathbb{F}\). A function \(f:V \rightarrow W\) is a linear transformation from \( V\) to \( W \) if for all \(u,v \in V\), and for all \(a \in \mathbb{F} \), we have
  • \(f\) is a group homomorphism: \(f(u+v) = f(u) + f(v) \), and
  • \(f\) respects the scalar multiplication: \(f(au) = af(u)\).
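
For a concrete example (a standard one, not taken from the lecture): over \(\mathbb{F} = \mathbb{R}\), the map \(f:\mathbb{R}^2 \rightarrow \mathbb{R}^2\) given by \(f(x,y) = (2x + y,\, 3y)\) is a linear transformation, since
\[ f\big((x_1,y_1) + (x_2,y_2)\big) = \big(2(x_1+x_2) + (y_1+y_2),\, 3(y_1+y_2)\big) = f(x_1,y_1) + f(x_2,y_2) \]
and
\[ f\big(a(x,y)\big) = (2ax + ay,\, 3ay) = a\,f(x,y). \]
By contrast, \(g(x,y) = (x+1,\, y)\) fails additivity: \(g\big((0,0)+(0,0)\big) = (1,0)\), while \(g(0,0) + g(0,0) = (2,0)\).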

Using these properties, one can prove that a linear transformation \(f:V \rightarrow W\) sends \( 0_V \) to \( 0_W \).
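
One quick way to see this: \(f(0_V) = f(0_V + 0_V) = f(0_V) + f(0_V)\), and cancelling \(f(0_V)\) on both sides in the abelian group \(W\) leaves \(f(0_V) = 0_W\).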

It is sometimes convenient to combine the two requirements in the definition: \(f\) is a linear transformation iff \(f(au + v) = a f(u) + f(v)\) for all \(a \in \mathbb{F}\) and all \(u, v \in V\), or equivalently iff \(f\) can be pushed through any linear combination, \( f\!\left( \sum_{i=1}^n a_i v_i \right) = \sum_{i=1}^n a_i f(v_i) \).
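
To see that the single condition \(f(au + v) = a f(u) + f(v)\) really captures both requirements: taking \(a = 1\) recovers additivity, \(f(u+v) = f(u) + f(v)\); taking \(v = 0_V\) and using \(f(0_V) = 0_W\) recovers the scalar condition, \(f(au) = a f(u) + f(0_V) = a f(u)\). The general linear-combination form then follows by induction on \(n\).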

The set of all linear transformations from one vector space to another can itself be given the structure of a vector space. It is important to understand how that works, and not to confuse the vector space structure on the space of linear transformations with the vector space structure on each space separately. For example, there are four different notions of zero at work: the zero scalar, the zero vector in the domain, the zero vector in the codomain, and the zero vector in the space of linear transformations. Please review the way we define the operations on the space of linear transformations, as explained in the lecture.
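
For reference, the operations are defined pointwise (the standard construction, presumably the one given in the lecture): for linear transformations \(f, g : V \rightarrow W\), a scalar \(a \in \mathbb{F}\), and any \(v \in V\),
\[ (f + g)(v) = f(v) + g(v), \qquad (af)(v) = a\, f(v), \]
where the addition and scalar multiplication on the right-hand sides take place in \(W\). The zero vector of this space is the transformation sending every \(v \in V\) to \(0_W\).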

If \(V\) is the domain vector space and \(W\) is the codomain, this vector space of linear transformations is denoted \(\mathcal{L}(V,W)\), or, if \(V=W\), simply \(\mathcal{L}(V)\). Other notations you may encounter for this space include \(\mathrm{Hom}(V,W)\) and \(V^* \otimes W\).

When does it make sense to compose linear transformations? What is the identity linear transformation \({\rm id}_V\) and what properties does it satisfy with respect to composition? What can you say about the spaces \(\mathcal{L}(V,\mathbb{F})\) and \(\mathcal{L}(\mathbb{F}, V)\)?