Fourth Week of BeyondResearch Insights, Year 1

Welcome to the fourth week of my BeyondResearch Insights! This time we return to Abstract Analysis.

At the beginning of the session, we discussed our solutions to the problem set, in particular one of the technical exercises that I struggled with. I had to show that the linear functions from the reals to the reals are exactly the maps of the form x \mapsto cx for some constant c. In other words, I needed to show that there exists a linear bijection between the real numbers and the set of all linear functions from \mathbb{R} to \mathbb{R}, that is, a map that respects both addition and scalar multiplication and is both injective and surjective. I managed to prove injectivity, the property that the map is one-to-one, but had a hard time showing surjectivity, the property that the image of the map is the whole codomain.

I ended up making a vague argument that because the mapping rule uniquely defines the linear function and is itself determined by a real number, the map will be surjective, as the domain is all real numbers. This was not the right approach; however, my intuition was in a sense correct…
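
Here is one way to make that intuition precise (a standard argument, filled in after the fact). Write \varphi for the map that sends a real number c to the linear function x \mapsto cx. Given any linear function f: \mathbb{R} \to \mathbb{R}, linearity lets us pull the input out as a scalar:

f(x) = f(x \cdot 1) = x \cdot f(1) \quad \text{for all } x \in \mathbb{R},

so f is exactly \varphi(c) with c = f(1). Every linear function is therefore hit by \varphi, which is precisely surjectivity: the mapping rule really is determined by a single real number, namely f(1).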

I also stumbled upon an interesting fact: for linear maps between finite-dimensional vector spaces of the same dimension, injectivity, surjectivity, and bijectivity (being an isomorphism) are equivalent properties. This is a consequence of an important result in linear algebra, the Rank-Nullity Theorem. It states that the sum of the dimension of the image of a linear transformation L and the dimension of the kernel of L, the space of all vectors in the domain V that L maps to 0 in the codomain W, is equal to the dimension of the domain of L:

\dim(\text{Im}(L)) + \dim(\ker(L)) = \dim(\text{dom}(L)), \quad \text{where } L: V \to W
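
A quick sanity check with a concrete map: let L: \mathbb{R}^3 \to \mathbb{R}^2 be the projection (x, y, z) \mapsto (x, y). Its image is all of \mathbb{R}^2 and its kernel is the z-axis, so

\dim(\text{Im}(L)) + \dim(\ker(L)) = 2 + 1 = 3 = \dim(\text{dom}(L)),

exactly as the theorem predicts.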

The dimension of a vector space is defined as the cardinality of its basis, that is, the number of basis vectors it has. For example, we can describe every vector in the plane as a combination of two linearly independent vectors, like \hat{i} and \hat{j}, so the plane is a two-dimensional vector space. Note that although \{\hat{i}, \hat{j}\} is the most commonly used basis, it is by no means the only one. It represents a special type, a so-called orthonormal basis, whose basis vectors are both of unit length and mutually orthogonal. There are infinitely many valid bases with which we could describe vectors in the plane, but all of them have exactly two basis vectors!
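
To make that last claim tangible, take the vectors (1, 1) and (1, -1). They do not form an orthonormal basis (neither has unit length), yet they are linearly independent, and every vector in the plane can be written in terms of them:

(x, y) = \frac{x + y}{2}\,(1, 1) + \frac{x - y}{2}\,(1, -1),

so \{(1, 1), (1, -1)\} is a perfectly valid basis, and it still consists of exactly two vectors.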

In the rest of this post, I will elaborate on the rank-nullity theorem, for both linear maps and matrices, and explore how it relates to solving systems of linear equations.

Since I introduced the Rank-Nullity Theorem and the definition of a basis, let's see what this means for our linear map L: V \to W.

The theorem talks about the dimension of the image of L. We proved in a different exercise that the image is a subspace of the codomain of L. In finite dimensions, L is surjective if and only if its image has the same dimension as the codomain, i.e. \dim(\text{Im}(L)) = \dim(W). In addition, if L is injective, it maps only one vector to the zero vector, so the kernel of L contains only one element, 0. The only linearly independent subset of \{0\} is the empty set, which is thus the basis of the kernel of L. Because the cardinality of the empty set is zero, the kernel of L is zero-dimensional and \dim(\ker(L)) = 0.

Therefore, for a bijective linear map we can rewrite \dim(\text{Im}(L)) + \dim(\ker(L)) = \dim(\text{dom}(L)) as \dim(W) + 0 = \dim(\text{dom}(L)). Because we denoted the domain of L as V, this simplifies to \dim(W) = \dim(V). This means that we can construct a bijective linear map (a linear isomorphism) between two vector spaces only if they have the same dimension.
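
As a concrete illustration of this last point, take the plane \mathbb{R}^2 and the space of polynomials of degree at most one. On the surface they look quite different, but both are two-dimensional, and the map

a + bx \mapsto (a, b)

is linear, injective, and surjective, so it is a linear isomorphism between them; matching dimensions are exactly what makes such a pairing possible.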

Interestingly, the rank-nullity theorem is closely tied to Gauss-Jordan elimination, an algorithm for fully solving systems of linear equations. If we relate the theorem to matrices instead of linear transformations, we get that the rank, which tells us how many linearly independent row vectors make up the matrix, and the nullity, which is again the dimension of the kernel, sum to the number of columns of the matrix:

\text{rank}(M) + \text{nullity}(M) = n, \quad \text{where } M \text{ is an } m \times n \text{ matrix}

I struggled to see the link between the rank-nullity theorem and solving systems of linear equations, but it turns out that the rank of a matrix in reduced row echelon form, the form produced by Gauss-Jordan elimination, is equal to the number of leading (pivot) variables, and its nullity to the number of free variables. These sum to the number of columns, which makes sense, as each column represents a variable, which will either be leading or free!
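
To see this count in action, here is a minimal Python sketch using SymPy's rref (the example matrix is my own illustrative choice, not one from the problem set):

from sympy import Matrix

# A 3 x 4 coefficient matrix: four variables (columns), and the third
# row is the sum of the first two, so the rank will be 2.
M = Matrix([
    [1, 2, 0, 1],
    [0, 0, 1, 3],
    [1, 2, 1, 4],
])

# rref() performs Gauss-Jordan elimination: it returns the reduced row
# echelon form together with the indices of the pivot (leading) columns.
rref_form, pivot_columns = M.rref()

rank = len(pivot_columns)        # number of leading variables
nullity = M.cols - rank          # number of free variables

print(rref_form)                 # the fully reduced matrix
print(rank, nullity)             # 2 2
print(rank + nullity == M.cols)  # True: rank + nullity = n

The two pivot columns correspond to the leading variables, the remaining two columns to the free variables, and together they account for all four columns, just as the theorem demands.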
