35 MATRIX PROPERTIES VIA SVD
35.1. Nullspace
Finding a basis for the nullspace
The SVD allows the computation of an orthonormal basis for the nullspace of a matrix. To understand this, let us first consider a matrix of the form
$$\tilde{A} = \begin{pmatrix} \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \sigma_1, \sigma_2 > 0.$$
The nullspace of this matrix is readily found by solving the equation $\tilde{A}x = 0$. We obtain that $x$ is in the nullspace if and only if the first two components of $x$ are zero:
$$\sigma_1 x_1 = 0, \ \sigma_2 x_2 = 0 \iff x_1 = x_2 = 0.$$
What about a general matrix $A \in \mathbb{R}^{m \times n}$, which admits the SVD $A = U \tilde{\Sigma} V^T$ as given in the SVD theorem? Since $U$ is orthogonal, we can pre-multiply the nullspace equation $Ax = 0$ by $U^T$, and solve in terms of the ‘‘rotated’’ variable $\bar{x} = V^T x$. We obtain the condition on $\bar{x}$:
$$\tilde{\Sigma} \bar{x} = 0.$$
The above is equivalent to the first $r$ components of $\bar{x}$ being zero. Since $x = V \bar{x}$, this corresponds to the fact that $x$ belongs to the span of the last $n - r$ columns of $V$. Note that these columns form a set of mutually orthogonal, normalized vectors that span the nullspace: hence they form an orthonormal basis for it.
Theorem: nullspace via SVD
The nullspace of a matrix $A \in \mathbb{R}^{m \times n}$ with SVD
$$A = U \tilde{\Sigma} V^T, \quad \tilde{\Sigma} = \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}, \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r), \ \sigma_1 \ge \cdots \ge \sigma_r > 0,$$
where $V = [v_1, \ldots, v_n]$, is given by the span of the last $n - r$ columns of $V$:
$$\mathcal{N}(A) = \mathrm{span}(v_{r+1}, \ldots, v_n).$$
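The theorem above translates directly into a short numerical check. The sketch below (using NumPy; the matrix is a made-up rank-2 example, not one from the text) extracts the last $n - r$ columns of $V$ and verifies they form an orthonormal basis of the nullspace:

```python
import numpy as np

# Hypothetical 3x4 example: the third row equals the sum of the first
# two, so the rank is 2 and the nullspace has dimension 4 - 2 = 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))          # numerical rank r

# Last n - r rows of Vt (i.e. columns of V): orthonormal basis of N(A).
N = Vt[r:].T                      # shape (4, 2)

print(np.allclose(A @ N, 0))              # every basis vector is in N(A)
print(np.allclose(N.T @ N, np.eye(4 - r)))  # the basis is orthonormal
```

Note that the rank is computed with a tolerance: in floating point the ‘‘zero’’ singular values come out as tiny nonzero numbers.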
Example: Nullspace of a matrix.
Full column-rank matrices
One-to-one (or, full column rank) matrices are the matrices with nullspace reduced to $\{0\}$. If the dimension of the nullspace is zero, then we must have $r = n$. Thus, full column rank matrices are ones with SVD of the form
$$A = U \begin{pmatrix} \Sigma \\ 0 \end{pmatrix} V^T, \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n), \ \sigma_i > 0.$$
35.2. Range, rank via the SVD
Basis of the range
As with the nullspace, we can express the range in terms of the SVD of the matrix $A$. Indeed, the range of $A$ is the set of vectors of the form $Ax = U \tilde{\Sigma} V^T x$, where $x \in \mathbb{R}^n$. Since $V$ is orthogonal, when $x$ spans $\mathbb{R}^n$, so does $\bar{x} = V^T x$. Decomposing the latter vector in two sub-vectors $\bar{x} = (\bar{x}_1, \bar{x}_2)$, with $\bar{x}_1 \in \mathbb{R}^r$, we obtain that the range is the set of vectors $U \tilde{\Sigma} \bar{x}$, with
$$\tilde{\Sigma} \bar{x} = \begin{pmatrix} \Sigma \bar{x}_1 \\ 0 \end{pmatrix},$$
where $\bar{x}_1$ is an arbitrary vector of $\mathbb{R}^r$. Since $\Sigma$ is invertible, $z = \Sigma \bar{x}_1$ also spans $\mathbb{R}^r$. We obtain that the range is the set of vectors $U \bar{z}$, where $\bar{z}$ is of the form
$$\bar{z} = \begin{pmatrix} z \\ 0 \end{pmatrix},$$
with $z \in \mathbb{R}^r$ arbitrary. This means that the range is the span of the first $r$ columns of the orthogonal matrix $U$, and that these columns form an orthonormal basis for it. Hence, the number $r$ of dyads appearing in the SVD decomposition is indeed the rank (dimension of the range).
Theorem: range and rank via SVD
The range of a matrix $A \in \mathbb{R}^{m \times n}$ with SVD $A = U \tilde{\Sigma} V^T$, where $U = [u_1, \ldots, u_m]$, is given by the span of the first $r$ columns of $U$:
$$\mathcal{R}(A) = \mathrm{span}(u_1, \ldots, u_r),$$
and the rank of $A$ is $r$, the number of nonzero singular values.
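As with the nullspace, this theorem can be checked numerically. The sketch below (NumPy; same kind of made-up rank-2 matrix as before) takes the first $r$ columns of $U$ and verifies that they span the range, by checking that projecting the columns of $A$ onto that span leaves them unchanged:

```python
import numpy as np

# Hypothetical rank-2 matrix (third row = first row + second row).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))      # rank = number of nonzero singular values

R = U[:, :r]                  # first r columns of U: orthonormal basis of R(A)

# Every column of A lies in span(R), so the orthogonal projection
# P = R R^T onto that span must recover A exactly.
print(np.allclose(R @ (R.T @ A), A))
print(r == np.linalg.matrix_rank(A))
```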
Full row rank matrices
An onto (or, full row rank) matrix has a range $\mathcal{R}(A) = \mathbb{R}^m$. These matrices are characterized by an SVD of the form
$$A = U \begin{pmatrix} \Sigma & 0 \end{pmatrix} V^T, \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m), \ \sigma_i > 0.$$
Example: Range of a matrix.
35.3. Fundamental theorem of linear algebra
The theorem already mentioned earlier allows us to decompose any vector $x \in \mathbb{R}^n$ into the sum of two orthogonal vectors, the first in the nullspace of a matrix $A$, and the second in the range of its transpose.
Fundamental theorem of linear algebra
Let $A \in \mathbb{R}^{m \times n}$. Then any vector $x \in \mathbb{R}^n$ can be decomposed as the sum of two orthogonal vectors:
$$x = y + z, \quad y \in \mathcal{N}(A), \quad z \in \mathcal{R}(A^T), \quad y^T z = 0.$$

In particular, we obtain that the condition on a vector $x$ to be orthogonal to the nullspace of $A$ is that it belongs to the range of its transpose:
$$x^T y = 0 \ \text{for every } y \in \mathcal{N}(A) \iff x \in \mathcal{R}(A^T).$$
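Since the SVD supplies orthonormal bases of both subspaces (the last $n - r$ columns of $V$ for the nullspace, the first $r$ for the range of the transpose), the decomposition can be computed by two projections. A sketch (NumPy; made-up rank-2 matrix and an arbitrary test vector):

```python
import numpy as np

# Hypothetical rank-2 matrix; x is an arbitrary vector of R^4.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])
x = np.array([1., -1., 2., 0.5])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

V1 = Vt[:r].T        # orthonormal basis of R(A^T) (the row space)
V2 = Vt[r:].T        # orthonormal basis of N(A)

z = V1 @ (V1.T @ x)  # component in the range of A^T
y = V2 @ (V2.T @ x)  # component in the nullspace of A

print(np.allclose(x, y + z))   # x decomposes exactly
print(abs(y @ z) < 1e-9)       # the two components are orthogonal
print(np.allclose(A @ y, 0))   # y is indeed in N(A)
```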
35.4. Matrix norms, condition number
Matrix norms are useful to measure the size of a matrix. Some of them can be interpreted in terms of input-output properties of the corresponding linear map; for example, the Frobenius norm measures the average response to unit vectors, while the largest singular value (LSV) norm measures the peak gain. These two norms can be easily read from the SVD.
Frobenius norm
The Frobenius norm can be defined as
$$\|A\|_F := \sqrt{\mathrm{trace}(A^T A)} = \left( \sum_{i=1}^m \sum_{j=1}^n a_{ij}^2 \right)^{1/2}.$$
Using the SVD $A = U \tilde{\Sigma} V^T$ of $A$, we obtain
$$\|A\|_F^2 = \mathrm{trace}(V \tilde{\Sigma}^T U^T U \tilde{\Sigma} V^T) = \mathrm{trace}(\tilde{\Sigma}^T \tilde{\Sigma}) = \sum_{i=1}^r \sigma_i^2.$$
Hence the squared Frobenius norm is nothing else than the sum of the squares of the singular values.
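The identity above is easy to verify numerically. A sketch (NumPy, on a random matrix): the entrywise definition and the sum of squared singular values give the same value, which also matches NumPy's built-in Frobenius norm.

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((3, 4))
s = np.linalg.svd(A, compute_uv=False)

fro_direct = np.sqrt(np.sum(A**2))   # definition: sqrt of sum of squared entries
fro_svd = np.sqrt(np.sum(s**2))      # via SVD: sqrt of sum of squared sing. values

print(np.isclose(fro_direct, fro_svd))
print(np.isclose(fro_direct, np.linalg.norm(A, 'fro')))
```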
Largest singular value norm
An alternate way to measure matrix size is based on asking for the maximum ratio of the norm of the output to the norm of the input. When the norm used is the Euclidean norm, the corresponding quantity
$$\|A\|_2 := \max_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2}$$
is called the largest singular value (LSV) norm. The reason for this wording is given by the following theorem.
Theorem: largest singular value norm
For any matrix $A \in \mathbb{R}^{m \times n}$,
$$\|A\|_2 = \max_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_1(A),$$
where $\sigma_1(A)$ is the largest singular value of $A$.
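The ‘‘peak gain’’ interpretation can be probed empirically: sampling many random input directions, no direction achieves a gain $\|Ax\|_2 / \|x\|_2$ above $\sigma_1$. A sketch (NumPy, random matrix and random directions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
s = np.linalg.svd(A, compute_uv=False)

# Gain ||Ax|| / ||x|| over many random input directions.
X = rng.standard_normal((4, 10000))
gains = np.linalg.norm(A @ X, axis=0) / np.linalg.norm(X, axis=0)

print(np.isclose(np.linalg.norm(A, 2), s[0]))  # LSV norm equals sigma_1
print(bool(gains.max() <= s[0] + 1e-9))        # no direction beats sigma_1
```

The maximum gain $\sigma_1$ is attained exactly when $x$ is the first right singular vector $v_1$.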
Example: Norms of a matrix.
Condition number
The condition number of an invertible matrix $A \in \mathbb{R}^{n \times n}$ is the ratio between the largest and the smallest singular values:
$$\kappa(A) = \frac{\sigma_1(A)}{\sigma_n(A)} = \|A\|_2 \cdot \|A^{-1}\|_2.$$
As seen in the next section, this number provides a measure of the sensitivity of the solution of a linear equation $Ax = b$ to changes in the data $A$ and $b$.
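A minimal sketch of the definition (NumPy; the poorly scaled diagonal matrix below is a made-up illustration): the ratio of extreme singular values matches NumPy's built-in 2-norm condition number.

```python
import numpy as np

# Hypothetical example: invertible but poorly scaled, so ill-conditioned.
A = np.array([[1., 0.],
              [0., 1e-3]])

s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]      # ratio of largest to smallest singular value

print(np.isclose(kappa, np.linalg.cond(A, 2)))  # matches numpy's kappa
print(np.isclose(kappa, 1e3))                   # here sigma_1/sigma_2 = 1000
```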